feat : adds test case for embeddingClientConversion #136

Merged · 9 commits · Jan 2, 2025
Changes from 2 commits
@@ -43,7 +43,7 @@ AIChatResponse sentimentAnalyzer(@RequestBody AIChatRequest aiChatRequest) {
         return chatService.analyzeSentiment(aiChatRequest.query());
     }

-    @PostMapping("/emebedding-client-conversion")
+    @PostMapping("/embedding-client-conversion")
     AIChatResponse chatWithEmbeddingClient(@RequestBody AIChatRequest aiChatRequest) {
         return chatService.getEmbeddings(aiChatRequest.query());
     }
@@ -1,6 +1,7 @@
 package com.example.ai.controller;

 import static io.restassured.RestAssured.given;
+import static org.assertj.core.api.Assertions.assertThat;
 import static org.hamcrest.Matchers.containsString;
 import static org.hamcrest.Matchers.greaterThanOrEqualTo;
 import static org.hamcrest.Matchers.hasSize;
@@ -9,6 +10,7 @@
 import com.example.ai.model.request.AIChatRequest;
 import io.restassured.RestAssured;
 import io.restassured.http.ContentType;
+import java.util.Arrays;
 import org.apache.http.HttpStatus;
 import org.junit.jupiter.api.BeforeAll;
 import org.junit.jupiter.api.Test;
@@ -76,6 +78,26 @@ void sentimentAnalyzer() {
                .body("answer", is("SARCASTIC"));
    }

    @Test
rajadilipkolli (Owner, Author) commented:

@coderabbitai, please review all test cases in this class and let me know if there are any suggestions or improvements.

coderabbitai (Contributor) replied:

@rajadilipkolli, here are my suggestions and observations after reviewing the test cases in ChatControllerTest:

  1. Balance of Positive and Negative Tests

    • Currently, all test cases verify success scenarios (200 OK). Consider adding negative test cases to verify the behavior when invalid or malformed requests are sent—e.g., empty request bodies, missing fields, or invalid JSON formats.
  2. Additional Assertions

    • In tests like testChat, chatWithPrompt, chatWithSystemPrompt, or sentimentAnalyzer, you might add more assertions checking the structure of the JSON body. For instance, verifying each top-level field in the response to ensure your controller includes all expected properties.
    • For outputParser, besides checking the size of movies, you might verify each item meets certain criteria if relevant (such as ensuring the type is a string and non-empty).
  3. Edge / Corner Cases

    • In embeddingClientConversion, you test the expected size of the double array. Consider verifying whether the values are valid embeddings (e.g., checking if any non-numeric values appear, or if each value is finite), though this depends on how much your application needs to validate them.
    • For ragWithSimpleStore, you are checking a general text snippet in the answer. If the application logic is expected to do specific manipulations, consider verifying that the entire returned structure is well-formed.
  4. Descriptive Test Names

    • The names of the test methods are clear but you could prepend them with "should" or "test" to further clarify expected outcomes, for example, shouldReturnSarcasticSentiment() or testRagWithSimpleStoreProvidesValidResponse(). This especially helps when reading test reports or debugging.
  5. Parameterization

    • For scenarios like chatWithPrompt and chatWithSystemPrompt, if you plan to test multiple variations of user prompts, JUnit’s parameterized tests could reduce duplication.
  6. Maintainability

    • If logic in the ChatController grows more complex, consider refactoring common RestAssured setup steps into a utility function or a custom test extension. This way, you keep each test more “readable” while reducing duplicate code.

Overall, the tests are straightforward and effectively use RestAssured’s fluent approach. Implementing additional negative tests and refining validations will help ensure broader coverage and correctness of the API responses. If you have any further questions or would like more suggestions, let me know!

rajadilipkolli (Owner, Author) replied:

@coderabbitai, can you please help me write code to implement all of the above suggestions?

coderabbitai (Contributor) replied:

@rajadilipkolli, here's some example code illustrating how you could incorporate the suggestions from my earlier review into your test suite. Please feel free to adjust any details to suit your project’s requirements.


1. Negative Test Cases

Below is an example test checking how the endpoint responds to a malformed request (e.g., an empty JSON). You can add more variations of malformed or invalid data as needed:

@Test
void shouldReturnBadRequestForMalformedChatRequest() {
    given().contentType(ContentType.JSON)
            .body("{}")  // Empty or malformed request body
            .when()
            .post("/api/ai/chat")
            .then()
            .statusCode(HttpStatus.SC_BAD_REQUEST);
}
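
A similar sketch covers a syntactically invalid payload; the truncated body and the expected 400 are assumptions, so adjust them to whatever your controller actually returns for unparseable JSON:

@Test
void shouldReturnBadRequestForUnparseableJson() {
    given().contentType(ContentType.JSON)
            .body("{\"query\": ")  // Truncated JSON; Jackson cannot parse this body
            .when()
            .post("/api/ai/chat")
            .then()
            .statusCode(HttpStatus.SC_BAD_REQUEST);
}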

2. Additional Assertions

For more thorough checks on JSON structure, you can use RestAssured’s fluent approach to verify multiple response fields. Here’s an example adapted from the testChat method, checking top-level fields:

@Test
void shouldReturnChatResponseWithExpectedFields() {
    given().contentType(ContentType.JSON)
            .body(new AIChatRequest("Hello?"))
            .when()
            .post("/api/ai/chat")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .body("answer", containsString("Hello!"))
            // Additional field checks, if the response includes them
            .body("timestamp", is(notNullValue()))
            .body("userId", is(notNullValue()));
}
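
Note that notNullValue() needs import static org.hamcrest.Matchers.notNullValue;, and the timestamp and userId fields above are placeholders; assert only on the fields your AIChatResponse actually exposes.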

3. Edge / Corner Cases

In embeddingClientConversion, if you want to ensure the returned values are valid embeddings (for instance, checking if they are finite numbers), you can add these checks:

@Test
void shouldReturnValidEmbeddingArray() {
    String response = given().contentType(ContentType.JSON)
            .body(new AIChatRequest("Corner case for embedding."))
            .when()
            .post("/api/ai/embedding-client-conversion")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .extract()
            .jsonPath()
            .get("answer");

    double[] embeddings = Arrays.stream(response.replaceAll("[\\[\\]]", "").split(","))
            .mapToDouble(Double::parseDouble)
            .toArray();

    // Check dimensions; as() must be declared before the assertion call or the description is ignored
    assertThat(embeddings.length).as("Dimensions for openai model is 1536").isEqualTo(1536);
    // Check each value is finite
    for (double value : embeddings) {
        assertThat(Double.isFinite(value)).isTrue();
    }
}
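
Depending on how the answer field is serialized, you may be able to skip the string parsing entirely and read the values as a typed list. This is only a sketch: it assumes answer comes back as a JSON array of numbers and needs import java.util.List;:

@Test
void shouldReturnEmbeddingsAsTypedList() {
    List<Double> values = given().contentType(ContentType.JSON)
            .body(new AIChatRequest("Corner case for embedding."))
            .when()
            .post("/api/ai/embedding-client-conversion")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .extract()
            .jsonPath()
            .getList("answer", Double.class);  // RestAssured maps each array element to a Double

    assertThat(values).hasSize(1536);
}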

4. Descriptive Test Names

Renaming existing tests helps clarify their expected behavior. For example, instead of sentimentAnalyzer, you could rename it to shouldAnalyzeSentimentAsSarcastic(). Below is an illustrative renaming (you can rename others too):

@Test
void shouldAnalyzeSentimentAsSarcastic() {
    given().contentType(ContentType.JSON)
            .body(new AIChatRequest("Why did the Python programmer go broke? Because he couldn't C#"))
            .when()
            .post("/api/ai/sentiment/analyze")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .body("answer", is("SARCASTIC"));
}

5. Parameterization

If you plan to test multiple input prompts for chatWithPrompt, you can use JUnit parameterized tests. Below is a minimal example of how you might set this up:

import static org.hamcrest.Matchers.containsStringIgnoringCase;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;
import java.util.stream.Stream;

static Stream<String> chatPrompts() {
    return Stream.of(
        "java",
        "spring framework",
        "some other interesting prompt"
    );
}

@ParameterizedTest
@MethodSource("chatPrompts")
void shouldChatWithMultiplePrompts(String prompt) {
    given().contentType(ContentType.JSON)
            .body(new AIChatRequest(prompt))
            .when()
            .post("/api/ai/chat-with-prompt")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .body("answer", containsStringIgnoringCase(prompt));
}

Notice how the containsStringIgnoringCase check ensures the response includes your provided prompt in some manner.
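
If the inputs are just literal strings, @ValueSource is a lighter-weight alternative to a @MethodSource provider. A sketch using the same hypothetical prompts:

import org.junit.jupiter.params.provider.ValueSource;

@ParameterizedTest
@ValueSource(strings = {"java", "spring framework", "some other interesting prompt"})
void shouldChatWithMultiplePromptsUsingValueSource(String prompt) {
    given().contentType(ContentType.JSON)
            .body(new AIChatRequest(prompt))
            .when()
            .post("/api/ai/chat-with-prompt")
            .then()
            .statusCode(HttpStatus.SC_OK);
}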


6. Maintainability

If your test class grows in complexity, you can reduce duplication of common setup steps by extracting them into utility methods or a separate class. For example, if you find yourself constructing AIChatRequest with the same defaults repeatedly, you might do something like:

private AIChatRequest defaultChatRequest(String message) {
    return new AIChatRequest(message);
}

@Test
void shouldTestChatWithDefaults() {
    given().contentType(ContentType.JSON)
            .body(defaultChatRequest("Hello?"))
            .when()
            .post("/api/ai/chat")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .body("answer", containsString("Hello!"));
}
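
If the duplication is mostly in the RestAssured setup rather than in the payloads, another option is a small factory for the shared request specification. This is a sketch; the helper name and the shared settings are assumptions:

import io.restassured.specification.RequestSpecification;

// Hypothetical helper: every test starts from the same JSON request specification
private static RequestSpecification jsonRequest() {
    return given().contentType(ContentType.JSON).accept(ContentType.JSON);
}

@Test
void shouldChatUsingSharedRequestSpec() {
    jsonRequest()
            .body(new AIChatRequest("Hello?"))
            .when()
            .post("/api/ai/chat")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON);
}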

I hope these examples help you integrate the suggestions effectively. Let me know if you have any questions, or if there are other specific adjustments you’d like!

    void embeddingClientConversion() {
        String response = given().contentType(ContentType.JSON)
                .body(new AIChatRequest("Why did the Python programmer go broke? Because he couldn't C#"))
                .when()
                .post("/api/ai/embedding-client-conversion")
                .then()
                .statusCode(HttpStatus.SC_OK)
                .contentType(ContentType.JSON)
                .extract()
                .jsonPath()
                .get("answer");

        double[] doubles = Arrays.stream(response.replaceAll("[\\[\\]]", "").split(","))
                .mapToDouble(Double::parseDouble)
                .toArray();

        assertThat(doubles.length).isEqualTo(1536).as("Dimensions for openai model is 1536");
rajadilipkolli (Owner, Author) commented on Jan 2, 2025:

@coderabbitai, there is no method called allSatisfy in the AssertJ library; please fix the suggestion below, as provided in the nitpick:

assertThat(doubles).allSatisfy(value ->
           assertThat(value).isBetween(-1.0, 1.0)
        ).as("All embedding values should be between -1 and 1");

coderabbitai (Contributor) replied:

@rajadilipkolli, thanks for pointing that out! Since allSatisfy isn't available in older versions of AssertJ, you can replace it with a simple loop or an allMatch check on the stream. For example:

for (double value : doubles) {
    assertThat(value).isBetween(-1.0, 1.0);
}

Alternatively, with streams:

// as() must come before isTrue(), otherwise the description is ignored
assertThat(Arrays.stream(doubles).allMatch(value -> value >= -1.0 && value <= 1.0))
        .as("All embedding values should be between -1.0 and 1.0")
        .isTrue();

    }

    @Test
    void outputParser() {
        given().param("actor", "Jr NTR")