
feat : adds test case for embeddingClientConversion #136

Merged

rajadilipkolli merged 9 commits into main from the adds-testcase branch on Jan 2, 2025

Conversation

rajadilipkolli
Owner

No description provided.

@rajadilipkolli rajadilipkolli self-assigned this Jan 2, 2025
Contributor

coderabbitai bot commented Jan 2, 2025

Walkthrough

The pull request introduces several modifications to the ChatController, ChatControllerTest, AIChatRequest, and ChatService classes. The ChatController is now annotated with @Validated, and methods accepting AIChatRequest parameters are updated to include @Valid for input validation. The endpoint for client conversion is corrected from /emebedding-client-conversion to /embedding-client-conversion. The ChatControllerTest class has been enhanced with new tests for validation and error handling. Additionally, the AIChatRequest now includes a validation constraint for the query parameter.
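For context, a minimal sketch of the controller-side wiring described above (illustrative only: the /api/ai base path and the /embedding-client-conversion endpoint appear elsewhere in this conversation, but the return type and the ChatService method name here are assumptions, not the PR's actual code):

import jakarta.validation.Valid;
import org.springframework.validation.annotation.Validated;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/ai")
@Validated
class ChatController {

    private final ChatService chatService;

    ChatController(ChatService chatService) {
        this.chatService = chatService;
    }

    // Corrected endpoint spelling; @Valid triggers bean validation on the request body,
    // so a blank query is rejected with 400 before reaching the service.
    @PostMapping("/embedding-client-conversion")
    String embeddingClientConversion(@RequestBody @Valid AIChatRequest request) {
        return chatService.embeddingClientConversion(request.query()); // hypothetical service method name
    }
}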

Changes

File Change Summary
chatmodel-springai/src/main/java/com/example/ai/controller/ChatController.java Annotated class with @Validated; updated methods to include @Valid for AIChatRequest; corrected endpoint from /emebedding-client-conversion to /embedding-client-conversion.
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java Added several new test methods, including checks for bad requests, multiple prompts, and valid embeddings; renamed existing tests for clarity; added constant OPENAI_EMBEDDING_DIMENSION.
chatmodel-springai/src/main/java/com/example/ai/model/request/AIChatRequest.java Added @NotBlank validation to query parameter with a custom error message.
chatmodel-springai/src/main/java/com/example/ai/service/ChatService.java Updated the string template in generateAsBean method to specify "the Indian actor" and include "as of today."

Possibly related PRs

  • feat : adds vector store to chatbot #58: The changes in ChatConfig.java involve configuration for ChatService, which is relevant as the main PR modifies the ChatController that interacts with chat services.
  • feat : create chatbot using openai and springai #59: The modifications in ChatbotOllamaApplicationTests.java include changes to the chat() method that utilizes AIChatRequest, which is directly related to the updates made in the main PR regarding validation and method signatures in ChatController.

Poem

🐰 A Rabbit's Routing Rhyme 🌐

A typo lurked in our endpoint's name,
One letter off, oh what a shame!
We fixed the path with careful care,
Now embeddings flow without despair,
Our code now sings a perfect frame!

🚀 Hop, hop, hooray! 🎉


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 23fd193 and 88dc14a.

⛔ Files ignored due to path filters (1)
  • chatmodel-springai/pom.xml is excluded by !**/*.xml
📒 Files selected for processing (4)
  • chatmodel-springai/src/main/java/com/example/ai/controller/ChatController.java (4 hunks)
  • chatmodel-springai/src/main/java/com/example/ai/model/request/AIChatRequest.java (1 hunks)
  • chatmodel-springai/src/main/java/com/example/ai/service/ChatService.java (1 hunks)
  • chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • chatmodel-springai/src/main/java/com/example/ai/controller/ChatController.java
🔇 Additional comments (15)
chatmodel-springai/src/main/java/com/example/ai/service/ChatService.java (1)

88-90: Consider clarifying the temporal context in the prompt.

Using "as of today" could imply time-sensitive data, potentially leading to inaccuracies if the model does not actually have up-to-date information. You may want to:

  1. Dynamically include the current date or a reference date in the prompt to make it explicit.
  2. Provide disclaimers if the model data might be outdated.

Would you like me to propose a dynamic date-injection approach or help refine the prompt for clarity?
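As a reference point, one possible date-injection helper is sketched below (illustrative only; the real template text lives in ChatService.generateAsBean and may differ):

import java.time.LocalDate;

class PromptDates {

    // Appends an explicit reference date so the prompt no longer relies on the
    // ambiguous phrase "as of today" (illustrative helper, not part of this PR).
    static String withReferenceDate(String basePrompt) {
        return basePrompt + " Answer as of " + LocalDate.now()
                + ", and note if your training data may be older than this date.";
    }
}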

chatmodel-springai/src/main/java/com/example/ai/model/request/AIChatRequest.java (1)

3-5: Well-structured validation approach.

Annotating query with @NotBlank is an important step toward ensuring your application receives valid input. This also makes the error message explicit and user-friendly.
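For illustration, the request record could look roughly like this (the exact error message and any additional fields are assumptions; only the query field and the @NotBlank constraint come from this PR):

import jakarta.validation.constraints.NotBlank;

public record AIChatRequest(
        @NotBlank(message = "Query cannot be blank") String query) {}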

chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (13)

30-30: Good practice for reusable constants.

Defining a constant (OPENAI_EMBEDDING_DIMENSION) clarifies the expected embedding length, making your codebase more maintainable.


43-43: Good use of a default request builder.

Using defaultChatRequest("Hello?") reduces duplication and improves readability.


53-60: Excellent negative test coverage for malformed requests.

The test ensures proper validation and robust error handling. Consider adding additional edge cases or partial request bodies for further coverage.
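A partial-body case might look like the following sketch (assumes the same RestAssured static imports and the /api/ai/chat endpoint used elsewhere in this class; the JSON field name mirrors the query parameter of AIChatRequest):

@Test
void shouldReturnBadRequestForBlankQuery() {
    given().contentType(ContentType.JSON)
            .body("{\"query\": \"   \"}") // field present but blank, should trip @NotBlank
            .when()
            .post("/api/ai/chat")
            .then()
            .statusCode(HttpStatus.SC_BAD_REQUEST);
}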


62-72: Comprehensive parameterized test method.

Validating multiple prompts is a good practice to ensure consistent handling of various scenarios.


78-78: Consistent negative testing for system prompts.

These tests confirm that an empty request body triggers a BAD_REQUEST. This aligns well with the @NotBlank constraint.

Also applies to: 88-95


97-100: Meaningful test name.

Renaming from a more generic name to shouldAnalyzeSentimentAsSarcastic clarifies the test’s expected behavior.


109-136: Solid embedding validation logic.

  1. Verifying the JSON structure as a bracketed list.
  2. Parsing to a double[].
  3. Confirming correct length (1536).
  4. Checking each value’s range.

All are critical checks to ensure stable and correct embedding output.


138-146: Good error handling test.

This confirms the endpoint rejects invalid embedding requests. Additionally, consider adding tests for partially invalid or out-of-spec numeric data if your application needs to handle such cases.


149-158: Clear scenario-based test naming.

outputParserWithParam clarifies its intent to verify correct handling of query parameters.


Line range hint 160-167: Default parameter handling check.

Verifying the response body for default parameters ensures correct fallback behavior and improves coverage.


Line range hint 172-180: Thorough domain-specific test.

testRagWithSimpleStoreProvidesValidResponse checks the presence of a specific answer, ensuring domain relevance in responses.


184-186: Nice use of the MethodSource.

Providing a dedicated source for test prompts keeps the logic structured and allows for easy expansion if you add more test scenarios.


188-190: Convenient helper method.

Reducing repetition by centralizing chat request creation is a clean approach.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (1)

81-99: Comprehensive test validates embedding endpoint.
Adding end-to-end checks on the response length is an excellent way of verifying that the embedding service returns the correct vector size. You might also consider adding more assertions on the numerical content or distribution to catch unexpected regressions in the embeddings.

+ // Potential improvement: add additional assertions on the returned embedding's content.
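For example, content-level checks could be added along these lines (illustrative; reuses the doubles array parsed in this test and the AssertJ/Arrays imports already present):

assertThat(Arrays.stream(doubles).allMatch(Double::isFinite))
        .as("embedding values should be finite")
        .isTrue();
assertThat(Arrays.stream(doubles).distinct().count())
        .as("embedding should not be a constant vector")
        .isGreaterThan(1);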
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2e8664e and 972bd2c.

📒 Files selected for processing (2)
  • chatmodel-springai/src/main/java/com/example/ai/controller/ChatController.java (1 hunks)
  • chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (3 hunks)
🔇 Additional comments (3)
chatmodel-springai/src/main/java/com/example/ai/controller/ChatController.java (1)

46-46: Well-corrected endpoint spelling.
It's great to see the endpoint spelling fixed from /emebedding-client-conversion to /embedding-client-conversion. Make sure any references to the old spelling are removed throughout the codebase to avoid potential routing mismatches.

✅ Verification successful

No references to the old endpoint spelling found

The verification shows that there are no occurrences of the misspelled endpoint emebedding-client-conversion in the codebase, which is good. The endpoint has been correctly renamed to embedding-client-conversion and no lingering references to the old spelling exist that could cause routing issues.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify no other references to the old endpoint remain
rg 'emebedding-client-conversion'

Length of output: 33

chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (2)

4-4: AssertJ import looks good.
Using AssertJ's rich assertion features can enhance test readability and maintainability.


13-13: Arrays import is standard for your new test.
The import is necessary for stream operations in the new test method.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (3)

83-87: Consider using more relevant test data.

The test data about a Python programmer joke seems unrelated to the embedding conversion test. Consider using a simpler, more relevant text that better represents the expected use case.

-                .body(new AIChatRequest("Why did the Python programmer go broke? Because he couldn't C#"))
+                .body(new AIChatRequest("This is a test sentence for embedding conversion."))

94-96: Add input validation before processing.

The current implementation assumes the response is always well-formed. Consider adding validation for the response format before processing.

+        assertThat(response).isNotNull()
+                           .startsWith("[")
+                           .endsWith("]");
+
         double[] doubles = Arrays.stream(response.replaceAll("[\\[\\]]", "").split(","))
                 .mapToDouble(Double::parseDouble)
                 .toArray();

98-98: Enhance assertions and extract magic number.

The test could be improved by:

  1. Extracting the embedding dimension as a constant
  2. Adding assertions for embedding value ranges
+    private static final int OPENAI_EMBEDDING_DIMENSION = 1536;
+
     @Test
     void embeddingClientConversion() {
         // ... existing code ...
-        assertThat(doubles.length).isEqualTo(1536).as("Dimensions for openai model is 1536");
+        assertThat(doubles.length).isEqualTo(OPENAI_EMBEDDING_DIMENSION)
+                                 .as("Dimensions for openai model is %d", OPENAI_EMBEDDING_DIMENSION);
+        assertThat(doubles).allSatisfy(value ->
+            assertThat(value).isBetween(-1.0, 1.0)
+        ).as("All embedding values should be between -1 and 1");
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 972bd2c and 60f1954.

📒 Files selected for processing (1)
  • chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (3 hunks)
🔇 Additional comments (2)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (2)

4-4: LGTM! Required imports added correctly.

The new imports are necessary and properly used in the new test method.

Also applies to: 13-13


81-99: Overall test implementation looks good!

The test follows the project's testing patterns and properly validates the embedding conversion endpoint. The structure is clean and uses appropriate assertions.

.mapToDouble(Double::parseDouble)
.toArray();

assertThat(doubles.length).isEqualTo(1536).as("Dimensions for openai model is 1536");
Owner Author

@rajadilipkolli rajadilipkolli Jan 2, 2025

@coderabbitai, there is no method called allSatisfy in the AssertJ library; please fix the suggestion below, as provided in the nitpick

assertThat(doubles).allSatisfy(value ->
           assertThat(value).isBetween(-1.0, 1.0)
        ).as("All embedding values should be between -1 and 1");

Contributor

@rajadilipkolli, thanks for pointing that out! Since allSatisfy isn't available in older versions of AssertJ, you can replace it with a simple loop or an allMatch check on the stream. For example:

for (double value : doubles) {
    assertThat(value).isBetween(-1.0, 1.0);
}

Alternatively, with streams:

assertThat(Arrays.stream(doubles).allMatch(value -> value >= -1.0 && value <= 1.0))
        .isTrue()
        .as("All embedding values should be between -1.0 and 1.0");

@@ -76,6 +78,26 @@ void sentimentAnalyzer() {
.body("answer", is("SARCASTIC"));
}

@Test
Owner Author

@coderabbitai, please review all test cases in this class and provide any suggestions or improvements

Contributor

@rajadilipkolli, here are my suggestions and observations after reviewing the test cases in ChatControllerTest:

  1. Balance of Positive and Negative Tests

    • Currently, all test cases verify success scenarios (200 OK). Consider adding negative test cases to verify the behavior when invalid or malformed requests are sent—e.g., empty request bodies, missing fields, or invalid JSON formats.
  2. Additional Assertions

    • In tests like testChat, chatWithPrompt, chatWithSystemPrompt, or sentimentAnalyzer, you might add more assertions checking the structure of the JSON body. For instance, verifying each top-level field in the response to ensure your controller includes all expected properties.
    • For outputParser, besides checking the size of movies, you might verify each item meets certain criteria if relevant (such as ensuring the type is a string and non-empty).
  3. Edge / Corner Cases

    • In embeddingClientConversion, you test the expected size of the double array. Consider verifying whether the values are valid embeddings (e.g., checking if any non-numeric values appear, or if each value is finite), though this depends on how much your application needs to validate them.
    • For ragWithSimpleStore, you are checking a general text snippet in the answer. If the application logic is expected to do specific manipulations, consider verifying that the entire returned structure is well-formed.
  4. Descriptive Test Names

    • The names of the test methods are clear but you could prepend them with "should" or "test" to further clarify expected outcomes, for example, shouldReturnSarcasticSentiment() or testRagWithSimpleStoreProvidesValidResponse(). This especially helps when reading test reports or debugging.
  5. Parameterization

    • For scenarios like chatWithPrompt and chatWithSystemPrompt, if you plan to test multiple variations of user prompts, JUnit’s parameterized tests could reduce duplication.
  6. Maintainability

    • If logic in the ChatController grows more complex, consider refactoring common RestAssured setup steps into a utility function or a custom test extension. This way, you keep each test more “readable” while reducing duplicate code.

Overall, the tests are straightforward and effectively use RestAssured’s fluent approach. Implementing additional negative tests and refining validations will help ensure broader coverage and correctness of the API responses. If you have any further questions or would like more suggestions, let me know!

Owner Author

@coderabbitai, can you please help me write code to implement all of the above suggestions?

Contributor

@rajadilipkolli, here's some example code illustrating how you could incorporate the suggestions from my earlier review into your test suite. Please feel free to adjust any details to suit your project’s requirements.


1. Negative Test Cases

Below is an example test checking how the endpoint responds to a malformed request (e.g., an empty JSON). You can add more variations of malformed or invalid data as needed:

@Test
void shouldReturnBadRequestForMalformedChatRequest() {
    given().contentType(ContentType.JSON)
            .body("{}")  // Empty or malformed request body
            .when()
            .post("/api/ai/chat")
            .then()
            .statusCode(HttpStatus.SC_BAD_REQUEST);
}

2. Additional Assertions

For more thorough checks on JSON structure, you can use RestAssured’s fluent approach to verify multiple response fields. Here’s an example updated from the testChat method, checking top-level fields:

@Test
void shouldReturnChatResponseWithExpectedFields() {
    given().contentType(ContentType.JSON)
            .body(new AIChatRequest("Hello?"))
            .when()
            .post("/api/ai/chat")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .body("answer", containsString("Hello!"))
            // Additional field checks, if the response includes them
            .body("timestamp", is(notNullValue()))
            .body("userId", is(notNullValue()));
}

3. Edge / Corner Cases

In embeddingClientConversion, if you want to ensure the returned values are valid embeddings (for instance, checking if they are finite numbers), you can add these checks:

@Test
void shouldReturnValidEmbeddingArray() {
    String response = given().contentType(ContentType.JSON)
            .body(new AIChatRequest("Corner case for embedding."))
            .when()
            .post("/api/ai/embedding-client-conversion")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .extract()
            .jsonPath()
            .get("answer");

    double[] embeddings = Arrays.stream(response.replaceAll("[\\[\\]]", "").split(","))
            .mapToDouble(Double::parseDouble)
            .toArray();

    // Check dimensions
    assertThat(embeddings.length).isEqualTo(1536).as("Dimensions for openai model is 1536");
    // Check each value is finite
    for (double value : embeddings) {
        assertThat(Double.isFinite(value)).isTrue();
    }
}

4. Descriptive Test Names

Renaming existing tests helps clarify their expected behavior. For example, instead of sentimentAnalyzer, you could rename it to shouldAnalyzeSentimentAsSarcastic(). Below is an illustrative renaming (you can rename others too):

@Test
void shouldAnalyzeSentimentAsSarcastic() {
    given().contentType(ContentType.JSON)
            .body(new AIChatRequest("Why did the Python programmer go broke? Because he couldn't C#"))
            .when()
            .post("/api/ai/sentiment/analyze")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .body("answer", is("SARCASTIC"));
}

5. Parameterization

If you plan to test multiple input prompts for chatWithPrompt, you can use JUnit parameterized tests. Below is a minimal example of how you might set this up:

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;
import java.util.stream.Stream;

static Stream<String> chatPrompts() {
    return Stream.of(
        "java",
        "spring framework",
        "some other interesting prompt"
    );
}

@ParameterizedTest
@MethodSource("chatPrompts")
void shouldChatWithMultiplePrompts(String prompt) {
    given().contentType(ContentType.JSON)
            .body(new AIChatRequest(prompt))
            .when()
            .post("/api/ai/chat-with-prompt")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .body("answer", containsStringIgnoringCase(prompt));
}

Notice how the containsStringIgnoringCase check ensures the response includes your provided prompt in some manner.


6. Maintainability

If your test class grows in complexity, you can reduce duplication of common setup steps by extracting them into utility methods or a separate class. For example, if you find yourself constructing AIChatRequest with the same defaults repeatedly, you might do something like:

private AIChatRequest defaultChatRequest(String message) {
    return new AIChatRequest(message);
}

@Test
void shouldTestChatWithDefaults() {
    given().contentType(ContentType.JSON)
            .body(defaultChatRequest("Hello?"))
            .when()
            .post("/api/ai/chat")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .body("answer", containsString("Hello!"));
}

I hope these examples help you integrate the suggestions effectively. Let me know if you have any questions, or if there are other specific adjustments you’d like!

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (1)

83-109: Consider enhancing test coverage with error cases and parameterization.

While the happy path is well tested, consider adding:

  1. Error case tests (e.g., empty input, very long input)
  2. Parameterized tests with different input texts
  3. More descriptive test name (e.g., shouldGenerateValidEmbeddingsWithinExpectedRange)

Example parameterized test:

@ParameterizedTest
@ValueSource(strings = {
    "This is a test sentence.",
    "Another different sentence.",
    "A third unique test case."
})
void shouldGenerateValidEmbeddingsWithinExpectedRange(String input) {
    String response = given().contentType(ContentType.JSON)
            .body(new AIChatRequest(input))
            .when()
            .post("/api/ai/embedding-client-conversion")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .extract()
            .jsonPath()
            .get("answer");

    assertThat(response).isNotNull().startsWith("[").endsWith("]");

    double[] doubles = Arrays.stream(response.replaceAll("[\\[\\]]", "").split(","))
            .mapToDouble(Double::parseDouble)
            .toArray();

    assertThat(doubles.length)
            .isEqualTo(OPENAI_EMBEDDING_DIMENSION)
            .as("Dimensions for openai model is %d", OPENAI_EMBEDDING_DIMENSION);

    assertThat(Arrays.stream(doubles).allMatch(value -> value >= -1.0 && value <= 1.0))
            .isTrue()
            .as("All embedding values should be between -1.0 and 1.0");
}

@Test
void shouldHandleErrorCasesGracefully() {
    given().contentType(ContentType.JSON)
            .body(new AIChatRequest(""))
            .when()
            .post("/api/ai/embedding-client-conversion")
            .then()
            .statusCode(HttpStatus.SC_BAD_REQUEST);
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 60f1954 and 23fd193.

📒 Files selected for processing (1)
  • chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (4 hunks)
🔇 Additional comments (2)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (2)

4-4: LGTM! Well-structured imports and constant declaration.

The imports are appropriate for the new test method, and the constant OPENAI_EMBEDDING_DIMENSION is well-named and correctly set to OpenAI's embedding dimension of 1536.

Also applies to: 13-13, 25-26


83-109: LGTM! Well-implemented test with comprehensive validations.

The test effectively validates:

  • Response status and content type
  • JSON structure
  • Embedding dimensions
  • Value ranges within [-1.0, 1.0]

The implementation correctly uses Arrays.stream().allMatch() instead of allSatisfy() as discussed in previous feedback.

@rajadilipkolli rajadilipkolli merged commit dbd65ad into main Jan 2, 2025
3 checks passed
@rajadilipkolli rajadilipkolli deleted the adds-testcase branch January 2, 2025 12:55