feat : adds test case for embeddingClientConversion #136
Conversation
Actionable comments posted: 0
🧹 Nitpick comments (1)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (1)
81-99: Comprehensive test validates embedding endpoint.
Adding end-to-end checks on the response length is an excellent way of verifying that the embedding service returns the correct vector size. You might also consider adding more assertions on the numerical content or distribution to catch unexpected regressions in the embeddings; see the sketch below.

+ // Potential improvement: add additional assertions on the returned embedding's content.
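A minimal sketch of what such extra content checks could look like (the helper class name and the specific checks are illustrative assumptions, not part of this PR):

import java.util.Arrays;

import static org.assertj.core.api.Assertions.assertThat;

// Illustrative helper: extra sanity checks on a parsed embedding vector.
class EmbeddingContentChecks {

    void assertReasonableEmbedding(double[] doubles) {
        // Every component should be a finite number (no NaN/Infinity slipping through parsing).
        assertThat(Arrays.stream(doubles).allMatch(Double::isFinite))
                .as("all embedding values should be finite")
                .isTrue();

        // The vector should not be degenerate (all zeros).
        assertThat(Arrays.stream(doubles).anyMatch(value -> value != 0.0))
                .as("embedding should contain at least one non-zero value")
                .isTrue();
    }
}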
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
chatmodel-springai/src/main/java/com/example/ai/controller/ChatController.java (1 hunks)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (3 hunks)
🔇 Additional comments (3)
chatmodel-springai/src/main/java/com/example/ai/controller/ChatController.java (1)
46-46: Endpoint spelling corrected.
It's great to see the endpoint spelling fixed from /emebedding-client-conversion to /embedding-client-conversion. Make sure any references to the old spelling are removed throughout the codebase to avoid potential routing mismatches.

✅ Verification successful
No references to the old endpoint spelling found.
The verification shows that there are no occurrences of the misspelled endpoint emebedding-client-conversion in the codebase. The endpoint has been correctly renamed to embedding-client-conversion, and no lingering references to the old spelling exist that could cause routing issues.

🏁 Scripts executed
The following script was executed for the analysis:

#!/bin/bash
# Verify no other references to the old endpoint remain
rg 'emebedding-client-conversion'

Length of output: 33
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (2)
4-4: AssertJ import looks good.
Using AssertJ's rich assertion features can enhance test readability and maintainability.
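For illustration, this is the kind of fluent chaining AssertJ enables (the class name and response value below are hypothetical, not taken from the PR):

import static org.assertj.core.api.Assertions.assertThat;

// Hypothetical example of AssertJ's fluent assertion style on a response string.
class AssertJStyleExample {

    void fluentStringChecks() {
        String answer = "Hello! How can I help you today?"; // made-up response value
        assertThat(answer)
                .isNotBlank()
                .startsWith("Hello")
                .contains("help");
    }
}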
13-13: Arrays import is standard for your new test.
The import is necessary for stream operations in the new test method.
Actionable comments posted: 0
🧹 Nitpick comments (3)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (3)
83-87: Consider using more relevant test data.
The test data about a Python programmer joke seems unrelated to the embedding conversion test. Consider using a simpler, more relevant text that better represents the expected use case.

- .body(new AIChatRequest("Why did the Python programmer go broke? Because he couldn't C#"))
+ .body(new AIChatRequest("This is a test sentence for embedding conversion."))
94-96: Add input validation before processing.
The current implementation assumes the response is always well-formed. Consider adding validation for the response format before processing.

+ assertThat(response).isNotNull()
+         .startsWith("[")
+         .endsWith("]");
+
  double[] doubles = Arrays.stream(response.replaceAll("[\\[\\]]", "").split(","))
          .mapToDouble(Double::parseDouble)
          .toArray();
98-98: Enhance assertions and extract magic number.
The test could be improved by:
- Extracting the embedding dimension as a constant
- Adding assertions for embedding value ranges

+ private static final int OPENAI_EMBEDDING_DIMENSION = 1536;
+
  @Test
  void embeddingClientConversion() {
      // ... existing code ...
-     assertThat(doubles.length).isEqualTo(1536).as("Dimensions for openai model is 1536");
+     assertThat(doubles.length).isEqualTo(OPENAI_EMBEDDING_DIMENSION)
+             .as("Dimensions for openai model is %d", OPENAI_EMBEDDING_DIMENSION);
+     assertThat(doubles).allSatisfy(value ->
+             assertThat(value).isBetween(-1.0, 1.0)
+     ).as("All embedding values should be between -1 and 1");
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (3 hunks)
🔇 Additional comments (2)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (2)
4-4: LGTM! Required imports added correctly.
The new imports are necessary and properly used in the new test method.
Also applies to: 13-13
81-99: Overall test implementation looks good!
The test follows the project's testing patterns and properly validates the embedding conversion endpoint. The structure is clean and uses appropriate assertions.
        .mapToDouble(Double::parseDouble)
        .toArray();

assertThat(doubles.length).isEqualTo(1536).as("Dimensions for openai model is 1536");
@coderabbitai, there is no method called allSatisfy in the AssertJ library; please fix the suggestion below, as provided in the nitpick:
assertThat(doubles).allSatisfy(value ->
assertThat(value).isBetween(-1.0, 1.0)
).as("All embedding values should be between -1 and 1");
@rajadilipkolli, thanks for pointing that out! Since allSatisfy isn't available in older versions of AssertJ, you can replace it with a simple loop or an allMatch check on the stream. For example:
for (double value : doubles) {
assertThat(value).isBetween(-1.0, 1.0);
}
Alternatively, with streams:
assertThat(Arrays.stream(doubles).allMatch(value -> value >= -1.0 && value <= 1.0))
.isTrue()
.as("All embedding values should be between -1.0 and 1.0");
@coderabbitai, please review all test cases in this class and let me know if there are any suggestions or improvements.
@rajadilipkolli, here are my suggestions and observations after reviewing the test cases in ChatControllerTest:

1. Balance of Positive and Negative Tests
   - Currently, all test cases verify success scenarios (200 OK). Consider adding negative test cases to verify the behavior when invalid or malformed requests are sent, e.g., empty request bodies, missing fields, or invalid JSON formats.

2. Additional Assertions
   - In tests like testChat, chatWithPrompt, chatWithSystemPrompt, or sentimentAnalyzer, you might add more assertions checking the structure of the JSON body, for instance verifying each top-level field in the response to ensure your controller includes all expected properties.
   - For outputParser, besides checking the size of movies, you might verify each item meets certain criteria if relevant (such as ensuring the type is a string and non-empty).

3. Edge / Corner Cases
   - In embeddingClientConversion, you test the expected size of the double array. Consider verifying whether the values are valid embeddings (e.g., checking if any non-numeric values appear, or if each value is finite), though this depends on how much your application needs to validate them.
   - For ragWithSimpleStore, you are checking a general text snippet in the answer. If the application logic is expected to do specific manipulations, consider verifying that the entire returned structure is well-formed.

4. Descriptive Test Names
   - The names of the test methods are clear, but you could prepend them with "should" or "test" to further clarify expected outcomes, for example, shouldReturnSarcasticSentiment() or testRagWithSimpleStoreProvidesValidResponse(). This especially helps when reading test reports or debugging.

5. Parameterization
   - For scenarios like chatWithPrompt and chatWithSystemPrompt, if you plan to test multiple variations of user prompts, JUnit's parameterized tests could reduce duplication.

6. Maintainability
   - If logic in the ChatController grows more complex, consider refactoring common RestAssured setup steps into a utility function or a custom test extension. This way, you keep each test more "readable" while reducing duplicate code.

Overall, the tests are straightforward and effectively use RestAssured's fluent approach. Implementing additional negative tests and refining validations will help ensure broader coverage and correctness of the API responses. If you have any further questions or would like more suggestions, let me know!
@coderabbitai, can you please help me write the code to implement all of the above suggestions?
@rajadilipkolli, here's some example code illustrating how you could incorporate the suggestions from my earlier review into your test suite. Please feel free to adjust any details to suit your project’s requirements.
1. Negative Test Cases
Below is an example test checking how the endpoint responds to a malformed request (e.g., an empty JSON). You can add more variations of malformed or invalid data as needed:
@Test
void shouldReturnBadRequestForMalformedChatRequest() {
given().contentType(ContentType.JSON)
.body("{}") // Empty or malformed request body
.when()
.post("/api/ai/chat")
.then()
.statusCode(HttpStatus.SC_BAD_REQUEST);
}
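Another variation along the same lines, assuming the controller also rejects syntactically invalid JSON with a 400 (adjust the expected status to whatever your error handling actually returns); this sketch reuses the same RestAssured imports and setup as the test above:

@Test
void shouldReturnBadRequestForInvalidJson() {
    given().contentType(ContentType.JSON)
            .body("{ \"query\": ") // deliberately broken JSON body; field name is illustrative
            .when()
            .post("/api/ai/chat")
            .then()
            .statusCode(HttpStatus.SC_BAD_REQUEST);
}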
2. Additional Assertions
For more thorough checks on JSON structure, you can use RestAssured's fluent approach to verify multiple response fields. Here's an example updated from the testChat method, checking top-level fields:
@Test
void shouldReturnChatResponseWithExpectedFields() {
given().contentType(ContentType.JSON)
.body(new AIChatRequest("Hello?"))
.when()
.post("/api/ai/chat")
.then()
.statusCode(HttpStatus.SC_OK)
.contentType(ContentType.JSON)
.body("answer", containsString("Hello!"))
// Additional field checks, if the response includes them
.body("timestamp", is(notNullValue()))
.body("userId", is(notNullValue()));
}
3. Edge / Corner Cases
In embeddingClientConversion, if you want to ensure the returned values are valid embeddings (for instance, checking if they are finite numbers), you can add these checks:
@Test
void shouldReturnValidEmbeddingArray() {
String response = given().contentType(ContentType.JSON)
.body(new AIChatRequest("Corner case for embedding."))
.when()
.post("/api/ai/embedding-client-conversion")
.then()
.statusCode(HttpStatus.SC_OK)
.contentType(ContentType.JSON)
.extract()
.jsonPath()
.get("answer");
double[] embeddings = Arrays.stream(response.replaceAll("[\\[\\]]", "").split(","))
.mapToDouble(Double::parseDouble)
.toArray();
// Check dimensions
assertThat(embeddings.length).isEqualTo(1536).as("Dimensions for openai model is 1536");
// Check each value is finite
for (double value : embeddings) {
assertThat(Double.isFinite(value)).isTrue();
}
}
4. Descriptive Test Names
Renaming existing tests helps clarify their expected behavior. For example, instead of sentimentAnalyzer, you could rename it to shouldAnalyzeSentimentAsSarcastic(). Below is an illustrative renaming (you can rename others too):
@Test
void shouldAnalyzeSentimentAsSarcastic() {
given().contentType(ContentType.JSON)
.body(new AIChatRequest("Why did the Python programmer go broke? Because he couldn't C#"))
.when()
.post("/api/ai/sentiment/analyze")
.then()
.statusCode(HttpStatus.SC_OK)
.contentType(ContentType.JSON)
.body("answer", is("SARCASTIC"));
}
5. Parameterization
If you plan to test multiple input prompts for chatWithPrompt, you can use JUnit parameterized tests. Below is a minimal example of how you might set this up:
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;
import java.util.stream.Stream;
static Stream<String> chatPrompts() {
return Stream.of(
"java",
"spring framework",
"some other interesting prompt"
);
}
@ParameterizedTest
@MethodSource("chatPrompts")
void shouldChatWithMultiplePrompts(String prompt) {
given().contentType(ContentType.JSON)
.body(new AIChatRequest(prompt))
.when()
.post("/api/ai/chat-with-prompt")
.then()
.statusCode(HttpStatus.SC_OK)
.contentType(ContentType.JSON)
.body("answer", containsStringIgnoringCase(prompt));
}
Notice how the containsStringIgnoringCase check ensures the response includes your provided prompt in some manner.
6. Maintainability
If your test class grows in complexity, you can reduce duplication of common setup steps by extracting them into utility methods or a separate class. For example, if you find yourself constructing AIChatRequest with the same defaults repeatedly, you might do something like:
private AIChatRequest defaultChatRequest(String message) {
return new AIChatRequest(message);
}
@Test
void shouldTestChatWithDefaults() {
given().contentType(ContentType.JSON)
.body(defaultChatRequest("Hello?"))
.when()
.post("/api/ai/chat")
.then()
.statusCode(HttpStatus.SC_OK)
.contentType(ContentType.JSON)
.body("answer", containsString("Hello!"));
}
I hope these examples help you integrate the suggestions effectively. Let me know if you have any questions, or if there are other specific adjustments you’d like!
Actionable comments posted: 0
🧹 Nitpick comments (1)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (1)
83-109: Consider enhancing test coverage with error cases and parameterization.
While the happy path is well tested, consider adding:
- Error case tests (e.g., empty input, very long input)
- Parameterized tests with different input texts
- A more descriptive test name (e.g., shouldGenerateValidEmbeddingsWithinExpectedRange)

Example parameterized test:
@ParameterizedTest
@ValueSource(strings = {
    "This is a test sentence.",
    "Another different sentence.",
    "A third unique test case."
})
void shouldGenerateValidEmbeddingsWithinExpectedRange(String input) {
    String response = given().contentType(ContentType.JSON)
            .body(new AIChatRequest(input))
            .when()
            .post("/api/ai/embedding-client-conversion")
            .then()
            .statusCode(HttpStatus.SC_OK)
            .contentType(ContentType.JSON)
            .extract()
            .jsonPath()
            .get("answer");

    assertThat(response).isNotNull().startsWith("[").endsWith("]");

    double[] doubles = Arrays.stream(response.replaceAll("[\\[\\]]", "").split(","))
            .mapToDouble(Double::parseDouble)
            .toArray();

    assertThat(doubles.length)
            .isEqualTo(OPENAI_EMBEDDING_DIMENSION)
            .as("Dimensions for openai model is %d", OPENAI_EMBEDDING_DIMENSION);

    assertThat(Arrays.stream(doubles).allMatch(value -> value >= -1.0 && value <= 1.0))
            .isTrue()
            .as("All embedding values should be between -1.0 and 1.0");
}

@Test
void shouldHandleErrorCasesGracefully() {
    given().contentType(ContentType.JSON)
            .body(new AIChatRequest(""))
            .when()
            .post("/api/ai/embedding-client-conversion")
            .then()
            .statusCode(HttpStatus.SC_BAD_REQUEST);
}
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (4 hunks)
🔇 Additional comments (2)
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (2)
4-4: LGTM! Well-structured imports and constant declaration.
The imports are appropriate for the new test method, and the constant OPENAI_EMBEDDING_DIMENSION is well-named and correctly set to OpenAI's embedding dimension of 1536.

Also applies to: 13-13, 25-26
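For reference, the declarations being reviewed presumably look along these lines (a sketch reconstructed from the review comments; the class skeleton name and exact formatting are assumptions, not quoted from the PR):

import java.util.Arrays;                                   // the import at line 13 referenced above

import static org.assertj.core.api.Assertions.assertThat;  // the import at line 4 referenced above

class ChatControllerTestSkeleton { // hypothetical name, for illustration only
    // Assumed declaration behind lines 25-26: OpenAI's embedding model returns 1536-dimensional vectors.
    private static final int OPENAI_EMBEDDING_DIMENSION = 1536;
}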
83-109: LGTM! Well-implemented test with comprehensive validations.
The test effectively validates:
- Response status and content type
- JSON structure
- Embedding dimensions
- Value ranges within [-1.0, 1.0]
The implementation correctly uses Arrays.stream().allMatch() instead of allSatisfy(), as discussed in previous feedback.