feat: upgrade to spring boot 3.4.1 #133
Conversation
Warning: Rate limit exceeded. @rajadilipkolli has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 14 minutes and 49 seconds before requesting another review. CodeRabbit enforces hourly rate limits for each developer per organization; reviews resume after a brief timeout.
Walkthrough
This pull request introduces significant modifications to the chat model implementation in a Spring AI application. The changes primarily focus on updating the chat service and controller, modifying the OpenAI model configuration, and adjusting test cases. The main updates include switching to a new OpenAI model (gpt-4o-mini), commenting out certain chat-related methods, and simplifying the dependency injection in the chat service.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant ChatController
    participant ChatService
    participant ChatClient
    Client->>ChatController: Send chat request
    ChatController->>ChatService: Process chat request
    ChatService->>ChatClient: Generate prompt
    ChatClient-->>ChatService: Return chat response
    ChatService-->>ChatController: Return AI chat response
    ChatController-->>Client: Send response
```
Actionable comments posted: 0
🧹 Nitpick comments (6)
chatmodel-springai/src/main/java/com/example/ai/controller/ChatController.java (2)
Lines 44-47: Commenting out the embeddings endpoint
The embedding endpoint code is commented out rather than removed. If embeddings are no longer required, consider removing this code for clarity and maintainability. Otherwise, keep it documented (e.g., through Javadoc or a README) so future developers know how to re-enable it.
Lines 54-57: Commenting out the RAG endpoint
This portion of code is also commented out. If this endpoint is only temporarily disabled, consider adding more descriptive TODO comments indicating the rationale and next steps for reactivation or permanent removal.
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (1)
Line 93: Temporarily disabling ragWithSimpleStore
As with the controller's RAG endpoint, if this test is no longer necessary, consider removing it entirely. Otherwise, ensure a clear plan is documented for re-enablement.
chatmodel-springai/src/main/java/com/example/ai/service/ChatService.java (3)
Lines 74-77: Embeddings code commented out
Similar to the controller, removing rather than commenting is often clearer in the long run unless you have a near-future plan to reintroduce embeddings.
Lines 94-130: RAG code commented out
Similar recommendation to the embeddings code: remove it if it's no longer needed, or add clarifying comments if you plan to bring it back.
Lines 133-134: Streaming chat method commented out
Aligns with the changes in ChatController. The same best practices apply: either remove the code or formalize a plan to bring it back.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
chatmodel-springai/pom.xml is excluded by !**/*.xml
📒 Files selected for processing (4)
- chatmodel-springai/src/main/java/com/example/ai/controller/ChatController.java (1 hunks)
- chatmodel-springai/src/main/java/com/example/ai/service/ChatService.java (3 hunks)
- chatmodel-springai/src/main/resources/application.properties (1 hunks)
- chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (2 hunks)
🔇 Additional comments (14)
chatmodel-springai/src/main/java/com/example/ai/controller/ChatController.java (1)
Lines 59-63: Stream chat method commented out
Disabling the stream chat functionality may affect real-time or partial-response capabilities. If you plan to re-introduce streaming soon, consider using feature toggles instead of commenting out the code to prevent confusion among team members.
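To make the feature-toggle suggestion concrete: instead of commenting out the streaming endpoint, the capability could be gated behind a property. The sketch below is hypothetical and not from this PR (the class name and the `chat.streaming.enabled` property are made up); in a real Spring app the same idea is usually expressed with `@ConditionalOnProperty` on the endpoint bean.

```java
public class FeatureToggles {
    // Hypothetical toggle: the property name "chat.streaming.enabled"
    // is illustrative, not taken from the PR under review.
    public static boolean streamingEnabled() {
        return Boolean.parseBoolean(System.getProperty("chat.streaming.enabled", "false"));
    }

    public static void main(String[] args) {
        // Disabled by default; flip the property to re-enable streaming
        System.out.println(streamingEnabled()); // prints false
        System.setProperty("chat.streaming.enabled", "true");
        System.out.println(streamingEnabled()); // prints true
    }
}
```

Compared with commented-out code, a toggle keeps the implementation compiling and under test while making the on/off decision explicit and reversible in configuration.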
chatmodel-springai/src/test/java/com/example/ai/controller/ChatControllerTest.java (2)
Line 14: Importing @Disabled for tests
The @Disabled annotation is introduced. Ensure that any test coverage metrics are updated accordingly, as disabled tests can lower test completeness.
Line 89: Reduced expected movie list size
Changing the minimum size from 25 to 13 is less strict. Confirm that this updated requirement accurately reflects new or intended behavior.
chatmodel-springai/src/main/java/com/example/ai/service/ChatService.java (7)
Lines 9-17: Refinement of imports
Imports are updated to reflect the new org.springframework.ai.chat package structure. This looks appropriate for the new architecture.
Lines 38-39: Constructor now receives a ChatClient.Builder
Using a builder pattern simplifies dependency injection and fosters more flexible configuration. This is a neat enhancement.
Line 43: Single-line chat logic
The direct .prompt(...).call().content() usage is clean, but ensure that any required error handling or fallback logic is handled at a higher level if needed.
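On the error-handling point, a caller-level fallback around the single-line chat call might look like the sketch below. The `withFallback` helper is hypothetical; its `Supplier` argument merely stands in for `chatClient.prompt(query).call().content()`, which can throw or return null.

```java
import java.util.function.Supplier;

public class ChatFallback {
    // Hypothetical helper: `chatCall` stands in for
    // chatClient.prompt(query).call().content(), which can throw or return null.
    static String withFallback(Supplier<String> chatCall, String fallback) {
        try {
            String content = chatCall.get();
            return content != null ? content : fallback;
        } catch (RuntimeException e) {
            // e.g. a network failure or a 4xx/5xx from the model API
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(withFallback(() -> "Hello!", "Sorry, please retry"));
        System.out.println(withFallback(
                () -> { throw new RuntimeException("API down"); },
                "Sorry, please retry"));
    }
}
```

Keeping the fallback at the caller keeps the one-liner in ChatService clean while still giving the controller a deterministic answer when the model is unavailable.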
Lines 50-52: Refactored to use chatResponse()
This improves clarity by accessing the higher-level chatResponse() method. Ensure it is tested thoroughly for edge cases, particularly when ChatResponse might be null.
Line 60: System prompt approach
Specifying the system's personality via SystemMessage is a nice pattern. Verify that the system message behavior is consistent with team expectations, especially for prompting large language models.
Line 68: Analyzing sentiment
Ensure that the output format strictly adheres to the expected specification (i.e., [POSITIVE, NEGATIVE, SARCASTIC]). Any model hallucination or escape from these boundaries should be handled if business logic demands.
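One way to enforce the [POSITIVE, NEGATIVE, SARCASTIC] boundary is to post-process the model's free-form reply against an enum, rejecting anything outside the allowed set. This is a hypothetical sketch (class, enum, and method names are made up, not from the PR):

```java
import java.util.Locale;

public class SentimentGuard {
    enum Sentiment { POSITIVE, NEGATIVE, SARCASTIC }

    // Hypothetical post-processing step: clamp a free-form model reply
    // to one of the allowed labels, throwing when the model strays.
    static Sentiment parse(String raw) {
        String cleaned = raw.strip().toUpperCase(Locale.ROOT).replaceAll("[^A-Z]", "");
        for (Sentiment s : Sentiment.values()) {
            if (cleaned.contains(s.name())) return s;
        }
        throw new IllegalArgumentException("Unexpected sentiment: " + raw);
    }

    public static void main(String[] args) {
        System.out.println(parse("  positive! ")); // prints POSITIVE
    }
}
```

Whether to throw, retry the model call, or fall back to a neutral label on an out-of-bounds reply is a business-logic decision, which is exactly the point of the comment above.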
Lines 80-91: Adopting BeanOutputConverter
A neat approach for converting the response into a domain object. This can significantly reduce parsing complexity. Just be mindful of any potential JSON structure mismatches.
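To the point about JSON structure mismatches: structured-output conversion fails when the reply drifts from the requested schema, so parsing should be treated as fallible. The toy parser below only illustrates that failure mode; the `Movie` record and its fields are invented for this sketch, and real code should rely on BeanOutputConverter (or Jackson), not regex.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OutputConverterSketch {
    record Movie(String title, int year) {}

    // Hypothetical stand-in for schema-bound output conversion:
    // extract fields from a constrained JSON reply, failing loudly
    // when the structure does not match expectations.
    static Movie parse(String json) {
        Matcher t = Pattern.compile("\"title\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        Matcher y = Pattern.compile("\"year\"\\s*:\\s*(\\d+)").matcher(json);
        if (!t.find() || !y.find()) {
            throw new IllegalArgumentException("Malformed model output: " + json);
        }
        return new Movie(t.group(1), Integer.parseInt(y.group(1)));
    }

    public static void main(String[] args) {
        System.out.println(parse("{\"title\":\"Inception\",\"year\":2010}"));
        // prints Movie[title=Inception, year=2010]
    }
}
```

The takeaway is the exception path: whatever the converter, a mismatch between the model's JSON and the target bean should surface as a handled error rather than a silent null.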
chatmodel-springai/src/main/resources/application.properties (4)
Lines 11-12: Verify embedding configuration compatibility
The embedding configuration looks correct, as text-embedding-3-small is a valid OpenAI model. However, we should verify that this model is supported by the current Spring AI version.
Let's check the compatibility:
✅ Verification successful
The embedding model configuration is compatible and widely used in the project. The text-embedding-3-small model is used consistently across multiple modules in the project with Spring AI 1.0.0-M5:
- chatmodel-springai
- embeddingstores/pgvector-springai
- embeddingstores/neo4j-springai
- rag/rag-springai-openai-llm
This configuration is part of working implementations, particularly in RAG (Retrieval-Augmented Generation) and various embedding store modules, indicating it's fully compatible with the current Spring AI version.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Verify Spring AI version compatibility with the embedding model
# Test: Check Spring AI documentation and dependencies
fd "pom.xml" --exec grep -A 5 "spring-ai"

# Check if there are any existing embedding implementations
ast-grep --pattern 'class $_ {
  $$$
  @Autowired
  private EmbeddingClient $_
  $$$
}'
```
Length of output: 6311
Script:

```shell
#!/bin/bash
# Let's check the Spring AI OpenAI configuration properties
rg -A 5 "text-embedding-3" .

# Check for any OpenAI embedding related configuration or classes
rg -l "OpenAiEmbeddingClient|OpenAiEmbeddingProperties"

# Check the Spring AI version's changelog or documentation
fd "CHANGELOG.md|README.md" --exec grep -A 5 -i "embedding"
```
Length of output: 5674
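For reference, a minimal sketch of the embedding configuration under discussion might look like the fragment below. The property name follows Spring AI's OpenAI starter conventions; verify it against the milestone version in use, as property names have shifted between milestones.

```properties
# Embedding model for the Spring AI OpenAI starter
# (text-embedding-3-small per lines 11-12 of the reviewed file)
spring.ai.openai.embedding.options.model=text-embedding-3-small
```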
Line 7: Verify the OpenAI model name
The model name gpt-4o-mini is newer than long-standing names such as gpt-4 and gpt-3.5-turbo; confirm that it is available to the configured account and recognized by the Spring AI version in use, since an unknown model name causes runtime errors when making API calls.
Let's verify the available models in the OpenAI documentation:
Lines 4-5: Verify OpenAI API configuration
The API key is set to 'demo' and uses a non-standard base URL. This appears to be a development/testing configuration. Ensure this is intentional and not accidentally committed.
Let's check for any production configuration files:
✅ Verification successful
Demo configuration confirmed as intentional
Based on the configuration files analysis, the 'demo' API key and non-standard base URL (http://langchain4j.dev/demo/openai) are consistently used across multiple application properties files in this repository. This appears to be an intentional development/demo setup rather than a misconfiguration, as:
- The same demo configuration is used consistently across different Spring AI examples (neo4j-springai, rag-springai-openai-llm, chatmodel-springai)
- The base URL points to a development/demo endpoint
- Other configurations also use development values (like test databases, demo initialization flags)
- There are comments indicating demo purposes (e.g., "only for demo purpose")
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check for production configuration files
# Test: Look for different environment-specific properties files
fd "application-(prod|production).properties|application-(prod|production).yml"
```
Length of output: 81
Script:

```shell
#!/bin/bash
# Let's check for all application configuration files and their content
fd "application.*\.(properties|yml|yaml)" --exec cat -A {}
```
Length of output: 5038
Script:

```shell
#!/bin/bash
# Let's also check if there are any other configuration files that might contain API keys
rg -i "api[._-]?key" --type properties --type yaml
```
Length of output: 87
Script:

```shell
#!/bin/bash
# Check for environment-related configuration in other common locations
fd "\.env|config\.(properties|yml|yaml)" --exec cat -A {}
```
Length of output: 61
Script:

```shell
#!/bin/bash
# Check README or documentation for configuration instructions
fd "README\.md" --exec grep -i -A 5 -B 5 "configuration|api.?key|openai"
```
Length of output: 74
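If a real key ever needs to coexist with this demo setup, one conventional pattern is to resolve credentials from the environment with the demo values as fallback. A hedged sketch (property names from Spring AI's OpenAI starter; the environment variable names are illustrative):

```properties
# Falls back to the demo endpoint when no real credentials are exported
spring.ai.openai.api-key=${OPENAI_API_KEY:demo}
spring.ai.openai.base-url=${OPENAI_BASE_URL:http://langchain4j.dev/demo/openai}
```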
Line 9: Review the impact of commenting out responseFormat
The response format configuration has been commented out. This change might affect JSON response handling in the application, potentially breaking any code that expects structured JSON responses.
Let's check for JSON response handling in the codebase:
✅ Verification successful
Let me gather more specific information about the Spring AI configuration and usage.
Let me gather more specific information about JSON response handling in the application.
Commenting out the responseFormat property is safe
The commented-out responseFormat property in application.properties does not impact the application's functionality because:
- The codebase primarily uses Spring AI's built-in response handling mechanisms through ChatResponse, AIChatResponse, and other model classes that handle both JSON and non-JSON responses automatically.
- The application has multiple response handling patterns:
  - Direct string content extraction via chatClient.prompt(query).call().content()
  - Bean conversion using BeanOutputConverter for specific cases
  - Custom response records like AIChatResponse that work independently of the response format
- Other modules in the project (like rag-springai-openai-llm) explicitly set their own response format configurations when needed.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check for JSON response handling code that might be affected
# Test: Look for JSON response handling in controllers and services
ast-grep --pattern 'class $_ {
  $$$
  @$_
  $_ $_($$_) {
    $$$
    ObjectMapper$$$
    $$$
  }
}'

rg -A 5 "ResponseFormat|ObjectMapper|JsonNode"
```
Length of output: 6750
Script:

```shell
#!/bin/bash
# Check for Spring AI chat-related configurations and implementations
rg -l "spring.ai" --type properties

# Look for Spring AI chat implementations
rg -A 5 "ChatClient|ChatResponse|AiResponse|@Bean.*ChatClient"

# Check the specific file content
cat chatmodel-springai/src/main/resources/application.properties
```
Length of output: 50289
Script:

```shell
#!/bin/bash
# Check for JSON response format usage and bean conversion
rg -A 5 "BeanOutputConverter|@JsonFormat|@JsonProperty|ObjectMapper|responseFormat"

# Check for specific chat response handling in the codebase
ast-grep --pattern 'class $_ {
  $$$
  $_ $_($_, $_) {
    $$$
    .responseFormat($$$)
    $$$
  }
}'
```
Length of output: 11367
No description provided.