update openai server #3346

base: master
Conversation
@LuoPengcheng12138 thanks for this PR! Is it an enhancement PR based on https://github.com/camel-ai/camel/pull/2311/files?
Yes, it was copied from PR #2311. When I tried merging …
I see. It's been a while since #2311 was last updated; I'll help merge main into #2311.
Wendong-Fan left a comment:
thanks @LuoPengcheng12138 , left some initial comments below
camel/agents/chat_agent.py (Outdated)

```python
elif msg_role == "tool":
    # Handle tool response messages if needed
    pass
```
should we also support taking system message to align with OpenAI interface?
seems the content in this file is not used in this PR
camel/agents/chat_agent.py (Outdated)

```python
# Type guard to ensure tools_to_use is not None
if tools_to_use is not None:
    for tool in tools_to_use:
        self.add_external_tool(tool)
```
seems this would accumulate the tools added to the agent; should we clean them up after each request ends?
thanks @LuoPengcheng12138, left some comments below and added one enhancement PR: #3380. Feel free to review and leave your comments.
camel/agents/chat_agent.py (Outdated)

```python
# Define Pydantic models for request/response
class ChatMessage(BaseModel):
```
seems defined but not used
camel/agents/chat_agent.py (Outdated)

```python
if reset_memory:
    self.init_messages()


def reset_system_message(
```
we already have update_system_message, so it seems unnecessary to add this new function
deleted
camel/agents/chat_agent.py (Outdated)

```python
    content={"error": f"Internal server error: {e!s}"},
)


async def _stream_response(message: BaseMessage, request_data: dict):
```
The streaming implementation awaited the full response first, then split it into words and sent them one by one with artificial delays. This defeats the purpose of streaming.
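True streaming yields each chunk as the backend produces it, rather than buffering the full reply and re-splitting it. A minimal sketch of that shape, assuming an OpenAI-style SSE wire format (`fake_model_stream` and `sse_chunks` are hypothetical stand-ins, not the PR's actual code):

```python
import asyncio
import json
from typing import AsyncIterator


async def fake_model_stream() -> AsyncIterator[str]:
    # Stand-in for a real token stream from the backend model;
    # a real backend would await network I/O between tokens.
    for token in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0)
        yield token


async def sse_chunks(model: str = "camel-model") -> AsyncIterator[str]:
    # Wrap each token in an OpenAI-style chat.completion.chunk SSE event
    # and forward it immediately, instead of waiting for the full reply.
    async for token in fake_model_stream():
        payload = {
            "object": "chat.completion.chunk",
            "model": model,
            "choices": [{"delta": {"content": token}, "index": 0}],
        }
        yield f"data: {json.dumps(payload)}\n\n"
    yield "data: [DONE]\n\n"


async def main() -> list:
    return [chunk async for chunk in sse_chunks()]


if __name__ == "__main__":
    chunks = asyncio.run(main())
    print(len(chunks))  # 4 content chunks + the [DONE] sentinel = 5
```

In a FastAPI route this generator would be handed to a `StreamingResponse` with `media_type="text/event-stream"`, so latency to the first chunk is bounded by the model, not by the full completion.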
camel/agents/chat_agent.py (Outdated)

```python
messages = request_data.get("messages", [])
model = request_data.get("model", "camel-model")
stream = request_data.get("stream", False)
functions = request_data.get("functions")
tools = request_data.get("tools")
```
The agent's state persisted across different API requests, which would cause conversations from different clients to mix and lead to memory leaks. Would it be better to add self.reset() and self._external_tool_schemas.clear() at the start of each request?
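The suggestion above could be sketched roughly as follows. This is a hypothetical illustration, assuming a `reset()` method and an `_external_tool_schemas` dict as named in the comment; `StubAgent` and `handle_request` are stand-ins, not the real `ChatAgent`:

```python
class StubAgent:
    # Minimal stand-in for the agent's per-request state.
    def __init__(self) -> None:
        self.memory: list = []
        self._external_tool_schemas: dict = {}

    def reset(self) -> None:
        self.memory.clear()


def handle_request(agent: StubAgent, request_data: dict) -> int:
    # Start each request from a clean slate, per the review suggestion.
    agent.reset()
    agent._external_tool_schemas.clear()
    for tool in request_data.get("tools") or []:
        agent._external_tool_schemas[tool["name"]] = tool
    for msg in request_data.get("messages", []):
        agent.memory.append(msg)
    return len(agent.memory)
```

With this shape, two consecutive single-message requests each see a memory of length 1, instead of state from the first client accumulating into the second.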
camel/agents/chat_agent.py (Outdated)

```python
elif msg_role == "system":
    sys_msg = BaseMessage.make_system_message(
        role_name="System", content=msg_content
    )
    self.update_memory(sys_msg, OpenAIBackendRole.SYSTEM)
    self.reset_system_message(msg_content, True)
```
should we update system message without clearing memory to preserve conversation history?
Yes, during the design phase we considered that multiple GAIA2 tests might have different system_messages. I was concerned about potential interference between these system_messages, so the system_message is reset each time. It is not yet confirmed whether the ARE environment reuses a single agent instance during testing, and I will continue to investigate this. However, for the function serving as the OpenAI Server interface, it is indeed necessary to preserve the conversation history rather than clear memory. This may lead to functional contradictions.
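The "update without clearing" half of this trade-off can be illustrated on a plain OpenAI-style message list: replace only the `system` entry and keep the rest of the history. This is a hypothetical sketch (`swap_system_message` is not a function in the PR):

```python
def swap_system_message(messages: list, new_system: str) -> list:
    # Keep every non-system message, and put the new system
    # message at the front, where OpenAI-style APIs expect it.
    history = [m for m in messages if m.get("role") != "system"]
    return [{"role": "system", "content": new_system}, *history]


msgs = [
    {"role": "system", "content": "old instructions"},
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
]
updated = swap_system_message(msgs, "new instructions")
```

Per-test isolation (the GAIA2 concern) would then be handled by resetting the agent between tests, not by wiping memory on every system message.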
I see. My understanding of the chat interface is not thorough enough, and I will update the code!
This PR is still in draft status, and I will revise it later based on your comments.
```python
except Exception as e:
    return JSONResponse(
        status_code=500,
        content={"error": f"Internal server error: {e!s}"},
```
Check warning (Code scanning / CodeQL): Information exposure through an exception (Medium): stack trace information.

Copilot Autofix:
To fix this issue, the code should avoid including the actual string representation of the exception (`{e!s}`) in the response sent to the client. Instead, upon catching an exception, the error (including the exception message and ideally the stack trace) should be logged on the server for diagnostic purposes, and a generic error message (e.g., "An internal server error has occurred.") should be returned to the client. This keeps sensitive implementation details from being exposed to a potential attacker while still providing developers the information they need to debug the error. The required changes are as follows:

- In the exception handler in the `chat_completions` route, use `logger.error` to log the actual exception and stack trace.
- Change the frontend-facing response to a generic message, such as `{"error": "Internal server error"}`.
- Ensure the logger is available (already present as `logger` in the code) and used correctly for logging exceptions.
- No new imports are needed, as `logger` and FastAPI's `JSONResponse` are already imported.
```diff
@@ -5909,9 +5909,10 @@
             content={"error": "No user message provided"},
         )
     except Exception as e:
+        logger.error("Exception in /v1/chat/completions endpoint", exc_info=True)
         return JSONResponse(
             status_code=500,
-            content={"error": f"Internal server error: {e!s}"},
+            content={"error": "Internal server error"},
         )


 async def _stream_response(
```
Hi @Wendong-Fan, I think this PR is finished and needs your review again. Please feel free to let me know if you have any questions. (I think the failed check in GitHub Actions is not caused by this PR.)
Description
Feat #3190
Checklist

Go over all the following points, and put an `x` in all the boxes that apply.

- `Fixes #issue-number` in the PR description (required)
- `pyproject.toml` and `uv lock`

If you are unsure about any of these, don't hesitate to ask. We are here to help!