
Conversation

@LuoPengcheng12138
Collaborator

Description

Feat #3190

Checklist

Go over all the following points, and put an x in all the boxes that apply.

  • I have read the CONTRIBUTION guide (required)
  • I have linked this PR to an issue using the Development section on the right sidebar or by adding Fixes #issue-number in the PR description (required)
  • I have checked if any dependencies need to be added or updated in pyproject.toml and uv lock
  • I have updated the tests accordingly (required for a bug fix or a new feature)
  • I have updated the documentation if needed:
  • I have added examples if this is a new feature

If you are unsure about any of these, don't hesitate to ask. We are here to help!

@github-actions bot added the Review Required (PR needs to be reviewed) label on Oct 30, 2025
@coderabbitai
Contributor

coderabbitai bot commented Oct 30, 2025

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@fengju0213
Collaborator

@LuoPengcheng12138 Thanks for this PR! Is it an enhancement PR based on https://github.com/camel-ai/camel/pull/2311/files?

@LuoPengcheng12138
Collaborator Author

> @LuoPengcheng12138 Thanks for this PR! Is it an enhancement PR based on https://github.com/camel-ai/camel/pull/2311/files?

Yes, it was copied from PR #2311. When I tried merging main into 2311, the merge couldn't be completed, likely due to numerous conflicts with the main branch, so I created a new branch instead. Could you help merge main into 2311 if needed?

@fengju0213
Collaborator

> @LuoPengcheng12138 Thanks for this PR! Is it an enhancement PR based on https://github.com/camel-ai/camel/pull/2311/files?
>
> Yes, it was copied from PR #2311. When I tried merging main into 2311, the merge couldn't be completed, likely due to numerous conflicts with the main branch, so I created a new branch instead. Could you help merge main into 2311 if needed?

I see. It has probably been a while since 2311 was last updated. I will help merge main into 2311.

Member

@Wendong-Fan Wendong-Fan left a comment


Thanks @LuoPengcheng12138, I left some initial comments below.

Comment on lines 5287 to 5289
elif msg_role == "tool":
    # Handle tool response messages if needed
    pass
Member

Should we also support taking a system message, to align with the OpenAI interface?

Member

It seems the content in this file is not used in this PR.

# Type guard to ensure tools_to_use is not None
if tools_to_use is not None:
    for tool in tools_to_use:
        self.add_external_tool(tool)
Member

It seems this would accumulate the tools added to the agent across requests; should we clean them up after each request ends?
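The cleanup the comment asks for can be sketched with a try/finally around the request handler. This is a minimal illustration with a stand-in `Agent` class, not CAMEL's actual `ChatAgent` API:

```python
# Minimal sketch of per-request tool cleanup (hypothetical Agent stand-in,
# not CAMEL's actual ChatAgent): tools registered for one request are
# removed in a finally block so they cannot accumulate across requests.
class Agent:
    def __init__(self):
        self._external_tool_schemas = {}

    def add_external_tool(self, name):
        self._external_tool_schemas[name] = {"name": name}


def handle_request(agent, tools_to_use):
    try:
        if tools_to_use is not None:
            for tool in tools_to_use:
                agent.add_external_tool(tool)
        # Tools are visible for the duration of this request only.
        return len(agent._external_tool_schemas)
    finally:
        # Clean up so the next request starts from a blank slate.
        agent._external_tool_schemas.clear()


agent = Agent()
assert handle_request(agent, ["search"]) == 1
assert handle_request(agent, ["search"]) == 1  # no accumulation across requests
```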

@Wendong-Fan added the Waiting for Update (PR has been reviewed, needs to be updated based on review comments) label and removed the Review Required (PR needs to be reviewed) label on Nov 2, 2025
Member

@Wendong-Fan Wendong-Fan left a comment


Thanks @LuoPengcheng12138, I left some comments below and added one enhancement PR: #3380. Feel free to review and leave your comments.

Comment on lines 5258 to 5259
# Define Pydantic models for request/response
class ChatMessage(BaseModel):
Member

Seems this is defined but not used.

if reset_memory:
    self.init_messages()

def reset_system_message(
Member

We already have update_system_message; it seems unnecessary to add this new function.

Collaborator Author

deleted

    content={"error": f"Internal server error: {e!s}"},
)

async def _stream_response(message: BaseMessage, request_data: dict):
Member

The streaming implementation awaited the full response first, then split it into words and sent them one by one with artificial delays. This defeats the purpose of streaming.

Comment on lines 5278 to 5282
messages = request_data.get("messages", [])
model = request_data.get("model", "camel-model")
stream = request_data.get("stream", False)
functions = request_data.get("functions")
tools = request_data.get("tools")
Member

The agent's state persists across different API requests, which would cause conversations from different clients to mix, as well as memory leaks. Would it be better to add self.reset() and self._external_tool_schemas.clear() at the start of each request?
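The isolation problem described above can be sketched as follows. This uses a hypothetical stand-in `Agent` class, not CAMEL's actual `ChatAgent`, to show the suggested start-of-request reset:

```python
# Sketch of the suggested per-request reset (illustrative stand-in, not
# CAMEL's ChatAgent): a shared agent clears its memory and external tool
# schemas at the start of every request, so one client never sees
# another client's conversation state.
class Agent:
    def __init__(self):
        self.memory = []
        self._external_tool_schemas = {}

    def reset(self):
        self.memory.clear()


def chat_completions(agent, messages):
    # Start each request from a clean slate, as the review suggests.
    agent.reset()
    agent._external_tool_schemas.clear()
    agent.memory.extend(messages)
    return list(agent.memory)


shared = Agent()
first = chat_completions(shared, [{"role": "user", "content": "hi"}])
second = chat_completions(shared, [{"role": "user", "content": "bye"}])
assert first == [{"role": "user", "content": "hi"}]
assert second == [{"role": "user", "content": "bye"}]  # no leakage from the first request
```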

Comment on lines 5301 to 5306
elif msg_role == "system":
    sys_msg = BaseMessage.make_system_message(
        role_name="System", content=msg_content
    )
    self.update_memory(sys_msg, OpenAIBackendRole.SYSTEM)
    self.reset_system_message(msg_content, True)
Member

Should we update the system message without clearing memory, to preserve the conversation history?

Collaborator Author

Yes, during the design phase we considered that multiple GAIA2 tests might have different system messages. I was concerned about potential interference between these system messages, so the system message is reset each time. It is not yet confirmed whether the ARE environment reuses a single agent instance during testing, and I will continue to investigate this. However, for the function serving as the OpenAI Server interface, it is indeed necessary not to clear memory, so as to preserve the conversation history. This may lead to functional contradictions.

Collaborator Author

I see. My understanding of the chat interface was not thorough enough; I will update the code!

@LuoPengcheng12138
Collaborator Author

> Thanks @LuoPengcheng12138, I left some comments below and added one enhancement PR: #3380. Feel free to review and leave your comments.

This PR is still in draft status, and I will revise it later based on your comments.


except Exception as e:
    return JSONResponse(
        status_code=500,
        content={"error": f"Internal server error: {e!s}"},

Check warning

Code scanning / CodeQL

Information exposure through an exception (Medium)

Stack trace information flows to this location and may be exposed to an external user.

Copilot Autofix (AI, 4 days ago)

To fix this issue, the code should avoid including the string representation of the exception ({e!s}) in the response sent to the client. Instead, upon catching an exception, the error (including the exception message and, ideally, the stack trace) should be logged on the server for diagnostic purposes, and a generic error message (e.g., "An internal server error has occurred.") should be returned to the client. This keeps sensitive implementation details from being exposed to a potential attacker while still giving developers the information they need to debug the error. The required changes are as follows:

  • In the exception handler in the chat_completions route, use logger.error to log the actual exception and stack trace.
  • Change the client-facing response to a generic message, such as {"error": "Internal server error"}.
  • Ensure the logger (already present as logger in the code) is used correctly for logging exceptions.
  • No new imports are needed, as logger and FastAPI's JSONResponse are already imported.

Suggested changeset 1: camel/agents/chat_agent.py

Autofix patch. Run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/camel/agents/chat_agent.py b/camel/agents/chat_agent.py
--- a/camel/agents/chat_agent.py
+++ b/camel/agents/chat_agent.py
@@ -5909,9 +5909,10 @@
                         content={"error": "No user message provided"},
                     )
             except Exception as e:
+                logger.error("Exception in /v1/chat/completions endpoint", exc_info=True)
                 return JSONResponse(
                     status_code=500,
-                    content={"error": f"Internal server error: {e!s}"},
+                    content={"error": "Internal server error"},
                 )
 
         async def _stream_response(
EOF
@LuoPengcheng12138
Collaborator Author

Hi @Wendong-Fan, I think this PR is finished and needs your review again. Please feel free to let me know if you have any questions. (I think the failed check in GitHub Actions is not caused by this PR.)
