Replies: 1 comment
Hi @wugutech, yeah, you can definitely get this working! Since vLLM exposes an OpenAI-compatible API, you don't actually need a dedicated "vLLM" node to make them talk to each other.

The easiest way I've found is to use the HTTP Request node in n8n: point it at your vLLM server's endpoint (e.g. http://your-ip:8000/v1/chat/completions), set the method to POST, and send the same JSON body you would send to GPT-4.

Alternatively, if you're using the AI Agent nodes in n8n, you can often reuse the OpenAI node by swapping out the Base URL in the credential settings to point at your local vLLM instance instead of api.openai.com.

Hope that helps you get it connected!
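For reference, here's a rough sketch of the JSON body the HTTP Request node would send. The URL and model name are placeholders (vLLM serves whatever model you launched it with), so swap in your own values:

```python
import json

# Hypothetical values -- replace with your own server address and model id.
VLLM_URL = "http://your-ip:8000/v1/chat/completions"

# The same OpenAI-style chat completions body works here, because vLLM
# implements the OpenAI API. Paste this JSON into the HTTP Request node's
# body field (method: POST, header: Content-Type: application/json).
payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",  # assumed model id
    "messages": [
        {"role": "user", "content": "Hello from n8n!"}
    ],
    "temperature": 0.7,
}

body = json.dumps(payload)
print(body)
```

If your vLLM server was started with an API key (`--api-key`), also add an `Authorization: Bearer <key>` header in the node, same as you would for OpenAI.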
Any known integration with n8n?