[POST] /ai/chat

This endpoint sends a request to our Ollama server and returns an AI-generated response using one of the available models. Making a request and reading the response is straightforward; the sections below explain how.

Message Roles

Within the body of your request, you must provide an array of messages representing your prompts and the conversation so far. An example array is shown below:

[
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Why is the sky blue?" },
    { "role": "assistant", "content": "That’s a great question! The sky appears blue due to the scattering of sunlight by the Earth's atmosphere..." } 
]

role can only be one of three values (modelled in the sketch after this list):

  • system: This role is used to define the AI's personality and traits, providing background context that guides its responses.

  • user: This role represents the input or prompt provided by the user.

  • assistant: This role contains the output generated by the AI in response to the user's prompt.
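If you are assembling the messages array in code, its shape is simple to model. The sketch below does so in TypeScript; the ChatRole and ChatMessage names are purely illustrative and are not part of the API.

// Illustrative TypeScript model of a message in the request body.
type ChatRole = "system" | "user" | "assistant";

interface ChatMessage {
  role: ChatRole;
  content: string;
}

// A typical conversation: one system message to set the AI's behaviour,
// followed by the user's prompt (and any earlier assistant replies).
const messages: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Why is the sky blue?" }
];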

Making the API Call

POST /ai/chat

Headers

Body

Response

TO COMPLETE
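As a starting point, here is a minimal sketch of a chat request using fetch in TypeScript. The base URL, the Content-Type header, and the model field are assumptions for illustration; refer to the Headers and Body reference above for the exact required headers and fields.

// Minimal sketch of calling POST /ai/chat.
// BASE_URL and the "model" field are assumptions, not confirmed by this reference.
const BASE_URL = "https://example.com"; // placeholder base URL

async function chat(): Promise<void> {
  const response = await fetch(`${BASE_URL}/ai/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" }, // assumed header
    body: JSON.stringify({
      model: "llama3", // assumed field: one of the available models
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Why is the sky blue?" }
      ]
    })
  });

  // Log the generated reply; see the Response section for the exact schema.
  const data = await response.json();
  console.log(data);
}

chat();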
