[POST] /ai/chat
This endpoint sends a request to our Ollama server and returns an AI-generated response using one of the available models. See below for how to make a request and read the response.
Within the body of your request, you are required to provide an array of `messages` containing your prompts. Each message specifies a `role`, which can only be one of three values:

- `system`: defines the AI's personality and traits, providing background context that guides its responses.
- `user`: the input or prompt provided by the user.
- `assistant`: the output generated by the AI in response to the user's prompt.
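As an illustration, a `messages` array exercising all three roles might look like the following sketch. The message contents are placeholders, and the `content` field name is an assumption based on common chat-API conventions:

```python
import json

# Hypothetical messages array; each object pairs a "role" with a
# "content" string (the field name "content" is an assumption).
messages = [
    {"role": "system", "content": "You are a concise, friendly assistant."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]

# Serialize the array as it would appear in the request body.
print(json.dumps(messages, indent=2))
```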
POST
/ai/chat
Headers

Name | Value |
---|---|
Content-Type | application/json |
Authorization | Your API token |

Body

Name | Type | Required? | Description |
---|---|---|---|
model | string | YES | ID of the model |
format | string | NO | Can only be JSON |
messages | array | YES | Array of message objects |

Response
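Putting the headers and body parameters together, a request can be sketched as below. The base URL, model ID, and token are placeholder assumptions; substitute your own values:

```python
import json
import urllib.request

# Hypothetical base URL and token; replace with your own.
BASE_URL = "https://api.example.com"
API_TOKEN = "YOUR_API_TOKEN"

payload = {
    "model": "llama3",   # ID of the model (example ID, an assumption)
    "format": "JSON",    # optional; can only be JSON
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

# Build the POST request with the required headers.
req = urllib.request.Request(
    url=BASE_URL + "/ai/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": API_TOKEN,
    },
    method="POST",
)

# Send with: response = urllib.request.urlopen(req)
```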