HENTAICORD API
Chat with our unfiltered LLM

Integrate your server with our AI to provide unfiltered responses (trained for erotic RP).

This endpoint lets you send a request to our LLM and receive a generated response in a timely manner. Making a request and getting a quick response is straightforward; see below for how to do it.

Our LLM is trained for erotic RP. 🔞

Message Roles

Within the body of your request, you must provide an array of messages containing your prompts. An example is shown below:

[
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Why is the sky blue?" },
    { "role": "assistant", "content": "That’s a great question! The sky appears blue due to the scattering of sunlight by the Earth's atmosphere..." } 
]

role can only be one of three values:

  • system: This role is used to define the AI's personality and traits, providing background context that guides its responses.

  • user: This role represents the input or prompt provided by the user.

  • assistant: This role contains the output generated by the AI in response to the user's prompt.
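As a sketch, the role constraint above can be checked client-side before a request is sent. The helper below is hypothetical and not part of the API:

```python
# Client-side validation for the messages array.
# validate_messages is a hypothetical helper, not an API feature.
VALID_ROLES = {"system", "user", "assistant"}

def validate_messages(messages):
    """Raise ValueError unless every message has a valid role and string content."""
    for i, msg in enumerate(messages):
        if msg.get("role") not in VALID_ROLES:
            raise ValueError(f"message {i}: invalid role {msg.get('role')!r}")
        if not isinstance(msg.get("content"), str):
            raise ValueError(f"message {i}: content must be a string")
    return messages
```

Validating locally surfaces malformed messages before they cost you a rate-limited API call.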

Making the API call

POST /ai/chat

Headers

  Name            Value
  Content-Type    application/json
  Authorization   Your API token

Body

  • messages (Array, required): Array of message objects, each with a role and content (see Message Roles above).

  • format (String, optional): Can only be JSON.

  • mirostat (Integer, optional): Enables Mirostat, an adaptive sampling method for maintaining coherence (0 = disabled, 1 = enabled, 2 = stronger) (default: 0).

  • mirostat_eta (Float, optional): Learning rate for Mirostat; controls how quickly it adapts (default: 0.1).

  • mirostat_tau (Float, optional): Entropy target for Mirostat; lower values make responses more predictable (default: 5.0).

  • num_ctx (Integer, optional): Context window size; how many past tokens the model considers (default: 2048).

  • repeat_last_n (Integer, optional): Number of recent tokens to track for repetition avoidance (default: 64).

  • repeat_penalty (Float, optional): Penalizes repeated words; values > 1 discourage repetition (default: 1.1).

  • temperature (Float, optional): Controls randomness; lower values make responses more deterministic, higher values make them more creative (default: 0.8).

  • seed (Integer, optional): Sets a fixed seed for reproducible results (0 = random).

  • stop (Array, optional): Defines stop sequences; the model stops generating when it encounters any of them.

  • tfs_z (Float, optional): Tail-free sampling; controls diversity by reducing the probability of low-ranked tokens (default: 1).

  • num_predict (Integer, optional): Maximum number of tokens to generate in a response (default: 128).

  • top_k (Integer, optional): Limits sampling to the top K most probable tokens; lower values make responses more predictable (default: 40).

  • top_p (Float, optional): Nucleus sampling; only considers tokens within the top P cumulative probability mass (default: 0.9).
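Putting the headers and body together, here is a minimal sketch of assembling the call. The base URL and the build_chat_request helper are assumptions for illustration, not part of the documented API:

```python
import json

# Hypothetical base URL; replace with the actual API host.
BASE_URL = "https://api.example.com"

def build_chat_request(token, messages, **options):
    """Assemble the URL, headers, and JSON body for POST /ai/chat.

    `options` may carry any optional body field from the table above,
    e.g. temperature=0.8 or top_k=40.
    """
    headers = {
        "Content-Type": "application/json",
        "Authorization": token,  # your API token, sent as-is
    }
    body = {"messages": messages, **options}
    return BASE_URL + "/ai/chat", headers, json.dumps(body)

# Sending it is then one call with any HTTP client, for example:
#   import requests
#   url, headers, body = build_chat_request(token, msgs, temperature=0.8)
#   resp = requests.post(url, headers=headers, data=body)
```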

Response

TO COMPLETE

Error example:
{
  "error": "Invalid request"
}
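A response body can then be checked for the documented error shape. The parse_chat_response helper is hypothetical, and the success shape is an assumption until this section is completed:

```python
import json

def parse_chat_response(body_text):
    """Decode an /ai/chat response body; raise on the documented error shape."""
    data = json.loads(body_text)
    if isinstance(data, dict) and "error" in data:
        raise RuntimeError(f"API error: {data['error']}")
    return data
```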

Last updated 3 months ago
