POST /gemini/chat/completions
cURL
curl --request POST \
  --url https://api.ttapi.io/gemini/chat/completions \
  --header 'Content-Type: application/json' \
  --header 'TT-API-KEY: <api-key>' \
  --data '
{
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "model": "<string>",
  "stream": false
}
'
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Hello! Nice to chat with you.\n\nHow can I help you? Or what would you like to talk about? 😊",
        "role": "assistant"
      }
    }
  ],
  "created": 1757472417,
  "id": "oebAaMTvMfCUjMcP_prYwAs",
  "model": "gemini-2.5-pro",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 21,
    "prompt_tokens": 3,
    "total_tokens": 1466
  }
}
The TTAPI proxy for the Gemini Chat API authenticates with the TT-API-KEY request header; all other request and response parameters remain consistent with the official API. For details, refer to the official Google documentation.
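The cURL example above can be assembled in any HTTP client. A minimal Python sketch follows; only the URL and the TT-API-KEY header come from this page, while the helper name `build_chat_request` is ours, and the model string is taken from the example response:

```python
import json

API_URL = "https://api.ttapi.io/gemini/chat/completions"

def build_chat_request(api_key, messages, model, stream=False):
    """Assemble the URL, headers, and JSON body for a chat completion call.

    The endpoint and TT-API-KEY header mirror the cURL example on this page;
    pick a model from the Gemini Supported Models list.
    """
    headers = {
        "Content-Type": "application/json",
        "TT-API-KEY": api_key,
    }
    body = {"messages": messages, "model": model, "stream": stream}
    return API_URL, headers, body

# Sending the request is then one call with any HTTP client, e.g. requests:
#   resp = requests.post(url, headers=headers, json=body)

url, headers, body = build_chat_request(
    "your-key", [{"role": "user", "content": "Hello!"}], "gemini-2.5-pro"
)
print(json.dumps(body))
```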

Authorizations

TT-API-KEY
string
header
required

You can obtain your API key from the TTAPI Dashboard.

Body

application/json
messages
object[]
required
Example:
[{ "role": "user", "content": "Hello!" }]
model
string
required

Models supported by TTAPI; see Gemini Supported Models.

stream
boolean
default:false

Whether to use server-sent events for progressive response transmission
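When `stream` is `true`, the response arrives as server-sent events. The exact chunk shape is not documented on this page, so the sketch below assumes OpenAI-style chunks (`data: {...}` lines carrying `choices[0].delta.content`, terminated by `data: [DONE]`); verify the actual stream format against the official documentation before relying on it:

```python
import json

def extract_stream_content(raw_lines):
    """Collect content deltas from server-sent-event lines.

    Assumes OpenAI-style streaming chunks; this is an illustration of SSE
    handling, not a confirmed description of the TTAPI stream format.
    """
    parts = []
    for line in raw_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments, blank keep-alive lines, etc.
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    'data: [DONE]',
]
print(extract_stream_content(sample))  # → Hello!
```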

Response

Successful response (stream=false returns JSON, stream=true returns Gemini stream)

choices
object[]

List of response results

created
integer

Response creation timestamp (seconds)

Example:

1757472417

id
string

Unique request identifier

Example:

"oebAaMTvMfCUjMcP_prYwAs"

model
string

Gemini model version used

Example:

"gemini-2.5-pro"

object
string

Response object type

Example:

"chat.completion"

usage
object

Token usage statistics
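Note that in the example response above, `total_tokens` (1466) is larger than `prompt_tokens + completion_tokens` (24), presumably because internal (e.g. reasoning) tokens are counted in the total. A short sketch of reading the usage block, using a trimmed copy of the example response:

```python
import json

# The example response body from above, trimmed to the usage-relevant parts.
response_json = """
{
  "choices": [{"finish_reason": "stop", "index": 0,
               "message": {"role": "assistant", "content": "Hello!"}}],
  "model": "gemini-2.5-pro",
  "object": "chat.completion",
  "usage": {"completion_tokens": 21, "prompt_tokens": 3, "total_tokens": 1466}
}
"""

resp = json.loads(response_json)
usage = resp["usage"]
# Tokens not accounted for by the prompt or the visible completion:
unaccounted = usage["total_tokens"] - usage["prompt_tokens"] - usage["completion_tokens"]
print(usage["total_tokens"], unaccounted)  # → 1466 1442
```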

Last modified on March 16, 2026