Quick Start

Get up and running with the Apertis API in under 5 minutes. This guide walks you through making your first API call.

Prerequisites

  • An Apertis account (Sign up here)
  • An API key (Get your key)
  • Basic knowledge of HTTP requests or a programming language

Step 1: Get Your API Key

  1. Log in to the Apertis Dashboard
  2. Navigate to API Keys
  3. Click Create New Key
  4. Copy your key (format: sk-xxxxxxxx)
Warning: Save your API key securely. It's only shown once!

Step 2: Make Your First Request

Choose your preferred method:

Using cURL

curl https://api.apertis.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-api-key" \
  -d '{
    "model": "gpt-4.1",
    "messages": [
      {"role": "user", "content": "Hello! What can you do?"}
    ]
  }'

Using Python

First, install the OpenAI SDK:

pip install openai

Then make a request:

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.apertis.ai/v1"
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "user", "content": "Hello! What can you do?"}
    ]
)

print(response.choices[0].message.content)

Using Node.js

First, install the OpenAI SDK:

npm install openai

Then make a request:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-api-key',
  baseURL: 'https://api.apertis.ai/v1'
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-4.1',
    messages: [
      { role: 'user', content: 'Hello! What can you do?' }
    ]
  });

  console.log(response.choices[0].message.content);
}

main();

Step 3: Understand the Response

A successful response looks like this:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1703894400,
  "model": "gpt-4.1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm an AI assistant. I can help you with..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 45,
    "total_tokens": 57
  }
}

Key Fields

Field                          Description
id                             Unique identifier for this completion
model                          The model used for generation
choices[0].message.content    The AI's response
usage                          Token usage for billing
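
In the Python SDK these fields appear as attributes on the response object. A quick sketch, reusing the response from Step 2:

print(response.id)                          # unique identifier for this completion
print(response.model)                       # model used for generation
print(response.choices[0].message.content)  # the AI's response
print(response.usage.total_tokens)          # token usage for billing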

Step 4: Try Different Models

Apertis provides access to 60+ AI models. Try different ones:

# OpenAI GPT-4.1
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)

# Anthropic Claude Sonnet 4.5
response = client.chat.completions.create(
    model="claude-sonnet-4.5",
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)

# Google Gemini 3 Pro
response = client.chat.completions.create(
    model="gemini-3-pro-preview",
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)

Model                        Best For
gpt-4.1                      General purpose, balanced
gpt-4.1-mini                 Fast, cost-effective
claude-sonnet-4.5            Long context, analysis
claude-opus-4-5-20251101     Complex reasoning
gemini-3-pro-preview         Multimodal, long context

View all models →
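
Because every model is served through the same endpoint, you can compare them with a simple loop. A small sketch using the model IDs from the table above:

for model_id in ["gpt-4.1", "claude-sonnet-4.5", "gemini-3-pro-preview"]:
    response = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": "Explain quantum computing in one sentence"}]
    )
    print(f"--- {model_id} ---")
    print(response.choices[0].message.content)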

Step 5: Enable Streaming

For real-time responses, enable streaming:

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Write a short poem"}],
    stream=True  # Enable streaming
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
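
If you also want the full reply once the stream finishes, accumulate the chunks as they arrive. A variation of the loop above (a sketch, not the only approach):

full_text = ""
for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)  # show tokens as they arrive
        full_text += delta                # keep the complete reply

print()  # final newline after the stream ends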

Common Use Cases

Multi-turn Conversations

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What's the population?"}
]

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=messages
)
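
To continue the conversation, append the assistant's reply and the next user message to the same list before calling the API again. A small sketch building on the example above (the follow-up question is just an illustration):

# Add the assistant's answer and a follow-up question to the history
messages.append({"role": "assistant", "content": response.choices[0].message.content})
messages.append({"role": "user", "content": "And what language do they speak there?"})

followup = client.chat.completions.create(
    model="gpt-4.1",
    messages=messages
)
print(followup.choices[0].message.content)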

Code Generation

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{
        "role": "user",
        "content": "Write a Python function to calculate fibonacci numbers"
    }]
)

Image Analysis

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
        ]
    }]
)

Text Embeddings

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Hello, world!"
)

embedding = response.data[0].embedding
print(f"Embedding dimension: {len(embedding)}")

Environment Variables

For production, use environment variables instead of hardcoding:

# Set the environment variable in your shell
export APERTIS_API_KEY="sk-your-api-key"

# Then read it from your code
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("APERTIS_API_KEY"),
    base_url="https://api.apertis.ai/v1"
)

Error Handling

Always handle potential errors:

from openai import OpenAI, APIError, RateLimitError

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.apertis.ai/v1"
)

try:
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

except RateLimitError:
    print("Rate limited! Please wait and retry.")

except APIError as e:
    print(f"API error: {e}")

Next Steps

Now that you've made your first API call, use the quick reference below and explore the rest of the documentation.

Quick Reference

Base URL

https://api.apertis.ai/v1

Authentication

Authorization: Bearer sk-your-api-key
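
Every request needs this header. If you are not using an SDK, set it yourself; a minimal sketch with Python's requests library:

import requests

resp = requests.post(
    "https://api.apertis.ai/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-your-api-key",
    },
    json={
        "model": "gpt-4.1",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])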

Key Endpoints

Endpoint                      Description
/v1/chat/completions          Chat completions
/v1/embeddings                Text embeddings
/v1/images/generations        Image generation
/v1/audio/speech              Text to speech
/v1/audio/transcriptions      Speech to text
/v1/models                    List available models
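
With the Python SDK, the models endpoint maps to client.models.list(), which is a quick way to see what is available. A small sketch:

models = client.models.list()
for model in models.data:
    print(model.id)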

Getting Help