Migration Guides

This guide helps you migrate to Apertis from other API providers or update between API versions.

Migrating from OpenAI

Apertis is fully compatible with the OpenAI API format, making migration straightforward.

Step 1: Update Base URL

The only required code change is pointing the client at the Apertis base URL and swapping in your Apertis API key:

Python:

from openai import OpenAI

# Before (OpenAI)
client = OpenAI(
    api_key="sk-openai-key"
)

# After (Apertis)
client = OpenAI(
    api_key="sk-apertis-key",
    base_url="https://api.apertis.ai/v1"
)

Node.js:

import OpenAI from 'openai';

// Before (OpenAI)
const client = new OpenAI({
  apiKey: 'sk-openai-key'
});

// After (Apertis)
const client = new OpenAI({
  apiKey: 'sk-apertis-key',
  baseURL: 'https://api.apertis.ai/v1'
});

cURL:

# Before (OpenAI)
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer sk-openai-key" \
  ...

# After (Apertis)
curl https://api.apertis.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-apertis-key" \
  ...

Step 2: Get Your Apertis API Key

  1. Sign up at api.apertis.ai
  2. Navigate to API Keys
  3. Create a new API key
  4. Store it securely

Step 3: Update Environment Variables

# Before
export OPENAI_API_KEY="sk-openai-key"

# After
export APERTIS_API_KEY="sk-apertis-key"
export APERTIS_BASE_URL="https://api.apertis.ai/v1"
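
In application code, it helps to read these variables once and fail fast when the key is missing. A minimal sketch (the `load_config` helper is ours, not part of any SDK):

```python
import os

def load_config(env=None):
    """Read Apertis settings from the environment.

    Falls back to the public base URL when APERTIS_BASE_URL is unset,
    and raises immediately if the API key is missing.
    """
    env = os.environ if env is None else env
    key = env.get("APERTIS_API_KEY")
    if not key:
        raise RuntimeError("APERTIS_API_KEY is not set")
    base_url = env.get("APERTIS_BASE_URL", "https://api.apertis.ai/v1")
    return key, base_url
```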

Step 4: Test Your Integration

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ.get("APERTIS_API_KEY"),
    base_url=os.environ.get("APERTIS_BASE_URL")
)

# Test with a simple request
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

Model Name Mapping

Most OpenAI models are available with the same names:

| OpenAI Model           | Apertis Model          | Notes  |
|------------------------|------------------------|--------|
| gpt-4.1                | gpt-4.1                | ✓ Same |
| gpt-4.1-mini           | gpt-4.1-mini           | ✓ Same |
| gpt-5.2                | gpt-5.2                | ✓ Same |
| text-embedding-3-small | text-embedding-3-small | ✓ Same |
| whisper-1              | whisper-1              | ✓ Same |
| tts-1                  | tts-1                  | ✓ Same |
| dall-e-3               | dall-e-3               | ✓ Same |
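
Because the names carry over unchanged, most migrations need no lookup at all. If you want to centralize the mapping anyway (or later add models whose names do differ), a small helper works; the entries below mirror the table above, and the helper itself is our sketch, not part of any SDK:

```python
# Map old provider model names to their Apertis equivalents.
# Entries mirror the mapping table; extend it for any renamed models.
MODEL_MAP = {
    "gpt-4.1": "gpt-4.1",
    "gpt-4.1-mini": "gpt-4.1-mini",
    "gpt-5.2": "gpt-5.2",
    "text-embedding-3-small": "text-embedding-3-small",
    "whisper-1": "whisper-1",
    "tts-1": "tts-1",
    "dall-e-3": "dall-e-3",
}

def to_apertis_model(name):
    """Return the Apertis model name, falling back to the input unchanged."""
    return MODEL_MAP.get(name, name)
```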

Additional Models

Apertis provides access to models not available directly from OpenAI:

# Use Claude models
response = client.chat.completions.create(
    model="claude-sonnet-4.5",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Use Gemini models
response = client.chat.completions.create(
    model="gemini-3-pro-preview",
    messages=[{"role": "user", "content": "Hello!"}]
)

Migrating from Anthropic (Claude Direct)

API Format Differences

Anthropic uses a different API format. Here's how to migrate:

Before (Anthropic SDK):

import anthropic

client = anthropic.Anthropic(api_key="sk-ant-xxx")

message = client.messages.create(
    model="claude-sonnet-4.5",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

After (Apertis with OpenAI format):

from openai import OpenAI

client = OpenAI(
    api_key="sk-apertis-key",
    base_url="https://api.apertis.ai/v1"
)

response = client.chat.completions.create(
    model="claude-sonnet-4.5",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

Key Differences

| Feature         | Anthropic API             | Apertis API                  |
|-----------------|---------------------------|------------------------------|
| SDK             | anthropic                 | openai                       |
| Response format | message.content[0].text   | choices[0].message.content   |
| System prompt   | system parameter          | System message in messages   |
| Streaming       | Different format          | OpenAI SSE format            |
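
The system-prompt and message-shape differences can be bridged mechanically during a gradual migration. A minimal sketch (the function name is ours, not part of either SDK):

```python
def anthropic_to_openai_args(system=None, messages=None, **kwargs):
    """Convert Anthropic-style call arguments to OpenAI chat.completions kwargs.

    The `system` string becomes a leading system message; everything else
    (model, max_tokens, ...) passes through unchanged.
    """
    out_messages = []
    if system:
        out_messages.append({"role": "system", "content": system})
    out_messages.extend(messages or [])
    return {"messages": out_messages, **kwargs}
```

You can then splat the result into `client.chat.completions.create(**args)`.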

System Prompts

Anthropic format:

message = client.messages.create(
    model="claude-sonnet-4.5",
    system="You are a helpful assistant.",
    messages=[{"role": "user", "content": "Hello!"}]
)

Apertis format:

response = client.chat.completions.create(
    model="claude-sonnet-4.5",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

Response Parsing

Anthropic:

text = message.content[0].text

Apertis:

text = response.choices[0].message.content
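
If a codebase still handles raw JSON from both APIs during the transition, a small extractor can normalize the two shapes. This is our own helper, not part of either SDK, and it operates on parsed JSON dicts rather than SDK objects:

```python
def extract_text(payload):
    """Pull the assistant text out of either raw response shape.

    OpenAI / Apertis JSON: {"choices": [{"message": {"content": "..."}}], ...}
    Anthropic JSON:        {"content": [{"type": "text", "text": "..."}], ...}
    """
    if "choices" in payload:  # OpenAI / Apertis shape
        return payload["choices"][0]["message"]["content"]
    if "content" in payload:  # Anthropic shape
        return payload["content"][0]["text"]
    raise ValueError("Unrecognized response payload")
```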

Migrating from Azure OpenAI

Configuration Changes

Before (Azure OpenAI):

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="azure-api-key",
    api_version="2024-02-15-preview",
    azure_endpoint="https://your-resource.openai.azure.com"
)

response = client.chat.completions.create(
    model="your-deployment-name",  # Deployment name, not model
    messages=[{"role": "user", "content": "Hello!"}]
)

After (Apertis):

from openai import OpenAI

client = OpenAI(
    api_key="sk-apertis-key",
    base_url="https://api.apertis.ai/v1"
)

response = client.chat.completions.create(
    model="gpt-4.1",  # Actual model name
    messages=[{"role": "user", "content": "Hello!"}]
)

Key Differences

| Feature        | Azure OpenAI        | Apertis    |
|----------------|---------------------|------------|
| Client         | AzureOpenAI         | OpenAI     |
| Model param    | Deployment name     | Model name |
| API versioning | Required            | Not needed |
| Endpoint       | Custom per resource | Fixed      |
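
The main code-level change is that Azure's deployment names give way to real model names. A sketch of that lookup (the deployment-to-model mapping is whatever your Azure resource used; the names here are hypothetical):

```python
APERTIS_BASE_URL = "https://api.apertis.ai/v1"

def resolve_model(deployment_name, deployment_to_model):
    """Azure calls pass a deployment name; Apertis expects the model name.

    Map each deployment to the model it was serving so call sites can
    migrate without tracking down which deployment ran which model.
    """
    return deployment_to_model[deployment_name]
```

For example, `resolve_model("prod-chat", {"prod-chat": "gpt-4.1"})` returns `"gpt-4.1"`, which you then pass as the `model` parameter.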

Migrating from Google (Vertex AI / Gemini)

From Vertex AI SDK

Before (Vertex AI):

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project", location="us-central1")
model = GenerativeModel("gemini-3-pro-preview")

response = model.generate_content("Hello!")

After (Apertis):

from openai import OpenAI

client = OpenAI(
    api_key="sk-apertis-key",
    base_url="https://api.apertis.ai/v1"
)

response = client.chat.completions.create(
    model="gemini-3-pro-preview",
    messages=[{"role": "user", "content": "Hello!"}]
)

From Google AI Studio

Before (Google AI SDK):

import google.generativeai as genai

genai.configure(api_key="google-api-key")
model = genai.GenerativeModel("gemini-3-pro-preview")

response = model.generate_content("Hello!")
text = response.text

After (Apertis):

from openai import OpenAI

client = OpenAI(
    api_key="sk-apertis-key",
    base_url="https://api.apertis.ai/v1"
)

response = client.chat.completions.create(
    model="gemini-3-pro-preview",
    messages=[{"role": "user", "content": "Hello!"}]
)
text = response.choices[0].message.content

Migrating from Other Providers

LiteLLM

If you're using LiteLLM, you can switch to Apertis:

Before (LiteLLM):

from litellm import completion

response = completion(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Hello!"}]
)

After (Apertis):

from openai import OpenAI

client = OpenAI(
    api_key="sk-apertis-key",
    base_url="https://api.apertis.ai/v1"
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Hello!"}]
)

OpenRouter

Before (OpenRouter):

from openai import OpenAI

client = OpenAI(
    api_key="sk-or-xxx",
    base_url="https://openrouter.ai/api/v1"
)

After (Apertis):

from openai import OpenAI

client = OpenAI(
    api_key="sk-apertis-key",
    base_url="https://api.apertis.ai/v1"
)

Framework Migrations

LangChain

from langchain_openai import ChatOpenAI

# Before
llm = ChatOpenAI(model="gpt-5.2")

# After
llm = ChatOpenAI(
    model="gpt-5.2",
    openai_api_key="sk-apertis-key",
    openai_api_base="https://api.apertis.ai/v1"
)

LlamaIndex

from llama_index.llms.openai import OpenAI

# Before
llm = OpenAI(model="gpt-5.2")

# After
llm = OpenAI(
    model="gpt-5.2",
    api_key="sk-apertis-key",
    api_base="https://api.apertis.ai/v1"
)

Migration Checklist

Pre-Migration

  • Sign up for Apertis account
  • Create API key
  • Review current API usage patterns
  • Identify all integration points
  • Plan testing strategy

During Migration

  • Update base URL/endpoint
  • Update API key
  • Update environment variables
  • Update model names if needed
  • Test each integration point

Post-Migration

  • Verify all features work correctly
  • Monitor error rates
  • Check response quality
  • Review billing/usage
  • Update documentation
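
A quick post-migration smoke test can exercise an integration point end to end. This sketch builds the request payload in a separate function so it can be checked without a network call; the actual request only runs when an APERTIS_API_KEY is set in the environment:

```python
import json
import os
import urllib.request

def build_chat_payload(model, prompt):
    """Build a minimal chat.completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Only hit the network when a key is actually configured.
if os.environ.get("APERTIS_API_KEY"):
    payload = build_chat_payload("gpt-4.1", "Say OK.")
    req = urllib.request.Request(
        "https://api.apertis.ai/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + os.environ["APERTIS_API_KEY"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```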

Troubleshooting Migration Issues

Common Issues

| Issue                     | Cause                      | Solution                                  |
|---------------------------|----------------------------|-------------------------------------------|
| 401 Unauthorized          | Wrong API key              | Use Apertis key, not old provider key     |
| Model not found           | Different model name       | Check available models                    |
| Different response format | SDK mismatch               | Use OpenAI SDK format                     |
| Missing features          | Not all features supported | Check documentation for alternatives      |
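
The first two rows cover most failed migrations. A tiny triage helper mirroring the table can turn a failed request's status code into a hint; this is purely illustrative, and mapping "model not found" to HTTP 404 is our assumption:

```python
# Likely fixes for the most common migration failures, keyed by HTTP status.
MIGRATION_HINTS = {
    401: "Unauthorized: you are likely still sending the old provider's key. "
         "Use your Apertis key instead.",
    404: "Model not found: the model name may differ. "
         "Check the list of available models.",
}

def triage(status_code):
    """Map an HTTP status from a failed request to a likely migration fix."""
    return MIGRATION_HINTS.get(status_code, "See the troubleshooting guide.")
```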

Getting Help

If you encounter issues during migration:

  1. Check the Troubleshooting Guide
  2. Review Error Codes
  3. Contact support at [email protected]