Get started with Nordlys by changing one line of code. No complex setup required.
Nordlys is a Mixture of Models: each prompt is automatically routed to the most suitable underlying model.
Step 1: Get Your API Key
Generate your API key from the dashboard
Step 2: Install an SDK (Optional)

- Nordlys SDK (Python only): the native Nordlys SDK, with the Registry and Router APIs
- OpenAI SDK: `npm install openai` or `pip install openai`
- Anthropic SDK: `npm install @anthropic-ai/sdk` or `pip install anthropic`
- Gemini SDK: `npm install @google/genai` or `pip install google-generativeai`
- Vercel AI SDK: `npm install ai @ai-sdk/openai`
- LangChain: `npm install @langchain/openai` or `pip install langchain-openai`
- cURL: no installation required; cURL is available on most systems
Authentication
Nordlys uses API keys for authentication. Include your key in the Authorization header of every request:
Header: Authorization: Bearer YOUR_API_KEY
Store keys in environment variables (e.g. NORDLYS_API_KEY) rather than hardcoding them.
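For example, the header can be assembled from the environment in Python (NORDLYS_API_KEY is the variable name suggested above; the fallback string is a placeholder for illustration only):

```python
import os

# Read the key from the environment rather than hardcoding it.
# "your-nordlys-api-key" is only a placeholder fallback.
api_key = os.environ.get("NORDLYS_API_KEY", "your-nordlys-api-key")

# Every Nordlys request carries this header.
headers = {"Authorization": f"Bearer {api_key}"}
print(headers["Authorization"])
```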
Step 3: Make Your First Request
Choose your preferred language and framework:
Nordlys SDK (Python)
```python
from nordlys_py import Nordlys

nordlys = Nordlys(api_key="your-nordlys-api-key")

response = nordlys.chat.completions.create(
    model="nordlys/hypernova",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
```
OpenAI SDK (JavaScript/Node.js)
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-nordlys-api-key',
  baseURL: 'https://api.nordlyslabs.com/v1'
});

const response = await client.chat.completions.create({
  model: 'nordlys/hypernova',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);
```
Anthropic SDK (JavaScript/Node.js)
```javascript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'your-nordlys-api-key',
  baseURL: 'https://api.nordlyslabs.com/v1'
});

const response = await client.messages.create({
  model: 'nordlys/hypernova',
  max_tokens: 1000,
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.content[0].text);
```
Gemini SDK (JavaScript/Node.js)
```javascript
// The @google/genai package exports GoogleGenAI (GoogleGenerativeAI is the
// older @google/generative-ai package and takes different constructor options).
import { GoogleGenAI } from '@google/genai';

const ai = new GoogleGenAI({
  apiKey: process.env.NORDLYS_API_KEY || 'your-nordlys-api-key',
  httpOptions: {
    baseUrl: 'https://api.nordlyslabs.com/v1beta'
  }
});

const response = await ai.models.generateContent({
  model: 'nordlys/hypernova',
  contents: [
    {
      role: 'user',
      parts: [{ text: 'Hello!' }]
    }
  ],
  config: {
    maxOutputTokens: 512
  }
});

console.log(response.text);
```
Vercel AI SDK (basic text generation)
```javascript
// In the Vercel AI SDK, baseURL and apiKey are provider-level options:
// pass them to createOpenAI, not to the model factory.
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const nordlys = createOpenAI({
  baseURL: 'https://api.nordlyslabs.com/v1',
  apiKey: 'your-nordlys-api-key'
});

const { text } = await generateText({
  model: nordlys('nordlys/hypernova'),
  prompt: 'Hello!'
});

console.log(text);
```
LangChain (JavaScript/Node.js)
```javascript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  openAIApiKey: 'your-nordlys-api-key',
  configuration: {
    baseURL: 'https://api.nordlyslabs.com/v1'
  },
  modelName: 'nordlys/hypernova'
});

const response = await model.invoke('Hello!');
console.log(response.content);
```
Error Handling
Always implement proper error handling in production. Nordlys provides detailed error information to help you build resilient applications.
TypeScript
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  baseURL: 'https://api.nordlyslabs.com/v1'
});

async function chatWithRetry(message: string, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await client.chat.completions.create({
        model: 'nordlys/hypernova',
        messages: [{ role: 'user', content: message }]
      });
      return response.choices[0].message.content;
    } catch (error: any) {
      console.error(`Attempt ${attempt} failed:`, error.message);
      if (attempt === maxRetries) throw error;
      // Exponential backoff: 2s, 4s, 8s, ...
      await new Promise(resolve =>
        setTimeout(resolve, Math.pow(2, attempt) * 1000)
      );
    }
  }
}

// Usage
try {
  const result = await chatWithRetry('Explain quantum computing');
  console.log(result);
} catch (error) {
  console.error('All retries failed:', error);
  // Implement your preferred recovery behavior (fallback message, etc.)
}
```
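The retry schedule above waits 2^attempt seconds between attempts. As a standalone sketch (an illustrative helper, not part of any SDK), the delays work out like this:

```python
def backoff_delays(max_retries=3, base=2):
    """Seconds to wait after failed attempt n: base ** n."""
    return [base ** attempt for attempt in range(1, max_retries + 1)]

# With the defaults, three retries wait 2s, 4s, then 8s before giving up.
print(backoff_delays())  # [2, 4, 8]
```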
Production Tip : Always log the request_id from error responses for debugging. For comprehensive error handling patterns, see the Error Handling Best Practices guide.
Example Response
OpenAI Format
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-5-nano",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! I'm ready to help you."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 10,
    "total_tokens": 15
  }
}
```
Nordlys returns standard OpenAI- or Anthropic-compatible responses. Note that the model field reflects the underlying model that handled the request, not the nordlys/ alias you sent.
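Because the response is plain OpenAI-format JSON, extracting the reply, the routed model, and token usage needs no special client. A quick sketch using the example response above:

```python
import json

# The example response shown above, condensed.
raw = """
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-5-nano",
  "choices": [{
    "index": 0,
    "message": {"role": "assistant", "content": "Hello! I'm ready to help you."},
    "finish_reason": "stop"
  }],
  "usage": {"prompt_tokens": 5, "completion_tokens": 10, "total_tokens": 15}
}
"""

resp = json.loads(raw)
reply = resp["choices"][0]["message"]["content"]
routed_model = resp["model"]  # the underlying model that served the request
total_tokens = resp["usage"]["total_tokens"]

print(reply)         # Hello! I'm ready to help you.
print(routed_model)  # gpt-5-nano
print(total_tokens)  # 15
```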
Testing Your Integration
Send Test Request
Run your code with a simple message like “Hello!” to verify the connection
Check Response
Confirm you receive a response, and check the model field to see which underlying model Nordlys routed your request to
Next Steps
Need Help?