Common Issues
Authentication Problems
Problem: Getting authentication errors when making API calls.

Solutions:
Check your API key:
# Verify your API key is set correctly
echo $NORDLYS_API_KEY
Ensure correct format:
// Correct - no 'Bearer' prefix needed (the SDK adds it for you)
const openai = new OpenAI({
  apiKey: 'your-nordlys-api-key',
  baseURL: 'https://api.nordlyslabs.com/v1'
});
Verify API key validity:
Check if your API key has expired
Ensure you’re using the correct key for your environment
Try regenerating your API key in the dashboard
Test with curl:
curl -H "Authorization: Bearer apk_123456" \
-H "Content-Type: application/json" \
https://api.nordlyslabs.com/v1/chat/completions \
-d '{"model":"nordlys/hypernova","messages":[{"role":"user","content":"test"}]}'
Problem: Environment variable not being loaded.

Solutions:
Check environment variable:
# In terminal (no spaces around =)
export NORDLYS_API_KEY=your-key-here
# Or in .env file
echo "NORDLYS_API_KEY=your-key-here" >> .env
Load environment variables:
// Node.js (CommonJS)
require('dotenv').config();
// Or using ES modules
import 'dotenv/config';
Python environment:
import os
from dotenv import load_dotenv

load_dotenv()
api_key = os.getenv("NORDLYS_API_KEY")
Configuration Issues
Problem: Using an incorrect base URL, causing connection failures.

Correct base URL: https://api.nordlyslabs.com/v1

Common mistakes:
// ❌ Wrong
baseURL: 'https://api.openai.com/v1'
baseURL: 'https://nordlys.ai/api/v1'
baseURL: 'https://www.nordlyslabs.com/v1'
// ✅ Correct
baseURL: 'https://api.nordlyslabs.com/v1'
Problem: Nordlys model not working, or model errors.

Solutions:
Use the default model ID:
// ✅ Correct - enables the Nordlys model
model: "nordlys/hypernova"
// ❌ Wrong - these IDs do not exist
model: "nordlys"
model: "nordlys-code"
model: "nordlys/nordlys"
TypeScript type issues:
// Option 1: Type assertion
model: "nordlys/hypernova" as any
// Option 2: Disable strict checking for this parameter
// @ts-ignore
model: "nordlys/hypernova"
SSL/TLS Certificate Errors
Problem: Certificate validation errors in some environments.

Solutions:
Update certificates:
# Ubuntu/Debian
sudo apt-get update && sudo apt-get install ca-certificates
# macOS
brew install ca-certificates
Node.js certificate issues:
// Temporary workaround (not recommended for production)
process.env["NODE_TLS_REJECT_UNAUTHORIZED"] = "0";
// Better solution: update Node.js or your certificates
Python certificate issues:
import ssl
import certifi

# Ensure certificates are up to date
ssl.create_default_context(cafile=certifi.where())
Request/Response Issues
Problem: Getting empty responses or no content.

Diagnostic steps:
Check request format:
const completion = await openai.chat.completions.create({
  model: "nordlys/hypernova",
  messages: [
    { role: "user", content: "Hello" } // Ensure content is not empty
  ]
});
Verify response handling:
console.log("Full response:", completion);
console.log("Content:", completion.choices[0]?.message?.content);
Check for API errors:
try {
  const completion = await openai.chat.completions.create({ ... });
} catch (error) {
  console.log("Error details:", error);
  console.log("Status:", error.status);
  console.log("Message:", error.message);
}
Problem: Streaming responses not appearing or failing.

Solutions:
Check streaming syntax:
// ✅ Correct streaming setup
const stream = await openai.chat.completions.create({
  model: "nordlys/hypernova",
  messages: [ ... ],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
Browser streaming with fetch:
const response = await fetch('/api/stream-chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const chunk = decoder.decode(value);
  // Process chunk...
}
Server-sent events setup:
// Server
res.writeHead(200, {
  'Content-Type': 'text/event-stream',
  'Cache-Control': 'no-cache',
  'Connection': 'keep-alive'
});
Problem: Getting 429 errors (rate limit exceeded).

Solutions:
Implement exponential backoff:
async function retryWithBackoff(fn, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status === 429 && i < maxRetries - 1) {
        const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
}
Check your rate limits:
Free tier: 100 requests/minute, 10,000 tokens/minute
Pro tier: 1,000 requests/minute, 100,000 tokens/minute
Enterprise: Custom limits
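One simple way to stay under a per-minute request cap is to derive a minimum spacing between calls from your tier's limit; a small sketch (the numbers mirror the Free tier above):

```javascript
// Sketch: minimum delay between requests for a given per-minute limit.
function minIntervalMs(requestsPerMinute) {
  return Math.ceil(60000 / requestsPerMinute);
}

// Free tier: 100 requests/minute → wait at least 600 ms between calls
const freeTierSpacing = minIntervalMs(100);
```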
Implement request queuing:
class RequestQueue {
  constructor(maxPerMinute = 100) {
    this.queue = [];
    this.maxPerMinute = maxPerMinute;
    this.requestTimes = [];
  }

  async enqueue(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.queue.length === 0) return;

    const now = Date.now();
    this.requestTimes = this.requestTimes.filter(time => now - time < 60000);

    if (this.requestTimes.length < this.maxPerMinute) {
      const { requestFn, resolve, reject } = this.queue.shift();
      this.requestTimes.push(now);

      try {
        const result = await requestFn();
        resolve(result);
      } catch (error) {
        reject(error);
      }

      // Process next request
      setTimeout(() => this.processQueue(), 100);
    } else {
      // Wait and try again
      setTimeout(() => this.processQueue(), 1000);
    }
  }
}
Integration-Specific Issues
LangChain Integration Problems
Problem: LangChain not working with Nordlys.

Solutions:
Correct LangChain setup:
# Python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key=os.getenv("NORDLYS_API_KEY"),
    base_url="https://api.nordlyslabs.com/v1",
    model="nordlys/hypernova"  # Important: default model ID
)
// JavaScript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  configuration: {
    baseURL: "https://api.nordlyslabs.com/v1"
  },
  model: "nordlys/hypernova"
});
Handle LangChain-specific errors:
from openai import APIError

try:
    response = llm.invoke("Hello")
except APIError as e:
    print(f"API Error: {e}")
except Exception as e:
    print(f"Other error: {e}")
Problem: Vercel AI SDK not connecting properly.

Solutions:
Using the OpenAI-compatible provider (note: use the createOpenAI factory to pass a custom baseURL):
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const nordlysOpenAI = createOpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  baseURL: 'https://api.nordlyslabs.com/v1',
});

const { text } = await generateText({
  model: nordlysOpenAI('nordlys/hypernova'),
  prompt: 'Hello'
});
TypeScript issues:
// If you are getting type errors
const model = nordlysOpenAI('nordlys/hypernova' as any);
Environment variables in Next.js:
// next.config.js
// Caution: values in `env` are inlined into the client bundle at build time.
// Prefer reading process.env.NORDLYS_API_KEY inside API routes instead.
module.exports = {
  env: {
    NORDLYS_API_KEY: process.env.NORDLYS_API_KEY,
  },
};
Nordlys Error Scenarios
Model Registry Errors (404)
Scenario: Model Not Found
Symptom:
{
  "error": {
    "type": "model_registry_error",
    "message": "Model 'invalid-model' not found"
  }
}
Common causes:
Typo in model name
Model not available in your region
Model temporarily disabled
Solutions:
Check for typos in your model ID
Use the default model:
model: "nordlys/hypernova" // ✅ Recommended default
Contact support if the model remains unavailable
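As a guard against the wrong IDs listed earlier on this page, you could normalize the model ID before sending a request. This is a hypothetical helper; the bad-ID list comes from this guide, not from the API:

```javascript
// Sketch: map the common wrong model IDs shown above to the recommended default.
const KNOWN_BAD_IDS = ['nordlys', 'nordlys-code', 'nordlys/nordlys'];

function normalizeModelId(id) {
  return KNOWN_BAD_IDS.includes(id) ? 'nordlys/hypernova' : id;
}
```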
Upstream Service Errors
Scenario: Upstream Service Errors
Symptom:
{
  "error": {
    "type": "upstream_error",
    "message": "Upstream service error: rate limit exceeded"
  }
}
Solutions:
Retry with exponential backoff
Check rate limits in your dashboard
Reduce request frequency or batch size
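For the retry step, adding jitter to the exponential delay helps avoid many clients retrying in lockstep after an upstream outage; a sketch:

```javascript
// Sketch: exponential backoff with full jitter (delay in milliseconds).
function backoffDelay(attempt, baseMs = 1000, capMs = 10000) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * exp); // anywhere in [0, exp)
}
```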
Error Investigation Checklist
When encountering errors:
Capture Context
Check Error Details
Verify Configuration
Review Documentation
Contact Support (if needed)
Include request_id
Provide error reproduction steps
Share redacted request/response
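The checklist above can be folded into a small helper that gathers everything support asks for in one object. This is only a sketch; the `request_id` field assumes your client surfaces one on the error object, so adjust the field names to your SDK:

```javascript
// Sketch: collect error context for a support report; field names are illustrative.
function buildErrorReport(error, requestBody) {
  return {
    status: error.status ?? null,
    message: error.message,
    request_id: error.request_id ?? null, // assumed field; adjust to your SDK
    // Redact message contents before sharing the request
    request: { ...requestBody, messages: '[redacted]' }
  };
}
```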
Problem: Responses taking longer than expected.

Diagnostic steps:
Reduce prompt size:
Keep prompts concise and trim long chat histories.
Batch smaller requests:
Split large documents into smaller chunks.
Check local network latency:
Test connectivity to api.nordlyslabs.com.
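The first two steps can be sketched as small helpers: one trims chat history to the latest turns, one splits a long document into chunks. The size knobs are illustrative, not API parameters:

```javascript
// Sketch: keep system messages plus only the most recent turns.
function trimHistory(messages, maxTurns = 10) {
  const system = messages.filter(m => m.role === 'system');
  const rest = messages.filter(m => m.role !== 'system');
  return [...system, ...rest.slice(-maxTurns)];
}

// Sketch: split a long document into fixed-size character chunks.
function chunkDocument(text, chunkSize = 4000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}
```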
Problem: Network latency issues.

Solutions:
Check your network:
# Test connectivity
ping nordlyslabs.com
# Test TLS handshake
curl -w "@curl-format.txt" -o /dev/null https://api.nordlyslabs.com/v1/models
Implement timeout handling:
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 30000); // 30s timeout

try {
  const completion = await openai.chat.completions.create({
    model: "nordlys/hypernova",
    messages: [ ... ]
  }, {
    signal: controller.signal
  });
} catch (error) {
  if (error.name === 'AbortError') {
    console.log('Request timed out');
  }
} finally {
  clearTimeout(timeoutId);
}
Use connection pooling:
import https from 'https';

const agent = new https.Agent({
  keepAlive: true,
  maxSockets: 10
});

const openai = new OpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  baseURL: 'https://api.nordlyslabs.com/v1',
  httpAgent: agent
});
Development Environment Issues
Problem: Cross-origin resource sharing (CORS) errors.

Solutions:
Never call the API directly from the browser:
// ❌ Wrong - exposes your API key
// const completion = await openai.chat.completions.create({...});

// ✅ Correct - route through your backend
const response = await fetch('/api/chat', {
  method: 'POST',
  body: JSON.stringify({ message })
});
Set up a proxy in development:
// Next.js API route
// pages/api/chat.js
export default async function handler(req, res) {
  const completion = await openai.chat.completions.create({
    model: "nordlys/hypernova",
    messages: req.body.messages
  });
  res.json({ response: completion.choices[0].message.content });
}
Configure CORS for your backend:
// Express.js
app.use(cors({
  origin: ['http://localhost:3000', 'https://yourdomain.com'],
  credentials: true
}));
TypeScript Compilation Errors
Problem: TypeScript errors with Nordlys integration.

Solutions:
Install correct types:
npm install --save-dev @types/node
npm install openai # Latest version includes types
Type assertion for the model parameter:
const completion = await openai.chat.completions.create({
  model: "nordlys/hypernova" as any, // Type assertion
  messages: [ ... ]
});
Create custom types if needed:
interface NordlysCompletion extends ChatCompletion {
  model: string;
}
Problem: ES modules vs. CommonJS issues.

Solutions:
Use the correct import style:
// ES modules
import OpenAI from 'openai';
// CommonJS
const OpenAI = require('openai');
package.json configuration:
{
  "type": "module",
  "dependencies": {
    "openai": "^4.0.0"
  }
}
Node.js version compatibility:
# Check Node.js version
node --version
# Nordlys requires Node.js 18+
# Update if necessary
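A quick runtime guard can catch the Node.js version requirement early, before an obscure failure deeper in the stack; a minimal sketch:

```javascript
// Sketch: fail fast on Node.js older than 18.
const major = Number(process.versions.node.split('.')[0]);
if (major < 18) {
  throw new Error(`Node.js 18+ required, found ${process.versions.node}`);
}
```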
Getting Help
When reporting issues, please include:
Environment Details
# System info
node --version
npm --version
# Package versions
npm list openai
npm list @langchain/openai
Request Details
// Sanitized request (remove API key)
{
"model" : "nordlys/hypernova" ,
"messages" : [ ... ],
"temperature" : 0.7
}
Error Information
console.log("Error status:", error.status);
console.log("Error message:", error.message);
console.log("Error stack:", error.stack);
Network Diagnostics
# Test connectivity
curl -I https://api.nordlyslabs.com/v1/models
# DNS resolution
nslookup nordlyslabs.com
Support Channels
Documentation Check our comprehensive guides and API reference for solutions
GitHub Issues Report bugs and request features on our GitHub repository
Discord Community Get help from the community and Nordlys team members
Best Practices for Debugging
Start with Simple Requests
Test basic functionality first:
const simple = await openai.chat.completions.create({
  model: "nordlys/hypernova",
  messages: [{ role: "user", content: "Hello" }]
});
Enable Verbose Logging
Add detailed logging to understand what's happening:
console.log("Request:", JSON.stringify(requestData, null, 2));
console.log("Response:", JSON.stringify(response, null, 2));
Test with curl
Verify API access outside your application:
curl -X POST https://api.nordlyslabs.com/v1/chat/completions \
  -H "Authorization: Bearer apk_123456" \
  -H "Content-Type: application/json" \
  -d '{"model":"nordlys/hypernova","messages":[{"role":"user","content":"test"}]}'
Isolate the Problem
Systematically narrow down the issue:
Test different messages
Try different parameters
Test in different environments
Compare with working examples
Complete Error Handling Example
Here’s a production-ready error handling implementation:
class NordlysClient {
  constructor(apiKey) {
    this.openai = new OpenAI({
      apiKey: apiKey,
      baseURL: 'https://api.nordlyslabs.com/v1'
    });
  }

  async createCompletion(params, retries = 3) {
    for (let attempt = 1; attempt <= retries; attempt++) {
      try {
        const completion = await this.openai.chat.completions.create({
          model: "nordlys/hypernova",
          ...params
        });

        // Log success metrics
        console.log(`✅ Success: ${completion.usage.total_tokens} tokens`);
        return completion;
      } catch (error) {
        // Handle specific errors
        if (error.status === 401) {
          throw new Error('Invalid API key - check your credentials');
        }

        if (error.status === 429) {
          const delay = Math.min(1000 * Math.pow(2, attempt), 10000);
          console.log(`⚠️ Rate limited, retrying in ${delay}ms (attempt ${attempt}/${retries})`);
          if (attempt < retries) {
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw new Error('Rate limit exceeded - reduce request frequency');
        }

        if (error.status === 400) {
          throw new Error(`Invalid request: ${error.message}`);
        }

        if (error.status >= 500) {
          const delay = 1000 * attempt;
          console.log(`🔄 Server error, retrying in ${delay}ms (attempt ${attempt}/${retries})`);
          if (attempt < retries) {
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw new Error('Server error - try again later');
        }

        // Unexpected error
        throw new Error(`Unexpected error: ${error.message}`);
      }
    }
  }
}

// Usage example
const client = new NordlysClient(process.env.NORDLYS_API_KEY);

try {
  const response = await client.createCompletion({
    messages: [{ role: "user", content: "Hello!" }],
    model: "nordlys/hypernova"
  });
  console.log("Response:", response.choices[0].message.content);
} catch (error) {
  console.error("Failed to get completion:", error.message);
}
FAQ
How do I choose a Nordlys model?
Use the model ID in your request: model: "nordlys/hypernova"
How do I see model metadata?