
Common Issues

Authentication Problems

Problem: Getting authentication errors when making API calls.
Solutions:
  1. Check your API key:
    # Verify your API key is set correctly
    echo $NORDLYS_API_KEY
    
  2. Ensure correct format:
    // Correct - pass the raw key; the SDK adds the 'Bearer' prefix for you
    const openai = new OpenAI({
      apiKey: 'your-nordlys-api-key',
      baseURL: 'https://api.nordlyslabs.com/v1'
    });
    
  3. Verify API key validity:
    • Check if your API key has expired
    • Ensure you’re using the correct key for your environment
    • Try regenerating your API key in the dashboard
  4. Test with curl:
     curl -H "Authorization: Bearer apk_123456" \
          -H "Content-Type: application/json" \
          https://api.nordlyslabs.com/v1/chat/completions \
          -d '{"model":"nordlys/hypernova","messages":[{"role":"user","content":"test"}]}'
    
Problem: Environment variable not being loaded.
Solutions:
  1. Check environment variable:
    # In terminal
    export NORDLYS_API_KEY=your-key-here
    
    # Or in .env file
    echo "NORDLYS_API_KEY=your-key-here" >> .env
    
  2. Load environment variables:
    // Node.js
    require('dotenv').config();
    
    // Or using ES modules
    import 'dotenv/config';
    
  3. Python environment:
    import os
    from dotenv import load_dotenv
    
    load_dotenv()
    api_key = os.getenv("NORDLYS_API_KEY")
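
A fail-fast guard at startup makes a missing key obvious immediately, instead of surfacing later as an authentication error. A minimal sketch:

// Fail fast if the key never made it into the environment.
if (!process.env.NORDLYS_API_KEY) {
  throw new Error('NORDLYS_API_KEY is not set - check your shell profile or .env file');
}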
    

Configuration Issues

Problem: Using an incorrect base URL, causing connection failures.
Correct base URL:
https://api.nordlyslabs.com/v1
Common mistakes:
// ❌ Wrong
baseURL: 'https://api.openai.com/v1'
baseURL: 'https://nordlys.ai/api/v1'
baseURL: 'https://www.nordlyslabs.com/v1'

// ✅ Correct
baseURL: 'https://api.nordlyslabs.com/v1'
Problem: The Nordlys model is not working or is returning errors.
Solutions:
  1. Use default model ID for Nordlys model:
    // ✅ Correct - enables Nordlys model
    model: "nordlys/hypernova"
    
    // ❌ Wrong - tries to use specific model
    model: "nordlys"
    model: "nordlys-code"
    model: "nordlys/nordlys"
    
  2. TypeScript type issues:
    // Option 1: Type assertion
    model: "nordlys/hypernova" as any
    
    // Option 2: Disable strict checking for this parameter
    // @ts-ignore
    model: "nordlys/hypernova"
    
Problem: Certificate validation errors in some environments.
Solutions:
  1. Update certificates:
    # Ubuntu/Debian
    sudo apt-get update && sudo apt-get install ca-certificates
    
    # macOS
    brew install ca-certificates
    
  2. Node.js certificate issues:
    // Temporary workaround (not recommended for production)
    process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0'; // must be the string '0'
    
    // Better solution: update Node.js or your CA certificates, or point
    // Node at a custom bundle via the NODE_EXTRA_CA_CERTS environment variable
    
  3. Python certificate issues:
    import ssl
    import certifi
    
    # Build a default SSL context backed by certifi's CA bundle; pass it to
    # your HTTP client explicitly if it does not use certifi by default
    context = ssl.create_default_context(cafile=certifi.where())
    

Request/Response Issues

Problem: Getting empty responses or no content.
Diagnostic steps:
  1. Check request format:
    const completion = await openai.chat.completions.create({
      model: "nordlys/hypernova",
      messages: [
        { role: "user", content: "Hello" } // Ensure content is not empty
      ]
    });
    
  2. Verify response handling:
     console.log("Full response:", completion);
     console.log("Content:", completion.choices[0]?.message?.content);
    
  3. Check for API errors:
    try {
      const completion = await openai.chat.completions.create({...});
    } catch (error) {
      console.log("Error details:", error);
      console.log("Status:", error.status);
      console.log("Message:", error.message);
    }
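
To make empty completions visible while debugging, a small guard can log the finish reason instead of silently printing an empty string. A sketch building on steps 2 and 3:

// Surface empty completions explicitly, with finish_reason for context.
const choice = completion.choices[0];
if (!choice?.message?.content) {
  console.warn('Empty completion; finish_reason:', choice?.finish_reason);
}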
    
Problem: Streaming responses not appearing or failing.
Solutions:
  1. Check streaming syntax:
    // ✅ Correct streaming setup
    const stream = await openai.chat.completions.create({
      model: "nordlys/hypernova",
      messages: [...],
      stream: true
    });
    
    for await (const chunk of stream) {
      process.stdout.write(chunk.choices[0]?.delta?.content || '');
    }
    
  2. Browser streaming with fetch:
    const response = await fetch('/api/stream-chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message })
    });
    
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      
      const chunk = decoder.decode(value);
      // Process chunk...
    }
    
  3. Server-sent events setup:
    // Server
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive'
    });
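    
    // After writing the headers above, forward streamed deltas as SSE events.
    // Sketch: assumes an Express-style `res` and the `stream` from step 1.
    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta?.content;
      if (delta) res.write(`data: ${JSON.stringify({ delta })}\n\n`);
    }
    res.write('data: [DONE]\n\n');
    res.end();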
    
Problem: Getting 429 errors (rate limit exceeded).
Solutions:
  1. Implement exponential backoff:
    async function retryWithBackoff(fn, maxRetries = 3) {
      for (let i = 0; i < maxRetries; i++) {
        try {
          return await fn();
        } catch (error) {
          if (error.status === 429 && i < maxRetries - 1) {
            const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw error;
        }
      }
    }
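    
    // Example: wrap a completion call in the helper above.
    const completion = await retryWithBackoff(() =>
      openai.chat.completions.create({
        model: "nordlys/hypernova",
        messages: [{ role: "user", content: "Hello" }]
      })
    );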
    
  2. Check your rate limits:
    • Free tier: 100 requests/minute, 10,000 tokens/minute
    • Pro tier: 1,000 requests/minute, 100,000 tokens/minute
    • Enterprise: Custom limits
  3. Implement request queuing:
    class RequestQueue {
      constructor(maxPerMinute = 100) {
        this.queue = [];
        this.maxPerMinute = maxPerMinute;
        this.requestTimes = [];
      }
      
      async enqueue(requestFn) {
        return new Promise((resolve, reject) => {
          this.queue.push({ requestFn, resolve, reject });
          this.processQueue();
        });
      }
      
      async processQueue() {
        if (this.queue.length === 0) return;
        
        const now = Date.now();
        this.requestTimes = this.requestTimes.filter(time => now - time < 60000);
        
        if (this.requestTimes.length < this.maxPerMinute) {
          const { requestFn, resolve, reject } = this.queue.shift();
          this.requestTimes.push(now);
          
          try {
            const result = await requestFn();
            resolve(result);
          } catch (error) {
            reject(error);
          }
          
          // Process next request
          setTimeout(() => this.processQueue(), 100);
        } else {
          // Wait and try again
          setTimeout(() => this.processQueue(), 1000);
        }
      }
    }
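    
    // Usage sketch: route completions through the queue to stay under the
    // per-minute cap (100 matches the free-tier request limit above).
    const queue = new RequestQueue(100);
    const completion = await queue.enqueue(() =>
      openai.chat.completions.create({
        model: "nordlys/hypernova",
        messages: [{ role: "user", content: "Hello" }]
      })
    );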
    

Integration-Specific Issues

Problem: LangChain not working with Nordlys.
Solutions:
  1. Correct LangChain setup:
    # Python
    import os
    
    from langchain_openai import ChatOpenAI
    
    llm = ChatOpenAI(
        api_key=os.getenv("NORDLYS_API_KEY"),
        base_url="https://api.nordlyslabs.com/v1",
        model="nordlys/hypernova"  # Important: default model ID
    )
    
    // JavaScript
    import { ChatOpenAI } from "@langchain/openai";
    
    const llm = new ChatOpenAI({
      apiKey: process.env.NORDLYS_API_KEY,
      configuration: {
        baseURL: "https://api.nordlyslabs.com/v1"
      },
      model: "nordlys/hypernova"
    });
    
  2. Handle LangChain-specific errors:
    from openai import APIError
    
    try:
        response = llm.invoke("Hello")
    except APIError as e:
        print(f"API Error: {e}")
    except Exception as e:
        print(f"Other error: {e}")
    
Problem: Vercel AI SDK not connecting properly.
Solutions:
  1. Use an OpenAI-compatible provider instance:
    import { createOpenAI } from '@ai-sdk/openai';
    import { generateText } from 'ai';
    
    const nordlysOpenAI = createOpenAI({
      apiKey: process.env.NORDLYS_API_KEY,
      baseURL: 'https://api.nordlyslabs.com/v1',
    });
    
    const { text } = await generateText({
      model: nordlysOpenAI('nordlys/hypernova'),
      prompt: 'Hello'
    });
    
  2. TypeScript issues:
    // If getting type errors
    const model = nordlysOpenAI('nordlys/hypernova' as any);
    
  3. Environment variables in Next.js:
    // Server-side code (API routes, route handlers) can read
    // process.env.NORDLYS_API_KEY directly from .env.local.
    // Avoid the next.config.js `env` option for secrets - it inlines
    // values into the client bundle and would expose your key.
    

Nordlys Error Scenarios

Model Registry Errors (404)

Symptom:
{
  "error": {
    "type": "model_registry_error",
    "message": "Model 'invalid-model' not found"
  }
}
Common causes:
  • Typo in model name
  • Model not available in your region
  • Model temporarily disabled
Solutions:
  1. Check for typos in your model ID
  2. Use the default model:
    model: "nordlys/hypernova" // ✅ Recommended default
    
  3. Contact support if the model remains unavailable
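
To rule out typos programmatically, you can list the models your key can access. A minimal sketch using the OpenAI SDK client configured earlier (it calls the same /v1/models endpoint as the curl diagnostics below):

// Print every model ID available to this API key.
const models = await openai.models.list();
for (const model of models.data) {
  console.log(model.id);
}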

Upstream Service Errors

Symptom:
{
  "error": {
    "type": "upstream_error",
    "message": "Upstream service error: rate limit exceeded"
  }
}
Solutions:
  • Retry with exponential backoff
  • Check rate limits in your dashboard
  • Reduce request frequency or batch size
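
When the error response includes a Retry-After header (see the checklist below), honoring it beats a blind backoff. A sketch that assumes your SDK surfaces response headers on the error object as error.headers; if yours does not, fall back to the exponential delay:

// Wait out a Retry-After hint if present, else back off exponentially.
// error.headers is an assumption - check how your SDK exposes response headers.
async function waitBeforeRetry(error, attempt) {
  const retryAfter = Number(error.headers?.['retry-after']); // seconds, when provided
  const ms = Number.isFinite(retryAfter) && retryAfter > 0
    ? retryAfter * 1000
    : Math.pow(2, attempt) * 1000;
  await new Promise(resolve => setTimeout(resolve, ms));
}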

Error Investigation Checklist

When encountering errors:
  1. Capture Context
    • Copy full error response
    • Note the request_id
    • Record timestamp
    • Save request payload (redacted)
  2. Check Error Details
    • Error type and HTTP code
    • Upstream error details (if present)
    • Duration metrics
    • Any retry-after headers
  3. Verify Configuration
    • API key is valid
    • Base URL is correct
    • Model identifier is valid
    • Request payload structure
  4. Review Documentation
  5. Contact Support (if needed)
    • Include request_id
    • Provide error reproduction steps
    • Share redacted request/response
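
A small helper can gather the “Capture Context” fields in one place before you contact support. A sketch; the request_id property name is an assumption and may differ across SDK versions:

// Bundle the checklist’s context fields into one report object.
function captureErrorContext(error, requestPayload) {
  return {
    timestamp: new Date().toISOString(),
    status: error.status,
    message: error.message,
    requestId: error.request_id, // assumed property name - verify for your SDK version
    payload: { ...requestPayload, messages: '[redacted]' } // redact user content
  };
}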

Performance Issues

Problem: Responses taking longer than expected.
Solutions:
  1. Reduce prompt size: Keep prompts concise and trim long chat histories.
  2. Batch smaller requests: Split large documents into smaller chunks (see the sketch after this list).
  3. Check local network latency: Test connectivity to api.nordlyslabs.com.
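
A minimal chunking sketch for step 2: split a long document into fixed-size pieces and send each as its own request (the 4,000-character default is an arbitrary choice; tune it to your content):

// Split long input into smaller chunks so each request stays small.
function chunkText(text, maxChars = 4000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}
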
Problem: Network latency issues.
Solutions:
  1. Check your network:
    # Test connectivity
    ping nordlyslabs.com
    
    # Test TLS handshake and timing (curl-format.txt is a local file of -w timing variables)
    curl -w "@curl-format.txt" -o /dev/null https://api.nordlyslabs.com/v1/models
    
  2. Implement timeout handling:
    const controller = new AbortController();
    const timeoutId = setTimeout(() => controller.abort(), 30000); // 30s timeout
    
    try {
      const completion = await openai.chat.completions.create({
        model: "nordlys/hypernova",
        messages: [...]
      }, {
        signal: controller.signal
      });
    } catch (error) {
      if (error.name === 'AbortError') {
        console.log('Request timed out');
      }
    } finally {
      clearTimeout(timeoutId);
    }
    
  3. Use connection pooling:
    import https from 'https';
    
    const agent = new https.Agent({
      keepAlive: true,
      maxSockets: 10
    });
    
    const openai = new OpenAI({
      apiKey: process.env.NORDLYS_API_KEY,
      baseURL: 'https://api.nordlyslabs.com/v1',
      httpAgent: agent
    });
    

Development Environment Issues

Problem: Cross-origin resource sharing (CORS) errors.
Solutions:
  1. Never call API directly from browser:
    // ❌ Wrong - exposes API key
    // const completion = await openai.chat.completions.create({...});
    
    // ✅ Correct - use your backend
    const response = await fetch('/api/chat', {
      method: 'POST',
      body: JSON.stringify({ message })
    });
    
  2. Set up proxy in development:
    // Next.js API route
    // pages/api/chat.js
    import OpenAI from 'openai';
    
    const openai = new OpenAI({
      apiKey: process.env.NORDLYS_API_KEY,
      baseURL: 'https://api.nordlyslabs.com/v1'
    });
    
    export default async function handler(req, res) {
      const completion = await openai.chat.completions.create({
        model: "nordlys/hypernova",
        messages: req.body.messages
      });
      
      res.json({ response: completion.choices[0].message.content });
    }
    
  3. Configure CORS for your backend:
    // Express.js
    const cors = require('cors');
    
    app.use(cors({
      origin: ['http://localhost:3000', 'https://yourdomain.com'],
      credentials: true
    }));
    
Problem: TypeScript errors with Nordlys integration.
Solutions:
  1. Install correct types:
    npm install --save-dev @types/node
    npm install openai  # Latest version includes types
    
  2. Type assertion for model parameter:
    const completion = await openai.chat.completions.create({
      model: "nordlys/hypernova" as any, // Type assertion
      messages: [...]
    });
    
  3. Create custom types if needed:
    import type { ChatCompletion } from 'openai/resources/chat/completions';
    
    interface NordlysCompletion extends ChatCompletion {
      model: string;
    }
    
Problem: ES modules vs CommonJS issues.
Solutions:
  1. Use correct imports:
    // ES modules
    import OpenAI from 'openai';
    
    // CommonJS
    const OpenAI = require('openai');
    
  2. Package.json configuration:
    {
      "type": "module",
      "dependencies": {
        "openai": "^4.0.0"
      }
    }
    
  3. Node.js version compatibility:
    # Check Node.js version
    node --version
    
    # Nordlys requires Node.js 18+
    # Update if necessary
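
To make the Node.js 18+ requirement explicit in your project, you can also pin it in package.json (a sketch; npm warns on a mismatch but only blocks installs when engine-strict is enabled):

{
  "engines": {
    "node": ">=18"
  }
}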
    

Getting Help

Debug Information to Collect

When reporting issues, please include:
1. Environment Details

# System info
node --version
npm --version

# Package versions
npm list openai
npm list @langchain/openai
2. Request Details

// Sanitized request (remove API key)
{
  "model": "nordlys/hypernova",
  "messages": [...],
  "temperature": 0.7
}
3. Error Information

console.log("Error status:", error.status);
console.log("Error message:", error.message);
console.log("Error stack:", error.stack);
4. Network Diagnostics

# Test connectivity
curl -I https://api.nordlyslabs.com/v1/models

# DNS resolution
nslookup nordlyslabs.com

Support Channels

Documentation

Check our comprehensive guides and API reference for solutions

GitHub Issues

Report bugs and request features on our GitHub repository

Discord Community

Get help from the community and Nordlys team members

Email Support

Contact [email protected] for priority assistance

Best Practices for Debugging

1. Start with Simple Requests

Test basic functionality first:
const simple = await openai.chat.completions.create({
  model: "nordlys/hypernova",
  messages: [{ role: "user", content: "Hello" }]
});
2. Enable Verbose Logging

Add detailed logging to understand what’s happening:
console.log("Request:", JSON.stringify(requestData, null, 2));
console.log("Response:", JSON.stringify(response, null, 2));
3. Test with curl

Verify API access outside your application:
curl -X POST https://api.nordlyslabs.com/v1/chat/completions \
  -H "Authorization: Bearer apk_123456" \
  -H "Content-Type: application/json" \
  -d '{"model":"nordlys/hypernova","messages":[{"role":"user","content":"test"}]}'
4. Isolate the Problem

Systematically narrow down the issue:
  • Test different messages
  • Try different parameters
  • Test in different environments
  • Compare with working examples

Complete Error Handling Example

Here’s a production-ready error handling implementation:
import OpenAI from 'openai';

class NordlysClient {
  constructor(apiKey) {
    this.openai = new OpenAI({
      apiKey: apiKey,
      baseURL: 'https://api.nordlyslabs.com/v1'
    });
  }
  
  async createCompletion(params, retries = 3) {
    for (let attempt = 1; attempt <= retries; attempt++) {
      try {
        const completion = await this.openai.chat.completions.create({
          model: "nordlys/hypernova",
          ...params
        });
        
        // Log success metrics
        console.log(`✅ Success: ${completion.usage.total_tokens} tokens`);
        return completion;
        
      } catch (error) {
        // Handle specific errors
        if (error.status === 401) {
          throw new Error('Invalid API key - check your credentials');
        }
        
        if (error.status === 429) {
          const delay = Math.min(1000 * Math.pow(2, attempt), 10000);
          console.log(`⚠️  Rate limited, retrying in ${delay}ms (attempt ${attempt}/${retries})`);
          
          if (attempt < retries) {
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw new Error('Rate limit exceeded - reduce request frequency');
        }
        
        if (error.status === 400) {
          throw new Error(`Invalid request: ${error.message}`);
        }
        
        if (error.status >= 500) {
          const delay = 1000 * attempt;
          console.log(`🔄 Server error, retrying in ${delay}ms (attempt ${attempt}/${retries})`);
          
          if (attempt < retries) {
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw new Error('Server error - try again later');
        }
        
        // Unexpected error
        throw new Error(`Unexpected error: ${error.message}`);
      }
    }
  }
}

// Usage example
const client = new NordlysClient(process.env.NORDLYS_API_KEY);

try {
  const response = await client.createCompletion({
    messages: [{ role: "user", content: "Hello!" }],
    model: "nordlys/hypernova"
  });
  
  console.log("Response:", response.choices[0].message.content);
} catch (error) {
  console.error("Failed to get completion:", error.message);
}

FAQ

How do I select the Nordlys model?
Use the model ID in your request:
model: "nordlys/hypernova"

How do I check which model handled my request?
Check the model field in the response:
console.log("Model used:", completion.model);