
Get Your Nordlys API Key

Sign up here to create an account and generate your API key.

Overview

LangGraph is a low-level orchestration framework for building stateful, multi-actor applications with AI models. By pointing LangGraph’s ChatOpenAI at Nordlys, you get automatic model selection on every call while still building complex agent workflows with graphs, state management, and tool integration.

Key Benefits

  • Keep existing workflows - No changes to your LangGraph graph structure
  • Automatic model selection - Nordlys picks an appropriate model for each agent interaction
  • Cost optimization - 30-70% cost reduction across agent executions
  • Stateful agents - Works seamlessly with LangGraph’s state management
  • Tool support - Nordlys selects function-calling capable models automatically
  • Streaming support - Real-time responses in agent workflows

Installation

pip install langgraph langchain-openai
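
The TypeScript examples later in this guide use the LangChain JS packages instead:

npm install @langchain/langgraph @langchain/openai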

Basic Usage

Initialize ChatOpenAI with Nordlys

The only change needed is to point LangGraph’s ChatOpenAI to Nordlys’s endpoint:
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    api_key="your-nordlys-api-key",
    base_url="https://api.nordlylabs.com/v1",
    model="nordlys/hypernova",
    temperature=0,
)

Simple Chatbot with StateGraph

from langgraph.graph import StateGraph, MessagesState, START, END
from langchain_openai import ChatOpenAI

# Initialize model with Nordlys

model = ChatOpenAI(
    api_key="your-nordlys-api-key",
    base_url="https://api.nordlylabs.com/v1",
    model="nordlys/hypernova",
    temperature=0,
)

# Define the chatbot function

def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    return {"messages": [response]}

# Create the graph

workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_edge(START, "agent")
workflow.add_edge("agent", END)

app = workflow.compile()

# Use the chatbot

result = app.invoke({
    "messages": [{"role": "user", "content": "What is LangGraph?"}]
})

print(result["messages"][-1].content)

Advanced Examples

Agent with Tools

Nordlys automatically selects models that support function calling when tools are detected:
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

# Define tools

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    # Placeholder for a real weather API call
    return f"Weather in {location}: 72°F, sunny"

tools = [get_weather]

# Initialize model with tools

model = ChatOpenAI(
    api_key="your-nordlys-api-key",
    base_url="https://api.nordlylabs.com/v1",
    model="nordlys/hypernova",
    temperature=0,
).bind_tools(tools)

# Define the agent function

def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    return {"messages": [response]}

# Conditional edge: route to tools or finish

def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

# Create the graph

workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", ToolNode(tools))
workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["tools", END])
workflow.add_edge("tools", "agent")

app = workflow.compile()

# Use the agent

result = app.invoke({
    "messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]
})

Streaming Agent Responses

import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  configuration: {
    baseURL: "https://api.nordlylabs.com/v1",
  },
  modelName: "nordlys/hypernova",
  temperature: 0,
});

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__");

const app = workflow.compile();

// Stream the agent's responses
const stream = await app.stream({
  messages: [{ role: "user", content: "Write a short poem about AI" }],
});

for await (const chunk of stream) {
  if (chunk.agent && chunk.agent.messages) {
    const message = chunk.agent.messages[0];
    if (message && message.content) {
      process.stdout.write(message.content);
    }
  }
}
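
The loop above emits one chunk per completed node. For token-by-token output you can fall back on the standard Runnable streamEvents API — a sketch, assuming a recent @langchain/core (the event names are LangChain's, not Nordlys-specific):

// Token-level streaming via streamEvents (v2 event schema)
for await (const event of app.streamEvents(
  { messages: [{ role: "user", content: "Write a short poem about AI" }] },
  { version: "v2" },
)) {
  if (event.event === "on_chat_model_stream") {
    // event.data.chunk is a partial AIMessageChunk
    process.stdout.write(String(event.data.chunk.content ?? ""));
  }
}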

Multi-Agent Workflow

import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

// Define custom state with multiple agents
const WorkflowState = Annotation.Root({
  messages: Annotation<any[]>({
    reducer: (x, y) => x.concat(y),
  }),
  currentAgent: Annotation<string>({
    reducer: (x, y) => y ?? x,
    default: () => "researcher",
  }),
});

// Initialize different models for different agents
const researcherModel = new ChatOpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  configuration: {
    baseURL: "https://api.nordlylabs.com/v1",
  },
  modelName: "nordlys/hypernova",
  temperature: 0.3,
});

const writerModel = new ChatOpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  configuration: {
    baseURL: "https://api.nordlylabs.com/v1",
  },
  modelName: "nordlys/hypernova",
  temperature: 0.7,
});

// Define agent nodes
async function researcher(state: typeof WorkflowState.State) {
  const response = await researcherModel.invoke([
    { role: "system", content: "You are a research specialist." },
    ...state.messages,
  ]);
  return {
    messages: [response],
    currentAgent: "writer",
  };
}

async function writer(state: typeof WorkflowState.State) {
  const response = await writerModel.invoke([
    { role: "system", content: "You are a creative writer." },
    ...state.messages,
  ]);
  return {
    messages: [response],
    currentAgent: "end",
  };
}

// Conditional edge: decide which agent runs next
function routeAgent(state: typeof WorkflowState.State) {
  return state.currentAgent === "writer" ? "writer" : "__end__";
}

// Build the workflow
const workflow = new StateGraph(WorkflowState)
  .addNode("researcher", researcher)
  .addNode("writer", writer)
  .addEdge("__start__", "researcher")
  .addConditionalEdges("researcher", routeAgent)
  .addEdge("writer", "__end__");

const app = workflow.compile();

// Execute multi-agent workflow
const result = await app.invoke({
  messages: [
    { role: "user", content: "Research and write about quantum computing" },
  ],
});

Integration Patterns

With Memory/Checkpointing

import { MemorySaver } from "@langchain/langgraph";
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  configuration: {
    baseURL: "https://api.nordlylabs.com/v1",
  },
  modelName: "nordlys/hypernova",
  temperature: 0,
});

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__");

// Add memory for persistent conversations
const memory = new MemorySaver();
const app = workflow.compile({ checkpointer: memory });

// Use with conversation threads
const config = { configurable: { thread_id: "user-123" } };

// First message
await app.invoke(
  { messages: [{ role: "user", content: "My name is Alice" }] },
  config,
);

// Follow-up message (remembers context)
const result = await app.invoke(
  { messages: [{ role: "user", content: "What's my name?" }] },
  config,
);
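
The checkpointer keys state by thread_id, so the follow-up call sees the earlier messages:

// The model answers from the stored conversation history
console.log(result.messages[result.messages.length - 1].content);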

Human-in-the-Loop

import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { MemorySaver } from "@langchain/langgraph";

const model = new ChatOpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  configuration: {
    baseURL: "https://api.nordlylabs.com/v1",
  },
  modelName: "nordlys/hypernova",
  temperature: 0,
});

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__");

// Compile with interrupt before agent node for human approval
const memory = new MemorySaver();
const app = workflow.compile({
  checkpointer: memory,
  interruptBefore: ["agent"],
});

const config = { configurable: { thread_id: "conversation-1" } };

// Start the workflow (will pause before agent)
await app.invoke(
  { messages: [{ role: "user", content: "Send an email to the team" }] },
  config,
);

// Get current state for human review
const state = await app.getState(config);
console.log("Pending action:", state.values);

// Human approves and continues
await app.invoke(null, config);

Configuration Options

Model Selection

Use the Nordlys model ID:
modelName: "nordlys/hypernova";

Temperature and Parameters

All standard ChatOpenAI parameters work with Nordlys:
const model = new ChatOpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  configuration: {
    baseURL: "https://api.nordlylabs.com/v1",
  },
  modelName: "nordlys/hypernova",
  temperature: 0.7,
  maxTokens: 1000,
  topP: 1,
  frequencyPenalty: 0,
  presencePenalty: 0,
});

Best Practices

  1. Use nordlys/hypernova across agent nodes so Nordlys can select an appropriate model for each interaction
  2. Different temperatures for different agents (research vs creative)
  3. Leverage checkpointing for stateful conversations with memory
  4. Use conditional edges for complex agent logic
  5. Add human-in-the-loop for critical decisions with interrupt points
  6. Tool integration - keep tool calls explicit and deterministic (a TypeScript binding sketch follows this list)
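
The tools example earlier is Python-only. For the TypeScript graphs above, an equivalent binding looks like this — a sketch using LangChain JS's tool helper with a zod schema, and the same illustrative weather stub:

import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Same illustrative stub as the Python get_weather tool
const getWeather = tool(
  async ({ location }) => `Weather in ${location}: 72°F, sunny`,
  {
    name: "get_weather",
    description: "Get the current weather for a location.",
    schema: z.object({ location: z.string() }),
  },
);

const model = new ChatOpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  configuration: {
    baseURL: "https://api.nordlylabs.com/v1",
  },
  modelName: "nordlys/hypernova",
  temperature: 0,
}).bindTools([getWeather]);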

Error Handling

Nordlys retries transient errors automatically. For comprehensive error handling patterns, see the Error Handling Guide.
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

// Nordlys retries transient model failures automatically
const model = new ChatOpenAI({
  apiKey: process.env.NORDLYS_API_KEY,
  configuration: {
    baseURL: "https://api.nordlylabs.com/v1",
  },
  modelName: "nordlys/hypernova", // Nordlys model with automatic retries
  temperature: 0,
});

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__");

const app = workflow.compile();

// Nordlys handles errors automatically - no try/catch needed for basic usage
const result = await app.invoke({
  messages: [{ role: "user", content: "Hello, how are you?" }],
});
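
For non-transient failures (an invalid API key, exhausted quota), a plain try/catch around the invocation still applies — a minimal sketch:

try {
  await app.invoke({
    messages: [{ role: "user", content: "Hello, how are you?" }],
  });
} catch (err) {
  // Nordlys retries transient errors, so anything surfaced here is likely permanent
  console.error("Agent run failed:", err);
}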

Complete Example

See the complete LangGraph example for a full working implementation including:
  • Stateful agent with memory
  • Tool integration with conditional routing
  • Multi-agent workflows
  • Streaming responses
  • Human-in-the-loop patterns
  • Error handling

Next Steps