# LangChain.js Agents: Build Your First AI Agent in Node.js
An LLM that answers questions is useful. An LLM that can act — call APIs, search the web, run code, query a database — is a product. That's what agents do, and LangChain.js is the fastest path to building them in Node.js.
This post covers what agents actually are, how LangChain.js structures them, and how to build a working agent that uses real tools.
## What Is an AI Agent?
An agent is an LLM with a decision loop. Instead of taking your prompt and returning a response, an agent:
- Reads your request
- Decides what action to take (call a tool, ask for more info, or respond directly)
- Executes the action and observes the result
- Loops back to step 2 until it has enough information to respond
The classic formulation is ReAct (Reason + Act): the model reasons about what to do, acts, observes the outcome, and reasons again. This is what separates agents from chains. A chain is a fixed sequence of steps. An agent is a dynamic loop — the model decides what steps to take and in what order.
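The loop is small enough to sketch in plain JavaScript. Everything here is a hypothetical stand-in: `fakeLLM` is hard-coded in place of a real model call, and the tool registry holds a single toy calculator. LangChain's agent executor implements the same loop with real prompts and output parsing:

```javascript
// Minimal ReAct-style loop with a hard-coded fake "LLM" (illustration only).
const tools = {
  calculator: (input) => String(eval(input)), // demo only: never eval untrusted input
};

// The fake LLM "decides": first call the calculator, then respond.
function fakeLLM(question, observations) {
  if (observations.length === 0) {
    return { action: 'calculator', input: 'Math.sqrt(144)' }; // 1. reason: pick a tool
  }
  return { finalAnswer: `The answer is ${observations[0]}.` }; // enough info: respond
}

function runAgent(question, maxIterations = 10) {
  const observations = [];
  for (let i = 0; i < maxIterations; i += 1) {
    const step = fakeLLM(question, observations);
    if (step.finalAnswer) return step.finalAnswer;
    const result = tools[step.action](step.input); // 2. act
    observations.push(result); // 3. observe, then loop back to reasoning
  }
  throw new Error('Agent exceeded maxIterations');
}

console.log(runAgent('What is the square root of 144?')); // → "The answer is 12."
```

The `maxIterations` guard in the sketch is the same idea LangChain exposes as an executor option, covered below.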
## Setting Up LangChain.js

```bash
npm install langchain @langchain/openai @langchain/core @langchain/community
```
You'll need an OpenAI API key (or swap in Vertex AI as the backend — LangChain.js supports both through a unified interface):
```bash
export OPENAI_API_KEY=sk-...
```
## Defining Tools
A tool is any function the agent can invoke. LangChain.js provides built-in tools and a clean interface to define your own.
### Built-in Tool: Web Search (SerpAPI)

```bash
npm install @langchain/community
export SERPAPI_API_KEY=your-key
```

```javascript
import { SerpAPI } from '@langchain/community/tools/serpapi';

const searchTool = new SerpAPI(process.env.SERPAPI_API_KEY, {
  location: 'United States',
  hl: 'en',
  gl: 'us',
});
```
### Custom Tool: Calculator

```javascript
import { DynamicTool } from '@langchain/core/tools';

const calculatorTool = new DynamicTool({
  name: 'calculator',
  description:
    'Evaluates a mathematical expression. Input should be a valid JavaScript math expression like "2 + 2" or "Math.sqrt(16)".',
  func: async (input) => {
    try {
      // eval is fine for a demo, but never run it on untrusted input in
      // production. Swap in a safe expression parser such as mathjs.
      return String(eval(input));
    } catch (e) {
      return `Error: ${e.message}`;
    }
  },
});
```
### Custom Tool: Database Lookup

```javascript
const userLookupTool = new DynamicTool({
  name: 'user-lookup',
  description:
    "Look up a user by email address. Returns the user's name and account status. Input should be an email address.",
  func: async (email) => {
    const user = await db.users.findOne({ where: { email } });
    if (!user) return 'User not found';
    return JSON.stringify({ name: user.name, status: user.status, plan: user.plan });
  },
});
```
The description field is critical — it is what the LLM reads to decide when to use the tool. Write it like you are explaining the tool to a smart colleague, not a computer.
## Building the Agent Executor

```javascript
import { ChatOpenAI } from '@langchain/openai';
import { createReactAgent, AgentExecutor } from 'langchain/agents';
import { pull } from 'langchain/hub';

const llm = new ChatOpenAI({
  model: 'gpt-4o',
  temperature: 0,
});

const tools = [searchTool, calculatorTool, userLookupTool];

// Pull the standard ReAct prompt from LangChain Hub
const prompt = await pull('hwchase17/react');

const agent = await createReactAgent({ llm, tools, prompt });

const executor = new AgentExecutor({
  agent,
  tools,
  verbose: true, // logs each step — disable in production
  maxIterations: 10, // prevents infinite loops
});
```
`temperature: 0` matters for agents — you want consistent, repeatable decisions about which tool to call, not creative variation.
## Running the Agent

```javascript
const result = await executor.invoke({
  input: "What is the square root of 144, and what is today's top tech news?",
});

console.log(result.output);
```
With `verbose: true`, you see the full ReAct trace:
```
Thought: I need to calculate the square root of 144 and find today's top tech news.
Action: calculator
Action Input: Math.sqrt(144)
Observation: 12

Thought: I have the math answer. Now I need to search for tech news.
Action: serpapi
Action Input: top tech news today
Observation: [search results...]

Thought: I have all the information needed.
Final Answer: The square root of 144 is 12. Today's top tech news includes...
```
## Adding Memory
Without memory, every agent invocation starts fresh. For multi-turn conversations, add a message buffer:
```javascript
import { BufferMemory } from 'langchain/memory';

const memory = new BufferMemory({
  memoryKey: 'chat_history',
  returnMessages: true,
});

// Note: memory needs a prompt with a chat_history slot — pull the
// conversational 'hwchase17/react-chat' prompt instead of 'hwchase17/react'.
const agentWithMemory = new AgentExecutor({
  agent,
  tools,
  memory,
  maxIterations: 10,
});

await agentWithMemory.invoke({ input: 'My email is [email protected]. Who am I?' });
await agentWithMemory.invoke({ input: 'What is my account status?' });
// The second invocation knows the email from the first turn
```
For production, swap BufferMemory with a persistent store — Redis or a database — so memory survives process restarts.
## Agents vs RAG: When to Use Which
Agents and RAG pipelines are complementary, not competing patterns. The most powerful pattern is an agent with a RAG tool — the agent decides when to consult the knowledge base, rather than blindly retrieving on every query.
| Use Case | Pattern |
|---|---|
| Answer questions from a fixed knowledge base | RAG |
| Take action based on user input | Agent |
| Look up, transform, and synthesize information | Agent with RAG tool |
| Deterministic, predictable pipeline | Chain (no agent) |
You define a tool called `knowledge-base-search`, back it with a vector database, and the agent calls it only when the question requires internal knowledge.
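To make the shape concrete, here is a hypothetical backing function with retrieval stubbed out as keyword overlap. A real implementation would query a vector store retriever; the documents, scoring, and function name are placeholders:

```javascript
// Toy knowledge-base search: keyword overlap over an in-memory document list.
// In production, this function would embed the query and search a vector store.
const documents = [
  { id: 1, text: 'Refunds are processed within 5 business days.' },
  { id: 2, text: 'Enterprise plans include SSO and audit logs.' },
  { id: 3, text: 'API rate limits are 100 requests per minute.' },
];

function knowledgeBaseSearch(query, topK = 1) {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  return documents
    .map((doc) => ({
      doc, // count how many query terms appear in the document
      score: terms.filter((t) => doc.text.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((hit) => hit.doc.text);
}

console.log(knowledgeBaseSearch('how fast are refunds?'));
// → [ 'Refunds are processed within 5 business days.' ]
```

Wrap the real version in a `DynamicTool` named `knowledge-base-search` with a description like "Search internal company documentation", and the agent will reach for it only when a question needs internal knowledge.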
## Production Checklist
**Set `maxIterations`** — without a cap, a confused agent can loop forever and burn through tokens.

**Sanitize tool inputs** — the LLM generates tool inputs from user text; treat them like untrusted user input.

**Log every trace** — agents are non-deterministic; you need traces to debug failures. See *How to Monitor AI Pipelines in Production* — LangSmith captures agent traces automatically when connected.

**Timeout your tools** — a slow external API can stall an agent indefinitely:
```javascript
const safeTool = new DynamicTool({
  name: 'safe-api-call',
  description: '...',
  func: async (input) => {
    const timeout = new Promise((_, reject) =>
      setTimeout(() => reject(new Error('Tool timeout')), 5000)
    );
    return Promise.race([yourApiCall(input), timeout]);
  },
});
```
**Rate limit your executor** — agents can make multiple LLM calls per user request; budget accordingly with your chosen provider.
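A minimal sketch of per-user limiting, assuming a single-process deployment. The class name, limits, and in-memory store are all illustrative; across multiple processes you would back this with Redis or an API gateway:

```javascript
// Sliding-window rate limiter: at most `limit` requests per `windowMs` per user.
class RateLimiter {
  constructor(limit = 5, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.hits = new Map(); // userId -> array of request timestamps
  }
  allow(userId, now = Date.now()) {
    // Keep only timestamps still inside the window, then check the count.
    const recent = (this.hits.get(userId) ?? []).filter(
      (t) => now - t < this.windowMs
    );
    if (recent.length >= this.limit) return false;
    recent.push(now);
    this.hits.set(userId, recent);
    return true;
  }
}

const limiter = new RateLimiter(2, 60_000);
console.log(limiter.allow('u1')); // → true
console.log(limiter.allow('u1')); // → true
console.log(limiter.allow('u1')); // → false
```

Check `limiter.allow(userId)` before calling `executor.invoke(...)` and return an error (or queue the request) when it comes back false.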
Agents unlock a new category of AI features — not just Q&A but autonomous workflows, multi-step reasoning, and real integration with your existing systems. Start with well-defined tools and low `maxIterations` limits, then expand as you gain confidence in the system's behavior.