Get started
Overview
The Warden Agent Development CLI allows you to easily build an A2A LangGraph Agent compatible with Warden.
This guide explains how to create your first Agent: you'll run the CLI, provide the required details, and the Agent will be immediately available for local testing.
Agent templates
When creating an Agent, you'll be prompted to select one of the supported Agent templates:
- OpenAI + Streaming: A GPT-powered Agent with streaming responses
- OpenAI + Multi-turn: A GPT-powered Agent with conversation history
- Blank + Streaming: A minimal streaming Agent that echoes input
- Blank + Multi-turn: A minimal multi-turn conversation Agent
For quick onboarding, this guide focuses on creating a GPT-powered Agent.
If you choose Blank when creating an Agent, you can use it with any preferred LLM.
Prerequisites
Before you start, complete the following prerequisites:
- Install Node.js 18 or higher.
- Get an OpenAI API key or an API key for any preferred LLM.
1. Install and run the CLI
- First, clone the Warden Agent Development CLI:

  ```bash
  git clone https://github.com/warden-protocol/warden-code.git
  ```

- Navigate to the `warden-code` directory:

  ```bash
  cd warden-code
  ```

- Install the tool:

  ```bash
  npm install -g warden-code
  ```

  Alternatively, you can use `pnpm` or `npx`:

  ```bash
  pnpm add -g warden-code
  npx warden-code
  ```

- Install the required packages:

  ```bash
  pnpm add @inquirer/prompts
  pnpm add -D vitest
  ```

- Run the CLI:

  ```bash
  warden
  ```

  You'll see the list of available commands:

  ```
  Available Commands:
    /new   - Create a new agent interactively
    /help  - Show available commands
    /clear - Clear the terminal screen
    /exit  - Exit the CLI

  Type /help <command> for more info on a specific command
  ```
2. Create an Agent
Now you can create your Agent:
- Initiate Agent creation:

  ```
  /new
  ```

- You'll be prompted to provide the following details:

  - Agent name
  - Agent description
  - Template: Blank/OpenAI
  - Capability: Streaming/Multi-turn conversations
  - Skills (optional)

  To follow this guide, select OpenAI in the third step.

  Tip: Depending on your choices, the CLI tool will use one of the four Agent templates. Note that if you select a Blank template, you'll later need to take additional steps, such as specifying your preferred LLM in the code.

- Confirm Agent creation. You'll find your Agent's code in `warden-code/src/agent.ts`.

- Duplicate `.env.example` and rename it to `.env`.

- In the `.env` file, add your OpenAI API key from Prerequisites. You can leave the other settings unchanged:

  ```
  HOST=localhost
  PORT=3000
  OPENAI_API_KEY=your-api-key-here
  OPENAI_MODEL=gpt-4o-mini
  ```

- In a new terminal window, navigate to the `warden-code` directory and run the following:

  ```bash
  pnpm install
  pnpm build
  pnpm agent
  ```

  Congratulations! Your Agent is available at http://localhost:3000.
3. Test your Agent locally
Every new Agent is immediately accessible through the LangGraph API. To learn more, see the LangGraph API reference. Alternatively, you can view and test all endpoints locally at http://localhost:3000/docs.
To make sure your Agent is working locally, call some of the LangGraph API endpoints:
- Access your A2A Agent Card:

  ```
  http://localhost:3000/.well-known/agent-card.json?assistant_id=fe096781-5601-53d2-b2f6-0d3403f7e9ca
  ```

  The card will display your Agent's name and capabilities, along with other information:

  ```json
  {
    "name": "general-test",
    "description": "A helpful AI agent named general-test",
    "url": "http://localhost:3000",
    "version": "0.1.0",
    "capabilities": {
      "streaming": true,
      "multiTurn": false
    },
    "skills": [],
    "defaultInputModes": [
      "text"
    ],
    "defaultOutputModes": [
      "text"
    ]
  }
  ```
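If you prefer to work with the card in code, it can be typed and checked programmatically. A minimal TypeScript sketch, with the interface derived from the sample card above (the type and function names here are illustrative, not part of the SDK):

```typescript
// Minimal shape of the A2A Agent Card fields shown above (illustrative type).
interface AgentCard {
  name: string;
  description: string;
  url: string;
  version: string;
  capabilities: { streaming: boolean; multiTurn: boolean };
  skills: string[];
  defaultInputModes: string[];
  defaultOutputModes: string[];
}

// Check that a card advertises the capability we expect.
function supportsStreaming(card: AgentCard): boolean {
  return card.capabilities.streaming === true;
}

const card: AgentCard = {
  name: "general-test",
  description: "A helpful AI agent named general-test",
  url: "http://localhost:3000",
  version: "0.1.0",
  capabilities: { streaming: true, multiTurn: false },
  skills: [],
  defaultInputModes: ["text"],
  defaultOutputModes: ["text"],
};

console.log(supportsStreaming(card)); // prints "true"
```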
- Call the Search Assistants endpoint to get your Agent's ID. With Postman:

  ```
  POST http://localhost:3000/assistants/search
  ```

  Headers: `Content-Type: application/json`

  Body:

  ```json
  {
    "metadata": {},
    "graph_id": "agent",
    "limit": 10,
    "offset": 0,
    "sort_by": "assistant_id",
    "sort_order": "asc",
    "select": [
      "assistant_id"
    ]
  }
  ```

  With cURL:

  ```bash
  curl http://localhost:3000/assistants/search \
    --request POST \
    --header 'Content-Type: application/json' \
    --data '{
      "metadata": {},
      "graph_id": "agent",
      "limit": 10,
      "offset": 0,
      "sort_by": "assistant_id",
      "sort_order": "asc",
      "select": [
        "assistant_id"
      ]
    }'
  ```

  The ID will be returned in the `assistant_id` field. Typically, it's `fe096781-5601-53d2-b2f6-0d3403f7e9ca`.

- Finally, try... If everything is fine, you'll receive a response including your prompt, the assistant's reply, and other data.
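To send that final request from code, you can build the run payload yourself. The sketch below only constructs the body; the `/runs/wait` path and the message shape are assumptions based on the standard LangGraph server API, so verify the exact schema at http://localhost:3000/docs:

```typescript
// Build a stateless run request for the LangGraph API (assumed /runs/wait endpoint).
// The assistant ID is the one returned by the Search Assistants call above.
function buildRunRequest(assistantId: string, prompt: string) {
  return {
    assistant_id: assistantId,
    input: {
      messages: [{ role: "human", content: prompt }],
    },
  };
}

const body = buildRunRequest(
  "fe096781-5601-53d2-b2f6-0d3403f7e9ca",
  "Hello! What can you do?"
);

// Sending it (requires the Agent from step 2 running locally):
// await fetch("http://localhost:3000/runs/wait", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });

console.log(JSON.stringify(body));
```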
- In addition, you can check logs in LangSmith Studio: navigate to Tracing Project in the left menu and select your project. The logs will display data on all threads and runs (Agent invocations).
Next steps
Now you can implement custom logic: just edit your Agent's code in `warden-code/src/agent.ts`.
For inspiration, see our examples.
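To give a feel for the shape of that code: in the LangGraph style the templates use, custom logic lives in node functions that map the current state to a state update. The sketch below is a dependency-free stand-in; the `Message` and `AgentState` types are hypothetical, and the real `agent.ts` defines state via LangGraph and calls your LLM where the canned reply is:

```typescript
// Hypothetical message/state shapes standing in for LangGraph's state;
// the real agent.ts defines these via LangGraph annotations.
interface Message { role: "human" | "ai"; content: string; }
interface AgentState { messages: Message[]; }

// A custom node: take the current state, return the updated state.
// Here we append a canned reply; in agent.ts you'd call your LLM instead.
function myCustomNode(state: AgentState): AgentState {
  const lastHuman = [...state.messages].reverse().find((m) => m.role === "human");
  const reply: Message = {
    role: "ai",
    content: `You said: ${lastHuman?.content ?? ""}`,
  };
  return { messages: [...state.messages, reply] };
}

const next = myCustomNode({ messages: [{ role: "human", content: "hi" }] });
console.log(next.messages[next.messages.length - 1].content); // prints "You said: hi"
```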