Overview

The Warden Agent Development CLI allows you to easily build an A2A LangGraph Agent compatible with Warden.

This guide explains how to create your first Agent: you'll run the CLI, provide the required details, and the Agent will be immediately available for local testing.

Agent templates

When creating an Agent, you'll be prompted to select one of the supported Agent templates:

  • OpenAI + Streaming: A GPT-powered Agent with streaming responses
  • OpenAI + Multi-turn: A GPT-powered Agent with conversation history
  • Blank + Streaming: A minimal streaming Agent that echoes input
  • Blank + Multi-turn: A minimal multi-turn conversation agent

This guide will focus on creating a GPT-powered Agent for the sake of quick onboarding.

tip

If you choose Blank when creating an Agent, you can use it with any preferred LLM.

Prerequisites

Before you start, complete the following prerequisites:

1. Install and run the CLI

  1. First, clone the Warden Agent Development CLI:

    git clone https://github.com/warden-protocol/warden-code.git
  2. Navigate to the warden-code directory:

    cd warden-code
  3. Install the tool:

    npm install -g warden-code

    Alternatively, you can use pnpm or npx:

    pnpm add -g warden-code
    npx warden-code
  4. Install the required packages:

    pnpm add @inquirer/prompts
    pnpm add -D vitest
  5. Run the CLI:

    warden

    You'll see the list of available commands:

    Available Commands:

    /new - Create a new agent interactively
    /help - Show available commands
    /clear - Clear the terminal screen
    /exit - Exit the CLI

    Type /help <command> for more info on a specific command

2. Create an Agent

Now you can create your Agent:

  1. Initiate Agent creation:

    /new
  2. You'll be prompted to provide the following details:

    • Agent name
    • Agent description
    • Template: Blank/OpenAI
    • Capability: Streaming/Multi-turn conversations
    • Skills (optional)

    To follow this guide, select OpenAI in the third step.

    tip

    Depending on your choices, the CLI tool will use one of the four Agent templates. Note that if you select a Blank template, later you'll need to take additional steps such as specifying your preferred LLM in the code.

  3. Confirm Agent creation. You'll find your Agent's code in warden-code/src/agent.ts.

  4. Duplicate .env.example and rename it to .env.

  5. In the .env file, add your OpenAI API key. You can leave the other settings unchanged:

    HOST=localhost
    PORT=3000
    OPENAI_API_KEY=your-api-key-here
    OPENAI_MODEL=gpt-4o-mini
  6. In a new terminal window, navigate to the warden-code directory and run the following:

    pnpm install
    pnpm build
    pnpm agent

    Congratulations! Your Agent is available at http://localhost:3000.

3. Test your Agent locally

important

Every new Agent is immediately accessible through the LangGraph API. To learn more, see the LangGraph API reference. Alternatively, you can view and test all endpoints locally at http://localhost:3000/docs.

To make sure your Agent is working locally, call some of the LangGraph API endpoints:

  1. Access your A2A Agent Card:

    http://localhost:3000/.well-known/agent-card.json?assistant_id=fe096781-5601-53d2-b2f6-0d3403f7e9ca

    The card will display your Agent's name and capabilities, along with other information:

    {
      "name": "general-test",
      "description": "A helpful AI agent named general-test",
      "url": "http://localhost:3000",
      "version": "0.1.0",
      "capabilities": {
        "streaming": true,
        "multiTurn": false
      },
      "skills": [],
      "defaultInputModes": [
        "text"
      ],
      "defaultOutputModes": [
        "text"
      ]
    }
  2. Run the Search Assistants endpoint to get your Agent's ID:

    POST http://localhost:3000/assistants/search
    Headers: Content-Type: application/json
    Body:

    {
      "metadata": {},
      "graph_id": "agent",
      "limit": 10,
      "offset": 0,
      "sort_by": "assistant_id",
      "sort_order": "asc",
      "select": [
        "assistant_id"
      ]
    }

    The ID will be returned in the assistant_id field. Typically, it's fe096781-5601-53d2-b2f6-0d3403f7e9ca.

  3. Finally, try...

    If everything works, you'll receive a response including your prompt, the assistant's reply, and other data.

  4. In addition, you can check logs in LangSmith Studio: navigate to Tracing Project in the left menu and select your project. The logs will display data on all threads and runs (Agent invocations).

Next steps

Now you can implement custom logic: just edit your Agent's code in warden-code/src/agent.ts.

For inspiration, see our examples.