# Deliveryman Demo
> **Complete Application.** This is a complete, runnable application demonstrating Atulya integration.
A delivery agent simulation that demonstrates Atulya's long-term memory capabilities. An AI agent navigates a multi-building office complex to deliver packages, learning employee locations and optimal paths over time through mental models.
## Prerequisites

- Python 3.11+
- Node.js 18+
- `uv` (Python package manager)
## Setup (Fresh Environment)

### 1. Clone Repositories

```bash
# Clone Atulya (memory engine)
git clone https://github.com/eight-atulya/atulya.git

# Clone the cookbook (contains this demo)
git clone https://github.com/eight-atulya/atulya-cookbook.git
```

### 2. Start Atulya API

```bash
cd atulya
cp .env.example .env
```
Edit `.env` with your LLM configuration:

```bash
ATULYA_API_LLM_PROVIDER=groq
ATULYA_API_LLM_API_KEY=<your-groq-api-key>
ATULYA_API_LLM_MODEL=openai/gpt-oss-120b
ATULYA_API_HOST=0.0.0.0
ATULYA_API_PORT=8888
ATULYA_API_ENABLE_OBSERVATIONS=true

# Retain extraction settings (improves employee/location extraction)
ATULYA_API_RETAIN_EXTRACTION_MODE=custom
ATULYA_API_RETAIN_CUSTOM_INSTRUCTIONS="Delivery agent. Remember employee locations, building layout, and optimal paths."

# Embedded database storage
PG0_DATA_DIR=/tmp/atulya-data
```
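These settings are plain `KEY=VALUE` pairs. As a quick sanity check before starting the API, a small parser can confirm the required keys are present. This is a hypothetical helper sketch for illustration, not part of Atulya; the `REQUIRED` set is an assumption:

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blank lines and # comments."""
    env: dict[str, str] = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

# Keys the API cannot start without (assumed set, for illustration)
REQUIRED = {"ATULYA_API_LLM_PROVIDER", "ATULYA_API_LLM_API_KEY", "ATULYA_API_LLM_MODEL"}

def check_required(env: dict[str, str]) -> set[str]:
    """Return the required keys that are missing or empty."""
    return {k for k in REQUIRED if not env.get(k)}
```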
Start the API:

```bash
./scripts/dev/start-api.sh
# Runs on http://localhost:8888
```
### 3. Start Atulya Control Plane (Optional)

The control plane provides a web UI for inspecting memory banks, facts, and mental models.

```bash
cd atulya
./scripts/dev/start-control-plane.sh
# Runs on a dynamic port (check terminal output)
```
### 4. Start Demo Backend

```bash
cd atulya-cookbook/deliveryman-demo/backend

# Create a virtual environment and install dependencies
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
Create `backend/.env`:

```bash
OPENAI_API_KEY=<your-openai-api-key>
GROQ_API_KEY=<your-groq-api-key>
ATULYA_API_URL=http://localhost:8888
LLM_MODEL=openai/gpt-4o
```
Start the backend:

```bash
./run.sh
# Or manually:
python -m uvicorn app.main:app --host 0.0.0.0 --port 8000 --ws wsproto --reload
```
> **Note:** The `--ws wsproto` flag is required for WebSocket support. Without it, connections fail with close code 1006 (abnormal closure).
### 5. Start Demo Frontend

```bash
cd atulya-cookbook/deliveryman-demo/frontend
npm install
npm run dev
# Runs on http://localhost:5173
```
### 6. Open the Demo

Navigate to `http://localhost:5173` in your browser.
## How It Works

1. The agent receives a delivery task (e.g., "Deliver Package #3954 to Victor Huang").
2. It navigates a multi-building complex with floors, elevators, and sky bridges.
3. Along the way it encounters employees and learns their locations.
4. After each delivery, the conversation is sent to Atulya via the retain API.
5. Atulya extracts facts (employee locations, building layout) and builds mental models.
6. On subsequent deliveries, the agent queries Atulya to recall what it learned.
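The retain step can be sketched roughly as follows. The payload shape and field names here are assumptions for illustration, not Atulya's actual request schema:

```python
import json

def build_retain_payload(bank_id: str, messages: list[dict]) -> dict:
    """Assemble a hypothetical retain request: the delivery conversation
    plus the memory bank it belongs to. Field names are illustrative,
    not Atulya's real schema."""
    return {
        "bank_id": bank_id,
        "messages": [
            {"role": m["role"], "content": m["content"]} for m in messages
        ],
    }

# A conversation captured during one delivery run
conversation = [
    {"role": "assistant", "content": "Heading to Building B, floor 3."},
    {"role": "user", "content": "Victor Huang moved to Building C, floor 2."},
]
payload = build_retain_payload("deliveryman", conversation)
body = json.dumps(payload)  # this JSON would be POSTed to the Atulya API
```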
## Architecture

```
Browser (5173) → Frontend (React + Phaser)
        ↓ WebSocket
Backend (8000) → FastAPI + Delivery Agent
        ↓ HTTP
Atulya API (8888) → Memory Engine + PostgreSQL
```
## Troubleshooting

| Problem | Solution |
|---------|----------|
| WebSocket error 1006 | Restart the backend with the `--ws wsproto` flag |
| Mental models missing employees | Verify `ATULYA_API_RETAIN_EXTRACTION_MODE=custom` is set |
| Atulya connection refused | Verify the Atulya API is running on port 8888 |
| Frontend shows "Disconnected" | Check that the backend is running on port 8000 |
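Most of these failures come down to one of the three servers not listening. A small generic port probe (a sketch, not part of the demo) can confirm which service is down:

```python
import socket

# The three services the demo expects (ports from the setup steps above)
SERVICES = {
    "Atulya API": ("localhost", 8888),
    "Demo backend": ("localhost", 8000),
    "Demo frontend": ("localhost", 5173),
}

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        status = "up" if port_open(host, port) else "DOWN"
        print(f"{name} ({host}:{port}): {status}")
```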