# Quick Start

Get up and running with Atulya in 60 seconds.

## Start the API Server
### pip (API only)

```bash
pip install atulya-api
export OPENAI_API_KEY=sk-xxx
export ATULYA_API_LLM_API_KEY=$OPENAI_API_KEY
atulya-api
```

The API is available at http://localhost:8888.

### Docker (Full Experience)

```bash
export OPENAI_API_KEY=sk-xxx
docker run --rm -it --pull always -p 8888:8888 -p 9999:9999 \
  -e ATULYA_API_LLM_API_KEY=$OPENAI_API_KEY \
  -v $HOME/.atulya-docker:/home/atulya/.pg0 \
  ghcr.io/eight-atulya/atulya:latest
```

- API: http://localhost:8888
- Control Plane (Web UI): http://localhost:9999
## LLM Provider

Atulya requires an LLM provider with structured-output support. Recommended: Groq with gpt-oss-20b for fast, cost-effective inference. See LLM Providers for more details.
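If you use Groq instead of OpenAI, the setup has the same shape: export your provider key and pass it through `ATULYA_API_LLM_API_KEY`. A minimal sketch, assuming only the environment variable shown in this guide; any provider- or model-selection variables are covered on the LLM Providers page:

```shell
# Same setup as above, but with a Groq key instead of an OpenAI one.
export GROQ_API_KEY=gsk-xxx
export ATULYA_API_LLM_API_KEY=$GROQ_API_KEY
atulya-api
```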
## Use the Client

### Python

```bash
pip install atulya-client
```

```python
from atulya_client import Atulya

client = Atulya(base_url="http://localhost:8888")

# Retain: Store information
client.retain(bank_id="my-bank", content="Alice works at Google as a software engineer")

# Recall: Search memories
client.recall(bank_id="my-bank", query="What does Alice do?")

# Reflect: Generate disposition-aware response
client.reflect(bank_id="my-bank", query="Tell me about Alice")
```
### Node.js

```bash
npm install @eight-atulya/atulya-client
```

```javascript
import { AtulyaClient } from '@eight-atulya/atulya-client';

const client = new AtulyaClient({ baseUrl: 'http://localhost:8888' });

// Retain: Store information
await client.retain('my-bank', 'Alice works at Google as a software engineer');

// Recall: Search memories
await client.recall('my-bank', 'What does Alice do?');

// Reflect: Generate response
await client.reflect('my-bank', 'Tell me about Alice');
```
### CLI

```bash
curl -fsSL https://atulya.eightengine.com/get-cli | bash

# Retain: Store information
atulya memory retain my-bank "Alice works at Google as a software engineer"

# Recall: Search memories
atulya memory recall my-bank "What does Alice do?"

# Reflect: Generate response
atulya memory reflect my-bank "Tell me about Alice"
```
## What's Happening
| Operation | What it does |
|---|---|
| Retain | Content is processed, facts are extracted, entities are identified and linked in a knowledge graph |
| Recall | Four search strategies (semantic, keyword, graph, temporal) run in parallel to find relevant memories |
| Reflect | Retrieved memories are used to generate a disposition-aware response |
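The Recall row above describes a parallel fan-out: several search strategies run at once and their hits are merged. The sketch below illustrates that shape only; it is not Atulya's internal code. The strategy names come from the table, and the stub results and scores are made up for the example:

```python
# Illustration of Recall's fan-out/merge shape using stub strategies.
from concurrent.futures import ThreadPoolExecutor

# Each stub returns (memory_id, relevance_score) pairs -- invented data.
def semantic(query):  return [("alice-role", 0.92)]
def keyword(query):   return [("alice-role", 0.80), ("alice-employer", 0.75)]
def graph(query):     return [("alice-employer", 0.88)]
def temporal(query):  return []

STRATEGIES = [semantic, keyword, graph, temporal]

def recall(query):
    # Run all four strategies in parallel.
    with ThreadPoolExecutor(max_workers=len(STRATEGIES)) as pool:
        hit_lists = list(pool.map(lambda strategy: strategy(query), STRATEGIES))
    # Merge: keep the best score seen for each memory, then rank by score.
    best = {}
    for hits in hit_lists:
        for memory_id, score in hits:
            best[memory_id] = max(score, best.get(memory_id, 0.0))
    return sorted(best.items(), key=lambda kv: -kv[1])

print(recall("What does Alice do?"))
# → [('alice-role', 0.92), ('alice-employer', 0.88)]
```

The real merge and ranking logic is Atulya's; the point here is only that the four strategies are independent and can run concurrently before their results are deduplicated.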
## Next Steps
- Retain — Advanced options for storing memories
- Recall — Search and retrieval strategies
- Reflect — Disposition-aware reasoning
- Memory Banks — Configure disposition and background
- Server Deployment — Docker Compose, Helm, and production setup