Chat Memory App (Atulya Cloud)

Complete Application

This is a complete, runnable application demonstrating Atulya integration. View source on GitHub →

A demo chat application with persistent per-user memory powered by Atulya Cloud. Supports OpenAI or Groq as the LLM provider. No local Atulya server required.

Features

  • 🧠 Persistent Memory: Each user gets their own memory bank that remembers conversations
  • ☁️ Atulya Cloud: Memory stored in the cloud — no Docker setup needed
  • 🔀 Selectable LLM: Choose between OpenAI (GPT-4o) or Groq (Qwen 32B)
  • 🎯 Per-User Context: Isolated memory per user with automatic context retrieval
  • 💬 Real-time Chat: Instant responses with memory-augmented context

Setup

1. Get API Keys

You'll need an Atulya Cloud API key, plus an OpenAI or Groq API key for whichever LLM provider you choose.

2. Configure Environment

Edit .env.local with your API keys and preferred provider:

# LLM Provider: "openai" or "groq"
LLM_PROVIDER=openai

# OpenAI (required if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-your-key-here

# Groq (required if LLM_PROVIDER=groq)
GROQ_API_KEY=gsk_your-key-here

# Atulya Cloud
ATULYA_API_URL=https://api.atulya.eightengine.com
ATULYA_API_KEY=hsk_your-key-here

You can also override the model with LLM_MODEL (defaults to gpt-4o for OpenAI, qwen/qwen3-32b for Groq).
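The provider/model fallback described above could be implemented along these lines. This is a minimal sketch: the function name `resolveModel` is illustrative, but the defaults (gpt-4o for OpenAI, qwen/qwen3-32b for Groq) are the ones documented here.

```typescript
// Sketch: resolve the model from LLM_PROVIDER and an optional LLM_MODEL override.
type Provider = "openai" | "groq";

const DEFAULT_MODELS: Record<Provider, string> = {
  openai: "gpt-4o",
  groq: "qwen/qwen3-32b",
};

export function resolveModel(provider: Provider, override?: string): string {
  // An explicit LLM_MODEL always wins; otherwise fall back per provider.
  return override && override.trim() !== ""
    ? override
    : DEFAULT_MODELS[provider];
}

// In the app this would be driven by the environment, e.g.:
//   resolveModel(process.env.LLM_PROVIDER as Provider, process.env.LLM_MODEL)
```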

3. Install Dependencies

npm install

4. Run the App

npm run dev

Open http://localhost:3000 in your browser.

How It Works

  1. User Identity: Each browser session gets a unique user ID
  2. Memory Bank Creation: First message creates a personal memory bank in Atulya Cloud
  3. Context Retrieval: Before responding, relevant memories are recalled
  4. Memory-Augmented Response: LLM generates responses with memory context
  5. Conversation Storage: Each conversation is retained for future context
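The context-retrieval step (3 and 4 above) boils down to folding recalled memories into the system prompt. A sketch, assuming the OpenAI chat message format; the `Memory` shape is a simplified assumption, not Atulya's documented schema:

```typescript
// Assumed, simplified shape for what a recall() call returns.
interface Memory {
  content: string;
}

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build a memory-augmented message list for the LLM.
export function buildAugmentedMessages(
  memories: Memory[],
  userMessage: string,
): ChatMessage[] {
  const memoryBlock =
    memories.length > 0
      ? "Relevant things you remember about this user:\n" +
        memories.map((m) => `- ${m.content}`).join("\n")
      : "You have no stored memories about this user yet.";

  return [
    {
      role: "system",
      content: `You are a helpful assistant.\n\n${memoryBlock}`,
    },
    { role: "user", content: userMessage },
  ];
}
```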

Architecture

User Message
  ↓
Next.js API Route (/api/chat)
  ↓
Atulya Cloud recall() → Get relevant memories
  ↓
OpenAI or Groq → Generate response with memory context
  ↓
Atulya Cloud retain() → Store conversation
  ↓
Response to User
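The pipeline above can be sketched end to end. The `MemoryClient` and `LlmClient` interfaces here are hypothetical stand-ins for the real Atulya Cloud client (recall()/retain()) and the chosen LLM SDK, injected so the flow itself stays testable:

```typescript
// Hypothetical interfaces standing in for the real Atulya and LLM clients.
interface MemoryClient {
  recall(userId: string, query: string): Promise<string[]>;
  retain(userId: string, conversation: string): Promise<void>;
}

interface LlmClient {
  generate(systemPrompt: string, userMessage: string): Promise<string>;
}

// One request/response cycle of the /api/chat route.
export async function handleChat(
  memory: MemoryClient,
  llm: LlmClient,
  userId: string,
  message: string,
): Promise<string> {
  // 1. Recall relevant memories for this user.
  const memories = await memory.recall(userId, message);

  // 2. Generate a response with memories folded into the system prompt.
  const system = `You are a helpful assistant.\nMemories:\n${memories.join("\n")}`;
  const reply = await llm.generate(system, message);

  // 3. Retain the exchange for future context.
  await memory.retain(userId, `user: ${message}\nassistant: ${reply}`);

  return reply;
}
```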

Memory Bank Structure

Each user gets their own isolated memory bank with:

  • Name: "Chat Memory for [userId]"
  • Background: Conversational AI assistant context
  • Disposition: Empathetic (4), Low Skepticism (2), Balanced Literalism (3)
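The bank parameters above could be assembled like this. The field names and payload shape are assumptions for illustration; the disposition values (empathy 4, skepticism 2, literalism 3) come straight from the list:

```typescript
// Illustrative payload for creating a per-user memory bank.
// Field names are assumed; the disposition values are the documented ones.
interface BankConfig {
  name: string;
  background: string;
  disposition: { empathy: number; skepticism: number; literalism: number };
}

export function bankConfigFor(userId: string): BankConfig {
  return {
    name: `Chat Memory for ${userId}`,
    background: "Conversational AI assistant context",
    disposition: { empathy: 4, skepticism: 2, literalism: 3 },
  };
}
```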

Try It Out

  1. First Conversation: Tell the assistant about yourself

    • "Hi! I'm a software engineer from San Francisco. I love Python and machine learning."
  2. Second Conversation: Ask what it remembers

    • "What do you know about me?"
    • "What programming languages do I like?"
  3. Context Building: Continue sharing preferences

    • "I prefer VS Code over other editors"
    • "I'm working on a React project"
  4. Memory Verification: Log in to the Atulya dashboard to see stored memories

Configuration

Variable         Default                              Description
LLM_PROVIDER     openai                               LLM provider: openai or groq
LLM_MODEL        (auto)                               Model override (defaults: gpt-4o for OpenAI, qwen/qwen3-32b for Groq)
OPENAI_API_KEY   (none)                               Required when using OpenAI
GROQ_API_KEY     (none)                               Required when using Groq
ATULYA_API_URL   https://api.atulya.eightengine.com   Atulya API endpoint
ATULYA_API_KEY   (none)                               Your Atulya API key
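A small startup check can enforce the conditional requirements in the table (an LLM key matching the chosen provider, plus the Atulya key). The helper name is illustrative, not part of the app's actual source:

```typescript
// Return the names of env vars that are missing for the chosen provider.
export function missingEnvVars(
  env: Record<string, string | undefined>,
): string[] {
  const missing: string[] = [];
  const provider = env.LLM_PROVIDER ?? "openai";

  if (provider === "openai" && !env.OPENAI_API_KEY) {
    missing.push("OPENAI_API_KEY");
  }
  if (provider === "groq" && !env.GROQ_API_KEY) {
    missing.push("GROQ_API_KEY");
  }
  if (!env.ATULYA_API_KEY) {
    missing.push("ATULYA_API_KEY");
  }
  return missing;
}
```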