This project explores Model Context Protocol (MCP) servers, specifically focusing on integrating persistent memory capabilities using LibSQL. It demonstrates how AI agents can be augmented with MCP tools to maintain stateful conversations and persistent data storage.
The Model Context Protocol (MCP) enables:
- Connecting AI agents with tools - Provides a standardized way for agents to access external tools and capabilities
- Resource sharing - Agents can access and share resources like databases, APIs, and local files
- Prompt templates - Reusable prompt patterns for common tasks
- Tool discovery - Agents can dynamically discover available tools from MCP servers
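Under the hood, MCP messages are JSON-RPC 2.0 exchanged over a transport such as stdio. As a rough illustration (the message shape is simplified and not taken from this project's code), tool discovery amounts to the client sending a `tools/list` request:

```python
import json

# Illustrative only: MCP messages are JSON-RPC 2.0. A client discovers a
# server's tools by sending a "tools/list" request and reading the reply.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

# Serialized form sent over the transport (e.g. stdio).
payload = json.dumps(request)
print(payload)
```

The server replies with a list of tool descriptions (name, description, input schema), which is what lets agents discover capabilities dynamically.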
This project demonstrates:
- Setting up and connecting to an MCP server (mcp-memory-libsql)
- Using LibSQL as a persistent memory backend for AI agents
- Creating agents that maintain context across conversations
- Implementing multi-turn conversations with persistent state
Project_3_MCP_LibSQL/
├── Src/
│ └── Lab3.py # Main demonstration script
├── memory/
│ └── kamal.db # LibSQL database for persistent memory
├── pyproject.toml # Project dependencies and metadata
├── .env # Environment configuration (secrets)
├── .gitignore # Git ignore rules
├── .venv/ # Python virtual environment
└── README.md # This file
- Python 3.12+
- Node.js (for running the MCP server)
- pip or uv package manager
Clone or navigate to the project directory:

```bash
cd /home/dell/projects/Project_3_MCP_LibSQL
```

Activate the virtual environment:

```bash
source .venv/bin/activate
```

Install dependencies:

```bash
pip install -e .
```

Set up environment variables: create or update the `.env` file with:

```
OPENAI_API_KEY=your_api_key_here
LIBSQL_URL=file:./memory/kamal.db
```
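For reference, what `python-dotenv` does when the script loads the `.env` file can be sketched with the standard library alone. This parser is a simplified stand-in, not the project's code:

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' comments, no quoting rules."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Existing environment variables win, matching python-dotenv's default.
        os.environ.setdefault(key.strip(), value.strip())
```

The real `python-dotenv` additionally handles quoting and variable interpolation; this sketch only covers plain `KEY=VALUE` lines.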
Run the main script:

```bash
python Src/Lab3.py
```

This script demonstrates:
- Connecting to an MCP server (mcp-memory-libsql) running locally
- Creating an AI agent with the MCP server attached
- Running multi-turn conversations where the agent can:
- Store information about the user ("Kamal is an AI engineer")
- Recall stored information in subsequent queries
- Provide context-aware responses based on persistent memory
MCP Server Configuration:
- Server: `mcp-memory-libsql` (runs via npx)
- Database: local LibSQL file-based database
- Environment: `LIBSQL_URL=file:./memory/kamal.db`
Agent Configuration:
- Model: GPT-4.1-mini
- Instructions: Configured to use entity tools for persistent memory
- MCP Servers: Attached LibSQL memory server
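Put together, the configuration above amounts to something like the following plain-data sketch. The dictionary shape is illustrative only, not the actual agent framework's API:

```python
# Illustrative data only: the real agent and MCP-server objects come from the
# agent framework; this simply mirrors the configuration described above.
agent_config = {
    "model": "gpt-4.1-mini",
    "instructions": "Use the entity tools to store and recall facts about the user.",
    "mcp_servers": [
        {
            # The memory server is launched via npx (args are an assumption here).
            "command": "npx",
            "args": ["mcp-memory-libsql"],
            "env": {"LIBSQL_URL": "file:./memory/kamal.db"},
        }
    ],
}
print(agent_config["model"])
```

Keeping the server launch details in one place like this makes it easy to swap in additional MCP servers later.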
- anthropic - Anthropic API client
- autogen-agentchat - Agent orchestration framework
- langchain - LLM framework with multiple integrations
- langgraph - Graph-based agent workflows
- mcp - Model Context Protocol implementation
- mcp-server-fetch - MCP server for web fetching
- openai - OpenAI API client
- langsmith - LLM tracing and debugging
LibSQL is a lightweight SQL database that runs locally. In this project:
- Stores entity information (e.g., user details)
- Provides persistent memory across agent sessions
- Allows agents to query and update information
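Because LibSQL is SQLite-compatible, the kind of entity storage the memory server performs can be illustrated with the standard library's `sqlite3` module. The schema below is hypothetical, not the server's actual schema:

```python
import sqlite3

# In-memory database as a stand-in for file:./memory/kamal.db.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS entities (name TEXT PRIMARY KEY, observation TEXT)"
)

# Store an entity, then query it back: roughly what the MCP memory
# tools do on the agent's behalf.
conn.execute("INSERT INTO entities VALUES (?, ?)", ("Kamal", "AI engineer"))
conn.commit()
row = conn.execute(
    "SELECT observation FROM entities WHERE name = ?", ("Kamal",)
).fetchone()
print(row[0])
```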
Storage Capacity for Personal Use:
- Local database (file-based): ✅ effectively unlimited, bounded only by your available disk space
- Turso Cloud (hosted): up to 9 GB on the free tier
- This project: uses local file-based storage, so capacity is limited only by disk space
- No cost: running locally means zero subscription fees
The code uses:
- Agents: AI entities with role-specific instructions
- Runners: Execute agents with given prompts
- Tracing: Track conversation flows and debugging
- Agent receives initial input: "My name's Kamal. I'm an AI engineer..."
- Agent uses MCP tools to store this information in LibSQL
- In a second query, agent asks: "What do you know about me?"
- Agent retrieves stored information from memory using MCP tools
- Agent provides context-aware response based on persisted data
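The flow above can be sketched with a toy in-memory stand-in for the MCP memory tools (the function names here are invented for illustration; real storage goes through LibSQL):

```python
# Toy stand-in for the MCP memory tools.
memory: dict[str, list[str]] = {}

def store_fact(entity: str, fact: str) -> None:
    """What the agent's store tool call boils down to."""
    memory.setdefault(entity, []).append(fact)

def recall_facts(entity: str) -> list[str]:
    """What the agent's recall tool call boils down to."""
    return memory.get(entity, [])

# Turn 1: the agent stores what the user said.
store_fact("Kamal", "AI engineer")

# Turn 2: "What do you know about me?" resolves via recall.
print(recall_facts("Kamal"))
```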
- The project uses async/await patterns for non-blocking MCP server communication
- Rich markdown formatting is used for console output
- The memory database persists between runs (stored in `memory/kamal.db`)
- Node options are configured to suppress deprecation warnings
To extend this project:
- Add more MCP tools: Modify the agent configuration
- Enhance memory schema: Update LibSQL database structure
- Multi-agent coordination: Add multiple agents with shared MCP servers
- API integration: Use mcp-server-fetch or create custom MCP servers
Issue: MCP server connection timeout
- Ensure Node.js is installed: `node --version`
- Check npx availability: `npx --version`
Issue: LibSQL database locked
- Delete `memory/kamal.db` and let it regenerate
- Ensure only one process accesses the database
Issue: API key errors
- Verify that the `.env` file has a correct `OPENAI_API_KEY`
- Debug with: `python -c "from dotenv import load_dotenv; load_dotenv(); import os; print(os.getenv('OPENAI_API_KEY'))"`
- Model Context Protocol Documentation
- LibSQL Documentation
- Anthropic Models API
- LangChain Documentation
This project is part of learning AI Agents and the MCP protocol. It serves as a practical guide to:
- Understanding MCP architecture
- Building stateful AI agents
- Implementing persistent memory
- Orchestrating multi-turn conversations with tools
Last Updated: April 2026
Status: Active Development