CodingAgent is an open-source, CLI-based AI coding agent that works like the best closed-source tools (Claude Code, Codex, and Antigravity) but without the vendor lock-in. It supports any LLM (cloud or local), persists memory across sessions, integrates natively with Git and MCP servers, and stays lightweight enough for real engineering workflows.
- Model Agnostic LLM Support: Works with any LLM out of the box (Claude, GPT, Gemini, Ollama local models, Kimi), switched via a single config line with no code changes required.
- Persistent Multi-Tier Memory: Three-tier memory system (short-term buffer, long-term local vector store, per-project `AGENT.md` scratchpad) that survives session resets and context limits; the agent remembers decisions, patterns, and past bugs.
- Reliable Agent Loop with File & Shell Tools: A clean goal → plan → execute → observe loop with native file read/write, shell command execution, and automatic error recovery. No bloated orchestration framework, just a tight and predictable core loop.
- MCP Integration: First-class Model Context Protocol support so users can plug in GitHub, databases, documentation, or any custom tool server via a simple config entry, with no core code changes needed.
- Optimised Context & File Caching: Files already read in a session are cached and not re-fetched unless modified, drastically cutting token usage and API costs across long sessions.
- User-Editable Prompt Profile: A simple `profile.md` where users describe their tech stack, coding style, and preferences; the agent reads this on every session startup and codes exactly the way you like.
- Git Integration: Native Git awareness that reads commit history and diffs, so the agent understands what changed recently and makes smarter, context-aware suggestions over time.
- Commit-Level Code Review: On-demand security and quality review for any commit or staged diff β lists vulnerabilities, code smells, and risky changes as a clean CLI report when explicitly asked.
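The single-config-line model switch described above might look something like this. The file name and key names here are illustrative assumptions, not the project's actual schema:

```json
{
  "model": "anthropic/claude-sonnet-4",
  "apiKeyEnv": "ANTHROPIC_API_KEY",
  "maxTokens": 8192
}
```

Switching to a local model would then be a one-line change, e.g. setting `"model"` to `"ollama/llama3.1"`, with no code modification.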
- TypeScript
- Commander.js (argument parsing)
- Inquirer (interactive terminal prompts)
- Node.js
- Execa (safe async shell execution)
- Ripgrep + Glob (fast codebase file navigation)
- Simple Git (native Git integration: diffs, history, commits)
- SQLite (local persistent storage, zero external services)
- sqlite-vec (vector search extension for semantic memory)
- In-memory JSON ring buffer (short-term session context)
- Vercel AI SDK (provider-agnostic LLM routing)
- Google Gemini / OpenAI GPT / Anthropic Claude / Ollama / Kimi
- Model Context Protocol SDK (MCP tool + server connections)
- Custom token budget manager + auto-compaction engine
- RAG over local codebase / user prompt profile engineering
- tsup (TypeScript bundler)
- tsx (zero-config TypeScript dev runner)
- Vitest (unit and integration testing)
- ESLint + Prettier (code quality and formatting)
- SWE-bench (agent performance benchmarking)
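The in-memory JSON ring buffer listed above can be sketched as a small fixed-capacity class. The capacity and the `ChatTurn` shape are assumptions for illustration, not the project's actual types:

```typescript
// Sketch of the short-term session buffer as a fixed-capacity ring buffer.
interface ChatTurn {
  role: "user" | "assistant" | "tool";
  content: string;
}

class RingBuffer<T> {
  private items: T[] = [];
  constructor(private readonly capacity: number) {}

  // Append a new item, evicting the oldest once capacity is exceeded.
  push(item: T): void {
    this.items.push(item);
    if (this.items.length > this.capacity) this.items.shift();
  }

  // Oldest-to-newest snapshot, e.g. for building the next prompt.
  toArray(): T[] {
    return [...this.items];
  }
}

const buffer = new RingBuffer<ChatTurn>(3);
buffer.push({ role: "user", content: "fix the bug" });
buffer.push({ role: "assistant", content: "reading file..." });
buffer.push({ role: "tool", content: "file contents" });
buffer.push({ role: "assistant", content: "patch applied" });
console.log(buffer.toArray().length); // 3: the oldest turn was evicted
console.log(buffer.toArray()[0].content); // "reading file..."
```

Evicting oldest-first keeps the most recent turns in context, which is usually what the next LLM call needs most.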
- The CLI Agent (core):
  - Agent loop (goal → plan → execute → observe) is implemented and documented.
  - Works end-to-end with at least one local model (Ollama) and one cloud model (Claude/GPT).
  - Installable via `npx codingagent` with zero manual setup.
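The goal → plan → execute → observe loop above can be sketched as follows. The types, step cap, and toy planner are illustrative assumptions, not the actual implementation:

```typescript
// Minimal sketch of a goal -> plan -> execute -> observe loop.
// A real agent would back the planner with an LLM and the executor with tools.
type Step = { tool: string; input: string };
type Observation = { ok: boolean; output: string };

interface Planner {
  plan(goal: string, history: Observation[]): Step | null; // null = goal reached
}

function runAgent(
  goal: string,
  planner: Planner,
  execute: (s: Step) => Observation,
): Observation[] {
  const history: Observation[] = [];
  for (let turn = 0; turn < 10; turn++) {  // step cap keeps the loop bounded
    const step = planner.plan(goal, history); // plan
    if (step === null) break;                 // goal reached
    const obs = execute(step);                // execute
    history.push(obs);                        // observe
    // A real loop would also recover from obs.ok === false here.
  }
  return history;
}

// Toy planner: read one file, then declare the goal met.
const planner: Planner = {
  plan: (_goal, history) =>
    history.length === 0 ? { tool: "read_file", input: "README.md" } : null,
};
const trace = runAgent("summarise the README", planner, (s) => ({
  ok: true,
  output: `ran ${s.tool}`,
}));
console.log(trace.length); // 1
console.log(trace[0].output); // "ran read_file"
```

Keeping the loop this small is what makes it predictable: each turn is one plan, one tool call, one observation.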
- Multi-LLM Support:
  - Provider-agnostic LLM routing implemented and tested across Claude, OpenAI, Gemini, and Ollama.
  - Model switching works via a single config change with no code modification.
  - Streaming responses work correctly across all supported providers.
- Memory System:
  - Short-term in-session buffer implemented with a token cap.
  - Long-term SQLite vector store persists across sessions.
  - Per-project `AGENT.md` scratchpad is read and written by the agent automatically.
  - File caching prevents re-reading unchanged files within a session.
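The file-caching criterion above (skip re-reads unless the file changed) can be sketched with an mtime check. This is an assumed mechanism for illustration, not the project's actual code:

```typescript
import { readFileSync, statSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Cache file contents keyed by path; invalidate when the mtime changes.
const cache = new Map<string, { mtimeMs: number; content: string }>();

function readCached(path: string): { content: string; hit: boolean } {
  const mtimeMs = statSync(path).mtimeMs;
  const entry = cache.get(path);
  if (entry && entry.mtimeMs === mtimeMs) {
    return { content: entry.content, hit: true }; // unchanged: no re-read
  }
  const content = readFileSync(path, "utf8"); // first read, or file changed
  cache.set(path, { mtimeMs, content });
  return { content, hit: false };
}

// Demo with a temp file.
const file = join(tmpdir(), "codingagent-cache-demo.txt");
writeFileSync(file, "hello");
const first = readCached(file);  // miss: reads from disk
const second = readCached(file); // hit: served from cache
console.log(first.hit, second.hit); // false true
```

Because cached contents are re-sent to the model from memory rather than re-read and re-tokenised, this is where the token and API-cost savings come from.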
- Context & Cost Optimisation:
  - Token budget manager tracks and allocates context per turn.
  - Auto-compaction triggers when context hits 80% of the model limit.
  - User-editable `profile.md` for tech stack and coding style preferences is loaded on every session.
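The 80% trigger above reduces to a simple budget check; the compaction strategy and token heuristic below are assumptions sketched for illustration:

```typescript
// Sketch: decide when to compact based on a fraction of the model's context limit.
const COMPACTION_THRESHOLD = 0.8; // from the criterion above: compact at 80%

function shouldCompact(usedTokens: number, modelLimit: number): boolean {
  return usedTokens >= modelLimit * COMPACTION_THRESHOLD;
}

// Toy compaction: keep the newest turns that fit within half the budget,
// assuming older turns would be summarised into long-term memory instead.
function compact(
  turns: string[],
  countTokens: (s: string) => number,
  budget: number,
): string[] {
  const kept: string[] = [];
  let used = 0;
  for (let i = turns.length - 1; i >= 0; i--) {
    const cost = countTokens(turns[i]);
    if (used + cost > budget / 2) break;
    kept.unshift(turns[i]); // keep newest-first, preserving order
    used += cost;
  }
  return kept;
}

const approxTokens = (s: string) => Math.ceil(s.length / 4); // rough heuristic
console.log(shouldCompact(6400, 8000)); // true: 6400 >= 8000 * 0.8
const kept = compact(["a".repeat(400), "b".repeat(400), "c".repeat(400)], approxTokens, 200);
console.log(kept.length);
```

Compacting down to well under the limit (here, half) leaves headroom so the very next turn does not immediately re-trigger compaction.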
- MCP Integration:
  - MCP client connects to at least GitHub and one custom tool server.
  - New MCP servers can be added via config with no code changes.
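Adding a server via config, per the criterion above, might look like the common MCP config convention below. The key names and file layout are assumptions; the project's actual schema may differ:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}
```

Adding a second server is then another entry under `mcpServers`, with no core code changes.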
- Git Integration:
  - Agent reads commit history and diffs natively.
  - On-demand commit-level code review lists vulnerabilities and risky changes in the CLI.
- The AI/ML components:
  - LLM model selection and configuration are documented.
  - System prompt and user `profile.md` are version-controlled.
  - API keys are stored securely in local config, never hardcoded.
  - Rate limit handling and retry logic implemented for all providers.
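Rate-limit retry logic like the criterion above is typically implemented as exponential backoff. A minimal sketch, where the attempt count, delays, and error handling are illustrative assumptions:

```typescript
// Sketch: retry a provider call with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait 500ms, 1000ms, 2000ms, ... before the next attempt.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError; // all attempts exhausted
}

// Demo: fails twice (as a rate-limited call might), then succeeds.
let calls = 0;
const result = await withRetry(async () => {
  calls++;
  if (calls < 3) throw new Error("429 rate limited");
  return "ok";
}, 3, 1); // tiny base delay to keep the demo fast
console.log(result, calls); // "ok" 3
```

A production version would retry only on retryable errors (e.g. HTTP 429/5xx) and honor a provider's `Retry-After` hint when present.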
- Quality & Open Source:
  - Test coverage for the core agent loop, memory, and tool execution.
  - Benchmarked against SWE-bench or equivalent to prove quality.
  - Contributing guide and onboarding docs published in the repo.
To be added.
To be added.
⭐ Don't forget to star this repository if you find it useful! ⭐
Thank you for considering contributing to this project! Contributions are highly appreciated and welcomed. To ensure smooth collaboration, please refer to our Contribution Guidelines.
This project is licensed under the GNU General Public License v3.0. See the LICENSE file for details.
Thanks a lot for spending your time helping CodingAgent grow. Keep rocking 🔥
Β© 2026 AOSSIE