Cognee MCP
Knowledge graph plus vector memory engine for AI agents, exposed as an MCP server with V2 session-aware memory tools (remember, recall, forget, improve) and classic V1 ingestion pipelines (cognify, codify). Three transports: stdio, SSE, Streamable HTTP. 16,965 GitHub stars, Apache-2.0.
INSTALL THIS SERVER
{
  "mcpServers": {
    "cognee": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/cognee/cognee-mcp",
        "run",
        "cognee-mcp",
        "--transport",
        "stdio"
      ],
      "env": {
        "LLM_API_KEY": "<your-llm-key>"
      }
    }
  }
}
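The config above runs the server over stdio. The other two transports are selected with the same flag; a minimal sketch, assuming the flag accepts `sse` (and similarly a Streamable HTTP value) — verify the exact values against the cognee-mcp README:

```json
{
  "mcpServers": {
    "cognee": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/cognee/cognee-mcp",
        "run",
        "cognee-mcp",
        "--transport",
        "sse"
      ],
      "env": {
        "LLM_API_KEY": "<your-llm-key>"
      }
    }
  }
}
```

Note that with SSE or Streamable HTTP the server is typically launched standalone and clients connect to its URL; a process-spawning entry like the one above only fits clients that manage the server process themselves.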
8 TOOLS AVAILABLE
OUR ASSESSMENT
- 100 commits on main in the last 30 days, top cadence in this batch.
- 16,965 GitHub stars and Apache-2.0 licence.
- Three transports: stdio, SSE, Streamable HTTP.
- V2 session-aware memory tools (remember, recall, forget, improve) alongside classic V1 (cognify, codify).
- Background pipeline execution with status polling for long-running ingestion.
- Cloud Mode and API Mode connect to existing Cognee deployments.
- Local file ingestion supports Markdown, source code files, and Cursor rule sets.
- One-call developer-rules bootstrap indexes .cursorrules, .cursor/rules, AGENT.md, and similar files.
- Setup complexity: graph plus vector store configuration (Neo4j, Postgres, or alternatives).
- Documentation is split across the main repo, the cognee-mcp subdirectory, and Cognee Cloud docs.
- Cloud Mode requires a Cognee Cloud account.
- Knowledge graph schema decisions affect retrieval quality; tuning is the responsibility of the operator.
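Because long-running ingestion executes as a background pipeline with status polling, the client side usually needs a small wait loop. A sketch in Python: the terminal status strings here are illustrative assumptions, not confirmed cognee values, and `get_status` stands in for whatever wrapper you write around the server's status tool.

```python
import time

def wait_for_pipeline(get_status, poll_interval=2.0, timeout=300.0):
    """Poll `get_status` until the background pipeline settles or the timeout expires.

    `get_status` is any zero-argument callable returning a status string,
    e.g. a thin wrapper around the MCP server's pipeline-status tool.
    """
    # Hypothetical terminal states; check the actual status values
    # returned by your cognee deployment.
    terminal = {"DATASET_PROCESSING_COMPLETED", "DATASET_PROCESSING_ERRORED"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in terminal:
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"pipeline still running after {timeout}s")
```

For long cognify runs over large corpora, a backing-off poll interval is kinder to the server than a fixed one.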
The MCP server runs with the same data access as the underlying Cognee instance. In Cloud Mode, traffic flows to Cognee Cloud over the --serve-url endpoint; review the Cognee Cloud data policy before sending sensitive content. In API Mode, point at a private Cognee FastAPI on a network the agent host trusts. Local file ingestion reads any path passed to the ingest tools; scope the agent filesystem access accordingly.
Agent builders who want persistent memory across sessions with both graph and vector retrieval; teams with a Cognee Cloud or self-hosted Cognee FastAPI instance who want MCP access to the same memory store the application uses; coding agents that benefit from indexed .cursorrules, AGENT.md, and similar developer-rule files via the developer_rules nodeset.
TECHNICAL DETAILS
ADOPTION METRICS
// 16,965 stars on the parent topoteretes/cognee repo. 100 commits on main in the last 30 days drive the editorial weight.
// First-ranked in ai-ml on commit cadence, star count, and tool surface combined.
SOURCES & VERIFICATION
We don't take any single directory's word for it. Before scoring, we cross-reference 5 public MCP sources, install the server ourselves against the clients we cover, and record when we last re-verified.
The same server, 5 different lenses. We reconcile these signals into our editorial score, which is why our number sometimes diverges from a directory-aggregate star count.
| Source | Their rating | Their star count | Their downloads | Last synced |
|---|---|---|---|---|
| AutomationSwitch (this page) | 4.6 (editorial) | 16,965 | — | MAY 1, 2026 |
| PulseMCP | — unrated | unavailable | unavailable | MAY 1, 2026 |
| MCP.so | — unrated | unavailable | unavailable | MAY 1, 2026 |
| Glama | — unrated | unavailable | unavailable | MAY 1, 2026 |
| Smithery | — unrated | unavailable | unavailable | MAY 1, 2026 |
| Awesome MCP Servers | — unrated | unavailable | unavailable | MAY 1, 2026 |
// Counts are directory-reported; we don't adjust them. Discrepancies usually come from different snapshot times or star-caching.
OTHER AI / ML MCP SERVERS
Codebase Memory MCP
High-performance code intelligence MCP server for AI coding agents. Indexes a codebase into a queryable knowledge graph in milliseconds, with 14 MCP tools spanning structural search, call-chain tracing, impact analysis, dead-code detection, and Cypher queries. Single static C binary, 66 languages via tree-sitter, zero runtime dependencies.
Arize Phoenix MCP
LLM observability platform exposing prompts, projects, traces, spans, sessions, datasets, and experiments through MCP. Published to npm as @arizeai/phoenix-mcp, current 4.0.8 (2026-04-29). 9,496 stars on parent monorepo, Elastic License 2.0.
Qdrant MCP Server
Official Qdrant vector database MCP server. Acts as a semantic memory layer on top of Qdrant: store information with metadata, retrieve via similarity search. Two tools, very small surface area, exceptionally maintained by the Qdrant team. Configurable embedding provider (fastembed default), supports remote and local Qdrant clusters.
Amazon Bedrock AgentCore MCP
Official AWS Labs MCP server for Amazon Bedrock AgentCore: agent runtime, memory, gateway, identity, and observability. Tools fetch curated AgentCore documentation and surface deployment guides for runtime, memory, and gateway resources. Apache-2.0 within awslabs/mcp monorepo (8,924 parent stars).
ElevenLabs MCP
Official ElevenLabs MCP server. Wraps the full ElevenLabs API surface: text-to-speech, voice cloning, speech-to-text, dubbing, sound effect generation, audio isolation, voice design. MIT-licensed, distributed via PyPI as elevenlabs-mcp. Free tier with 10,000 credits per month.
Amazon Bedrock Knowledge Base MCP
Official AWS Labs MCP for Bedrock Knowledge Base retrieval: discover knowledge bases, query with natural language, filter by data source, and rerank results. Apache-2.0 within awslabs/mcp monorepo. Tight tool surface focused on RAG over AWS-managed KBs.
DISCUSS YOUR MCP REQUIREMENTS.
Whether you are evaluating a server, scoping an internal deployment, or working out whether MCP is the right fit at all, start the conversation and we will point you at the right piece of the ecosystem.