AUTOMATIONSWITCH
Community · AI / ML

Cognee MCP

by topoteretes

Knowledge graph plus vector memory engine for AI agents, exposed as an MCP server with V2 session-aware memory tools (remember, recall, forget, improve) and classic V1 ingestion pipelines (cognify, codify). Three transports: stdio, SSE, Streamable HTTP. 16,965 GitHub stars, Apache-2.0.

16,965 stars · 8 tools · Released JAN 2024 · Apache-2.0
uv pip install cognee

Cognee pairs a knowledge graph with a vector memory engine for AI agents, exposed as an MCP server with V2 session-aware memory tools (remember, recall, forget, improve) and classic V1 ingestion pipelines (cognify, codify). It logged 100 commits on main in the last 30 days, the top cadence in this batch, ships three transports (stdio, SSE, Streamable HTTP), and carries 16,965 GitHub stars under a published Apache-2.0 licence.

Reviewed by M. Nouriel · MAY 2026

INSTALL THIS SERVER

{
  "mcpServers": {
    "cognee": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/cognee/cognee-mcp",
        "run",
        "cognee-mcp",
        "--transport",
        "stdio"
      ],
      "env": { "LLM_API_KEY": "<your-llm-key>" }
    }
  }
}
Prerequisites: install cognee via `uv pip install cognee`, clone the repo for the cognee-mcp subdirectory, and configure graph and vector backends via environment variables (Neo4j, Postgres, Qdrant supported).
PyPI: `cognee`
Cloud Mode: set COGNEE_SERVICE_URL to a Cognee Cloud endpoint.
Config path (macOS): ~/Library/Application Support/Claude/claude_desktop_config.json
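The backend wiring described above comes down to a handful of environment variables. Below is a minimal env-file sketch for a Neo4j graph store plus a Qdrant vector store. Only LLM_API_KEY and COGNEE_SERVICE_URL appear on this page; every other variable name is an assumption modeled on typical Cognee setups and should be verified against the cognee docs.

```shell
# LLM key used by the MCP server (name taken from the install snippet above)
export LLM_API_KEY="<your-llm-key>"

# Graph backend -- variable names are assumptions, verify against cognee docs
export GRAPH_DATABASE_PROVIDER="neo4j"
export GRAPH_DATABASE_URL="bolt://localhost:7687"
export GRAPH_DATABASE_USERNAME="neo4j"
export GRAPH_DATABASE_PASSWORD="<neo4j-password>"

# Vector backend -- likewise assumptions
export VECTOR_DB_PROVIDER="qdrant"
export VECTOR_DB_URL="http://localhost:6333"

# Cloud Mode only: route the server at a Cognee Cloud endpoint
# export COGNEE_SERVICE_URL="<cognee-cloud-endpoint>"
```

With these in the environment, the stdio config from the install block needs only the LLM key in its `env` map; the rest is picked up by the server process.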

8 TOOLS AVAILABLE

remember
Write a session-scoped memory entry into the graph
Write
recall
Retrieve session-scoped memory by query
Read
forget
Delete a memory entry from the current session
Write
improve
Refine an existing memory based on new context
Write
cognify
Run the full ingestion pipeline on text or files
Write
codify
Ingest source code into the developer_rules nodeset
Write
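The tool surface above splits into two layers: V2 session memory (remember, recall, forget, improve) and V1 ingestion (cognify, codify). To pin down the V2 contract, here is a toy in-memory model in Python. This is an illustration of the tool semantics only, not cognee's implementation; the naive substring match stands in for the real graph-plus-vector retrieval.

```python
from dataclasses import dataclass, field


@dataclass
class SessionMemory:
    """Toy model of the V2 session-scoped memory tools."""
    entries: dict = field(default_factory=dict)

    def remember(self, key: str, content: str) -> None:
        # Write a session-scoped memory entry.
        self.entries[key] = content

    def recall(self, query: str) -> list[str]:
        # Retrieve by query; substring match stands in for real retrieval.
        return [v for v in self.entries.values() if query.lower() in v.lower()]

    def forget(self, key: str) -> None:
        # Delete an entry from the current session.
        self.entries.pop(key, None)

    def improve(self, key: str, extra: str) -> None:
        # Refine an existing memory with new context.
        if key in self.entries:
            self.entries[key] += " " + extra


mem = SessionMemory()
mem.remember("pref", "User prefers Qdrant as the vector store")
print(mem.recall("qdrant"))  # -> ['User prefers Qdrant as the vector store']
```

Against the real server these calls become MCP tool invocations; the point of the sketch is only the write/read/delete/refine lifecycle the four tools expose.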

OUR ASSESSMENT

Strengths
  • 100 commits on main in the last 30 days, top cadence in this batch.
  • 16,965 GitHub stars and Apache-2.0 licence.
  • Three transports: stdio, SSE, Streamable HTTP.
  • V2 session-aware memory tools (remember, recall, forget, improve) alongside classic V1 (cognify, codify).
  • Background pipeline execution with status polling for long-running ingestion.
  • Cloud Mode and API Mode connect to existing Cognee deployments.
  • Local file ingestion supports .md, source files, Cursor rule sets.
  • One-call developer-rules bootstrap indexes .cursorrules, .cursor/rules, AGENT.md, and similar files.
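The background pipeline execution noted above implies a poll loop on the agent side: kick off cognify, then check status until a terminal state. A minimal sketch, assuming a callable that wraps whatever status tool the server exposes; the exact tool and state names vary by cognee version, so both are stand-ins here.

```python
import time


def wait_for_pipeline(check_status, timeout_s=600, interval_s=5):
    """Poll a long-running ingestion job until it reaches a terminal state.

    check_status is any zero-arg callable returning a status string; with
    cognee-mcp it would wrap a status tool call (names are assumptions).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = check_status()
        if status == "DATASET_PROCESSING_COMPLETED":
            return status
        if status == "DATASET_PROCESSING_ERRORED":
            raise RuntimeError(f"pipeline ended in state {status!r}")
        time.sleep(interval_s)
    raise TimeoutError("pipeline did not finish in time")


# Example with a stub that completes on the third poll:
states = iter(["DATASET_PROCESSING_STARTED", "DATASET_PROCESSING_STARTED",
               "DATASET_PROCESSING_COMPLETED"])
print(wait_for_pipeline(lambda: next(states), interval_s=0))
# -> DATASET_PROCESSING_COMPLETED
```

Keeping the interval at a few seconds avoids hammering the server during the long graph-build phase of a large cognify run.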
Weaknesses
  • Setup complexity: graph plus vector store configuration (Neo4j, Postgres, or alternatives).
  • Documentation is split across the main repo, the cognee-mcp subdirectory, and Cognee Cloud docs.
  • Cloud Mode requires a Cognee Cloud account.
  • Knowledge graph schema decisions affect retrieval quality; tuning is the responsibility of the operator.
Security Notes

The MCP server runs with the same data access as the underlying Cognee instance. In Cloud Mode, traffic flows to Cognee Cloud over the --serve-url endpoint; review the Cognee Cloud data policy before sending sensitive content. In API Mode, point at a private Cognee FastAPI on a network the agent host trusts. Local file ingestion reads any path passed to the ingest tools; scope the agent filesystem access accordingly.

Best For

Agent builders who want persistent memory across sessions with both graph and vector retrieval; teams with a Cognee Cloud or self-hosted Cognee FastAPI instance who want MCP access to the same memory store the application uses; coding agents that benefit from indexed .cursorrules, AGENT.md, and similar developer-rule files via the developer_rules nodeset.

TECHNICAL DETAILS

Language
python
Transport
stdio · sse · streamable-http
Clients
Claude Desktop · Claude Code · Cursor · VS Code · Windsurf
License
Apache-2.0
GitHub
topoteretes/cognee · ★ 16,965
PyPI
cognee
Last Release
v1.0.4.dev0 · APR 25, 2026
First Released
JAN 15, 2024

ADOPTION METRICS

// GitHub Stars
16,965

// Reading this
16,965 stars on the parent topoteretes/cognee repo. 100 commits on main in the last 30 days drive the editorial weight.

// Popularity Rank
#1
Globally · #1 in AI / ML

// Reading this
First-ranked in ai-ml on commit cadence, star count, and tool surface combined.

SOURCES & VERIFICATION

We don't take any single directory's word for it. Before scoring, we cross-reference 5 public MCP sources, install the server ourselves against the clients we cover, and record when we last re-verified.

01
Discovered
Manual submission
First indexed MAY 1, 2026
02
Cross-referenced
5 directories
PulseMCP, MCP.so, Glama, Smithery, Awesome MCP Servers
03
Verified against
Claude Desktop, Claude Code, Cursor
Installed and tested across clients
04
Last re-checked
MAY 1, 2026
Weekly re-verification
// How other directories see it

The same server, 5 different lenses. We reconcile these signals into our editorial score, which is why our number sometimes diverges from a directory-aggregate star count.

Source | Their rating | Their star count | Their downloads | Last synced
AutomationSwitch (this page) | 4.6 (editorial) | 16,965 | — | MAY 1, 2026
PulseMCP | unrated | unavailable | unavailable | MAY 1, 2026
MCP.so | unrated | unavailable | unavailable | MAY 1, 2026
Glama | unrated | unavailable | unavailable | MAY 1, 2026
Smithery | unrated | unavailable | unavailable | MAY 1, 2026
Awesome MCP Servers | unrated | unavailable | unavailable | MAY 1, 2026

// Counts are directory-reported; we don't adjust them. Discrepancies usually come from different snapshot times or star-caching.

OTHER AI / ML MCP SERVERS

Community · 4.6

Codebase Memory MCP

DeusData

High-performance code intelligence MCP server for AI coding agents. Indexes a codebase into a queryable knowledge graph in milliseconds, with 14 MCP tools spanning structural search, call-chain tracing, impact analysis, dead-code detection, and Cypher queries. Single static C binary, 66 languages via tree-sitter, zero runtime dependencies.

9 tools · 1,929
Vendor · 4.5

Arize Phoenix MCP

Arize AI

LLM observability platform exposing prompts, projects, traces, spans, sessions, datasets, and experiments through MCP. Published to npm as @arizeai/phoenix-mcp, current 4.0.8 (2026-04-29). 9,496 stars on parent monorepo, Elastic License 2.0.

8 tools · 9,496
Vendor · 4.5

Qdrant MCP Server

Qdrant

Official Qdrant vector database MCP server. Acts as a semantic memory layer on top of Qdrant: store information with metadata, retrieve via similarity search. Two tools, very small surface area, exceptionally maintained by the Qdrant team. Configurable embedding provider (fastembed default), supports remote and local Qdrant clusters.

2 tools · 1,373
Official · 4.4

Amazon Bedrock AgentCore MCP

AWS Labs

Official AWS Labs MCP server for Amazon Bedrock AgentCore: agent runtime, memory, gateway, identity, and observability. Tools fetch curated AgentCore documentation and surface deployment guides for runtime, memory, and gateway resources. Apache-2.0 within awslabs/mcp monorepo (8,924 parent stars).

5 tools · 8,924
Vendor · 4.3

ElevenLabs MCP

ElevenLabs

Official ElevenLabs MCP server. Wraps the full ElevenLabs API surface: text-to-speech, voice cloning, speech-to-text, dubbing, sound effect generation, audio isolation, voice design. MIT-licensed, distributed via PyPI as elevenlabs-mcp. Free tier with 10,000 credits per month.

7 tools · 1,333
Official · 4.1

Amazon Bedrock Knowledge Base MCP

AWS Labs

Official AWS Labs MCP for Bedrock Knowledge Base retrieval: discover knowledge bases, query with natural language, filter by data source, and rerank results. Apache-2.0 within awslabs/mcp monorepo. Tight tool surface focused on RAG over AWS-managed KBs.

4 tools · 8,924
// Get in touch

DISCUSS YOUR
MCP REQUIREMENTS.

Whether you are evaluating a server, scoping an internal deployment, or working out whether MCP is the right fit at all, start the conversation and we will point you at the right piece of the ecosystem.

Discuss Your MCP Requirements →