
CrewAI Agent Memory Between Runs

CrewAI Is Stateless by Default

CrewAI builds multi-agent pipelines where each agent has a role, a goal, and a backstory. But when your crew finishes and the script exits, every piece of context your agents built up is gone. The next time you call crew.kickoff(), all agents start with no knowledge of what happened before.

The problem: your research agent re-runs expensive searches it already completed. Your support agent re-asks customers for information they already provided. Your sales agent forgets where a deal left off. None of this should happen — it's a missing memory layer, not a CrewAI limitation.

CrewAI does include a built-in memory system (memory=True), but it's scoped to a single crew run and uses local embeddings. It doesn't persist across runs, can't be shared across deployments, and isn't queryable by user or session. For production persistence, you need an external store.

Multi-Agent Shared Memory

One of the most powerful things about adding an external memory layer to CrewAI is that all agents in your crew — and across different crews — can share the same memory store. An agent that discovers a key fact in one run makes it available to every future run automatically.

Per-user memory

Scope memories by session=user_id to give each user's crew its own isolated history — no cross-contamination.

Shared team memory

Omit the session parameter (or use a shared key) so all agents in a pipeline can read and write to a common knowledge base.

Agent-specific memory

Scope by session=agent_role to keep one agent's memory separate — useful for specialist agents with their own knowledge.

Cross-run continuity

Store crew outputs after every kickoff. The next run recalls what was done, what worked, and what to avoid repeating.
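The four scoping patterns above can be sketched with a minimal in-memory stand-in for the store. This `InMemoryStore` class is a hypothetical stub for illustration only (the real SDK is `Memstore`, and its `recall` does semantic search rather than substring matching):

```python
from collections import defaultdict

class InMemoryStore:
    """Hypothetical stand-in for an external memory store, keyed by session."""
    def __init__(self):
        self._data = defaultdict(list)

    def remember(self, content: str, session: str = "shared") -> None:
        self._data[session].append(content)

    def recall(self, query: str, session: str = "shared") -> list:
        # Naive substring match stands in for semantic search
        return [c for c in self._data[session] if query.lower() in c.lower()]

store = InMemoryStore()

# Per-user memory: isolated by user_id
store.remember("Prefers weekly email digests", session="user_42")

# Shared team memory: one session key for the whole pipeline
store.remember("Q1 launch target is March 15", session="project_alpha")

# Agent-specific memory: scoped by agent role
store.remember("Best source for pricing data: 10-K filings", session="Senior Researcher")

# Scopes stay isolated: a user query never surfaces team or agent memories
assert store.recall("digest", session="user_42") == ["Prefers weekly email digests"]
assert store.recall("digest", session="project_alpha") == []  # no cross-contamination
```

The only design decision is the value you pass as `session`; everything else about the read/write pattern is identical across the four scopes.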

Quick Test — curl First

Get your free API key at memstore.dev and verify the round-trip before integrating with CrewAI:

Store a crew output (curl)
curl -X POST https://memstore.dev/v1/memory/remember \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "Competitor analysis complete: Acme Corp lacks API docs",
      "session": "project_alpha"}'
Recall on next run (curl)
curl "https://memstore.dev/v1/memory/recall?q=competitor+analysis&session=project_alpha" \
  -H "Authorization: Bearer YOUR_API_KEY"

# {"memories":[{"content":"Competitor analysis complete: Acme Corp lacks API docs","score":0.96}]}

Python Integration

Install (bash)
pip install memstore
CrewAI crew with persistent memory (Python)
from crewai import Agent, Task, Crew
from memstore import Memstore

ms = Memstore(api_key="am_live_...")

def run_research_crew(project_id: str, goal: str) -> str:
    # 1. Recall what this crew has already done
    past_work = ms.recall(goal, session=project_id)
    context = "\n".join(f"- {m['content']}" for m in past_work)

    # 2. Inject memory into agent backstory
    researcher = Agent(
        role="Senior Researcher",
        goal=goal,
        backstory=f"""You are an expert researcher.

Prior work on this project:
{context or "No prior research — this is the first run."}

Do not repeat work already listed above.""",
        verbose=False,
        allow_delegation=False,
    )

    writer = Agent(
        role="Content Writer",
        goal="Summarise research findings clearly",
        backstory=f"Project context:\n{context or 'First run.'}",
        verbose=False,
    )

    research_task = Task(
        description=f"Research the following goal thoroughly: {goal}",
        agent=researcher,
        expected_output="Detailed research findings",
    )
    write_task = Task(
        description="Write a clear summary of the research findings",
        agent=writer,
        expected_output="Executive summary",
    )

    # 3. Run the crew
    crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
    result = crew.kickoff()

    # 4. Store the output for future runs
    ms.remember(f"Research completed for '{goal[:80]}': {str(result)[:300]}", session=project_id)

    return str(result)
Key pattern: inject memories into backstory before the crew runs, and call remember() after crew.kickoff() returns. Two lines wrap the entire crew execution.
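The injection step can be factored into a small pure helper so every agent in the crew builds its backstory the same way. This is a sketch; `format_prior_work` is a hypothetical name, not part of the Memstore SDK, and it assumes `recall()` returns dicts with a `content` key as shown above:

```python
def format_prior_work(memories: list, base: str = "You are an expert researcher.") -> str:
    """Build an agent backstory that embeds recalled memories.

    `memories` is the list of dicts returned by recall(); an empty list
    produces a first-run backstory instead.
    """
    if memories:
        bullets = "\n".join(f"- {m['content']}" for m in memories)
        prior = f"Prior work on this project:\n{bullets}"
    else:
        prior = "No prior research - this is the first run."
    return f"{base}\n\n{prior}\n\nDo not repeat work already listed above."

backstory = format_prior_work([{"content": "Competitor analysis complete"}])
```

With a helper like this, adding a third or fourth agent to the crew does not mean copy-pasting the memory-injection boilerplate into each backstory string.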

Shared Memory Across Multiple Crews

When you have multiple specialised crews that hand off to each other — a research crew feeding a writing crew, a planning crew feeding an execution crew — they all read from and write to the same Memstore namespace. No message-passing required.

Two crews sharing memory via a project session (Python)
SESSION = "project_q1_launch"

# Crew 1: Research
research_result = run_research_crew(SESSION, "Analyse competitor pricing")
ms.remember(f"Pricing analysis: {research_result[:200]}", session=SESSION)

# Crew 2: Strategy — automatically sees Crew 1's output
strategy_result = run_strategy_crew(SESSION, "Build go-to-market plan")
# recall() inside run_strategy_crew will surface the pricing analysis
# without any explicit data passing between the two crew calls
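The handoff can be demonstrated end-to-end with a hypothetical in-memory stand-in (the real calls are `ms.remember(...)` and `ms.recall(...)`, and the real `recall` is semantic rather than substring-based):

```python
# Hypothetical dict-backed stand-in for the external store
memories: dict = {}

def remember(content: str, session: str) -> None:
    memories.setdefault(session, []).append(content)

def recall(query: str, session: str) -> list:
    return [c for c in memories.get(session, []) if query.lower() in c.lower()]

SESSION = "project_q1_launch"

# Crew 1 finishes its run and writes the result
remember("Pricing analysis: Acme Corp undercuts us by 12%", session=SESSION)

# Crew 2 starts later, in a completely separate call,
# and recalls the handoff by querying the shared session
context = recall("pricing", session=SESSION)
assert context == ["Pricing analysis: Acme Corp undercuts us by 12%"]
```

The crews never exchange objects or messages; the shared `session` value is the entire handoff mechanism.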

What to Store After Each Run

Not everything needs to be stored; be selective. Good candidates:

Final task outputs, compressed to a short summary
Key facts an agent discovered (sources, figures, competitor details)
Decisions made and the reasoning behind them
Dead ends and failed approaches, so future runs skip them
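Storing a compact, labeled entry rather than the full crew output keeps recall results focused and cheap. A minimal sketch of the truncation pattern used in `run_research_crew` above (`make_memory_entry` is a hypothetical helper name):

```python
def make_memory_entry(goal: str, result: str,
                      max_goal: int = 80, max_result: int = 300) -> str:
    """Compress a crew's output into a short, recallable memory entry.

    Truncating both the goal and the result bounds the entry size, so a
    verbose crew output never bloats the memory store.
    """
    return f"Research completed for '{goal[:max_goal]}': {result[:max_result]}"

entry = make_memory_entry("Analyse competitor pricing",
                          "Acme Corp lacks API docs. " * 40)
```

For long outputs you might instead summarise with an LLM before storing; the important part is that what goes into the store is short enough to be useful when it comes back out of `recall()`.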

Give your CrewAI agents a memory

Free tier — 1,000 ops/month. No credit card. Persistent memory in under 10 minutes.

Get your API key →

Further Reading