
Memstore vs Building Your Own pgvector Memory Layer

Building your own agent memory with pgvector requires setting up Supabase or Postgres, enabling the vector extension, writing embedding generation code, building a similarity search query, and handling TTL and cleanup. Memstore is a hosted API that does all of this in two REST calls.

This is the most important comparison page because most developers seriously consider building this themselves. Here is an honest assessment.

What DIY pgvector Actually Involves

  1. Enable pgvector extension in your Postgres database
  2. Create a table with a vector column
  3. Write code to call an embedding model (OpenAI, etc.) for every store and recall operation
  4. Write the cosine similarity query
  5. Handle TTL — build a cleanup job for expired memories
  6. Handle session scoping — filter by agent/user/task
  7. Monitor and tune the IVFFlat or HNSW index
  8. Handle the embedding model costs and errors separately
  9. Deploy, maintain, and scale the database

Honest time estimate: 4–6 hours to build v1. Ongoing maintenance: occasional index tuning, keeping up with pgvector updates, handling embedding API failures.
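Step 5 is the piece most teams underestimate. A minimal sketch of the expiry check a cleanup job needs; the `is_expired` and `expired_ids` names, and the `(id, created_at)` row shape, are illustrative, not from any library:

```python
from datetime import datetime, timedelta, timezone

def is_expired(created_at, ttl_seconds, now=None):
    """True if a memory written at created_at has outlived its TTL."""
    now = now or datetime.now(timezone.utc)
    return now - created_at > timedelta(seconds=ttl_seconds)

def expired_ids(rows, ttl_seconds, now=None):
    """rows: iterable of (id, created_at) tuples, e.g. from
    SELECT id, created_at FROM memories. Returns the ids to delete."""
    return [mid for mid, created_at in rows if is_expired(created_at, ttl_seconds, now)]
```

In practice you would run this on a schedule (cron, pg_cron, or a worker) and issue a `DELETE ... WHERE id = ANY(%s)` with the result.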


The Actual Cost Comparison

                            DIY pgvector                     Memstore
Infrastructure cost         $0 (Supabase free) or ~$25/mo    $0–$49/month
Embedding cost              ~$0.001/1,000 ops (OpenAI)       Included
Time to build v1            4–6 hours                        Under 5 minutes
Ongoing maintenance         Yes — index tuning, updates      None
Embedding pipeline errors   You handle them                  Handled for you
Session scoping             You build it                     Built-in
TTL / cleanup               You build it                     Built-in
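To make the DIY column concrete, here is the arithmetic as a sketch. `diy_monthly_cost` is a hypothetical helper that uses the table's figures (~$25/mo infrastructure, ~$0.001 per 1,000 embedding ops) as defaults; plug in your own numbers:

```python
def diy_monthly_cost(ops_per_month, infra_usd=25.0, embed_usd_per_1k=0.001):
    """Rough DIY monthly dollar cost: flat infrastructure fee plus
    per-operation embedding spend at the table's assumed rates."""
    return infra_usd + (ops_per_month / 1000) * embed_usd_per_1k
```

At a million store/recall operations a month, embedding spend is about $1, so infrastructure, not embeddings, dominates the DIY bill.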

Code Comparison

DIY pgvector — schema + Python (simplified)
-- schema
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE memories (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  agent_id UUID,
  content TEXT,
  embedding VECTOR(1536),
  created_at TIMESTAMPTZ DEFAULT NOW()
);

# Python — store
import openai    # assumes OPENAI_API_KEY is set in the environment
import psycopg2

conn = psycopg2.connect("postgresql://...")  # your connection string

def remember(content, agent_id):
    embedding = openai.embeddings.create(
        input=content,
        model="text-embedding-3-small"
    ).data[0].embedding

    # psycopg2 queries go through a cursor; the embedding list is
    # serialized to a string and cast to pgvector's vector type
    with conn.cursor() as cur:
        cur.execute("""
            INSERT INTO memories (agent_id, content, embedding)
            VALUES (%s, %s, %s::vector)
        """, (agent_id, content, str(embedding)))
    conn.commit()

# Python — recall
def recall(query, agent_id, top_k=5):
    embedding = openai.embeddings.create(
        input=query,
        model="text-embedding-3-small"
    ).data[0].embedding

    with conn.cursor() as cur:
        cur.execute("""
            SELECT content, 1 - (embedding <=> %s::vector) AS score
            FROM memories
            WHERE agent_id = %s
            ORDER BY embedding <=> %s::vector
            LIMIT %s
        """, (str(embedding), agent_id, str(embedding), top_k))
        return cur.fetchall()
Memstore — the whole thing
from memstore import Memstore

ms = Memstore(api_key="am_live_...")

ms.remember("User prefers dark mode", session="user_123")
memories = ms.recall("ui preferences", session="user_123")

Honest Verdict

If you have the time and the technical depth, DIY pgvector is absolutely viable. The stack is straightforward and the cost is minimal. Memstore is the right choice when you want memory working in minutes rather than hours, don't want to own index tuning and embedding-pipeline failures, and want session scoping and TTL built in rather than built by you.

Skip the build

Get your free Memstore API key and have memory working in under 5 minutes.

Get your free API key →
