A wave of developer tooling is collapsing “AI memory” and search infrastructure into simpler, local-first primitives. Pg-here highlights the push to make PostgreSQL disposable and project-scoped, lowering friction for experimenting with data-heavy apps. On the retrieval side, new patterns keep ranking pipelines inside Postgres: retrieve with BM25, then rerank with embeddings for personalization, reducing external services and sync overhead. Meanwhile, AgenticMemory argues that agent long-term memory should be a portable, structured knowledge graph rather than loose notes or vendor-tied vector stores, using a single binary graph file for fast traversal and similarity search.
PgDog, built by Lev and Justin, is a PostgreSQL network proxy that combines connection pooling, load balancing, and sharding without requiring application code changes or database migrations. The team says PgDog is now running in production, with direct-to-shard queries generally reliable, while cross-shard queries remain in progress. Recent additions include in-transit query rewriting to support cross-shard aggregates (count, avg, min/max, variance), sorting/grouping with DISTINCT, and support for more than 10 data types. PgDog also adds atomic cross-shard writes and schema changes via two-phase commit by intercepting COMMIT and issuing PREPARE TRANSACTION/COMMIT PREPARED. It supports replicated “omnisharded” tables, splits multi-row inserts for ORMs, allows sharding-key updates by moving rows, and provides a cross-shard unique ID generator (up to 4 million IDs/sec). Built-in resharding uses Postgres 16 logical replication improvements to move large datasets faster, and the load balancer can shift writes during failover on managed services like AWS RDS/Aurora, Azure, and GCP.
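To make the COMMIT-interception idea concrete, here is a minimal sketch, not PgDog's actual implementation, of how a proxy can turn a client's single COMMIT into a two-phase commit across shards. The shard connections are mocked in-memory, and the names (`Shard`, `TwoPhaseProxy`) are illustrative assumptions; only the PREPARE TRANSACTION / COMMIT PREPARED sequencing reflects what the article describes.

```python
# Sketch: a proxy intercepts COMMIT and runs two-phase commit on every shard.
import uuid

class Shard:
    """Stand-in for one Postgres backend; records the statements it receives."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def execute(self, sql):
        self.log.append(sql)

class TwoPhaseProxy:
    def __init__(self, shards):
        self.shards = shards

    def commit(self):
        # Intercept the client's COMMIT: first PREPARE on every shard, then
        # COMMIT PREPARED everywhere. If any PREPARE fails, every shard can
        # still roll back, which is what makes the cross-shard write atomic.
        gid = f"txn_{uuid.uuid4().hex}"
        for shard in self.shards:
            shard.execute(f"PREPARE TRANSACTION '{gid}'")
        for shard in self.shards:
            shard.execute(f"COMMIT PREPARED '{gid}'")
        return gid

shards = [Shard("shard_0"), Shard("shard_1")]
proxy = TwoPhaseProxy(shards)
gid = proxy.commit()
```

After `commit()`, each shard's log shows the same global transaction id prepared and then committed, in that order.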
Pg-here: Run a local PostgreSQL instance in your project folder with one command
AgenticMemory, an open-source “brain file format” for AI agents, proposes a single binary graph file (.amem) to store an agent’s long-term memory across any LLM. The project argues that common approaches like vector databases, markdown notes, and key-value stores lose structure, fail to preserve reasoning chains, and can create vendor lock-in. In AgenticMemory, each cognitive event—facts, decisions, inferences, and corrections—is stored as a node connected by typed edges such as caused_by, supports, and supersedes. The author claims sub-microsecond writes (276 ns per node), millisecond-scale graph traversal (3.4 ms for five levels in a 100K-node graph), and 9 ms similarity search across 100K nodes, with modest storage (~24 MB/year). It’s built in Rust with zero dependencies and ships with Python and CLI tooling.
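The data model described above, cognitive events as nodes joined by typed edges, can be sketched in a few lines. The real project is a Rust binary format; this Python version only illustrates the node/typed-edge structure and how following one edge type recovers a reasoning chain, and all names here (`MemoryGraph`, `trace`) are assumptions, not AgenticMemory's API.

```python
# Toy in-memory model: nodes are cognitive events, edges carry a type.
from collections import defaultdict

class MemoryGraph:
    def __init__(self):
        self.nodes = {}                 # node_id -> (kind, text)
        self.edges = defaultdict(list)  # node_id -> [(edge_type, dst_id)]

    def add(self, node_id, kind, text):
        self.nodes[node_id] = (kind, text)

    def link(self, src, edge_type, dst):
        self.edges[src].append((edge_type, dst))

    def trace(self, start, edge_type):
        """Follow edges of one type (e.g. 'caused_by') to recover a chain."""
        chain, current = [start], start
        while True:
            nxt = [d for t, d in self.edges[current] if t == edge_type]
            if not nxt:
                return chain
            current = nxt[0]
            chain.append(current)

g = MemoryGraph()
g.add("f1", "fact", "User prefers dark mode")
g.add("d1", "decision", "Default new dashboards to dark theme")
g.add("c1", "correction", "User switched the billing page to light mode")
g.link("d1", "caused_by", "f1")
g.link("c1", "supersedes", "d1")
chain = g.trace("d1", "caused_by")  # why was this decision made?
```

Keeping the edge types explicit is what preserves reasoning chains: a plain vector store could retrieve all three events, but not that the correction supersedes a decision that was itself caused by a fact.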
Ankit Mittal outlined a way to build personalized “retrieve and rerank” search entirely inside PostgreSQL, avoiding external stacks like Elasticsearch plus separate ML services. In a January 21, 2026 post (building on ideas he presented at PGConf NYC 2023 and experience at Instacart), Mittal demonstrates a prototype movie search engine using Postgres with ParadeDB. The approach first retrieves the top N candidates (e.g., 100) using BM25 full-text search, then reranks those results with vector-based personalization using cosine similarity against a user profile embedding derived from explicit signals such as 1–5 star ratings. The design aims to cut infrastructure complexity, network latency, and data-sync issues while keeping search and personalization in SQL.
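The two-stage shape of that pipeline can be sketched without a database. In this toy version a crude term-frequency count stands in for ParadeDB's BM25, and tiny hand-made vectors stand in for learned embeddings; only the retrieve-then-rerank structure mirrors the post, and all names and data here are invented for illustration.

```python
# Stage 1: lexical retrieval (BM25 stand-in); stage 2: cosine rerank
# against a user-profile embedding.
import math

def lexical_score(query, doc):
    terms = query.lower().split()
    words = doc.lower().split()
    return sum(words.count(t) for t in terms)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

movies = [
    ("Space War",  "space battle epic war",  [0.9, 0.1]),  # action-leaning
    ("Space Love", "space romance drama",    [0.1, 0.9]),  # romance-leaning
    ("Farm Life",  "quiet farm documentary", [0.5, 0.5]),
]

def search(query, user_profile, top_n=2):
    # Retrieve the top N candidates by lexical score...
    candidates = sorted(movies, key=lambda m: lexical_score(query, m[1]),
                        reverse=True)[:top_n]
    # ...then rerank only those candidates by similarity to the user profile.
    return sorted(candidates, key=lambda m: cosine(m[2], user_profile),
                  reverse=True)

romance_fan = [0.1, 0.9]  # e.g. a profile derived from 1-5 star ratings
results = [title for title, _, _ in search("space", romance_fan)]
# -> ['Space Love', 'Space War']
```

Reranking only the top-N candidates is what keeps the vector math cheap: the expensive similarity computation runs over 100 rows, not the whole corpus, which is also why the in-Postgres version avoids a separate ANN service.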