Why I rebuilt Karpathy's AI job exposure map
I rebuilt the project from scratch to understand what it actually measures, where it's useful, and where it breaks down.