Agents of change
AI agents can do real work or generate chaos. The difference isn't capability—it's human judgment.
When the precedent hasn’t been set yet, we get to write it
The real constraint on agentic AI isn't model capability—it's governance infrastructure. Organizations treat agentic platforms as LLM deployment vehicles when they need complete enterprise systems with guardrails, evaluation layers, and audit mechanisms built in.
G2 data shows that 60% of companies have AI agents in production with failure rates under 2%, contradicting MIT's prediction that 95% of AI projects would fail. For legal teams, this means governance frameworks can't wait for academic consensus: the systems are already deployed.
Mastercard's Agent Pay creates verifiable authorization trails for AI transactions, embedding accountability directly into payment infrastructure rather than treating it as an afterthought.
Concentric AI found Copilot accessed nearly 3 million confidential records per organization in six months—more than half of all externally shared files. The traceability challenge: documenting which data informed each AI-generated output.
After two days of watching practitioners solve problems that didn't exist two years ago, one pattern stood out. The sessions that landed weren't about perfect frameworks; they were about what works when you're building under constraints.
The deal aims to save millions in costs and to provide a legal shield against copyright lawsuits over public data scraping. But the move, executed post-strike, sharpens the unresolved IP conflict over creator consent for AI training.
OpenAI wants ChatGPT conversations to be legally privileged, but traditional privilege requires professional accountability. For deployers, this creates discovery exposure: your team uses AI to develop strategy, litigation hits, and those conversations may be fully discoverable.