LLMs don't store personal data like databases do, privacy expert argues
A new analysis from the Future of Privacy Forum questions assumptions about how Large Language Models handle personal data. Yeong Zee Kin, CEO of the…
When the precedent hasn’t been set yet, we get to write it
While firms debate ethics opinions, the technology is already reshaping how legal work gets done, priced, and delivered.
"Rather than assuming that agents will always execute the best attack strategies known to humans, we demonstrate how knowledge of an agent's actual capability profile can inform proportional control evaluations, resulting in more practical and cost-effective control measures."
Technical accuracy gets you to functional. User comprehension gets you to transformational.
"Getting it right means competitive advantage; getting it wrong means expensive operational theater that impresses no one."
"For AI product development, speed bumps aren't obstacles to deployment—they're the infrastructure that makes rapid, responsible deployment possible."
AI agents are shifting from copilots to autopilots, and Noam Kolt warns that their speed, opacity, and autonomy demand governance rooted in inclusivity, visibility, and liability: urgent work for product and legal teams before regulation arrives.
The intersection of AI agents and enterprise accountability fascinates me, particularly the challenge of building systems that can operate autonomously while maintaining complete audit trails and decision traceability.
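The pairing of autonomy with "complete audit trails and decision traceability" can be sketched as an append-only decision log that every agent action passes through. This is a hypothetical illustration, not any existing framework's API; the `AuditTrail` class, `DecisionRecord` fields, and the example action names are all invented for this sketch.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

# Hypothetical sketch of decision traceability for an autonomous agent.
# All names here are illustrative, not drawn from a real agent framework.

@dataclass
class DecisionRecord:
    decision_id: str   # unique ID so any outcome can be traced back
    action: str        # what the agent chose to do
    inputs: dict       # the context the decision was based on
    output: str        # the result the agent produced
    timestamp: float   # when the decision was made

class AuditTrail:
    """Append-only log so each autonomous decision stays reviewable."""

    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, action: str, inputs: dict, output: str) -> DecisionRecord:
        rec = DecisionRecord(
            decision_id=str(uuid.uuid4()),
            action=action,
            inputs=inputs,
            output=output,
            timestamp=time.time(),
        )
        self._records.append(rec)
        return rec

    def export(self) -> str:
        # Serialize the full trail for auditors or compliance review.
        return json.dumps([asdict(r) for r in self._records], indent=2)

# Example: an agent routes a ticket and the decision is logged as it happens.
trail = AuditTrail()
trail.record("classify_ticket", {"ticket_id": 42}, "route_to_legal")
```

The design choice worth noting is that the agent never writes to the trail directly; decisions only enter the log through `record`, which stamps the ID and time itself, so the trail reflects what actually happened rather than what the agent claims happened.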