Sid Mangalik and Andrew Clark explore the unique governance challenges of agentic AI systems, highlighting the compounding error rates, security risks, and hidden costs that organizations must address when implementing multi-step AI processes.
Show notes:
• Agentic AI systems require governance at every step: perception, reasoning, action, and learning
• Error rates compound dramatically in multi-step processes: a model that is 90% accurate at each step is only about 66% accurate end-to-end over four steps (0.9⁴ ≈ 0.656)
• Two-way information flow creates new security and confidentiality vulnerabilities; for example, targeted prompting that improves an agent's awareness can come at the cost of task performance. (arXiv, May 24, 2025)
• Traditional governance approaches are insufficient for the complexity of agentic systems
• Organizations must implement granular monitoring, logging, and validation for each component
• Human-in-the-loop oversight is not a substitute for robust governance frameworks
• The true cost of agentic systems includes governance overhead, monitoring tools, and human expertise
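The compounding-error point above can be sketched with a few lines of Python. This is a hypothetical illustration that assumes each step's errors are independent; the function name and numbers are ours, not from the episode.

```python
def end_to_end_accuracy(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in a pipeline succeeds,
    assuming each step's errors are independent."""
    return per_step_accuracy ** steps

# A 90%-accurate model chained over four steps:
acc = end_to_end_accuracy(0.90, 4)
print(f"{acc:.1%}")  # prints 65.6%
```

The same math explains why per-component monitoring matters: small per-step degradations multiply, so end-to-end accuracy falls much faster than any single component's metrics suggest.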
Make sure you check out Part 1: Mechanism design, Part 2: Utility functions, and Part 3: Linear programming. If you're building agentic AI systems, we'd love to hear your questions and experiences. Contact us.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics: