175 - The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications (Part 1)

Released Wednesday, 6th August 2025
In this episode of Experiencing Data, I introduce part 1 of my new MIRRR UX framework for designing trustworthy agentic AI applications: the kind that might actually get used and thus have the opportunity to create the business value everyone seeks! One of the biggest challenges with traditional analytics and ML, and now with LLM-driven AI agents, is getting end users and stakeholders to trust and use these data products, especially when we’re asking the humans in the loop to change their behavior or ways of working.

In this episode, I challenge the idea that software UIs will vanish with the rise of AI-based automation. In fact, the MIRRR framework is built on the idea that AI agents should be “in the human loop,” and that in many situations a control surface (a user interface) is essential if automated workers are to engender trust with their human overlords.

By properly considering the control and oversight that end users and stakeholders need, you can enable the business value and UX outcomes that your paying customers and application users seek from agentic AI.

Drawing on use cases from insurance claims processing, I introduce in this episode the first two of the five control points in the MIRRR framework: Monitor and Interrupt. These control points represent core actions that define how AI agents should operate and interact within human systems (see the sketch after this list):

  • Monitor – enabling appropriate transparency into AI agent behavior and performance
  • Interrupt – designing both manual and automated pausing mechanisms to ensure human oversight remains possible when needed
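
To make these two control points a bit more concrete, here is a minimal Python sketch of what a Monitor/Interrupt control surface for a claims-processing agent might look like. To be clear, this is my illustration only: the ClaimsAgent class, the MonitorLog event log, the pause flag, and the claim IDs are all hypothetical names invented for this example, not anything prescribed by the MIRRR framework or described in the episode.

  import threading
  import time
  from dataclasses import dataclass, field
  from datetime import datetime, timezone

  @dataclass
  class MonitorLog:
      # Monitor: an append-only event log that a UI could render either as
      # per-task detail or as aggregate performance views.
      events: list = field(default_factory=list)

      def record(self, task_id: str, status: str, detail: str = "") -> None:
          self.events.append({
              "at": datetime.now(timezone.utc).isoformat(),
              "task_id": task_id,
              "status": status,
              "detail": detail,
          })

  class ClaimsAgent:
      # Hypothetical claims-triage agent. Interrupt is modeled as a pause
      # flag that a human reviewer or an automated guardrail can trip.
      def __init__(self, monitor: MonitorLog):
          self.monitor = monitor
          self._paused = threading.Event()

      def interrupt(self, reason: str) -> None:
          self._paused.set()
          self.monitor.record("agent", "interrupted", reason)

      def resume(self) -> None:
          self._paused.clear()
          self.monitor.record("agent", "resumed")

      def process(self, claim_ids: list[str]) -> None:
          for claim_id in claim_ids:
              while self._paused.is_set():
                  time.sleep(0.1)  # hold new work until a human resumes
              self.monitor.record(claim_id, "started")
              # ...claim-triage logic would go here...
              self.monitor.record(claim_id, "completed", "auto-approved")

  log = MonitorLog()
  agent = ClaimsAgent(log)
  agent.process(["CLM-001", "CLM-002"])  # made-up claim IDs
  for event in log.events:
      print(event)

The design point in this sketch is that interrupting pauses the agent between tasks rather than killing the process, so the monitoring log stays intact and a human can inspect what happened before resuming.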

 …and stay tuned for part 2 in a couple of weeks, where I’ll wrap up this first version of my MIRRR framework.

Highlights / Skip to:

  • 00:34 Introducing the MIRRR UX framework for designing trustworthy agentic AI applications
  • 01:27 The importance of trust in AI systems and how it is linked to user adoption
  • 03:06 Cultural shifts, AI hype, and growing AI skepticism
  • 04:13 Human-centered design practices for agentic AI
  • 06:48 How understanding your users’ needs does not change with agentic AI, and why trust in agentic applications ties directly to user adoption and value creation
  • 11:32 Measuring success of agentic applications with UX outcomes
  • 15:26 Introducing the first two of five MIRRR framework control points:
    • 16:29 M is for Monitor; understanding the agent’s “performance,” and the right level of transparency end users need, from individual tasks to aggregate views 
    • 20:29 I is for Interrupt; when and why users may need to stop the agent—and what happens next
  • 28:02 Conclusion and next steps

From The Podcast

Are you an enterprise data or product leader seeking to increase the user adoption and business value of your ML/AI and analytical data products? While it is easier than ever to create ML and analytics from a technology perspective, do you find that getting users to use, buyers to buy, and stakeholders to make informed decisions with data remains challenging? If you lead an enterprise data team, have you heard that a “data product” approach can help, but you’re not sure what that means, or whether software product management and UX design principles can really change consumption of ML and analytics?

My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I offer you a consulting product designer’s perspective on why simply creating ML models and analytics dashboards isn’t sufficient to routinely produce outcomes for your users, customers, and stakeholders. My goal is to help you design more useful, usable, and delightful data products by better understanding your users’, customers’, and business sponsors’ needs. After all, you can’t produce business value with data if the humans in the loop can’t or won’t use your solutions.

Every 2 weeks, I release solo episodes and interviews with chief data officers, data product management leaders, and top UX design and research professionals working at the intersection of ML/AI, analytics, design, and product—and now, I’m inviting you to join the #ExperiencingData listenership. Transcripts, 1-page summaries, and quotes are available at: https://designingforanalytics.com/ed

ABOUT THE HOST

Brian T. O’Neill is the Founder and Principal of Designing for Analytics, an independent consultancy helping technology leaders turn their data into valuable data products. He is also the founder of The Data Product Leadership Community. For over 25 years, he has worked with companies including DellEMC, Tripadvisor, Fidelity, NetApp, Roche, Abbvie, and several SAAS startups. He has spoken internationally, giving talks at O’Reilly Strata, Enterprise Data World, the International Institute for Analytics Symposium, Predictive Analytics World, and Boston College. Brian also hosts the highly rated podcast Experiencing Data, advises students in MIT’s Sandbox Innovation Fund, and has been published by O’Reilly Media. He is also a professional percussionist who has backed up artists like The Who and Donna Summer, and he has graced the stages of Carnegie Hall and The Kennedy Center. Subscribe to Brian’s Insights mailing list at https://designingforanalytics.com/list.
