AI, Testing and Red Teaming, with Peter Garraghan

Released Thursday, 24th July 2025
Artificial intelligence is often described as a "black box": we can see what we put in and what comes out, but not how the model arrives at its results.

And, unlike conventional software, large language models are non-deterministic. The same inputs can produce different results.

This makes it hard to secure AI systems, and to assure their users that they are secure.

There is already growing evidence that malicious actors are using AI to find vulnerabilities, carry out reconnaissance, and fine-tune their attacks.

But the risks posed by AI systems themselves could be even greater.

Our guest this week has set out to secure AI by developing red team testing methods that account for both the nature of AI and the unique risks it poses.

Peter Garraghan is a professor at Lancaster University, and founder and CEO of Mindgard.

Interview by Stephen Pritchard
