Alignment Anxieties & Persuasion Problems

Released Tuesday, 13th May 2025

Dónal and Ciarán continue the 2025 season with a second quarterly update looking at recent themes in AI development. They're pondering doom again, grappling with mounting evidence that AI systems are powerfully persuasive and prone to flattery, even as our ability to meaningfully supervise them seems to be diminishing.

Topics in this episode

  • Can we see how reasoning models reason? If AI is thinking, or sharing information, in something other than human language, how can we check that it's aligned with our values?
  • This interpretability issue is tied to the concept of neuralese - inscrutable machine thoughts!
  • We discuss the predictions and prophetic doom visions of the AI 2027 document
  • The increasing ubiquity, and sometimes invisibility, of AI as it's inserted into other products. Is this more enshittification?
  • AI is becoming a persuasion machine - we look at the recent issues on Reddit's r/ChangeMyView, where researchers skipped good ethics practice but ended up with worrying results
  • We talk about flattery, manipulation, and Eliezer Yudkowsky's AI-Box thought experiment

Resources & Links

You can get in touch with us at hello@enoughaboutai.com - we'd love to hear your questions, comments, or suggestions!
