Why do AI’s fabricated memories “feel” so true?
Hotel Bar Sessions is currently between seasons, and while our co-hosts are hard at work researching and recording next season’s episodes, we don’t want to leave our listeners without content! So, as we have in the past, we’ve given each co-host the opportunity to record a “Minibar” episode: think of it as a shorter version of our regular conversations, only this time the co-host is stuck inside their hotel room with whatever is left in the minibar… and you are their only conversant!
AI engineers and designers are currently, and rightly, focused on minimizing the deleterious effects of AI’s three primary “memory problems”: hallucinations, catastrophic forgetting, and bias. But in this Minibar episode, HBS co-host Leigh M. Johnson argues that none of these problems can be design-engineered away. They are, according to Johnson, baked-in and unavoidable structural elements of any language-based system reliant on an archive.
Borrowing from Jacques Derrida’s work on archives, language, and memory, Johnson argues that we should think more seriously about the manner in which LLMs’ outputs come to us cloaked in the garb of memory. We take AI hallucinations, for example, to be true because they inspire in us a feeling of nostalgia… something that we could have remembered, perhaps even should have remembered, but didn’t.
Or didn’t we?
In this episode, Leigh references the following thinkers/ideas/texts/etc.:
- Marcel Proust, Remembrance of Things Past, Volume 1 (1913)
- Our Hotel Bar Sessions episode (Season 14, Episode 209) on “Nostalgia”
- Jacques Derrida, Spectres of Marx (1993)
- Jacques Derrida, Archive Fever: A Freudian Impression (1995)
- AI “hallucinations”
- “You Can’t Lick a Badger Twice: Google Failures Highlight a Fundamental AI Flaw” (Wired, 2025)
- AI “grounding”
- AI “catastrophic forgetting”
- Thórisson, Kristinn R., Jordi Bieger, Xiang Li, and Pei Wang. “Cumulative Learning.” In Artificial General Intelligence: 12th International Conference, AGI 2019, Shenzhen, China, August 6–9, 2019, Proceedings, edited by Patrick Hammer, Pulin Agrawal, Ben Goertzel, and Matthew Iklé, 198–208.
- Jacques Derrida, Margins of Philosophy (1982)
- LLM “context windows”
- AI “bias”
- Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines (2023)
- Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” (Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:77-91, 2018)
- Timnit Gebru and Remi Denton, Beyond Fairness in Computer Vision: A Holistic Approach to Mitigating Harms and Fostering Community-Rooted Computer Vision Research (2024)
- Ruha Benjamin, Race after Technology: Abolitionist Tools for the New Jim Code (2019)
- Ruha Benjamin, Viral Justice: How We Grow the World We Want (2022)
- Ruha Benjamin, Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life (2019)
- “Responsible AI”
- “Controllable AI”
- “hauntology”
- Plato, Phaedrus
- Our Hotel Bar Sessions episode (Season 6, Episode 81), with special guest Michael Naas, on “Hospitality”
Like and Follow Hotel Bar Sessions!
Stay current with our most recent episodes, behind-the-scenes updates, announcements, and more! Follow us on your favorite platforms below:
Support Us on Patreon!
Enjoying our conversations? Keep them going by supporting Hotel Bar Sessions on Patreon. Your support helps us bring fresh content, deeper discussions, and exclusive perks for our community.
