AI does not dream. Nor does it imagine. It doesn't autonomously generate other-worldly subconscious mashups of childhood memories and unfinished thoughts, and it never wakes up wondering why it was fighting a giant shrimp in its old high school. However, in its own way, AI hallucinates—and that is where things get interesting.
Neural networks and digital hallucinations
AI works by recognising patterns. Feed it enough data, and it starts predicting what comes next. But sometimes, it over-interprets—seeing things that are not there, filling in gaps with its own twisted logic. That is how Google’s DeepDream often turns ordinary pictures into psychedelic nightmares, cranking up pattern recognition until everything looks like a swirling mass of eyes, gore and strange organic shapes.
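The mechanism behind DeepDream can be sketched in a few lines. The real system runs gradient ascent on a photo to maximise the activations of a trained convolutional network; the toy version below (a hypothetical simplification, not DeepDream's actual code) uses a single linear "filter" so the gradient is trivial, but the core loop is the same: repeatedly nudge the image toward whatever pattern the network responds to, until that pattern is hallucinated everywhere.

```python
import numpy as np

# Toy sketch of DeepDream's core loop (simplified, illustrative only):
# gradient ascent on an "image" to maximise one filter's response.
# With activation = sum(filter * image), the gradient with respect to
# the image is just the filter itself, so each step pushes the image
# toward the pattern the filter "wants" to see.

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))          # stand-in for a photo
pattern = rng.normal(size=(8, 8))        # stand-in for a learned filter

def activation(img):
    return float(np.sum(pattern * img))  # how strongly the filter "fires"

before = activation(image)
for _ in range(20):                      # the "dream" loop
    grad = pattern                       # d(activation)/d(image)
    image = image + 0.1 * grad           # gradient ascent step
after = activation(image)

print(after > before)  # the filter's pattern is now baked into the image
```

In the real DeepDream, the filter is a deep layer of Google's Inception network and the gradient comes from backpropagation, which is why the amplified patterns look like eyes and organic swirls rather than noise.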
Other AI models “dream” in different ways. Language AI predicts text sequences, sometimes spitting out surreal, poetic nonsense. Generative adversarial networks (GANs) produce faces of people who never existed. This is not memory or imagination—it is probability gone rogue.
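"Probability gone rogue" is easy to demonstrate at miniature scale. The toy bigram model below (a hypothetical illustration, nothing like a real LLM's architecture) only learns which word tends to follow which, then samples forward; because it has statistics but no meaning, even a tiny corpus produces fluent-sounding nonsense.

```python
import random

# A tiny bigram "language model" (toy illustration): it records which
# word follows which in the training text, then generates by sampling
# the next word purely from those observed transitions.

corpus = ("the machine dreams of electric sheep "
          "the sheep dreams of the machine").split()

# Count bigram transitions: word -> list of observed next words
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def babble(start, length, seed=0):
    random.seed(seed)                    # reproducible "dreaming"
    words = [start]
    for _ in range(length - 1):
        # pick a successor by raw frequency; fall back to any word
        words.append(random.choice(table.get(words[-1], corpus)))
    return " ".join(words)

print(babble("the", 8))
```

Every sentence it emits is locally plausible (each word really did follow the previous one somewhere in the corpus) yet globally untethered, which is the bigram-scale version of a language model's surreal, poetic nonsense.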
How generative AI annoys purist artists everywhere
- AI-generated art: Neural networks like DeepDream and DALL·E twist ordinary images into surreal landscapes. It is impressive, unsettling and sometimes feels like looking into a machine’s subconscious.
- Music that writes itself: AI tools like AIVA are composing symphonies, mimicking everything from Beethoven to synthwave. They do not “feel” music, but they know what should come next.
- AI in film and storytelling: Directors and writers use AI-generated scripts as creative fuel. Some results are hauntingly strange. Others are just bad, but give it time.
Will AI ever truly dream?
Dreams are not just random subconscious concoctions—they are emotions, memories, fears and desires, all stitched together. AI does not have the life experience to manifest any of that. But scientists are experimenting with AI sleep cycles, running simulated rest states where networks reorganise themselves, much like a human brain.
If future AI gets closer to cognitive processing, it might start forming its own version of dreams—patterned, logic-driven, but still alien to us. Until then, AI’s “dreams” are really just high-tech hallucinations. I posed the question—appropriately in this case—to everyone's go-to large language model, ChatGPT: "Do you dream—or will you ever develop the capacity to dream?"
The answer was blunt on the first count and inconclusive on the second:
"Nope. No subconscious, no late-night existential crises, no REM sleep. But I can simulate dream logic, spin surreal ideas or mash up concepts in ways that feel dreamlike.
"Could I ever develop the capacity to dream? Maybe—if we figure out neuromorphic computing, where AI does not just analyse data but processes it like a brain. That is the long game. For now, AI’s dreams are just cold, calculated guesses wrapped in digital weirdness."
Neuromorphic computing, or "brain-inspired" computing, is the subject of much debate and of highly varied research approaches. But IBM, among other observers, suggests that some developers aren't that far off. For now, the jury is out. If AI maintains the momentum it has shown over the last five years, though, as some commentators posit, we may see The Singularity within this decade: a sobering thought.
Stephen Hawking said to the BBC, “The development of full artificial intelligence could spell the end of the human race... It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
OpenAI CEO Sam Altman is more optimistic, saying at Davos in 2024: "Let society and the technology co-evolve, and sort of step-by-step with a very tight feedback loop and course correction, build these systems that deliver tremendous value while meeting safety requirements."
If AI ever develops the capacity to dream, perhaps developers should consider the distinction between dreams and nightmares. Let's hope for the former.