Do Large Language Models Dream of AI Agents?
August 21, 2025

(Wired) – For AI models, knowing what to remember might be as important as knowing what to forget. Welcome to the era of “sleep-time compute.”
Large language models can typically “recall” information only if it is included in the context window. If you want a chatbot to remember your most recent conversation, you have to paste it into the chat.
Most AI systems can handle only a limited amount of information in the context window before their ability to use it degrades and they hallucinate or become confused. The human brain, by contrast, is able to file away useful information and recall it later.
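The two points above can be sketched in a few lines of code. This is a hypothetical illustration, not Letta’s system or any real chat API: it shows a client re-sending prior turns with each request and silently dropping the oldest ones once a context budget is exceeded, which is why the model “forgets.” The names (`build_prompt`, `CONTEXT_BUDGET`) are invented, and word count stands in for real token counting.

```python
# Illustrative sketch: a chatbot only "remembers" what the client packs
# back into the context window. Names and budget are made up for the demo;
# real systems count tokens, not whitespace-separated words.

CONTEXT_BUDGET = 50  # pretend the model accepts at most 50 "tokens" (words)

def build_prompt(history, new_message, budget=CONTEXT_BUDGET):
    """Assemble the text actually sent to the model: newest turns are
    kept; older turns are dropped once the budget is exhausted."""
    turns = history + [new_message]
    kept = []
    used = 0
    for turn in reversed(turns):      # walk from newest to oldest
        cost = len(turn.split())      # crude stand-in for a token count
        if used + cost > budget:
            break                     # everything older is "forgotten"
        kept.append(turn)
        used += cost
    return "\n".join(reversed(kept))

history = [
    "user: My name is Ada and I work on compilers.",
    "assistant: Nice to meet you, Ada!",
    "user: " + "blah " * 45,          # one long turn crowds out the rest
]
prompt = build_prompt(history, "user: What is my name?")
print("Ada" in prompt)                # prints False: the name fell out of context
```

Running this shows the failure mode the article describes: one oversized turn pushes the earlier introduction outside the window, and the model is never shown the user’s name again.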
“Your brain is continuously improving, adding more information like a sponge,” says Charles Packer, Letta’s CEO. “With language models, it’s like the exact opposite. You run these language models in a loop for long enough and the context becomes poisoned; they get derailed and you just want to reset.”