Google DeepMind wants to know if chatbots are just virtue signaling
February 19, 2026

(MIT Tech Review) – We need to better understand how LLMs address moral questions if we’re to trust them with more important tasks.
Google DeepMind is calling for the moral behavior of large language models—how they act when asked to serve as companions, therapists, medical advisors, and so on—to be scrutinized with the same rigor as their ability to code or do math.
As LLMs improve, people are asking them to play increasingly sensitive roles in their lives. Agents are starting to take actions on people's behalf, and LLMs may be able to influence human decision-making. And yet nobody knows how trustworthy this technology really is at such tasks.