Google’s New Tool Lets Large Language Models Fact-Check Their Responses
September 13, 2024
(MIT Technology Review) – The tool could help the company in its efforts to embed AI in more and more of its products.
As long as chatbots have been around, they have made things up. Such “hallucinations” are an inherent part of how AI models work, but they are a major problem for companies betting big on AI, such as Google, because they make the models’ responses unreliable.
Google is releasing a tool today to address the issue. Called DataGemma, it uses two methods to help large language models fact-check their responses against reliable data and cite their sources more transparently to users.
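To make the idea concrete, here is a minimal sketch of what fact-checking a model’s output against a trusted data source can look like. This is not DataGemma’s actual API or method: the hard-coded statistics table, the function names, and the tolerance threshold are all hypothetical stand-ins for a real statistical database and claim extractor.

```python
# Minimal sketch of retrieval-grounded fact-checking (hypothetical; not
# DataGemma's actual implementation). A real system would query a live
# statistical database rather than a hard-coded dict.
from dataclasses import dataclass

# Hypothetical trusted data source: statistic name -> (value, citation URL).
TRUSTED_STATS = {
    "world population (billions, 2023)": (8.0, "https://example.org/population"),
}

@dataclass
class CheckedClaim:
    claim: str
    verified: bool
    citation: str | None

def fact_check(statistic: str, model_value: float,
               tolerance: float = 0.05) -> CheckedClaim:
    """Compare a model-generated number against the trusted store.

    Returns the claim annotated with a citation if it matches within a
    relative `tolerance`, or flags it as unverified otherwise.
    """
    if statistic not in TRUSTED_STATS:
        return CheckedClaim(f"{statistic} = {model_value}", False, None)
    true_value, citation = TRUSTED_STATS[statistic]
    verified = abs(model_value - true_value) <= tolerance * abs(true_value)
    return CheckedClaim(f"{statistic} = {model_value}", verified,
                        citation if verified else None)

if __name__ == "__main__":
    # Suppose a model claims the 2023 world population was 8.1 billion.
    result = fact_check("world population (billions, 2023)", 8.1)
    status = "verified" if result.verified else "UNVERIFIED"
    source = f" (source: {result.citation})" if result.citation else ""
    print(f"[{status}] {result.claim}{source}")
```

The sketch illustrates only the checking-and-citing step the article describes; how a production system extracts checkable claims from free-form model output, and which data source it trusts, are separate design questions.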