What if AI doesn’t just keep getting better forever?
November 12, 2024
(Ars Technica) – New reports highlight fears of diminishing returns for traditional LLM training.
For years, many AI industry watchers have looked at the rapidly growing capabilities of new AI models and predicted that exponential performance gains would continue well into the future. Recently, though, some of that "scaling law" optimism has given way to fears that we may already be hitting a plateau in the capabilities of large language models trained with standard methods.
A weekend report from The Information effectively summarized how these fears are manifesting among a number of insiders at OpenAI.