Will progress in artificial intelligence continue to accelerate, or have we already hit a plateau? Computer scientist Jennifer Golbeck interrogates some of the most high-profile claims about the promises and pitfalls of AI, cutting through the hype to clarify what’s worth getting excited about — and what isn’t.
YES
The transformers these LLMs are built on are more novel than they are efficient. Without repeatability there is little hope for improvement. There isn't enough energy in the world to get to an AGI using a transformer model. We're also running out of LLM-free datasets to train on.
I really love that training LLMs on LLM output has been shown to make them unravel into nonsense. And rather than thinking about that before releasing, all these mega corps had to make a profit in the short term first, and now the Internet is polluted with LLM output everywhere. I don't know that they will be able to train a new version on data any newer than 2021.
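(The degradation this comment refers to is often called "model collapse". Below is a minimal toy sketch of that feedback loop, my own illustration rather than anything from the talk or the research: a 1-D Gaussian stands in for an LLM, and each generation is fit only to samples produced by the previous generation's fit.)

```python
# Toy illustration of the "training on model output" feedback loop.
# A 1-D Gaussian stands in for an LLM; each generation is fit only to
# samples drawn from the previous generation's fitted model.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data from the true distribution N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(1, 31):
    # "Train": maximum-likelihood fit of a Gaussian to the current data.
    mu, sigma = data.mean(), data.std()
    # "Generate": the next generation sees only this model's synthetic output.
    data = rng.normal(loc=mu, scale=sigma, size=200)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")
```

Because each generation only ever sees samples from the previous fit, estimation error compounds: the parameters drift away from the originals and, on average, the fitted variance shrinks, so the tails of the original distribution are gradually lost. That is the simplified mechanism behind the collapse results the comment alludes to.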
We all want progress. But progress means getting nearer to the place where you want to be. And if you have taken a wrong turning, then to go forward does not get you any nearer. If you are on the wrong road, progress means doing an about-turn and walking back to the right road.
Stupid title. LLMs are less of an AI than a trivial control loop is.