“The chatbot gave wildly different answers to the same math problem, with one version of ChatGPT even refusing to show how it came to its conclusion.”
It’s getting worse. And because it’s a black box model, they don’t know why. The computer science professor here likens it to how human students make mistakes… but human students make mistakes because they don’t have perfect recall, mishear things being told to them, are tired and/or not paying attention… A bunch of reasons that basically relate to having a human body that needs food, rest and water. A thing a computer does not have.
The only reason ChatGPT should be getting math wrong is that it’s getting inputs that are wrong, but without a view into it they can’t figure out where it’s going wrong and who told it the wrong info.
It’s enshittification, then.
I mean, they’ve got to be blowing absurd amounts of money on it. It’s not remotely cheap to build a massively complicated web service at that scale, and eventually the numbers need to start adding up. I’m sure they have several good monetization plans, but not every instance of a business attempting to stop hemorrhaging money is a conspiracy. You’d be doing the exact same thing in their shoes.
Enshittification is not a conspiracy because a conspiracy requires communication and planning. Enshittification is just how idiots act when trying to make money.
I wouldn’t even say that; it’s just the logical end point of capitalism.
Yes.
And it’s extra infuriating that they want to roll this stuff out after making it LESS reliable.
Suddenly Sheev