David Gerard @awful.systems (mod) to TechTakes @awful.systems · English · 3 months ago
Don’t use AI to summarize documents — it’s worse than humans in every way (pivot-to-ai.com)
hex @programming.dev · 3 months ago

“Facts are not a data type for LLMs”

I kind of like this line because it highlights how LLMs operate: kind of blind and drunk, just really good at predicting the next word.
CleoTheWizard @lemmy.world · 3 months ago

They’re not good at predicting the next word; they’re good at predicting the next common word while excluding most unique choices. The result is essentially what you’d get if you made a Venn diagram of human language and only ever used the center of it.
hex @programming.dev · 3 months ago

Yes, thanks for clarifying what I meant! AI will never create anything unique unless prompted uniquely, and even then it will tend to revert to what you expect most.
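The “only ever used the center of the Venn diagram” effect described above can be sketched with a toy softmax over made-up next-token scores (the candidate words and logits here are purely illustrative, not taken from any real model): lowering the sampling temperature concentrates probability on the common words and squeezes rare choices out almost entirely.

```python
import math

# Illustrative, made-up logits a model might assign to candidate
# next words -- common words get much higher scores than rare ones.
logits = {"the": 5.0, "a": 4.2, "this": 3.1, "sesquipedalian": 0.3}

def next_token_probs(logits, temperature=1.0):
    """Softmax over logits; lower temperature sharpens the distribution
    toward the highest-scoring (most common) words."""
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return {word: e / total for word, e in zip(logits, exps)}

for t in (1.0, 0.5):
    p = next_token_probs(logits, t)
    # At lower temperature, "the" gains probability mass and the
    # rare word's share shrinks toward zero.
    print(t, round(p["the"], 3), p["sesquipedalian"])
```

At temperature 1.0 the rare word still gets a sliver of probability; at 0.5 it is effectively excluded, which is the “center of the Venn diagram” behavior the comment describes.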