David Gerard@awful.systems to TechTakes@awful.systems · English · 1 year ago
Don’t use AI to summarize documents — it’s worse than humans in every way (pivot-to-ai.com)
263 upvotes · 102 comments
hex@programming.dev · 63 points · 1 year ago
“Facts are not a data type for LLMs”
I kind of like this because it highlights the way LLMs operate: kind of blind and drunk, just really good at predicting the next word.
CleoTheWizard@lemmy.world · 28 points · 1 year ago
They’re not good at predicting the next word; they’re good at predicting the next common word while excluding most unique choices. The result is essentially as if you made a Venn diagram of human language and only ever used the center of it.
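The point about common words winning out can be sketched as greedy decoding over a toy next-token distribution. Everything here is hypothetical (the phrase, the candidate words, and the probabilities are made up for illustration, not taken from any real model); it just shows how always picking the single most likely continuation systematically drops the rarer, more distinctive choices.

```python
# Toy next-token distribution: made-up probabilities for one context,
# NOT taken from any real language model.
next_token_probs = {
    "the cat sat on the": {
        "mat": 0.55,      # the common, expected word
        "floor": 0.25,
        "chair": 0.15,
        "chaise": 0.05,   # a rarer, more distinctive word
    },
}

def greedy_next(context: str) -> str:
    """Greedy decoding: always return the single most probable next token."""
    probs = next_token_probs[context]
    return max(probs, key=probs.get)

print(greedy_next("the cat sat on the"))  # prints "mat"
```

Under greedy decoding the word “chaise” can never be produced, no matter how many times you run it; only sampling with some randomness (a temperature above zero) ever reaches the edges of the distribution, which is one way to read the “center of the Venn diagram” remark above.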
hex@programming.dev · 15 points · 1 year ago
Yes, thanks for clarifying what I meant! AI will never create anything unique unless prompted uniquely, and even then it will tend to revert to what you expect most.