AbuTahir@lemm.ee to Technology@lemmy.world · English · 1 day ago (edited)
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
El Barto@lemmy.world · 2 days ago:
LLMs deal with tokens. Essentially, predicting a series of bytes. Humans do much, much, much, much, much, much, much more than that.
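[For readers unfamiliar with the "predicting tokens" point above: a minimal toy sketch of next-token prediction, using bigram counts rather than any real LLM architecture — the corpus and greedy decoding here are illustrative assumptions, not how production models actually work.]

```python
# Toy illustration (NOT a real LLM): text generation reduced to
# repeatedly predicting the most likely next token from learned counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# "Train": count which token follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    # Greedy decoding: pick the most frequent continuation seen in training.
    return follows[token].most_common(1)[0][0]

# Generate by feeding each prediction back in -- the whole loop is
# just "predict the next token", nothing more.
tok = "the"
out = [tok]
for _ in range(4):
    tok = predict_next(tok)
    out.append(tok)
print(" ".join(out))  # -> the cat sat on the
```

Real models swap the count table for a neural network over a far larger context, but the generation loop is the same shape.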
Zexks@lemmy.world · 1 day ago:
No. They don't. We just call them proteins.
stickly@lemmy.world · 1 day ago:
You are either vastly overestimating the Language part of an LLM or simplifying human physiology back to the Greeks' Four Humours theory.
El Barto@lemmy.world · 1 day ago:
"They". What are you?