My guess is that people tend to anthropomorphise many things, especially those they don’t really understand. A language model comes across as “smart” because we can converse with it in human ways. Materials discovery is so distant for most people that they don’t really grasp how impressive and impactful this work can be.
Now, to me what’s happening here is extremely impressive, and I’ve been a fan of DeepMind’s STEM-related work for a while. It seems like we could see a big acceleration in STEM fields over the next few years, which will arguably have a bigger impact on people’s lives than the things LLMs are used for right now.