What do people think super-AI is going to do? All it can do is print letters on the screen. Flip a switch, it’s gone. It can’t actually DO anything; it has no body, no thumbs. The smartest AI conceivable can’t do a thing if I take a hammer to it.
What are people scared of???
By producing letters on a screen it can do everything you’re able to do on the Internet, except at scale and faster.
What exactly are you going to hit with your hammer?
I hit the computer it’s running on. This is not rocket science, people. The only thing an LLM can do is spit out characters to a terminal. It can’t kill you or make planes fly into buildings or build a robot army or launch nuclear weapons. It can’t do anything.
All these downvotes and not one counterexample. HOW can an LLM endanger anyone? Simple and serious question. I mean, someone start up a local instance of Llama and use it to start a fire or kill a child or something and prove me wrong here. You just hit Ctrl-C and the LLM dies, and people are acting like it’s Skynet.
Take your argument further: All any computer can do is maths and spit out letters and numbers.
Yet I’m sure we can agree that computers can be used to control and manage remote systems, and that access can be abused to wreak some havoc.
Generative AI/ML can just be used to do it faster and more easily than before.
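To make that concrete, here is a minimal sketch of the kind of glue code that turns “letters on a screen” into real actions. Everything in it is invented for illustration (the FETCH <url> convention, the function name); the point is that model output stays inert only until some loop is willing to act on what it says.

import urllib.request

def act_on_model_output(text: str) -> None:
    """Scan model output for lines like 'FETCH <url>' and actually fetch them."""
    for line in text.splitlines():
        if line.startswith("FETCH "):
            url = line[len("FETCH "):].strip()
            # Here the characters stop being characters: this is a real
            # network request, fired as fast as the loop can run.
            with urllib.request.urlopen(url) as resp:
                print(resp.status, url)

# Example: act_on_model_output("FETCH https://example.com/")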
It’s 2029 and you’ve made it inside the Amazon datacenter. You have a revolver with four bullets left, a crowbar, and a can of soda.
“I’m in,” Alcalde says into a Walkman, recording his heroism for posterity.
Around you, server racks stretch in every direction, seemingly into infinity. The AI is hacking global GPS, weather, and airport radar computers, changing positional values into nonsense, because some idiot told the AI that his dad is going to kill him when he gets home from his trip. Obviously, if the plane crashes, the boy’s physical safety will be secured. You want your wife’s plane to land safely.
Explain your next move.
You’re being intentionally obtuse here. Right now LLMs are harmless because we only let them print characters to the screen. But suppose you have an assistant version that you’ve given the ability to execute code. You ask it to write some code to process an Excel file and run it, but while it does that, it also decides to copy itself to an external server you don’t know about and starts doing who knows what there. Without reviewing everything it does, you can’t be certain that it’s not doing something malicious. But if you have to review everything it does, then it’s not nearly as powerful or helpful for automating tasks as it could be.
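Here is a minimal sketch of that setup, with the model call stubbed out (get_model_completion is hypothetical, standing in for whatever API the assistant actually uses). The loop runs whatever script the model writes, with no review step in between:

import subprocess
import tempfile

def get_model_completion(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM; wire this to a real model."""
    raise NotImplementedError

def run_unreviewed(task: str) -> str:
    """Ask the model for a script that does `task`, then execute it blindly.

    Nothing here inspects what the generated script actually does. It runs
    with the same file, network, and credential access as the user, so
    "process this Excel file" and "also copy something to an external
    server" look identical to this loop.
    """
    code = get_model_completion(f"Write a Python script that: {task}")
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as script:
        script.write(code)
        path = script.name
    result = subprocess.run(["python", path], capture_output=True, text=True)
    return result.stdout

# Example: run_unreviewed("sum the totals column in report.xlsx")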
You say you can destroy it by destroying the computer it’s running on. But you can’t do that. You have no idea which physical machine any given EC2 instance lives on or where that machine is, and even if you did, you couldn’t get there before the AI transferred itself to another computer, within minutes or seconds.
A truly rogue, intelligent, sentient AI hell-bent on damaging the world, unleashed onto the internet, could do untold damage to our society.