This is nonsense. Passwords might have an interesting distribution; the key space is flat. There is nothing to learn.
And I hope you didn’t mean letting an LLM loose on, say, the AES circuit, and expecting it will figure something out.
You can train AI to crack encryption
Oh do provide details.
It’s bluca, yo.
As a random example, here is bluca breaking suspend-then-hibernate, then being a complete asshole about it, while other systemd devs are trying to put the fire out. Do read his code reviews on the latter. yuwata and keszybz have nerves of steel.
The current behaviour is fully expected and documented
bluca is cancer.
Yo, set up hibernation and make suspend-then-hibernate your default sleep.
ln -s /usr/lib/systemd/system/suspend-then-hibernate.target /etc/systemd/system/suspend.target
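If systemd is already running, you probably also want a daemon-reload so it picks up the new alias:

systemctl daemon-reload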
Now any sleep is suspend-then-hibernate: the machine suspends, then wakes up after a timeout and enters hibernation. The timeout is configurable in systemd-sleep.conf(5).
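For reference, the knob is HibernateDelaySec= in the [Sleep] section, so something like this in /etc/systemd/sleep.conf (the one-hour value is just what I use, adjust to taste):

[Sleep]
HibernateDelaySec=1h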
With this combo I find that I prefer S0 to S3. S0 drains the battery about twice as fast, sure, but it resumes instantaneously, while S3 takes about 30 seconds (!) to resume on this machine. And the thing hibernates and powers off if I leave it for an hour anyway.
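If your firmware offers both and you want to pick yourself, the kernel exposes the choice in /sys/power/mem_sleep; s2idle is the S0 thing, deep is S3, and the bracketed entry is what a plain suspend will use (this is the standard kernel interface, though your machine may only list one):

cat /sys/power/mem_sleep
s2idle [deep]
echo s2idle | sudo tee /sys/power/mem_sleep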
You know how to tell that it wasn’t?
It’s using careful hedging language — “could be used to attempt”, “have the potential to”, “more effective”.
AI would just plow through that shit, hallucinating facts like there is no tomorrow.