I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It's about a 20-minute read, so thank you very much in advance if you find the time to read it.
Feedback is very much welcome. Thank you.
To me the bigger problem is that we don't have a consistently applied standard. I don't care what people say: if you buy a 10TB hard drive, plug it in, and the OS doesn't show 10TB, it's easy to blame the drive manufacturer, when really the OS is measuring in a different unit (binary TiB) while labeling it the same (TB). There should be some way to know exactly how many bytes a drive holds before you buy it, and that number should match what you see when you plug it in. I don't think that's crazy, but the article is a little overboard for that sentiment.
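For concreteness, here's a quick sketch (my own illustration, not from the article) of the arithmetic behind the apparent shortfall:

```python
# A "10 TB" drive is sold using decimal (SI) units: 10 * 10^12 bytes.
advertised_bytes = 10 * 10**12

# Many OSes divide by the binary unit (2^40 bytes per TiB) but still
# label the result "TB", which is where the mismatch comes from.
TIB = 2**40
reported = advertised_bytes / TIB
print(f"Advertised: 10 TB, OS reports: {reported:.2f} TiB")  # ~9.09
```

So the drive really does contain 10 trillion bytes; it's the unit label on the OS side that makes it look like ~9% went missing.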