A malfunction that shut down all of Toyota Motor's assembly plants in Japan for about a day last week occurred because some servers used to process parts orders became unavailable after maintenance procedures, the company said.
Sysadmin pro tip: Keep a 1-10GB file of random data named DELETEME on your data drives. Then if this happens you can get some quick breathing room to fix things.
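A minimal sketch of creating such a ballast file (path and size here are examples; the tip suggests random data so a compressing or deduplicating filesystem can't shrink it away):

```shell
# Reserve ~1 GB as a ballast file. fallocate is instant where supported;
# fall back to writing random bytes where it isn't.
BALLAST=./DELETEME
fallocate -l 1G "$BALLAST" 2>/dev/null ||
    head -c 1073741824 /dev/urandom > "$BALLAST"
ls -lh "$BALLAST"
```

When the disk fills, `rm "$BALLAST"` buys you instant headroom to log in, rotate logs, and fix the real problem.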
Also, set up alerts for disk space.
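As a minimal sketch of such an alert (the 90% threshold and plain `df` parsing are assumptions; a real setup would feed Nagios, Prometheus, or whatever you already run):

```shell
# Print every mounted filesystem at or above a usage threshold (default 90%).
check_disk() {
    local threshold="${1:-90}"
    df -P | awk -v t="$threshold" \
        'NR > 1 { use = $5; sub(/%/, "", use); if (use + 0 >= t) print $6 " at " use "%" }'
}
check_disk 90   # cron this and pipe any output to your alerting tool
```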
Removed by mod
Why not both? Alerting to find issues quickly, a bit of extra storage so you have more options available in case of an outage, and maybe some redundancy for good measure.
A system this critical is on a SAN; if you’re alerting properly, adding a bit more storage is a five-minute task.
It should also have a DR solution, yes.
A system this critical is on a hypervisor with tight storage “because deduplication” (I’m not making this up).
This is literally what I do for a living. Yes deduplication and thin provisioning.
This is still a failure of monitoring or slow response to it.
You keep your extra capacity handy on the storage array, not with some junk files on the filesystem.
You also need to know how over provisioned you are and when you’re likely to run out of capacity… you know this from monitoring.
Then, when management fails to react promptly to your warnings, shit like this happens.
The important part is that you have your warnings in writing, and BCC them to a personal email so you can cover your ass.
Exactly, I was being sarcastic about management’s “solution”
Yes, alert me when disk space is about to run out so I can ask for a massive raise and quit my job when they don’t give it to me.
Then when TSHTF they pay me to come back.
That high hourly rate is really satisfying, I guess… never been there myself.
A lot of companies have minimal alerting or no alerting at all. It’s kind of wild. I literally have better alerting in my home setup than many companies do lol
It’s certainly cheaper not to have any, but it will limit growth substantially.
I have free monitoring I set up myself though lol
I imagine it’s a case where if you’re knowledgeable, yeah it’s free. But if you have to hire people knowledgeable to implement the free solution, you still have to pay the people. And companies love to balk at that!
I think it’s that and any IT employees they have would not be allowed to work on it because they would be working on other stuff because companies wouldn’t prioritize that, since they don’t know how important it is until it’s too late.
There are cases where the disk fills up quicker than anyone can reasonably react, even with alerts in place. And sometimes the culprit is something you can’t just go and kill.
That’s what the Yakuza is for.
Had an issue like that a few years back: a standalone device that was filling up quickly. The poorly designed thing could only be flushed via USB sticks. I told them they had to do it weekly. Guess what they didn’t do. Looking back, I should have made it alarm and flash once a week on a timer.
The real pro tip is to segregate the core system and anything that eats up disk space into separate partitions, along with alerting, log rotation, etc. And also to avoid single points of failure in general. Hard to say exactly what went wrong with Toyota, but they probably could have planned better for it in a general way.
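For example, an /etc/fstab along these lines (device names and mount points are placeholders) keeps runaway logs or app data from ever filling the root filesystem:

```
/dev/vg0/root      /         ext4  defaults  0 1
/dev/vg0/var_log   /var/log  ext4  defaults  0 2
/dev/vg0/app_data  /data     ext4  defaults  0 2
```

With that layout, /var/log hitting 100% is an annoyance rather than a machine you can’t even SSH into.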
10GB is nothing in an enterprise datastore housing PBs of data. 10GB is nothing for my 80TB homelab!
It’s not going to bring the service online, but it will keep a full disk from stopping you from doing other things. In some cases SSH won’t even work with a full disk.
It’s all fun and games until tab autocomplete stops working because of disk space
The real apocalypse
Even better: a cron job every 5 minutes that auto-deletes the file and sends a message to the sysadmin if total remaining space falls to 5%.
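A sketch of that job (the threshold, paths, and the `logger` call are all assumptions to swap for your own paging setup):

```shell
# Delete the ballast file and alert once usage crosses a threshold.
# Install via cron, e.g.: */5 * * * * /usr/local/bin/ballast_check.sh
ballast_check() {
    local ballast="${1:-./DELETEME}" threshold="${2:-95}" used
    used=$(df -P "$(dirname "$ballast")" | awk 'NR == 2 { sub(/%/, "", $5); print $5 }')
    if [ "${used:-0}" -ge "$threshold" ] && [ -f "$ballast" ]; then
        rm -f "$ballast"
        logger -t ballast "deleted $ballast (${used}% used), investigate now" 2>/dev/null ||
            echo "deleted $ballast (${used}% used), investigate now" >&2
    fi
}
ballast_check ./DELETEME 95
```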
Sends a message and gets the services ready for potential shutdown. Or implements a rate limit to keep the service available but degraded.
Also, if space starts decreasing much more rapidly than normal.
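A rough sketch of that rate check (the state-file path and the 5-point jump are assumptions): remember the previous reading and alert when usage climbs unusually fast between checks.

```shell
# Alert when disk usage grew by $2+ percentage points since the last check,
# using a small state file ($1) to remember the previous reading.
usage_jump_alert() {
    local state="$1" jump="${2:-5}" cur prev
    cur=$(df -P . | awk 'NR == 2 { sub(/%/, "", $5); print $5 }')
    prev=$(cat "$state" 2>/dev/null || echo "$cur")
    echo "$cur" > "$state"
    if [ $((cur - prev)) -ge "$jump" ]; then
        echo "disk usage jumped from ${prev}% to ${cur}%"
    fi
}
usage_jump_alert ./usage.prev 5   # run from the same cron job as the space check
```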
Or make the file a little larger and wait until you’re up for a promotion…
500GB maybe.