A malfunction that shut down all of Toyota Motor's assembly plants in Japan for about a day last week occurred because some servers used to process parts orders became unavailable after maintenance procedures, the company said.
The answer here is not storage, it's better alerting.
Why not both? Alerting to find issues quickly, a bit of extra storage so you have more options available in case of an outage, and maybe some redundancy for good measure.
A system this critical is on a SAN; if you're alerting properly, adding a bit more storage space is a 5-minute task.
It should also have a DR solution, yes.
A system this critical is on a hypervisor with tight storage “because deduplication” (I’m not making this up).
This is literally what I do for a living. Yes, deduplication and thin provisioning.
This is still a failure of monitoring or slow response to it.
You keep your extra capacity handy on the storage array, not with some junk files on the filesystem.
You also need to know how overprovisioned you are and when you're likely to run out of capacity… you know this from monitoring.
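For what it's worth, the "when you're likely to run out" part can be as crude as a linear projection over your monitoring samples. A rough sketch (names and numbers are made up, and real arrays grow in bursts, so treat the estimate as a floor, not a promise):

```python
# Estimate days until a volume fills up, from periodic usage samples.
# Assumes roughly linear growth between the first and last sample.

def days_until_full(samples, capacity_gb):
    """samples: list of (day_number, used_gb) tuples, oldest first."""
    (d0, u0), (d1, u1) = samples[0], samples[-1]
    growth_per_day = (u1 - u0) / (d1 - d0)
    if growth_per_day <= 0:
        return float("inf")  # flat or shrinking usage: no projected runout
    return (capacity_gb - u1) / growth_per_day

# 300 GB of growth over 30 days = 10 GB/day; 200 GB of headroom left.
print(days_until_full([(0, 500), (30, 800)], 1000))  # → 20.0
```

Feed it whatever your monitoring already records and alert when the projection drops below your procurement lead time, not when the disk is already at 95%.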
Then, when management fails to react promptly to your warnings, shit like this happens.
The important part is that you have your warnings in writing, and BCC them to a personal email so you can cover your ass.
Exactly, I was being sarcastic about management’s “solution”
Yes, alert me when disk space is about to run out so I can ask for a massive raise and quit my job when they don't give it to me.
Then when TSHTF, they pay me to come back.
That high hourly rate is really satisfying, I guess… never been there.
A lot of companies have minimal alerting or no alerting at all. It’s kind of wild. I literally have better alerting in my home setup than many companies do lol
It's certainly cheaper not to have any, but it will limit growth substantially.
I have free monitoring I set up myself though lol
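A "free monitoring" disk check really can be this small. A minimal sketch using only the standard library (the threshold and mount point are placeholders; wire the alert line to whatever notifier you actually use):

```python
import shutil

def disk_used_percent(path="/"):
    """Return filesystem usage for `path` as a percentage."""
    total, used, _free = shutil.disk_usage(path)
    return used / total * 100

if __name__ == "__main__":
    pct = disk_used_percent("/")
    if pct > 85:  # arbitrary threshold; tune it to your growth rate
        print(f"ALERT: disk at {pct:.1f}%, act before it hits 100")
```

Drop it in cron (or a systemd timer) and you already have better coverage than the companies in this thread.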
I imagine it’s a case where if you’re knowledgeable, yeah it’s free. But if you have to hire people knowledgeable to implement the free solution, you still have to pay the people. And companies love to balk at that!
I think it's that, plus any IT employees they have wouldn't be allowed to work on it; they'd be stuck on other work, since companies won't prioritize monitoring and don't know how important it is until it's too late.
There are cases where a disk fills up quicker than one can reasonably react, even with alerts in place. And sometimes the culprit is something you can't just go and kill.
That’s what the Yakuza is for.
Had an issue like that a few years back. A standalone device that was filling up quickly, and the poorly designed thing could only be flushed via USB sticks. I told them they had to do it weekly. Guess what they didn't do. Looking back, I should have made it alarm and flash once a week on a timer.