A bad customer pattern and why mediocrity reigns supreme
Not all of my dealings with web hosting customers were sunshine, rainbows, and unicorns. There were a good number of them who were just broken in some way beyond the point of repair. You just had to put up with them and get things done, then get them off the phone and go back to life.
I started noticing a pattern in this. Usually, it would be a "two man team", and I mean that literally - it was always two guys. One of them was paying the bills, and that person would be the admin and billing contact on the account. He wasn't technical at all.
Then there would be the technical contact. This is the one who would usually wind up opening tickets with us, at least initially. This was generally someone who knew just enough to screw things up. What happened then was seriously annoying.
More often than not, the tech guy would hose their web site and/or platform, and then would blame us for it. This would happen in some private conversation between the two of them, and then the admin guy would pick up the phone and call us and start yelling. Obviously, his tech guy could do no wrong, so it must have been our fault.
I first saw this on a certain notorious account, but then noticed it happening in a few other places. In one case, they were blaming us at the hosting company for how badly their site was running. The account manager wound up taking the brunt of this and came to my "meta-support" team saying "it would be nice if we knew about this".
So, I pulled up the account, and ... wow. There were tons of automatically-generated monitoring tickets saying the site was down. They had the "gold" level of monitoring, which would watch their services and open a ticket to the customer when something went down. That would let them find out they had a problem and act on it. I mention the levels because only "platinum" monitoring alerts the web hosting support people (us), and naturally it costs a good bit more.
I told this account manager that the reason support never said anything was that support never knew about it in the first place. These alerts were only directed at the customer, who would have to respond to them to request help. Clearly, the customer had been missing, ignoring, or just not responding to the alert tickets. "Not hearing about the problem" was on the customer, in other words.
So I decided we could in fact do something about it. After all, I had already written a "hall of shame" report which showed the top 10 biggest monitoring alert offenders for a given span of time regardless of their service level. The idea was to get people paying attention and caring about services which were stupidly flaky. They should be fixed or the monitoring should be disabled. Having thousands (!) of alerts fire in a week just means nobody really cares about a given alert.
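The report itself wasn't complicated: count the monitoring tickets per account over some window, sort, and take the top of the list. A minimal sketch in Python captures the idea - the function name and the (account_id, opened_at) record shape are just assumptions for illustration, not the actual internal code:

    from collections import Counter
    from datetime import datetime, timedelta

    # "Hall of shame": count monitoring tickets per account over a window
    # and return the worst offenders, regardless of service level.
    def hall_of_shame(tickets, days=7, top_n=10):
        cutoff = datetime.now() - timedelta(days=days)
        recent = (acct for acct, opened_at in tickets if opened_at >= cutoff)
        return Counter(recent).most_common(top_n)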
I figured I could just hook it up so that membership in the "hall of shame" earned you an alert on my audit console. Yes, this is the same audit console which was already being ignored. It would just be one more thing to be ignored, in other words.
But hey, at least then, we could say "we're watching it" (in software), have the company continue to ignore my audit console, and go right back to the status quo.
At that point, I decided not to commit to doing anything unless there was an obvious use case and people who would actually use it. I was tired of writing things which genuinely needed to exist but were still ignored by the rest of the world.
What seemed amazing at the time was that none of this affected the company much. They were and are functioning just fine without any of it. Oh, granted, they've probably pissed off some customers by missing obvious problems, suffered a bit more churn than they needed to, and had to curtail certain perks for employees, but they're still very much alive.
After all, it's a short trip from excellence to mediocrity, but the road from mediocrity to being out of business is intolerably long.