“Months would be nice.”
That’s what Andrew Slavitt replied when asked how much time would have been necessary to properly test the Affordable Care Act website before launching it earlier this month.
As it turned out, they were only given two weeks.
Slavitt, Executive Vice President of Optum, was one of several government contractors questioned during an inquiry hearing convened to sort out just what, exactly, went wrong with the Affordable Care Act website launch.
You may have heard that it was kind of a disaster, which has made for all kinds of political squabbling and intrigue.
Behind every epic disaster is typically a long list of smaller blunders. One of which, in this case, is that Slavitt’s team, as well as another contractor group from CGI Federal, had reported known technical problems and concerns about contracted testing time far in advance and were pressed by the Centers for Medicare and Medicaid Services to launch anyway. Cheryl Campbell, of CGI Federal, noted in the hearing, “We’re there to support our client. It is not our position to tell our client whether they should go live or not go live.”
There are all kinds of problems here. I just want to deal with two.
1. Testing Time
Let’s review a bit of the hearing’s Q&A.
Rep. Greg Walden asked Andrew Slavitt, “What’s the standard protocol? What’s the recommended industry standard for end-to-end tests before rolling out a major website like this?”
Slavitt replied, “Months would be nice.”
Nice? I’m sure! But I have never experienced a website project where months are given to testing. Sad but true. Walden wants an industry standard, and from his perspective, I can completely understand why. Unfortunately, I’m not sure there is one, other than rushing it.
We can do better than that. There should be such a standard, and it should be one of the few things that in the course of a web project cannot be squeezed. Ever!
Perhaps. But you might say, “Our website isn’t expected to perform nearly to the scale of the Affordable Care Act’s website. If they needed months, surely we’d need less.” That sounds pretty logical to me. The volume of use this site was expected to receive far out-scales what most private industry sites expect. But here’s what’s interesting: with any other issue — and I’ve personally had this conversation countless times — comparing a pending website to a much larger one for the purpose of reducing expectations or concerns never works. Never.

Search, for example, almost always gets this conversation going. Typically, the question is, “Why can’t our search work like [insert Google-sized example here]?” And the answer, of course, is “It’s not that it can’t be done, it’s that it can’t be done with the resources we have available for this project. [Insert Google-sized example here] spends your entire budget in a day.” It’s the only answer. That doesn’t make it satisfying, though.
You put these two perspectives together and what you have is an expectation for the most sumptuous banquet of all time without thinking about who’s going to do the dishes. You don’t want to overestimate what should be done and then underestimate how long you have to do it. Accuracy matters, on both ends.
The attention being given to this botched launch is unprecedented. In a way, I’m glad for it, as it’s exposing the world to the realities of the web — testing and going live among them. I certainly never thought I’d hear phrases like “end-to-end testing” or “go live” in national media.
But this hearing I’ve been referencing — its only purpose is to find someone to blame. That’s what we do in politics when something goes wrong. And so, Campbell, clearly under great pressure, falls back upon a pretty lame position: “It is not our position to tell our client whether they should go live or not go live.”
Really? If you can’t, then who can?
On the one hand, she (and Slavitt) pointed out that they had passed warnings up the chain about technical performance issues and lack of time to resolve them. That’s their expertise at work. They’re clearly saying that problems remain and they don’t have enough time to fix them. But then, in the hot seat, they want to say that they’re not in the position to close the loop and make a recommendation to delay the launch. Please. That’s just word games! I can imagine the Clintonian sound bite: “It depends upon what you mean by tell.” Or “client.” Or “should.” It kind of falls apart, doesn’t it?
If there’s going to be an industry standard for testing time, then someone in the industry is going to have to take preserving it seriously!
Anyway, you can read the full transcript of the proceedings for yourself. I hope your takeaway is this:
Schedule lots of time for testing. More than seems reasonable. And when everything else takes longer than you’d planned, have the courage to adjust your launch date rather than eating away at the one thing that will guarantee that your launch goes well.
P.S.: In the early days of a project, when everyone is feeling very optimistic about things, it’s not uncommon for sober recommendations for lengthy production and testing periods to be met with the ever-welcome and glib, “Awww, c’mon. Let’s not overcomplicate things. This isn’t brain surgery!” Well, there were 55 different contractors working on the Affordable Care Act website. That’s 55 different groups, not individual people; who knows just what the actual headcount was. So, right. It’s not brain surgery. I’d venture to say it’s far more complicated than that. After all, when was the last time you heard about 55 doctors crowding around the brain of one patient? Just sayin’.