Is it possible to deliberately set out to create a disruptive innovation in an existing market? I believe it is, if the tools laid out in Clayton Christensen's theory are applied properly. I'm going to describe an innovative idea of my own from eight years ago that used those tools to do just that. The idea never made it to market because of poor execution (the company I was with failed to raise the necessary finance), but the approach was valid. In my next blog I will cover the tools and how I use them.
I had started working with a very small IT support business that had a terrible product portfolio but some strong technical skills. One of the services the company provided was an early version of online data backup, for a couple of customers. While talking to potential customers for this service I asked about their overall disaster recovery (DR) strategy, and discovered an opportunity.
Their ideal DR service was a safe alternate location that they could move their business to as soon as an emergency was declared, while they recovered from whatever problem had occurred. Because most companies ran their own IT systems at the time, this also meant that their IT infrastructure had to be replicated somewhere, and that the company data had to be copied to this backup system regularly so that it was as up to date as possible when they got to the facility. This type of DR service is provided by Sungard and IBM, among others, which run huge disaster recovery facilities. Once the service is invoked the customer's IT system can be quickly set up and populated with data, and Sungard also provide desks where staff can work during the recovery phase. These services proved invaluable for Manhattan's finance businesses after the 9/11 terrorist attacks, for instance.
The problems for smaller customers were that the service was expensive and that big customers paid extra for pre-emption rights. This meant that even if a smaller company had paid for the service, it could not be sure of using it: if a big customer invoked at the same time, the smaller customer could be pre-empted and left with nowhere to go. The high price also meant that many companies could not afford the service at all, or settled for a reduced service.
There were alternative services available. One was a truck full of computer hardware that would turn up at a site and work from the car park. The problem was that the hardware had to be installed and configured, which took many days even with expert help. Another was DR insurance, which paid for excess costs during the recovery period; however, the insurers refused to cover the most expensive part, the first couple of days after the disaster. The length of the recovery period depended on the staff the company had available, who might rarely or never have been trained to deal with a disaster properly. There was no perfect solution, so could I come up with a lower-cost service that met these clients' needs?
What job were these smaller businesses trying to get done? They were trying to ensure their survival in an emergency by getting their key systems and staff back up as quickly as possible, at a reasonable cost. The longer a business is not working, the greater the chance that it will never restart, and loss of data is increasingly damaging as more and more supply chain integration is based on IT systems. It was more important to them to be certain of getting back up than to have all of their systems and staff relocated.
I realised that businesses had a range of needs. Their most important systems had to be back up immediately, but other systems could wait for an hour or two, a day or two, or even a few weeks. They could send most of their staff to work from home and only needed the DR facility for critical workers. Most importantly, they needed additional skilled people to help with the invocation and the recovery.
I worked with a datacentre-based managed services company to come up with an alternative model. We would offer a range of recovery options: complete synchronisation of critical servers; guaranteed one-hour recovery using an early virtualisation approach; guaranteed two-hour recovery using disk swapping; guaranteed one-day recovery; and ordering and delivery of replacement hardware within seven days. We would provide an online data backup service to ensure their data was current. Alongside the most expensive live datacentre services, we would maintain a stock of standard servers that could be installed and brought up quickly. Most importantly, we would have a team of IT/DR specialists trained constantly to respond quickly to a customer invocation, a bit like the emergency services. For staff accommodation we would maintain a small number of desks in the datacentre for emergency IT workers, we did a deal with a managed office business to reserve some space as an emergency facility, and we would provide a VPN so that the majority of staff could work from home. We would assist with expertise throughout the recovery, which opened another opportunity: we agreed with a big insurer that they would cover the first few days as well, because we could reduce both the excess cost of working and the recovery time. So the customer's needs were met while, with the range of options available, their costs were greatly reduced.
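To make the tiering concrete, here is a minimal sketch (in Python) of how such a menu of options might be represented and matched to a customer's tolerable outage per system. The tier names, mechanisms and recovery times below are illustrative assumptions based on the description above, not our actual price list.

```python
from dataclasses import dataclass

@dataclass
class RecoveryTier:
    """One entry in a DR service menu (illustrative values only)."""
    name: str
    mechanism: str    # how recovery is delivered
    rto_hours: float  # guaranteed recovery time objective

# Hypothetical menu mirroring the tiers described above.
TIERS = [
    RecoveryTier("sync",     "live synchronisation of critical servers",  0),
    RecoveryTier("virtual",  "early virtualisation approach",             1),
    RecoveryTier("diskswap", "disk swapping onto stock servers",          2),
    RecoveryTier("stock",    "install from a stock of standard servers", 24),
    RecoveryTier("replace",  "order and deliver replacement hardware",  168),
]

def cheapest_tier(max_outage_hours: float) -> RecoveryTier:
    """Pick the slowest (and therefore cheapest) tier that still meets
    the tolerable outage for a given system."""
    candidates = [t for t in TIERS if t.rto_hours <= max_outage_hours]
    return max(candidates, key=lambda t: t.rto_hours)

# A customer maps each system to the tier it actually needs, instead of
# paying top-tier rates for everything.
print(cheapest_tier(4).name)   # -> diskswap
print(cheapest_tier(48).name)  # -> stock
```

The point of the model is visible in the last two lines: only the genuinely critical systems pay for the expensive tiers, which is what brought the overall cost down.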
I had several customers ready to sign letters of intent (LoIs), including one with 200 servers, by the time the CEO announced that he had failed to raise the necessary finance and was going to liquidate the company.
Why was this potentially disruptive? The big DR facilities relied on many customers paying for the same facility, on the assumption that no more than one or two would invoke a disaster at the same time. If I took the smaller customers away, the facility's costs would have to be shared among the remaining big customers, yet they could not close the facilities without losing the big customers who paid most of those costs. We would also be sharing facilities, but on a different basis, so with careful planning we could ensure that our customers were guaranteed service even in a disaster that affected a larger area. We were not good enough to attract the DR facilities' bigger customers, because we would only cover the basic Linux and Windows operating systems, not mainframes or specialist systems; to do otherwise would have increased the size of our IT support teams and our cost base. We might also have been able to attract the least important parts of the bigger companies' systems, reducing the services those companies would require from the big DR facilities.
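The shared-facility arithmetic behind this is easy to sketch. Assuming, purely for illustration, that each customer independently has a small chance of invoking in any given window, a simple binomial model shows why one facility can be sold many times over, and why our own capacity planning had to worry about correlated, area-wide disasters that break the independence assumption:

```python
from math import comb

def p_more_than(k: int, n: int, p: float) -> float:
    """Probability that more than k of n independent customers invoke
    at the same time, each with probability p (a binomial model; it
    deliberately ignores correlated, area-wide disasters)."""
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Illustrative numbers only: 50 customers, each with a 2% chance of
# invoking in a given window.
print(f"{p_more_than(2, 50, 0.02):.3f}")  # chance of 3+ simultaneous invocations: ~0.078
```

Under those made-up numbers the facility almost never sees more than two simultaneous invocations, so every departing small customer removes revenue without removing much capacity cost.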