Monthly Archives: July 2012

Is it possible to deliberately set out to create a disruptive innovation in an existing market? I believe it is, if the tools laid out in Clayton Christensen's theory are applied properly. I'm going to describe an eight-year-old innovative idea of my own that used those tools to do just that. The idea never actually made it to market due to poor execution (the company I was with failed to raise the necessary finance), but the approach was valid. In my next blog I will cover the tools and how I use them.

I had started working with a very small IT support business with a terrible product portfolio but some strong technical skills. One of the things that the company provided was an early version of online data backup for a couple of customers. While talking to potential customers for this service I asked about their overall disaster recovery (DR) strategy, and discovered an opportunity.

Their ideal DR service was a safe alternate location that they could move their business to as soon as an emergency was declared, while they recovered from whatever problem had occurred. Because most companies ran their own IT systems at the time, this also meant that their IT infrastructure had to be replicated somewhere, and that the company data had to be copied to this backup system regularly so it was as up-to-date as possible when they got to the facility. This type of DR service is provided by Sungard and IBM, among others, which run huge disaster recovery facilities. Once the service was invoked, the customer's IT system could be quickly set up and populated with data, and Sungard also provided desks where staff could work during the recovery phase. These services proved invaluable for Manhattan's finance businesses during, for instance, the 9/11 terrorist attacks in the USA.

The problems for smaller customers were that the service was expensive and that big customers paid extra for pre-emption rights. This meant that even a smaller company that had paid for the service had no guarantee of being able to use it: if a big customer invoked at the same time, the smaller customer could be pre-empted and left with nowhere to go. The high price meant that many companies could not afford the service at all, or took a reduced service.

There were alternative services available. One was a truck full of computer hardware that would turn up at a site and work from the car park. The problem was that the hardware had to be installed and configured, which took many days even with expert help. Another was DR insurance, which paid for excess costs during the recovery period. However, the insurance companies refused to cover the most expensive part, the first couple of days after the disaster. The recovery period also depended on the staff a company had available, who might rarely or never have been trained to deal with a disaster properly. There was no perfect solution, so could I come up with a lower-cost service that met these clients' needs?

What job were these smaller businesses trying to get done? They were trying to ensure their survival in an emergency by getting their key systems and staff back up as quickly as possible, at a reasonable cost. The longer a business is not working, the greater the chance that it will never restart. Loss of data is increasingly damaging, with more and more supply chain integration being based on IT systems. It was more important to them to be certain of getting back up than to have all of their systems and staff relocated.

I realised that businesses have a range of needs. Their most important systems have to be back up immediately, but other systems can wait for an hour or two, a day or two, or even a few weeks. They can send most of their staff to work from home and only need the DR facility for critical workers. Most importantly, they need additional skilled people to help with the invocation and the recovery.

I worked with a datacentre-based managed services company to come up with an alternative model. We would provide a range of recovery options:

  • complete synchronisation of critical servers
  • guaranteed 1-hour recovery using an early virtualisation approach
  • guaranteed 2-hour recovery using disk swapping
  • guaranteed 1-day recovery from a stock of standard servers that could be installed and brought up quickly
  • ordering and delivery of replacement hardware within 7 days

We would provide an online data backup service to ensure their data was current. Most importantly, we would have a team of IT/DR specialists, trained constantly, who could respond quickly to a customer invocation, a bit like the emergency services. For staff accommodation we maintained a small number of desks in the datacentre for emergency IT workers, we did a deal with a managed office business to reserve some space as an emergency facility, and we would provide a VPN so the majority of the staff could work from home. We would assist with expertise throughout the recovery, which opened another opportunity: we agreed with a big insurer that they would cover the first few days as well, because we could reduce the excess cost of working and the recovery time. So the customer's needs were met while, with the range of options available, their costs were greatly reduced.
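The core of the model is matching each system to the cheapest tier that still meets its tolerable downtime. As a minimal sketch, with tier names, recovery-time objectives and relative costs that are purely illustrative assumptions (not the actual service pricing):

```python
# Illustrative sketch of a tiered DR model: pick the cheapest tier
# whose guaranteed recovery time still meets a system's needs.
# All names, RTOs, and relative costs below are assumptions.
from dataclasses import dataclass


@dataclass
class RecoveryTier:
    name: str
    rto_hours: float      # guaranteed recovery-time objective
    relative_cost: float  # cost relative to the cheapest tier


TIERS = [
    RecoveryTier("live synchronisation", 0.0, 20.0),
    RecoveryTier("virtualised standby", 1.0, 8.0),
    RecoveryTier("disk swap", 2.0, 4.0),
    RecoveryTier("stocked standard server", 24.0, 2.0),
    RecoveryTier("hardware reorder", 168.0, 1.0),
]


def cheapest_tier(max_downtime_hours: float) -> RecoveryTier:
    """Cheapest tier whose RTO fits within the tolerable downtime."""
    eligible = [t for t in TIERS if t.rto_hours <= max_downtime_hours]
    return min(eligible, key=lambda t: t.relative_cost)


# A system that can tolerate a day of downtime needs no live tier.
print(cheapest_tier(24.0).name)  # stocked standard server
```

This is why the range of options cuts costs: only the genuinely critical systems pay for the expensive live tiers.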

I had several customers ready to sign LoIs, including one with 200 servers, by the time the CEO announced that he had failed to raise the necessary finance and was going to liquidate the company.

Why was this potentially disruptive? The big DR facilities rely on a number of customers paying for a facility on the assumption that no more than one or two will invoke a disaster at the same time. If I took the smaller customers away, the facilities' costs would have to be shared among fewer big customers – but they could not close the facilities without losing the big customers who paid most of those costs. We would also be sharing facilities, but on a different basis, so with careful planning we could ensure that our customers were guaranteed service even in a disaster that affected a larger area. We were not good enough to attract the DR facilities' bigger customers, because we would only cover the basic Linux and Windows operating systems, not mainframes or specialist systems – to do otherwise would have increased the size of our IT support teams and our cost base. We might, however, have been able to attract the least important part of the bigger companies' systems, reducing the services that those companies would require from the big DR facilities.
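The shared-facility assumption can be made concrete with a toy overbooking model: if each customer independently invokes with some small probability, what is the chance that more invoke at once than the facility can serve? The customer count, per-period invocation probability and capacity below are illustrative assumptions, not figures from the business:

```python
# Toy model of a shared DR facility: probability that more customers
# invoke simultaneously than the facility can serve at once.
# All numbers are illustrative assumptions.
from math import comb


def p_overflow(n_customers: int, p_invoke: float, capacity: int) -> float:
    """P(more than `capacity` of n independent customers invoke at once)."""
    p_at_most = sum(
        comb(n_customers, k) * p_invoke**k * (1 - p_invoke) ** (n_customers - k)
        for k in range(capacity + 1)
    )
    return 1 - p_at_most


# 50 customers sharing capacity for 2 simultaneous invocations,
# each with a 1% chance of invoking in a given period.
print(f"{p_overflow(50, 0.01, 2):.4f}")  # roughly 0.014
```

The model also shows why independence matters: a regional disaster that hits many customers at once breaks the assumption, which is why careful planning of which customers share which facility was essential.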

Naveen Jain, the founder of the World Innovation Institute, Moon Express, iNome and Infospace, takes on Malcolm Gladwell's "Outliers" in this article on Forbes. Outliers proposes that expertise is a key requirement for success. Naveen, who is also a trustee of Singularity University, believes that the people who come up with the creative solutions that change our planet will NOT be the experts in their fields. He believes that experts are best at incremental improvements, not disruption. His reasons are:

  • myopic thinking – those who are down in the detail can't easily see the big picture
  • the increasing pace of obsolescence of expertise, and the increasing availability of information to non-experts

I agree with Naveen. When your expertise is in widgets, the solution to every problem will look like a widget. Most experts have spent years building up their knowledge and have developed shortcuts – rules of thumb and implicit assumptions that they judge everything by. They are not often inclined to re-think these and come up with a different answer.

I love to identify and challenge implicit assumptions, and it’s a great way of encouraging innovative thinking.

Experts are also not going to be able to bring expertise from other fields into new thinking about solving problems. One thing I learned very early on was to keep a very wide watching brief on what others are doing in other fields. I like spotting links between how one field works and another – it helps in understanding, and it often leads to new ways of thinking about a problem. For instance, thinking about IT architectures as a response to the costs of the component parts, and how that changes as the performance of the component parts improves, has generated some useful insights.

Finally, experts don't tend to go back to basics and rethink things from the ground up. I've found several useful techniques for generating new ideas. One is to extrapolate growth curves, like Moore's law, data storage growth and storage costs, to see what happens as these diverge – eventually, things break and the opportunity for a disruptive innovation appears. Moore's law has generated several generations of disruptive technology on its own. Another is to take things to extremes – for instance, the most efficient waste-to-energy system will generate more electricity and more money, so what happens if we try to maximise efficiency, and how might we do it?
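The extrapolation technique is simple compounding. As a sketch with made-up rates (data volume growing 60% a year while cost per terabyte falls 30% a year – both assumptions for illustration), one can project when total storage spend crosses a threshold, the kind of break point where a disruptive opportunity appears:

```python
# Sketch of extrapolating two growth curves to find where things break.
# All starting values and growth rates are illustrative assumptions.

def extrapolate(start: float, annual_growth: float, years: int) -> list[float]:
    """Project a value forward with compound annual growth."""
    return [start * (1 + annual_growth) ** y for y in range(years + 1)]


# Assume data volume grows 60%/year while cost per TB falls 30%/year.
data_tb = extrapolate(100.0, 0.60, 10)       # terabytes stored
cost_per_tb = extrapolate(500.0, -0.30, 10)  # currency units per TB

# Total spend = volume x unit cost; it still grows, since 1.6 * 0.7 > 1.
spend = [v * c for v, c in zip(data_tb, cost_per_tb)]

# First year in which spend exceeds double the starting budget.
first_year_over_budget = next(
    (y for y, s in enumerate(spend) if s > 2 * spend[0]), None
)
print(first_year_over_budget)  # 7
```

The interesting output is not the exact year but the shape: falling unit costs do not save you when volume grows faster, and that divergence is where the old architecture breaks.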

Finally, we have to run these innovative ideas through the innovation filter: who will want it and why, what does it compete with (including doing nothing and substitutes), how do we take it to market, and so on. That is the subject of this blog.

I found this article on the International Society for Professional Innovation Management (ISPIM) site quite inspiring. It’s based on the 1997 Apple “Think Different” ad campaign, which goes:

Here’s to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They’re not fond of rules. And they have no respect for the status quo. You can quote them, disagree with them, glorify or vilify them. About the only thing you can’t do is ignore them. Because they change things. They push the human race forward. While some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do. 

The author, Gijs Van Wulfen, is the inventor of an innovation methodology called Forth. He is talking about the failure rate of new product ideas. Only 1 in 7 new product ideas is a success in the market, a woeful failure rate considering the cost of new product introduction. Nowhere else in business would this cost of failure be regarded as acceptable.

Forth addresses the innovation process as far as the business case, but doesn't deal with the issues of the market proposition. While various techniques have been developed to address the innovation side of product development, from Systems Thinking through TRIZ to Forth, the market-facing side of innovation has improved little since the Second World War. Or rather, the research and some of the critical thinking has been done, but it is never applied in the real world.

But with a failure rate of 6 in 7, given the high cost of product development and the fragmentation of markets, which has made goods cheaper but sales much more complex, shouldn't much more attention be paid to exactly how the product is taken to market?