Why Aren’t We Learning from Project Failures?

In this special guest feature, Simon Moss, Chief Executive Officer of Pneuron Corporation, suggests that if we are truly to learn from our project failures and improve, then it's time we considered that our fundamental approach might be flawed. Simon Moss was previously CEO of Avistar and CEO of Mantas, later acquired by Oracle. He served as a Partner at PricewaterhouseCoopers and was co-founder of the Risk Management Services Practice at IBM. Simon is also on the Board of Directors of C6 Intelligence.

A fresh perspective on where we’re going wrong

If our attempts to solve business problems are consistently late, over-budget, and prone to failure, then perhaps it’s time to change the way we tackle them. The problem is that businesses lack basic data strategies.

A 2015 report by Iron Mountain and PwC that surveyed 1,800 senior business leaders at large enterprises found that only four percent reported efficient data management practices. “Data is the lifeblood of the digital economy, it can give insight, inform decisions and deepen relationships,” says Richard Petley, director of PwC Risk and Assurance. “It can be bought, sold, shared and even stolen — all things that suggest that data has value. Yet when we conducted our research very few organizations can attribute a value and, more concerning, many do not yet have the capabilities we would expect to manage, protect and extract that value.”

The report also found that 43% of companies “obtain little tangible benefit from their information,” while 23% “derive no benefit whatsoever” from their company data. Adding insult to injury, large IT projects with initial budgets in excess of $15 million typically run 45% over budget and 7% over time, while delivering 56% less value than expected, according to McKinsey & Company.

Why are large IT projects running so inefficiently? What is wrong with the process that we are not learning from it? In our own meetings with Wall Street banks, IT groups claim an application development backlog in excess of $100 million, not because of a talent shortage, but simply because of the way applications are built and deployed.

Many companies are keen to benefit from the latest cutting-edge technology, but find integrating with legacy systems is a major hassle. They’re sold on the idea of extracting valuable insights from big data, but struggle to pull disparate sources together. They want to be agile and innovative, but get bogged down by massive commitments to IT projects that promise the world and then turn out to be monolithic failures.

Stop reinventing the wheel

When a new system fails to deliver what was expected, many businesses throw the whole thing out and go back to the drawing board. The next project spends 70% of its budget on identifying, normalizing, moving, storing, and optimizing data, before it can deliver a single penny of value. Factor in an average application lifetime of three to five years before replacement or major upgrade is required and you’re spending an awful lot of time treading water as a company.

Being able to innovate and pivot in the market requires speed. Fail or succeed, you need to be able to do it sooner than your competitor. It’s not easy to reconcile purpose-built, inflexible applications with new data, application, and infrastructure services. An optimal mix of what you have and what you need, bringing internal and external components together, is possible. But to get there quickly we need to successfully reuse, not recreate, existing data, applications, and infrastructure, while fully embracing “as-a-Service” offerings.

Think about adoption

Every new system necessitates some compromises on speed, cost, and quality, so you need to consider these things at the outset and use them to help you make an informed decision. Every business has to overcome many of the same challenges.

  • How will you integrate? – A non-invasive approach, one that leverages existing data, apps, and analytics without requiring changes to those targeted systems, could ease adoption.
  • Is it robust and resilient? – It needs to be able to identify and access available processing resources, both internally and externally, through a distributed, fault-tolerant network. Self-managed resiliency can drastically cut overheads and ensure reliable operation.
  • What about compliance? – It needs to be fully configurable, taking into account existing applications, processes, security standards, and governance models.
  • What about standards compatibility? – Broad compatibility with existing industry standards will enable you to leverage existing staff skills, reuse previously developed solutions, and easily blend in third-parties for further development or maintenance.
  • Is it scalable and affordable? – It has to be easy to deploy and connect new users to your existing platform. A solid licensing system should allow you to add and remove users at will, so you pay for what you’re actually using.
  • Can you develop on top? – Don’t get locked into a vendor’s roadmap. You need self-sufficiency to avoid the situation where the vendor becomes a bottleneck, because you’re reliant on them to create new functionality.

Solving the distribution problem

Faced with a range of different databases, systems, applications, and processes, we have been focusing on getting everything in one place before we filter and analyze it. To keep the spotlight on business value and solve this distribution problem, we need to look beyond a centralized method.

Let’s stop building monolithic, multi-year integration initiatives that end up obsolete before they generate any value. Let’s cut down on time and cost by going directly to the source.

A distributed architecture could cut through costly pre-processing layers of migration, translation, and normalization. It could allow local processing by hosting functionality near the source systems. Instead of building more infrastructure and wasting time and resources on continually rebuilding foundations, extract the precise data you need and apply business logic automatically. Filter, process, and analyze, then review.
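To make the idea concrete, here is a minimal sketch of that pattern, with hypothetical names and simulated data: each source filters and aggregates its own records locally, and only the small summaries travel to the central step, so no migration or normalization layer is needed up front.

```python
# Hypothetical sketch: rather than copying every raw record into a central
# store before analysis, each source filters and summarizes locally and
# ships only a small aggregate upstream.

def local_aggregate(records, predicate, metric):
    """Runs near the source system: filter, then summarize (hypothetical helper)."""
    selected = [r for r in records if predicate(r)]
    return {"count": len(selected), "total": sum(metric(r) for r in selected)}

def combine(partials):
    """Central step: merge the small per-source summaries."""
    return {
        "count": sum(p["count"] for p in partials),
        "total": sum(p["total"] for p in partials),
    }

# Two simulated source systems; only flagged amounts are of interest.
branch_a = [{"amount": 120, "flagged": True}, {"amount": 40, "flagged": False}]
branch_b = [{"amount": 300, "flagged": True}]

partials = [
    local_aggregate(branch_a, lambda r: r["flagged"], lambda r: r["amount"]),
    local_aggregate(branch_b, lambda r: r["flagged"], lambda r: r["amount"]),
]
print(combine(partials))  # {'count': 2, 'total': 420}
```

The design choice is the point: the central system never sees the raw, differently shaped records, only the pre-digested results, which is what removes the costly pre-processing layers described above.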

If we are truly to learn from our project failures and improve, then it’s time we considered that our fundamental approach might be flawed. Innovation is often about viewing familiar problems from a fresh perspective rather than accepting received wisdom. Let’s apply that same logic to our overall approach to solving business problems.

For further reading on this subject by the author: “Do you have a big data issue or is it a data diversity issue?”; And: “Heterogeneous analytics and making sandwiches.”

 

Sign up for the free insideBIGDATA newsletter.
