
How to Lift Hadoop Out of the Trough of Disillusionment

In this special guest feature, George Corugedo, CTO of RedPoint Global Inc., takes a look at Hadoop adoption and how it might evolve in 2016. A mathematician and seasoned technology executive, George has over 20 years of business and technical expertise. As Co-Founder and COO, George is responsible for leading the development of RedPoint’s Convergent Marketing Platform. He holds a B.S. and a B.A. in Geology & Mathematics, respectively, from the University of Miami, and an M.S. in Applied Mathematics from the University of Arizona.

If you’ve perused the big data pages of the tech press, you know that Hadoop has been taking heat as of late. As the initial excitement over its potential faded, concerns over its adoption in the real world reached a peak this year: in May, Gartner found that more than half of the IT and business leaders surveyed had no plans to invest in the technology at all, and those who had were not championing Hadoop to their peers. Hadoop, the Big Hope of Big Data, has landed squarely in the Trough of Disillusionment. Can it emerge triumphant?

First, we need to understand why Hadoop has gained this perception. Of the various factors to blame, two stand out – a lack of technical skills in the workforce, and unrealistic expectations of the technology.

There has been an impression around Hadoop that it is Big Data’s magic wand: add Hadoop to your legacy systems and reap the benefits of your data. But this was a simplistic view of a complex technology. To take advantage of Hadoop in its current form, you need a specific set of skills and you need to know exactly what your business objective is. Without those, Hadoop is expensive, exotic and not worth the effort – and this has largely been the cause of many failed Hadoop implementations recently.

Some of the biggest companies out there have made Hadoop work – Disney and Nokia, to name two – and in doing so, these companies have drained an already limited talent pool of skilled data scientists. Without those data scientists, the rest of us have had our hands tied, and Hadoop adoption has been either unattainable or unsuccessful.

The solution to these problems, and the route out of Hadoop’s Trough of Disillusionment, is the next great moment in Big Data – the Age of Applications. Businesses need applications that take the need for specialized skills out of the Hadoop process, and bring the business objective front and center. No longer will businesses need to work out exactly what they need Hadoop to do for them – they will be able to pick their application based on the business objectives they need to meet, and use it seamlessly, without expensive training.

This in turn touches on an important development I expect we’ll see in 2016: powerful resources like Hadoop will become more or less invisible to business operations. That doesn’t mean Hadoop won’t remain a crucial piece of the puzzle, but applications will take over as the touchpoint for data-driven insights. This is likely the reason the industry is seeing this supposed decline in Hadoop adoption. Maybe it’s not so much declining as falling behind the curtain, leaving the main stage to the data management applications.

Hadoop’s business potential should never be in doubt. Its short-term difficulties are the result of application challenges, not technological shortcomings, and as we overcome those challenges through the adoption of intuitive, seamless front-end applications, the power of Hadoop can truly be realized.

 
