Big Data and Analytics – An Evolving Ecosystem


In this special guest feature, Dr. Jeffry Nimeroff, CIO at Zeta Interactive, examines how 2016 is shaping up to be a great year for BigFast Data and analytics, and how this is the year that user-friendly, real-time, converged platforms will become prevalent. Dr. Jeffry Nimeroff is currently the Chief Information Officer at Zeta Interactive. His scope includes 1) overseeing global/enterprise technology, 2) harmonizing technologies across Zeta’s business units, 3) driving repeatable, scalable processes in product management, project execution, and technical delivery, 4) managing the integration of new acquisitions, and 5) architecting the next generation of Zeta’s people-based marketing platform. Dr. Nimeroff earned his Ph.D. in Computer and Information Science from the University of Pennsylvania in 1997, concentrating on the design of large multimedia systems, and an M.S.E. from the same university in 1993. He also holds B.A. and M.A. degrees in Computer Science from Boston University.

“Big Data” is pretty big. 2.5 quintillion (2.5 × 10^18) bytes of data per day big. It’s so big that 90% of the world’s data has been created in the last 2 years alone. And the growth of Big Data is accelerating.

Whether it’s the disruptive Internet of Things (IoT) or the gentler change introduced by the current set of prevalent devices (smartphones, smartwatches, etc.), a multi-dimensional acceleration of data acquisition is occurring. Combine this expanding data footprint with an ever-increasing desire for real-time data utilization, and it is not hard to see why our current approaches to technology and tools are struggling to keep “pace.”

Big Data is fundamentally transformed by this new pace. In 2016, Fast is the new Big. We will need both evolutionary AND revolutionary changes to remain performant. The need for change in processes and platform (tech and tools) is real, and it is driven by a change in people, who are becoming more data-savvy and growing in number.

On the platform side, 2016 is shaping up as the year that true headway is made in the convergence of transactional and analytical systems. Our user population doesn’t care where the data resides; they want to ask intricate data-related questions and get accurate answers. Specific solutions like MapReduce (Hadoop) have evolved toward general applicability in platforms such as Cloudera (with Impala) and Hortonworks (with YARN), and products from MemSQL and MarkLogic are intriguing in the way they cut across the database technology landscape, combining in-memory speed, SQL syntax, linear scaling, a broad set of integrations, and operational management in one unified offering. Layering in technologies such as Apache Storm, Kafka, and Spark to orchestrate the environment and make it user-friendly yields an ecosystem that supports real-time processing of streaming, transactional, and analytical data with less technical acumen and intervention than previously needed.
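As a concrete illustration of that kind of pipeline, the minimal sketch below (assuming the Spark 1.x-era DStream API and the Kafka direct connector, not any specific vendor offering mentioned above) consumes click events from a Kafka topic and maintains rolling per-campaign counts. The topic name, broker address, and record format are hypothetical.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="realtime-analytics-sketch")
ssc = StreamingContext(sc, 5)             # 5-second micro-batches
ssc.checkpoint("/tmp/spark-checkpoints")  # required for the windowed state below

# Direct (receiver-less) Kafka stream; the "clicks" topic and broker are hypothetical.
events = KafkaUtils.createDirectStream(
    ssc, ["clicks"], {"metadata.broker.list": "localhost:9092"})

# Each record arrives as (key, value); assume the value looks like "campaign_id,user_id".
campaign_counts = (
    events.map(lambda kv: kv[1].split(",")[0])        # extract campaign_id
          .map(lambda campaign: (campaign, 1))
          .reduceByKeyAndWindow(lambda a, b: a + b,   # fold in new events
                                lambda a, b: a - b,   # subtract expired events
                                60, 5))               # 60s window, sliding every 5s

campaign_counts.pprint()  # in practice, write to the converged analytical store

ssc.start()
ssc.awaitTermination()
```

In a converged deployment, those windowed aggregates would land in the same store that serves ad hoc analytical queries, which is what lets one platform answer both operational and analytical questions.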

From a people perspective, more users are getting directly involved in data usage, and their subject matter expertise is growing. Business users are becoming more data-savvy, learning data management and data science techniques, and becoming more directed in their tool usage. Data platforms and tools often have a steep learning curve; in 2016, that will change. Cognitive computing, natural language interfaces, and comprehensive user experience design are bridging the tool applicability gap. IBM Watson Analytics is a great example of where the future lies. More traditional, but still exciting, tools from Tableau, Alteryx, Looker, Domo, Chartio, and ClearStory do a great job of making data and insights more accessible in an intuitive manner.

Finally, with all this focus on platforms designed for maximum data throughput, we can often overlook the impact of algorithmic change (the process). The fundamental change a faster algorithm can provide is unmatched. Defining algorithms that produce accurate results (correct), build on an existing model (incremental), and process only a small working set of new data (local) is one way to facilitate that fundamental change. In 2015 we saw the Slider framework utilize sliding window analytics, and before that we saw SASH for HBase utilize change sets. I believe in 2016 we will see new incremental and local approaches to data processing that deftly balance performance and accuracy.
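To make the correct/incremental/local idea concrete, here is a minimal sketch (an illustrative example, not any of the frameworks named above) of a sliding-window sum that is updated from the change set alone: each new observation is folded in and only the item that expired is subtracted, so the per-update work stays constant instead of rescanning the whole window.

```python
from collections import deque

class SlidingWindowSum:
    """Maintain a sum over the last window_size observations incrementally."""

    def __init__(self, window_size):
        self.window = deque()        # only the local working set is retained
        self.window_size = window_size
        self.total = 0               # the existing "model", updated in place

    def add(self, value):
        # Incremental step: fold the new observation into the running total.
        self.window.append(value)
        self.total += value
        # Local step: correct for the single item that just expired, if any.
        if len(self.window) > self.window_size:
            self.total -= self.window.popleft()
        return self.total

stream = [3, 1, 4, 1, 5, 9, 2, 6]
w = SlidingWindowSum(window_size=3)
print([w.add(x) for x in stream])    # identical to rescanning each 3-item window
```

The same pattern extends to counts, means, and other invertible aggregates; statistics without a cheap inverse (a maximum, for example) need window-aware data structures or approximate sketches instead.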

Whether we focus on people, process, or platform (technology and tools), 2016 is shaping up to be a great year for BigFast Data and analytics. With increasingly skilled end users, the constant pressure to remain performant, and the sheer breadth of the data available, we are at an intriguing intersection. I believe that 2016 is the year that user-friendly, real-time, converged platforms will become prevalent.

 



Comments

  1. Michael Watson says

    The perennial IT challenge is: how do we accomplish business goal x with the greatest efficiency? Efficiency leads to simplicity, or less complexity, which hopefully leads to a solution with lower OPEX while providing greater business value. The more solutions [vendors] involved, the less simplicity, the more complexity and cost, and the less business value derived. You mention solutions that ingest and correlate data separately from solutions that make that data usable to the business (a.k.a. dashboarding). Why not implement solutions like Splunk that do both?