The Latest Big Changes in Big Data


Big Data has been a buzzword for a while. The term has featured in Gartner’s hype cycle for years. From its beginnings in the early 2000s, it moved swiftly to the peak of inflated expectations, weathered its way through the trough of disillusionment and is now firmly part of our lives – somewhere between the slope of enlightenment and the plateau of productivity.

As the hype and awareness around Big Data have grown, so has the pace of change in the technology behind it. Just a couple of weeks ago, two of the big players in big data changed hands.

EMC, which offers Hadoop consultancy as part of its “Big Data Portfolio”, merged with Dell in a multi-billion-dollar deal, and Micro Focus, a relatively little-known UK software company, bought most of HPE’s software business, including the analytic database Vertica, in a similarly sized deal.

Vertica started out as a small independent company in 2005 and, after initial success, was acquired by Hewlett Packard in 2011. In 2015 the parent company was spun off to form HPE and now, less than a year later, Vertica has changed hands again to become part of Micro Focus – its customers must be wondering what’s going on.

This turn of events follows quickly on the heels of the latest development at Actian, which acquired ParAccel in 2013. The company has revealed that it is pulling the plug on its Actian Analytics Platform, which includes the analytic databases Actian Matrix and Actian Vector, in order to focus on its core business.

Mergers and acquisitions are nothing new in technology; by its nature it’s a fast-moving business. In data analytics, however, there is another facet to consider: the huge interest in open source big data tools such as Hadoop, Hive and, more recently, Spark.

This growth in open source technology has forced vendors in the data analytics space to think deeply about the relevance of their value propositions and, more often than not, to hedge their bets by integrating open source tools such as Hadoop into their offerings – or, as EMC did, to build a proposition around them.

Vendors have been extolling the benefits of integrating Hadoop into their products for a number of years. Indeed, EXASOL has included Hadoop support in its database for some three years now. That’s because Hadoop is a sensible approach for storing and retrieving large volumes of data – what some people refer to as a “data lake”. But here’s the thing: if you need to run interactive queries on your data, you still need a fast database. And if high performance matters to your business, then you need a very fast database.

It’s all about finding the right tools for the job, and combining EXASOL’s MPP in-memory database with Hadoop can be a match made in heaven, as many of our clients have found.
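To make that division of labour concrete, here is a minimal sketch of the pattern, written with Apache Spark. It is illustrative only, not EXASOL-specific tooling: the paths, table names, JDBC URL and driver class are assumptions. The idea is that raw data stays cheap and durable in the Hadoop data lake, while compact, pre-aggregated results are loaded into the fast analytic database that serves interactive queries.

```python
# Illustrative sketch: keep raw event data in a Hadoop data lake,
# pre-aggregate it with Spark, and push the result into a fast analytic
# database over JDBC for interactive querying.
# Paths, JDBC URL, driver class, table names and credentials are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake_to_warehouse").getOrCreate()

# Raw, append-only data stays in the data lake (HDFS/Parquet).
events = spark.read.parquet("hdfs:///data/lake/web_events/")

# Pre-aggregate once in batch ...
daily = (events
         .groupBy("event_date", "country")
         .agg(F.count("*").alias("events"),
              F.countDistinct("user_id").alias("users")))

# ... then load the compact result into the analytic database, where
# interactive dashboard queries can run with low latency.
(daily.write
      .format("jdbc")
      .option("url", "jdbc:exa:exasol-host:8563")       # hypothetical DSN
      .option("driver", "com.exasol.jdbc.EXADriver")    # assumed driver class
      .option("dbtable", "analytics.daily_web_events")
      .option("user", "etl_user")
      .option("password", "***")
      .mode("overwrite")
      .save())
```

The design choice here is the point of the pattern: the data lake is the system of record for everything, and only the data that needs to be queried interactively lives in the fast database.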

All these mergers and acquisitions in the news make me reflect on the pace of change we are seeing in technology.

Data analytics is a good example of this because there always seems to be another technology around the corner that promises to be better than the last, especially when it comes to open source. Apache Spark, the open source cluster computing framework for data processing, appears to be the current flavour of the month. It’s only about two years old but has already made a name for itself – partly because of the claim that it addresses the shortcomings of Hadoop MapReduce with its fast query times and bolt-on SQL query engine.

It sounds great, but the truth is that it’s a very technical offering: it starts out as a general-purpose platform and requires a not-inconsiderable amount of time and effort to tune and optimise.
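As a rough illustration of both halves of that picture, the convenient bolt-on SQL engine and the tuning it typically needs, here is a small PySpark sketch. The file path, table name and configuration values are placeholders, not recommendations; in practice they have to be chosen for a real cluster and workload.

```python
# Illustrative only: Spark's SQL engine is easy to reach, but sensible
# performance usually depends on configuration like the settings below.
# The path and all of the numbers here are placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("spark_sql_demo")
         # Knobs like these are where the tuning effort tends to go:
         .config("spark.executor.memory", "8g")            # sized to the cluster
         .config("spark.sql.shuffle.partitions", "200")    # sized to the data
         .config("spark.sql.autoBroadcastJoinThreshold", "10485760")
         .getOrCreate())

# Register a data-lake table and query it with plain SQL.
spark.read.parquet("hdfs:///data/lake/sales/").createOrReplaceTempView("sales")

top_products = spark.sql("""
    SELECT product_id, SUM(amount) AS revenue
    FROM sales
    GROUP BY product_id
    ORDER BY revenue DESC
    LIMIT 10
""")
top_products.show()
```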

Anyone trying to stay up-to-date with the latest thinking in data analytics is in for a tough time. Vendors change, strategies are abandoned, tools quietly cease to be supported and the “in vogue” open source data project changes month by month.

EXASOL, in comparison, has stayed true to its roots. When we started out, we concentrated on one thing: creating a fast analytic database – one that is self-tuning and allows our customers to concentrate on doing their job. We have remained completely independent, and the product today is lean and easy to use.

We saw, before it was in fashion, that a column-oriented, intelligent, in-memory MPP database was the best solution for fast data analytics. In recent years the market has become more closely aligned with our way of thinking – for instance with the release of SAP HANA in 2012 and Oracle In-Memory in 2014. However, EXASOL is still the fastest, and the rise of data analytics and the proliferation of business intelligence tools make EXASOL more relevant than ever.

Contributed by: Aaron Auld, CEO of EXASOL. Aaron is in charge of coordinating EXASOL’s strategic direction and positioning as well as its international expansion. He has a law degree from Munich University and an M.B.L. in international business law from the University of St. Gallen in Switzerland, where he finished top of his year. He served as General Counsel for Océ’s global software business (acquired by Canon) and managed the legal execution of primion’s IPO on the Frankfurt Stock Exchange, as well as several merger and acquisition deals, before joining EXASOL in 2006. At EXASOL, Aaron was previously responsible for Operations before becoming CEO in 2013. He is Scottish by upbringing but has embraced German virtues where he lives and works today.

 
