
Powering Digital Transformation with In-Memory Computing and Open Source Software

In this special guest feature, Abe Kleinfeld, president & CEO of GridGain, looks at the rise of in-memory computing as the critical computing platform sitting at the nexus of several key computing trends that have converged in recent years. From big data to IoT and real-time analytics, in-memory computing has emerged as the linchpin for these computing initiatives. Abe joined GridGain as president & CEO in 2013 and has transformed the company into the leading open source in-memory computing platform provider. Since joining GridGain, the company has averaged triple-digit annual sales growth and raised $16M in Series B venture financing. The company is also known for founding and producing the annual In-Memory Computing Summit, the world’s first and only in-memory computing conference. He holds a bachelor’s degree in computer science from the State University of New York at Oswego.

For most organizations, digital transformation initiatives are driven by the need to dramatically improve the customer and end-user experience and streamline systems by leveraging new technologies to remake their business processes. Some companies launch web-scale applications to allow direct, 24×7 access to their services. Others change their business models, shifting to software-as-a-service (SaaS) to engage with customers in a more flexible, cost-effective and personalized way. One of the most exciting developments is the rise of the Internet of Things (IoT), where sensor and mobile device data is collected and analyzed to power improved or even entirely new business-to-business (B2B) and business-to-consumer (B2C) services.

The success of all three of these digital transformation strategies depends on the ability to transact and analyze huge amounts of data in real-time. Failing to achieve real-time performance inevitably leads to user frustration, lost customers, and the inability to deliver the promised return on investment. As a result, companies are deploying several technologies designed to deliver the required performance.

Today, eliminating latency and dramatically improving application performance is easier than ever through the use of in-memory computing (IMC). The desire to use IMC has been around for decades, but memory has been too expensive for large-scale deployments in most use cases. That is no longer true. Memory costs have dropped significantly in recent years and are now only slightly more expensive than disk-based storage. Given the huge performance gains – and the ability to provide the required customer experience – IMC offers an extraordinary value proposition and should be considered for any digital transformation initiative.

Leading IMC platforms support simultaneously transacting and analyzing huge amounts of data in real-time on a massively scalable architecture. The in-memory computing platform is inserted between the application and data layers, offering massively parallel processing across a high-availability, distributed computing cluster with ACID transaction support. The underlying RDBMS, NoSQL or Apache Hadoop data store is maintained in the RAM of the distributed cluster, delivering a tremendous performance boost for transaction and analytics processing.
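The idea of an in-memory layer inserted between the application and data layers can be illustrated with a minimal, hypothetical sketch in Python. The class and method names here are illustrative only (not any vendor's API): a write-through cache serves reads from RAM while propagating writes to the slower backing store to preserve durability.

```python
# Hypothetical sketch: an in-memory layer between the application and a
# disk-based store, serving reads from RAM and writing through to disk.

class BackingStore:
    """Stand-in for a disk-based RDBMS or NoSQL store."""
    def __init__(self):
        self._rows = {}

    def read(self, key):
        return self._rows.get(key)

    def write(self, key, value):
        self._rows[key] = value

class InMemoryLayer:
    """Write-through cache: reads hit RAM; writes go to RAM and the store."""
    def __init__(self, store):
        self._store = store
        self._ram = {}

    def get(self, key):
        if key not in self._ram:            # cache miss: load from the store
            value = self._store.read(key)
            if value is not None:
                self._ram[key] = value
        return self._ram.get(key)

    def put(self, key, value):
        self._ram[key] = value              # keep the RAM copy hot
        self._store.write(key, value)       # write through for durability

store = BackingStore()
layer = InMemoryLayer(store)
layer.put("order:42", {"total": 99.50})
print(layer.get("order:42"))   # served from RAM, no disk read needed
```

Real platforms distribute the RAM tier across a cluster and add ACID transaction support; this sketch only shows where the layer sits in the stack.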

The design of an IMC platform also makes it easy to scale. Leading systems automatically utilize the RAM of new nodes added to the cluster and rebalance the dataset across the nodes, providing extreme scalability and high availability.
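The rebalancing behavior described above can be sketched in a few lines of Python. This is a simplified, hypothetical model (real platforms use more sophisticated partition-assignment schemes): keys are hashed to partitions, partitions are mapped to nodes, and when a node joins, ownership is recomputed and data is redistributed.

```python
# Hypothetical sketch of automatic rebalancing: keys hash to partitions,
# partitions map to nodes, and adding a node triggers a redistribution.

import hashlib

def partition(key, n_partitions=16):
    """Deterministically map a key to one of n partitions."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % n_partitions

class Cluster:
    def __init__(self, node_names, n_partitions=16):
        self.n_partitions = n_partitions
        self.nodes = {name: {} for name in node_names}  # per-node RAM
        self._assign()

    def _assign(self):
        """Map each partition to a node (round-robin over sorted names)."""
        names = sorted(self.nodes)
        self.owner = {p: names[p % len(names)] for p in range(self.n_partitions)}

    def put(self, key, value):
        node = self.owner[partition(key, self.n_partitions)]
        self.nodes[node][key] = value

    def get(self, key):
        node = self.owner[partition(key, self.n_partitions)]
        return self.nodes[node].get(key)

    def add_node(self, name):
        # Gather all data, recompute ownership, redistribute (rebalance).
        items = [kv for data in self.nodes.values() for kv in data.items()]
        self.nodes[name] = {}
        for data in self.nodes.values():
            data.clear()
        self._assign()
        for k, v in items:
            self.put(k, v)

cluster = Cluster(["node-1", "node-2"])
for i in range(20):
    cluster.put(f"key{i}", i)
cluster.add_node("node-3")          # new RAM is used; data is rebalanced
print(cluster.get("key7"))          # still reachable after rebalancing
```

Production systems avoid the full data shuffle shown here by moving only the partitions whose ownership changed, but the visible effect is the same: the dataset spreads across all available RAM.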

A new, third generation of in-memory computing platforms has recently been introduced, featuring memory-centric architectures that allow tiered-memory approaches to data management. In these solutions, the full dataset is maintained on disk while a subset of the dataset is held in memory. However, the system can transact and analyze data across the entire dataset, whether on disk or in memory. These new memory-centric, distributed, transactional SQL database alternatives can be scaled out across thousands of servers.
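A tiered-memory store of this kind can be sketched as follows. This is a hedged, illustrative Python model, not a vendor implementation: the full dataset lives in a "disk" tier (here just a dict standing in for durable storage), a bounded LRU subset stays in RAM, and reads work transparently across both tiers.

```python
# Illustrative sketch of a memory-centric, tiered store: the full dataset
# is on "disk"; a bounded, least-recently-used subset is kept in RAM.

from collections import OrderedDict

class TieredStore:
    def __init__(self, ram_capacity):
        self.ram = OrderedDict()   # hot subset, bounded by ram_capacity
        self.disk = {}             # full dataset (stand-in for disk)
        self.capacity = ram_capacity

    def put(self, key, value):
        self.disk[key] = value     # the full copy always lives on disk
        self._promote(key, value)

    def get(self, key):
        if key in self.ram:        # memory-tier hit: fastest path
            self.ram.move_to_end(key)
            return self.ram[key]
        if key in self.disk:       # disk-tier hit: promote into RAM
            value = self.disk[key]
            self._promote(key, value)
            return value
        return None

    def _promote(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)
        while len(self.ram) > self.capacity:
            self.ram.popitem(last=False)   # evict least recently used

store = TieredStore(ram_capacity=2)
store.put("a", 1)
store.put("b", 2)
store.put("c", 3)        # "a" is evicted from RAM but remains on disk
print(store.get("a"))    # answered from the disk tier, then re-promoted
```

The key property the paragraph describes is visible here: queries succeed against the entire dataset regardless of which tier currently holds a given record.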

In-memory computing platforms can be used to power hybrid transactional/analytical processing (HTAP) use cases. HTAP offers the option of performing analytics on live transactional data in a single unified OLTP and OLAP environment without impacting the transaction processing performance. HTAP is a powerful solution strategy for IoT applications – in areas such as energy distribution, healthcare, smart cities, weather tracking, and more – that require real-time analysis of sensor and other external data sources in order to react to real-time conditions. HTAP can also drive down costs for other use cases which can benefit from real-time analysis of transaction data, such as for inventory management, routing driverless cars, hospital patient and security monitoring, and more.
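The HTAP idea of running analytics directly over live transactional data can be shown with a small, hypothetical Python sketch (the class and sensor names are invented for illustration): the same in-memory rows serve point inserts (OLTP) and aggregate queries (OLAP), with no copy to a separate warehouse.

```python
# Hypothetical HTAP-style sketch: one in-memory dataset serves both
# transactional inserts (OLTP) and live aggregate analytics (OLAP).

class HtapStore:
    def __init__(self):
        self.readings = []   # live transactional data, e.g. sensor events

    def insert(self, sensor_id, value):
        """OLTP side: record a new sensor reading."""
        self.readings.append({"sensor": sensor_id, "value": value})

    def avg_by_sensor(self):
        """OLAP side: aggregate over the live rows, no ETL step."""
        totals, counts = {}, {}
        for r in self.readings:
            totals[r["sensor"]] = totals.get(r["sensor"], 0.0) + r["value"]
            counts[r["sensor"]] = counts.get(r["sensor"], 0) + 1
        return {s: totals[s] / counts[s] for s in totals}

store = HtapStore()
store.insert("meter-1", 10.0)   # transactions keep arriving...
store.insert("meter-1", 14.0)
store.insert("meter-2", 5.0)
print(store.avg_by_sensor())    # ...while analytics run on the live data
```

In a real HTAP platform the analytical queries run in parallel across the cluster so they do not impact transaction throughput; this sketch only shows the unified-dataset idea.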

Companies using in-memory computing platforms today typically see a 1,000x or more increase in OLTP and OLAP processing speeds compared to their legacy applications built on disk-based databases. In many cases, IMC platforms are also able to support hundreds of millions of transactions per second.

In-memory computing platforms also offer one other significant advantage for companies engaged in or planning digital transformation initiatives. Today’s top IMC platforms are being developed using the open source model. This means users can participate in the IMC platform’s development and have a direct impact on the platform’s roadmap. It also means that innovation occurs much faster than with the typical release cycles of proprietary solutions. This speed of innovation is critical for companies transforming their business processes. It allows them to keep pace with the rapid evolution of their customers’ expectations and deliver services and solutions that will continue to differentiate them from the competition.


Sign up for the free insideBIGDATA newsletter.
