Re-Imagining Ultimate Performance with In-Memory Computing
With the cost of system memory dropping roughly 30% every 12 months, in-memory computing has become the first choice for a variety of workloads across all industries. In-memory computing can provide a lower TCO for data processing systems while delivering an unparalleled performance advantage. In-memory computing technologies take many forms, ranging from in-memory data caches on a single server to in-memory databases (IMDBs), in-memory data grids (IMDGs), and comprehensive in-memory computing platforms (IMCPs). High-performance in-memory computing technologies can even allow real-time analytics to run on operational datasets, enabling hybrid transactional/analytical processing (HTAP) systems that can provide significant cost and complexity savings. This white paper provides an overview of in-memory computing technology with a focus on in-memory data grids. It discusses the advantages and uses of in-memory data grids and introduces the GridGain In-Memory Data Fabric. Finally, it presents a deep dive into the capabilities of the GridGain solution.
In-Memory Data Grids: Bringing High Performance to Big Data
Traditional approaches to application architecture are based on spinning-disk technologies, which struggle to keep up with the expanding data volumes and velocities inherent in today's enterprise applications. To meet the need for a faster, scalable alternative, organizations are increasingly considering in-memory data grids (IMDGs) as the cornerstone of their next-generation architectures. This section discusses how IMDGs are revolutionizing data processing, turning Big Data into Fast Data, and how their advantages grow with added features that turn them into comprehensive in-memory computing platforms.
Why In-Memory Data Grids Are Faster and More Scalable Than Disk-Based Storage
An in-memory data grid stores all of its data in memory, as opposed to traditional database management systems (DBMSs), which use disks as their primary storage mechanism. By making use of system memory rather than spinning disks, IMDGs are typically between a thousand and a million times faster than traditional DBMSs. Keeping data in memory is not the only reason IMDGs perform significantly faster than disk-based databases; architectural differences are the main driver of the performance improvement. IMDGs use a memory-first, disk-second approach: memory serves as primary storage, while disk serves as secondary storage for backup and persistence. Since memory is a more limited resource than disk, IMDGs are built to scale horizontally; you can add nodes on demand in real time, and IMDGs scale linearly to hundreds of nodes. To reduce redundant data movement, they provide strong semantics for data locality and affinity-based data routing.
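The data locality and affinity routing described above can be illustrated with a minimal sketch. The following Java example is a simplified, self-contained illustration of hash-based partitioning, not GridGain's actual API: each key maps deterministically to a partition, and each partition is assigned to a node, so a computation can be routed to the node that already owns the data instead of moving the data across the network. The class and method names here are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

public class AffinitySketch {
    // A fixed number of partitions; partitions, not keys, are assigned to nodes.
    static final int PARTITIONS = 8;

    // A key always maps to the same partition, regardless of cluster size.
    static int partitionForKey(Object key) {
        return Math.floorMod(key.hashCode(), PARTITIONS);
    }

    // Assign each partition to a node (simple round-robin for this sketch).
    static String nodeForPartition(int partition, List<String> nodes) {
        return nodes.get(partition % nodes.size());
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("node-1", "node-2", "node-3");

        // Keys that hash to the same partition are co-located on one node,
        // so a computation over them can run locally with no data movement.
        String key = "customer:42";
        int p = partitionForKey(key);
        System.out.println(key + " -> partition " + p + " on "
                + nodeForPartition(p, nodes));

        // Adding a node changes only the partition-to-node assignment, not
        // the key-to-partition mapping: only affected partitions migrate.
        List<String> grown = Arrays.asList("node-1", "node-2",
                "node-3", "node-4");
        System.out.println(key + " -> partition " + p + " on "
                + nodeForPartition(p, grown));
    }
}
```

Real IMDGs use more sophisticated assignment functions (for example, rendezvous hashing) so that adding or removing a node moves as few partitions as possible, but the principle is the same: route the work to the data.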