How To Spend Less Time Processing Queries And More Time Gaining Insights


Here’s a guide on how to find a GPU database that’s just right for you

GPUs (Graphics Processing Units) have quickly moved beyond their gaming roots and are now rapidly emerging as a solution for accelerating data processing. In fact, GPU-accelerated database solutions are moving into enterprise data centers and taking over from CPUs as the primary data cruncher.

Companies are tiring of sluggish data analytics projects in which queries often have to run overnight, only for analysts to discover a basic assumption was wrong and suddenly it is 48 hours or more before the original question can be properly answered. That is far from ideal, and further still from the promised land of real-time analytics that most companies now desire as they look to maximize the value of their vast databases.

GPU Accelerated Databases: A Quick Refresher Guide

A GPU contains thousands of cores, designed to handle the split-second rendering of graphics on a large screen. Point those cores at rows and columns of data instead, and a GPU-accelerated database becomes remarkably fast at analytics processing, making it ideally suited to accelerating database SQL queries.
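To make the idea concrete, here is a minimal Python sketch of the underlying principle: evaluating a filter-and-aggregate query one row at a time versus as a single data-parallel operation over whole columns. The column names are illustrative, and the NumPy vectorized call merely stands in for the many-core execution a GPU database performs; this is not any vendor's implementation.

```python
import numpy as np

# Columnar layout: one array per column, as a GPU database would store it.
rng = np.random.default_rng(0)
order_value = rng.uniform(1, 500, size=1_000_000)   # hypothetical sales column
region_id = rng.integers(0, 50, size=1_000_000)     # hypothetical region column

def scan_rows(values, regions, target):
    """Row-at-a-time scan, roughly how a non-vectorized engine evaluates
    'SELECT SUM(order_value) WHERE region_id = target'."""
    total = 0.0
    for value, region in zip(values, regions):
        if region == target:
            total += value
    return total

def scan_columns(values, regions, target):
    """Data-parallel scan: one operation applied across the entire column.
    On a GPU database the same pattern is spread over thousands of cores."""
    return values[regions == target].sum()

print(scan_rows(order_value, region_id, 7))
print(scan_columns(order_value, region_id, 7))
```

The two functions return the same answer; the difference is that the columnar version expresses the query as bulk operations that map naturally onto massively parallel hardware.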

GPU-accelerated computing already has many early adopters. Weather forecasting is one example: GPUs are used for real-world modeling applications that must simultaneously process the influence of many variables.

What you should look for when identifying the right GPU-accelerated database

Only a handful of companies today offer a GPU-accelerated database that can harness the power of GPUs for relational, structured analytics – let alone do so through a SQL interface suitable for any enterprise data center. Before a company sets out to identify which GPU-accelerated database is best for it, there are a few things it should think about:

  1. Does it integrate with the company’s existing technologies? The most important thing to know is whether the new GPU-accelerated database works with existing software. Unless a company has cash to burn, it is unlikely to want to spend more overhauling its database technology, having already invested considerable blood, sweat and tears bringing its current systems up to date. New technology is released every year, and organizations cannot afford to re-invest completely on an annual basis. However, if a GPU-accelerated database has been built on PostgreSQL or a similar open source foundation, it is far more likely to work with current technology investments (a brief connection sketch follows this list).
  2. What is the onboarding time for a new technology? Another cost is the time it takes to onboard and learn a new technology. Solutions built on PostgreSQL, or something similar that developers already know, can be put to use immediately, cutting training and onboarding time considerably.
  3. Does it connect to current visualization tools? Most data analysts have already spent hours upon hours creating custom visualization dashboards in their favorite tools – be it Tableau, MicroStrategy or Microsoft Power BI. Some GPU-accelerated solutions connect to these visualization tools with little effort, so those dashboards do not have to be rebuilt.
  4. Does the company already have stored procedures and code for processing data? Because GPU databases are a relatively recent development, not all of the available solutions match the level of maturity developers have come to take for granted. GPU databases built on PostgreSQL or something similar offer far greater functional capability, which makes it easier to carry over existing stored procedures and data-processing code.
  5. Can it scale? Let’s face it, GPUs are expensive. Ideally a company should be able to add and remove GPU resources at will, scaling processing capability up and down with seasonal highs and lows in demand. That helps keep overall data processing costs down.
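As a sketch of the integration point raised in item 1: if a GPU-accelerated database is built on PostgreSQL, existing drivers and tools can usually talk to it over the standard PostgreSQL protocol. The example below assumes such compatibility and uses the common psycopg2 driver; the hostname, credentials, and the orders table are placeholders for illustration, not details from the article.

```python
import psycopg2  # standard PostgreSQL driver; reusable if the GPU database speaks PostgreSQL

# Connection details are hypothetical placeholders.
conn = psycopg2.connect(
    host="gpu-db.example.internal",
    port=5432,
    dbname="analytics",
    user="analyst",
    password="change-me",
)

with conn, conn.cursor() as cur:
    # An ordinary SQL query; any acceleration happens inside the engine,
    # so existing code and BI tool connections keep working unchanged.
    cur.execute(
        """
        SELECT region_id, SUM(order_value) AS total_sales
        FROM orders
        WHERE order_date >= %s
        GROUP BY region_id
        ORDER BY total_sales DESC
        """,
        ("2018-01-01",),
    )
    for region_id, total_sales in cur.fetchall():
        print(region_id, total_sales)
```

Because the query and the driver are unchanged, the same pattern applies to BI tools that already ship a PostgreSQL connector.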

As a starting point, it is worth keeping the above five points in mind when identifying the GPU-accelerated database that best fits a company.

About the Author

Richard Heyns is CEO of Brytlyt, a GPU database and analytics platform that combines the extreme performance of GPUs with patent-pending IP and integrates with PostgreSQL, making it easy to connect to legacy systems and to scale, while remaining rich in functionality and easy to use. Richard was responsible for the initial and ongoing research into bringing database operations to general-purpose processing on graphics processing units. The intellectual property in natively parallelisable algorithms that emerged from this research forms the foundation of Brytlyt’s high-performance data platform. Richard brings 15 years of experience working on large business intelligence and Big Data projects and software development.

