Interview: A3CUBE Sets Sights on the Emerging Arena of High Performance Data


The worlds of High Performance Computing and Big Data are converging with the rising demand for high performance data analysis. A3CUBE addresses this need with a new approach to parallel storage and analytics clusters, offering High Performance Data solutions. We caught up with Emilio Billi, CTO and Founder of A3CUBE, to learn more.

insideBIGDATA: A3CUBE’s tagline is “HPC Architecture for Supercharged Storage and Beyond”. What does this mean in a technological sense?


Emilio Billi: Our architecture permits tens of thousands of SSDs to be connected and accessed in parallel, using direct mapping of memory accesses from a local machine to the I/O bus and memory of a remote machine. This allows data to move between local and remote system memories without invoking operating system services. It also enables linear scalability of SSD bandwidth and IOPS, so computation and data access scale together linearly. This eliminates bandwidth and IOPS bottlenecks and provides the right balance of performance, capacity, and computation with unmatched flexibility at a fraction of the cost.
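To make the idea of direct memory mapping concrete, here is a minimal C sketch. It assumes a hypothetical kernel driver that exposes a remote node's memory window as the device file /dev/remote_mem0; the device path and its semantics are illustrative assumptions, not A3CUBE's actual interface. The point it shows is that after a single mmap() call, ordinary CPU loads and stores move data to and from the remote machine with no operating system services on the data path.

```c
/*
 * Illustrative sketch of direct memory-mapped remote I/O.
 * Assumes a hypothetical driver exposing a remote node's memory
 * window at /dev/remote_mem0 — not A3CUBE's real API.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define WINDOW_SIZE 4096

int main(void)
{
    /* Open the (hypothetical) device backing a remote memory window. */
    int fd = open("/dev/remote_mem0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Map the remote window into this process's address space.
     * After this call, plain loads and stores travel over the
     * interconnect directly; no further system calls are needed. */
    volatile uint8_t *remote = mmap(NULL, WINDOW_SIZE,
                                    PROT_READ | PROT_WRITE,
                                    MAP_SHARED, fd, 0);
    if (remote == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* An ordinary store: the CPU writes straight into the remote
     * node's memory, bypassing the kernel's I/O stack. */
    memcpy((void *)remote, "hello", 6);

    /* An ordinary load reads the remote memory the same way. */
    printf("remote[0] = 0x%02x\n", remote[0]);

    munmap((void *)remote, WINDOW_SIZE);
    close(fd);
    return 0;
}
```

Keeping the kernel off the data path is what makes the linear scaling claim plausible: aggregate bandwidth and IOPS grow with the number of mapped devices rather than being capped by system-call throughput on any one node.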

insideBIGDATA:  How would those in the Big Data world benefit from your products?

Emilio Billi: The future of Big Data is largely dependent on High Performance Data across the following metrics: Data Velocity, Volume, Variety, Integrity, Complexity and Mobility. Our technology addresses all these needs. The target industry verticals and use cases for our products include, but are not limited to: Sensor Data Analysis in Oil and Gas, Programmatic Trading in Financial Services, Production Optimization in Manufacturing, Network Analysis in Telecom, Multimedia Production Environments in Entertainment and Media, Grid Optimization for Utility Companies and National Research Agencies.

insideBIGDATA: You recently made an announcement about launching a new networking architecture. What was all the excitement about?

Emilio Billi: The excitement is because our technology presents a new architectural approach to parallel storage and analytics-oriented clusters, one that sets new industry standards in performance and latency. Specifically, A3CUBE’s Massively Parallel Data Processor platform is a revolutionary advancement in merged storage and computational architecture for multi-dimensional data management and analysis. Our technology is designed to deliver scalability of performance and capacity, manageability at scale, and cost-effective acquisition and implementation. Computation can be performed at any desired level of redundancy or targeted reliability.

insideBIGDATA: What industry challenges led you to this “brain-inspired” vision?

Emilio Billi: In computing environments with critical business requirements, a growing performance gap translates into greater energy expense and a higher total cost of management as the number of users grows. Within current datacenter infrastructure, the spectrum of analytical workloads is growing exponentially, and the current crop of storage equipment in today’s marketplace does not properly address the challenges and the range of analyses presented across multiple industries.

insideBIGDATA: Do you have or see any partnerships developing? What’s down the road?

Emilio Billi: We have received very positive feedback from the marketplace and are excited at the prospects of key customer relationships across all the verticals mentioned. We are currently signing up VAR partners and are speaking to the top IHVs.

insideBIGDATA: What about Big Data specifically? What can we expect as this type of technology matures?

Emilio Billi: Historically, computing problems were largely bound by processor speed, at a time when computing power was used only in specific fields and had a relatively small impact on global social and economic environments. As data demand grew globally, the bottleneck shifted to the speed of main memory. In current computing environments, the problems are centered on the speed of data access through the storage environment.

We have effectively evolved from an era of High Performance Computing (HPC) to an era of High Performance Data (HPD) and the key is to migrate from storage expertise to supercomputing expertise that is centered on data.
