Alluxio Revolutionizes Enterprise Big Data, Launches Industry’s First Solution to Unify Data at Memory Speed
Alluxio (formerly Tachyon), developer of the system that unifies data at memory speed, announced general availability of its product portfolio: Alluxio Enterprise Edition (AEE) and Alluxio Community Edition (ACE).
DriveScale, the company that is pioneering flexible, scale-out computing for the enterprise using standard servers and commodity storage, announced its new release of the DriveScale System.
To make the most of big data, enterprises must evolve their IT infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and integrate them with the pre-existing enterprise data to be analyzed.
Wal-Mart handles more than a million customer transactions each hour and imports those into databases estimated to contain more than 2.5 petabytes of data.
Radio frequency identification (RFID) systems used by retailers and others can generate 100 to 1,000 times the data of conventional bar code systems.
Facebook handles more than 250 million photo uploads and the interactions of 800 million active users with more than 900 million objects (pages, groups, etc.) each day.
More than 5 billion people are calling, texting, tweeting and browsing on mobile phones worldwide.
Organizations are inundated with data – terabytes and petabytes of it. To put it in context, 1 terabyte contains 2,000 hours of CD-quality music, and 10 terabytes could store the entire US Library of Congress print collection. Exabytes, zettabytes and yottabytes are definitely on the horizon.
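As a sanity check on the "2,000 hours of CD-quality music per terabyte" figure, a quick back-of-envelope calculation (assuming the standard uncompressed CD-audio bitrate of 44.1 kHz × 16 bits × 2 channels) lands in the same ballpark:

```python
# Back-of-envelope check: how much storage do 2,000 hours of
# uncompressed CD-quality audio actually need?
bitrate_bps = 44_100 * 16 * 2        # 1,411,200 bits per second (CD audio)
seconds = 2_000 * 3_600              # 2,000 hours in seconds
total_bytes = bitrate_bps * seconds / 8

print(total_bytes / 1e12)            # ~1.27 decimal terabytes
```

At roughly 1.3 TB for uncompressed audio, the quoted figure is approximately right; with even mild compression, 2,000 hours fits comfortably in a single terabyte.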
Data is pouring in from every conceivable direction: from operational and transactional systems, from scanning and facilities management systems, from inbound and outbound customer contact points, from mobile media and the Web.
According to IDC, “In 2011, the amount of information created and replicated will surpass 1.8 zettabytes (1.8 trillion gigabytes), growing by a factor of nine in just five years. That’s nearly as many bits of information in the digital universe as stars in the physical universe.” (Source: IDC Digital Universe Study, sponsored by EMC, June 2011.)
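To put the IDC figure in annual terms: a ninefold increase over five years implies a compound annual growth rate that can be computed directly (a simple arithmetic sketch, not part of the IDC study itself):

```python
# A 9x increase over 5 years implies an annual compound growth factor
# of 9 ** (1/5).
annual_factor = 9 ** (1 / 5)
annual_growth_pct = (annual_factor - 1) * 100

print(round(annual_factor, 2))       # ~1.55
print(round(annual_growth_pct))      # ~55 (% growth per year)
```

In other words, the digital universe described by IDC was growing at roughly 55% per year.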
The explosion of data isn’t new. It continues a trend that started in the 1970s. What has changed is the velocity of growth, the diversity of the data and the imperative to make better use of information to transform the business.
The hopeful vision of big data is that organizations will be able to harvest and harness every byte of relevant data and use it to make the best decisions. Big data technologies not only support the ability to collect large amounts of data, but more importantly, the ability to understand and take advantage of its full value.
We are awash in a flood of data today. In a broad range of application areas, data is being collected at unprecedented scale. Decisions that previously were based on guesswork, or on painstakingly constructed models of reality, can now be made based on the data itself. Such Big Data analysis now drives nearly every aspect of our modern society, including mobile services, retail, manufacturing, financial services, life sciences, and physical sciences.
In almost every organization, SQL is at the heart of enterprise data used in transactional systems, data warehouses, columnar databases and analytics platforms, to name just a few examples. Additionally, a vast number of commercial and in-house developed tools used to access, manipulate and visualize data rely on SQL. SQL is the lifeblood of modern transaction and decision support systems.
When used effectively, data analytics can help to save lives, improve efficiencies, reduce costs, and help government deliver better citizen services. This special GovLoop report explores how data analytics is changing government.
This paper provides the definitive guide on the critical areas of importance to bring data lake organization, governance, and security to the forefront of the conversation.
Businesses are discovering the huge potential of big data analytics across all dimensions of the business, from defining corporate strategy to managing customer relationships, and from improving operations to gaining competitive edge. The open source Apache Hadoop project, a software framework that enables high-performance analytics on unstructured data sets, is the centerpiece of big data solutions. Hadoop is designed to process data-intensive computational tasks, in parallel and at a scale that previously was possible only in high-performance computing (HPC) environments.
This study was designed to document key perceptions, challenges, and successes by focusing on data organization, integration, security, and definitional clarification to address key areas of concern and interest in ongoing data lake adoption. The intent of the survey and this corresponding report is to understand and share the current and planned adoption of technologies in the Hadoop ecosystem, intended specifically for a data lake strategy, and to learn how adopting companies are addressing critical data lake success factors, including rethinking data for the long-term, establishing governance first, and tackling security needs upfront. The survey and report also identify emergent areas of concern and new areas of clarification needed for data lake maturity.