Earlier this week, I attended a very informative event sponsored by the LA RUG (Los Angeles R Users Group) that featured the topic “Extending the R language to the enterprise with TERR & Spotfire.”
In this video from GTC 2014, Todd Mostak from MapD demonstrates the company’s GPU-powered in-memory relational database software for Big Data. The Cambridge, Mass.-based startup has built a high-speed GPU in-memory database that brings interactivity to big data. It can, for example, track more than a billion tweets worldwide at a time – and provide real-time visual analysis of the data. MapD was also announced as the winner of the GPU Technology Conference’s Early Stage Challenge this year, and will come home with a cool $100,000 check.
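MapD’s engine itself is not shown in the video, but the core pattern behind this kind of interactivity – keeping columns resident in memory and answering queries with a single vectorized scan-and-aggregate pass – can be sketched on the CPU. The following NumPy example is a rough illustration under that assumption; the column names and data are hypothetical stand-ins for tweet attributes.

```python
import numpy as np

# Hypothetical columnar store: one array per column, the way a GPU
# database would lay tweet attributes out in device memory.
rng = np.random.default_rng(0)
n = 1_000_000                      # small stand-in for "a billion tweets"
lang = rng.integers(0, 50, n)      # language code per tweet (hypothetical)
hour = rng.integers(0, 24, n)      # hour of day per tweet (hypothetical)

# A query like "tweets per hour for language 7" becomes a vectorized
# filter plus a bincount -- one linear pass over columnar data, which is
# exactly the access pattern GPUs accelerate.
mask = lang == 7
per_hour = np.bincount(hour[mask], minlength=24)
```

On a GPU the same scan runs across thousands of threads at memory bandwidth, which is what makes sub-second response over billions of rows feasible.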
In this video from the GPU Technology Conference 2014, Ami Gal from SQream Technologies describes the company’s innovative Big Data processing technology. “Can you compare the technology of today with the technology of tomorrow? Yes, with SQream Technologies you can. This is because SQream Technologies uses GPUs to capture, store and process Big Data within seconds, resulting in 100x faster insights. Big Data analytics, once considered unattainable, can now be achieved in a matter of seconds with SQream’s hassle-free, robust analytic database.”
Last night I attended the Los Angeles Hadoop Users Group (LA-HUG) meeting hosted by Shopzilla. The topic for the evening was “An Overview of Hulu’s Data Platform,” presented by Prasan Samtani and Tristan Reid of Hulu. From all indications, Hulu is a significant player in the Hadoop user community, and this talk documented the team’s command of big data technology.
“SAS In-Memory Statistics for Hadoop software enables multiple users to concurrently manage and prepare data stored in Hadoop, explore and visualize this data, develop accurate statistical and machine learning models quickly, as well as access, deploy and execute these models in their Hadoop ecosystem.”
Last week saw evidence of the big data industry’s steamroller effect as the Strata Conference 2014 in Santa Clara came and went. With thousands of attendees, an abundance of informative presentations, and a very healthy exhibitor ecosystem, the show defined the current state of the art for all that is big data. If you missed the big event, O’Reilly Media has graciously made available the slides and videos for some of the presentations.
Our own Rich Brueckner will present on Big Data at the Technology Convergence Conference next week in Santa Clara, California. “While the term Big Data has become pervasive in Information Technology, many in the industry are still puzzled by how to make money from this phenomenon. In this talk, Brueckner will look at what’s really behind Big Data as an engine for change and describe case studies that are bringing the full potential of Big Data home.”
The Strata Conference 2014 takes place this week, February 11-13, in Santa Clara, Calif. – the heart of Silicon Valley. The theme for the big event is “Making Data Work.” Strata Conference is the leading event for the people and technology driving the data revolution. The home of data science, Strata brings together practitioners, researchers, IT leaders and entrepreneurs to discuss big data, Hadoop, analytics, visualization and data markets.
DK Panda from Ohio State University presented this talk at the Stanford HPC & Exascale Conference. “As InfiniBand is getting used in scientific computing environments, there is a big demand to harness its benefits for enterprise environments for handling big data and analytics. This talk will focus on high-performance and scalable designs of Hadoop using native RDMA support of InfiniBand and RoCE.”
“Big data techniques offer a way to analyze data pooled across many patients: their specific disease mutations, biological markers, the treatments, and outcomes — in order to identify unexpected ways that existing therapies can be applied and combined to create personalized treatments that dramatically improve the chances of survival.”
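The analysis the quote describes – pooling records across many patients and looking at outcomes per combination of mutation and treatment – can be sketched in a few lines. This is a minimal illustration with synthetic data; the field names, mutation labels, and drug names are all hypothetical stand-ins, not any particular study’s schema.

```python
from collections import defaultdict

# Synthetic pooled-patient records; every field is a hypothetical
# stand-in for the mutations, treatments, and outcomes described above.
records = [
    {"mutation": "EGFR", "treatment": "drug_a", "survived": 1},
    {"mutation": "EGFR", "treatment": "drug_b", "survived": 1},
    {"mutation": "KRAS", "treatment": "drug_a", "survived": 0},
    {"mutation": "EGFR", "treatment": "drug_b", "survived": 1},
    {"mutation": "KRAS", "treatment": "drug_b", "survived": 1},
    {"mutation": "KRAS", "treatment": "drug_a", "survived": 0},
]

# Pooling across patients: tally survival per (mutation, treatment)
# pair, then compute a rate for each pair.  Pairs with unexpectedly
# high rates are candidates for repurposed or combined therapies.
totals = defaultdict(lambda: [0, 0])          # [survivors, count] per pair
for r in records:
    key = (r["mutation"], r["treatment"])
    totals[key][0] += r["survived"]
    totals[key][1] += 1

rates = {k: s / c for k, (s, c) in totals.items()}
```

Real studies add confounders, statistical significance, and far larger cohorts, but the pooling-and-stratifying step is the same shape.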