Key Challenges for Commercial Text Miners

In biomedical research and development, researchers use text mining tools to extract and interpret facts, assertions, and relationships from vast amounts of published information. Mining accelerates the research process, increases discovery of novel findings, and helps companies identify potential safety issues in the drug development process. However, despite the many benefits of text mining, researchers face a number of obstacles before they even get a chance to run queries against the body of biomedical literature.
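To make the workflow concrete, here is a minimal Python sketch of the simplest form of a literature-mining query: flagging sentences in which a drug and a safety term co-occur. The drug and adverse-event lexicons and the abstract are invented for illustration; real tools rely on curated ontologies and much richer linguistics, but the shape of the query is the same.

```python
# A minimal sketch of literature mining: flag sentences where a drug and an
# adverse-event term co-occur. The term lists and abstract are illustrative,
# not drawn from any real vocabulary or corpus.
import re

DRUGS = {"acetaminophen", "ibuprofen"}          # hypothetical drug lexicon
ADVERSE_EVENTS = {"hepatotoxicity", "nausea"}   # hypothetical event lexicon

abstract = (
    "High-dose acetaminophen was associated with hepatotoxicity in the "
    "treatment group. Ibuprofen showed no such signal."
)

for sentence in re.split(r"(?<=[.!?])\s+", abstract):
    tokens = {t.lower() for t in re.findall(r"[A-Za-z]+", sentence)}
    drugs, events = tokens & DRUGS, tokens & ADVERSE_EVENTS
    if drugs and events:
        # -> possible safety assertion: {'acetaminophen'} ~ {'hepatotoxicity'}
        print(f"possible safety assertion: {drugs} ~ {events}")
```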

Introduction to Spark Webinar

This Introduction to Spark webinar will feature Daniel Gutierrez, Managing Editor of insideBIGDATA.

In the past year, the Apache Spark distributed computing architecture has continued its upward trajectory among the big data players. Its growth has been fueled by several innovative differentiators for big data applications, such as support for YARN (MapReduce 2.0), provisions for analytic workflows, and efficient use of memory. Databricks’ recent 2015 Spark industry survey reports that Spark adoption is outpacing Hadoop because of its faster access to big data. In support of this new computing architecture, this webinar offers an introduction to Spark.
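As a taste of what the webinar covers, below is a minimal PySpark sketch of Spark’s efficient use of memory: a dataset is cached once and then reused by two analytic steps without being reread from disk. The input path and column names are placeholders, not real data.

```python
# A minimal PySpark sketch of Spark's in-memory model: the parsed dataset is
# cached once, then reused by two analytic steps without rereading from disk.
# The file path and column names below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("intro-spark").getOrCreate()

events = spark.read.json("events.json")   # placeholder input path
events.cache()                            # keep the parsed rows in memory

events.groupBy("user").count().show()                   # first pass over the data
print(events.filter(events.status == "error").count())  # second pass reuses the cache

spark.stop()
```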

Efficiency: Big Data Meets HPC in Financial Services

Converging High Performance Computing (HPC) and Lustre* parallel file systems with Hadoop’s MapReduce for Big Data analytics can eliminate the need for a separate Hadoop storage infrastructure and speed up the entire analysis. Convergence is a solution of interest for companies with HPC already in their infrastructure, such as the financial services industry and other industries adopting high performance data analytics.

Text Analytics without Tradeoffs

The pace at which the world creates data will never be this slow again. And much of this new data we’re creating is unstructured, textual data. Emails. Word documents. News articles. Blogs. Reviews. Research reports… Understanding what’s in this text – and what isn’t, and what matters – is critical to an organization’s ability to understand the environments in which it operates. Its competitors. Its customers. Its weaknesses and its opportunities.

Enabling Value for Converged Commercial HPC and Big Data Infrastructures through Lustre*

A number of industries rely on high-performance computing (HPC) clusters to process massive amounts of data. As these same organizations explore the value of Big Data analytics based on Hadoop, they are realizing the benefits of converging Hadoop and HPC onto the same cluster rather than scaling out an entirely new Hadoop infrastructure.

Hadoop for HPC—It Just Makes Sense

Building out a Hadoop cluster with massive amounts of local storage is an extensive and expensive undertaking, especially when the data already resides in a POSIX-compliant Lustre file system. Now companies can adopt analytics written for Hadoop and run them on their HPC clusters.
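A minimal sketch of what that looks like in practice, assuming a hypothetical Lustre mount at /mnt/lustre: because the file system is POSIX-compliant, a job can read it through file:// URIs with no copy step into HDFS. PySpark is used here for brevity; the same pattern applies to analytics written for Hadoop MapReduce.

```python
# A minimal sketch of the convergence idea: because Lustre is mounted as an
# ordinary POSIX file system, a Hadoop-style job can read it through file://
# URIs with no HDFS copy step. The mount point and file name are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hadoop-on-lustre").getOrCreate()

# Read directly from the shared Lustre mount instead of hdfs://...
logs = spark.read.text("file:///mnt/lustre/project/logs/app.log")
print(logs.filter(logs.value.contains("ERROR")).count())

spark.stop()
```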

Enterprise Grade Lustre in the Clouds

With the release of Intel® Cloud Edition for Lustre software in collaboration with key cloud infrastructure providers like Amazon Web Services (AWS), commercial customers have an ideal opportunity to employ a production-ready version of Lustre—optimized for business high-performance data analytics (HPDA)—in a pay-as-you-go cloud environment.

10 Ways IBM Platform Computing Saves You Money

IBM Platform Computing products can save organizations money by reducing a variety of direct costs associated with grid and cluster computing. Your organization can slow the rate of infrastructure growth and reduce the costs of management, support, personnel and training—while also avoiding hidden or unexpected costs.

Understanding Active Risk Management with High Performance Data

This webinar focuses on understanding active risk management with high performance data and grid management.

Big Data Analytics: Gain a Performance Edge with a Hybrid Solution

With a hybrid approach to big data storage, companies can combine the speed of in-memory processing with the capacity of disk, keeping hot data in memory while vast historical data sets reside on disk. By bridging the two technologies, companies can deliver on all counts – including cost.
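As an illustration of the hybrid pattern, the Python sketch below keeps a small “hot” tier in memory and falls back to an on-disk SQLite store for historical records. The schema and lookup keys are invented for the example; a production system would use a purpose-built in-memory grid and columnar disk store.

```python
# A minimal sketch of the hybrid pattern: recent ("hot") records are served
# from an in-memory dict, while the historical bulk lives on disk in SQLite.
# The schema and lookup key are illustrative only.
import sqlite3

disk = sqlite3.connect("history.db")      # the large historical tier on disk
disk.execute("CREATE TABLE IF NOT EXISTS trades (id INTEGER PRIMARY KEY, px REAL)")
disk.executemany("INSERT OR REPLACE INTO trades VALUES (?, ?)",
                 [(1, 101.5), (2, 99.2)])

hot = {3: 100.7}                          # the small, fast in-memory tier

def lookup(trade_id):
    """Check memory first; fall back to the disk tier only on a miss."""
    if trade_id in hot:
        return hot[trade_id]
    row = disk.execute("SELECT px FROM trades WHERE id = ?", (trade_id,)).fetchone()
    return row[0] if row else None

print(lookup(3))  # 100.7, served from memory
print(lookup(1))  # 101.5, fetched from disk
```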