How Big Data Helps Scientists Ask Bigger Questions


You have to wonder what Albert Einstein would be up to in this era of big data. Writing in Quartz, Gartner analyst Chris Guan notes that in 1905, Einstein, working with just a handful of data points, discovered that light was made up of particles – a breakthrough that completely changed the course of physics.

A few decades later, Erwin Schrödinger derived an equation that explained many of the new ideas in the fledgling field of quantum mechanics, but the processing power needed to solve that equation wasn't available at the time.

Today, with affordable supercomputers, cloud computing, and the ability to move massive amounts of data with Hadoop, scientists have the processing power they need to solve even the most intractable problems. Guan cites the example of a University of Wisconsin researcher who created a massive database of stem cells using over a million processing hours. He finished his study in a week for less than $20,000.

"In computer science there are two laws, Amdahl's and Gustafson's. Amdahl's shows how much faster a given problem is answered when more processing power is thrown at it," says Guan. "Gustafson's turns Amdahl's on its head by defining how big a problem can be answered, given a fixed amount of time, when more resources are available. In other words, given an hour, what can be solved with more computers vs. fewer? Science, as Gustafson suggested, often opts for bigger questions over saved time. The ability to translate larger amounts of data into cogent explanations can help remove old barriers from scientific pursuits. In genetic research, that could mean unlocking the causes of diseases, developing new cures, and finding the parts of the genetic blueprint that make us human."
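To make the contrast between the two laws concrete, here is a minimal Python sketch of the standard formulas. The 90% parallel fraction and the processor counts are illustrative assumptions, not figures from the article.

# Illustrative comparison of Amdahl's and Gustafson's laws.
# p is the fraction of the work that can run in parallel (assumed here),
# n is the number of processors.

def amdahl_speedup(p: float, n: int) -> float:
    # Amdahl: fixed problem size; speedup is capped by the serial fraction.
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p: float, n: int) -> float:
    # Gustafson: fixed wall-clock time; the problem size grows with the machine.
    return (1.0 - p) + p * n

if __name__ == "__main__":
    p = 0.90  # assumed parallel fraction
    for n in (10, 100, 1000):
        print(f"n={n:5d}  Amdahl: {amdahl_speedup(p, n):6.2f}x  "
              f"Gustafson: {gustafson_speedup(p, n):8.1f}x")

With a 90% parallel fraction, Amdahl's speedup flattens out below 10x no matter how many processors are added, while Gustafson's scaled speedup keeps growing with the machine, which is the sense in which more computing lets science ask bigger questions rather than merely save time on the same ones.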

Read the Full Story.
