Why Databases Are Failing the Modern Economy


Big data drives the world’s most powerful companies and its most promising startups.

From transportation giant Uber’s data on 3.9 million eligible drivers to entertainment giant Netflix’s collection of 155 million subscribers’ streaming data, the ability to gather and store huge amounts of data is one of the most critical drivers of our modern economy. It will also define competitive advantage for decades to come. Tesla’s billions of miles in autonomous driving experience, for example, are already setting the stage for category leadership in the future.

Surely, then, the databases storing all of this information — perhaps our society’s most precious currency — are secure, reliable, and built using the most up-to-date techniques available to experts. Right? Wrong.

Today’s databases are built for a bygone era, and they’re failing the modern economy. It’s time for the industry to acknowledge that inferior databases expose companies and consumers alike to serious risks; our security and economic prosperity depend on it.

Modern Databases Entrench Old-World Limitations

The earliest databases were designed around the limits of the earliest computers: computing was slow, expensive, and short on capacity. The only people who could access information were those physically on premises, so computers and databases were effectively secure as long as physical access was restricted.

Today, we’re living in a different world; the shackles have come off, and the limitations of early computers are no longer relevant. Computing power has surged while its price has plummeted, so the amount of data that companies process on a daily basis has soared. That data, created constantly by our mobile phones, social media, sensors, and more, has become critical to powering our economy. A company’s ability to store, retrieve, and interpret data can make or break its success.

Unfortunately, our databases have not caught up. Despite the massive increase in demand for data, the industry never completely overhauled the way databases work. Instead, programmers have used patchwork retrofitting to adjust databases to modern-day needs, solving problems with countless point solutions rather than rethinking database design from the ground up.

To hide how complicated those endless quick fixes make the underlying databases, programmers simply tack on another layer, whether software or an application programming interface (API), to make pulling data more user-friendly. Instead of eliminating the old-world limitations of databases, we’ve allowed software bloat and API bloat to entrench them and to distract us from the fact that we’ve been applying band-aids when what we really need is surgery.

The Cost of Point Solutions: Data Breaches and Excessive Management Costs

Never-ending point solutions come with a serious cost, mainly in the form of an increased risk of data breaches.

With each new effort to compensate for databases built for a different era, we introduce new vulnerabilities into the system. In an effort to make information easier to access, we make user management broad and non-specific, rendering companies unable to govern permissions effectively. In an effort to make information harder to lose, we send backups to redundant data warehouses, exposing information to more nodes and therefore more risk. In an effort to avoid being overwhelmed by the sheer amount of data available, we build more features into an API layer vulnerable to hacks.
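As an illustration of the first of those trade-offs, here is a minimal Python sketch, with invented names, contrasting a broad role check with a permission scoped to a specific dataset and action; the broad check is what “broad and non-specific” user management looks like in practice.

```python
# Hypothetical illustration only: broad role checks vs. scoped grants.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)    # coarse: a role implies access to everything
    grants: set = field(default_factory=set)   # scoped: explicit (dataset, action) pairs

def can_read_broad(user: User, dataset: str) -> bool:
    # Broad and non-specific: any "analyst" may read any dataset.
    return "analyst" in user.roles

def can_read_scoped(user: User, dataset: str) -> bool:
    # Specific: access requires an explicit grant naming the dataset and the action.
    return (dataset, "read") in user.grants

alice = User("alice", roles={"analyst"}, grants={("sales_2023", "read")})
print(can_read_broad(alice, "payroll"))   # True  -- far more access than intended
print(can_read_scoped(alice, "payroll"))  # False -- limited to what was explicitly granted
```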

The worst part for everyday users is that no matter how risky the environment, it’s nearly impossible to “opt out” of the information economy. To interact online, users are forced to sign away rights to their data, knowing they’ll have little oversight or control over how it gets used.

These security concerns are well known to IT teams, and they lead to another unintended consequence of implementing too many point solutions: excessive IT management costs. The high-threat environment forces companies to develop robust and expensive anti-fraud practices, such as data audits, that overburden IT and substantially raise the cost of innovation.

Redesigning Databases from the Ground Up Using Blockchain

It’s time to acknowledge that we can no longer take a piecemeal approach to updating the way we deal with databases. As AI and IoT applications continue to push databases to new limits, we need new approaches that offer real solutions, not quick fixes. It’s time to tear down the crumbling foundations and rebuild the house.

In doing so, many industries are discovering an option that offers both security and transparency: blockchain-based databases.

A blockchain-based database is inherently secure; its cryptographic signatures mean that data cannot be altered by attackers without detection. It provides digital proof of ownership, builds trust through decentralization, and offers continuous operation. Perhaps most importantly, a blockchain-based database finally puts data first when it comes to IT.
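To make the tamper-evidence point concrete, here is a minimal Python sketch of a hash-chained ledger, the basic technique behind blockchain-style storage. It is a generic illustration under simplifying assumptions, not Fluree’s or any other product’s implementation: each record commits to the hash of the one before it, so editing an old record invalidates every hash that follows.

```python
# Minimal hash-chain sketch: any edit to an earlier record is detectable on verification.
import hashlib
import json

def record_hash(record: dict) -> str:
    # Hash a canonical JSON encoding of the record.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(chain: list, data: dict) -> None:
    # Each new entry commits to the hash of the previous entry.
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"data": data, "prev": prev}
    entry["hash"] = record_hash({"data": data, "prev": prev})
    chain.append(entry)

def verify(chain: list) -> bool:
    # Recompute every link; editing any earlier entry breaks the hashes after it.
    prev = "0" * 64
    for entry in chain:
        expected = record_hash({"data": entry["data"], "prev": entry["prev"]})
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

ledger: list = []
append(ledger, {"account": "acme", "balance": 100})
append(ledger, {"account": "acme", "balance": 90})
print(verify(ledger))                     # True
ledger[0]["data"]["balance"] = 1_000_000  # tamper with an old record
print(verify(ledger))                     # False -- the edit is detectable
```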

That data-first approach will finally let IT teams be proactive about applications, treating them as tools for innovation rather than as ways to compensate for underlying problems. With the power, creativity, and resources of IT departments unleashed, databases can help drive new initiatives instead of limiting them.

About the Author

Andrew “Flip” Filipowski is the Co-Founder and Co-CEO at Fluree, PBC. Flip is the former COO of Cullinet, the largest software company of the 1980s, and was also the founder and CEO of PLATINUM technology, inc. Flip grew PLATINUM into the 8th-largest software company in the world at the time of its $4 billion sale to Computer Associates, then the largest such transaction for a software company. Upside Magazine named him one of the Top 100 Most Influential People in Information Technology. A recipient of Entrepreneur of the Year awards from both Ernst & Young and Merrill Lynch, Flip has also been awarded the Young Presidents Organization Legacy Award and the Anti-Defamation League’s Torch of Liberty award for his work fighting hate on the Internet.


Comments

  1. Blockchain is suitable for some database applications, but to state that it can (or should) replace relational databases is patently false.

    Blockchain is suited for Merkle Tree solutions, chained or transactional needs… cases where integrity is important and confidentiality may be less important. Bottom line, databases are better suited for storing and accessing certain data than blockchain and the argument posited here is based in falsehood, and non-information. Never is the purported ‘flaw’ in databases cited specifically. Because there isn’t one, it’s a straw man argument.

    There are strengths and weaknesses to using relational databases as well as blockchain for storing data. NoSQL has its own place as well in the context of data storage. Use the right tool for the right job, that’s the only rule that matters. If there was a takeaway from this article for me it would be that blockchain MUST be included in our software architectures when and where it makes sense and that means more technologists need to be aware of this powerful new means of storing data.