Paul Sonderegger from Oracle Endeca writes that the biggest bottleneck in making Big Data productive is labor.
In a big-data world, data modeling, integration, and performance tuning are governors of data use because they rely on relatively slow manual processes done by relatively expensive specialists. In an ironic twist, the substitution of computing capital for labor that transformed other business processes (such as inventory management, manufacturing, and accounting) will do the same to information management itself.

Take the relatively simple case of a data mart with fast-growing volume. As the volume of data grows, query performance tuning becomes both more important and more difficult. Performance tuning requires trade-offs. For example, pre-aggregating the data improves query response but cuts the user off from detailed data that may be valuable for certain investigations. As data volume grows, more aggregation may be required, eliminating levels of detail that used to be available. When the users rebel, the BI team has to haggle over remediation and strike a new balance. This time-consuming approach is simply unaffordable in a big-data world.
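The pre-aggregation trade-off can be made concrete with a minimal sketch. The schema, table names, and figures below are invented for illustration (not from the article): a detail table of individual sales is rolled up by day, which makes summary queries cheap but discards the per-store and per-transaction detail an analyst might later need.

```python
import sqlite3

# Hypothetical detail table; all names and values here are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day TEXT, store TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("2013-01-01", "A", 10.0), ("2013-01-01", "A", 5.0),
     ("2013-01-01", "B", 7.5), ("2013-01-02", "A", 3.0)],
)

# Pre-aggregate by day: far fewer rows to scan, so summary
# queries answered from the rollup get faster as volume grows.
conn.execute(
    "CREATE TABLE sales_by_day AS "
    "SELECT day, SUM(amount) AS total FROM sales GROUP BY day"
)

# The daily total is still answerable from the rollup alone.
total = conn.execute(
    "SELECT total FROM sales_by_day WHERE day = '2013-01-01'"
).fetchone()[0]
print(total)  # 22.5

# ...but the rollup has lost the store and transaction detail:
# drill-down questions now require going back to the detail table.
cols = [row[1] for row in conn.execute("PRAGMA table_info(sales_by_day)")]
print(cols)  # ['day', 'total']
```

Each further round of aggregation (by week, by region) shrinks the data again and removes another level of detail, which is exactly the bargain the BI team ends up renegotiating with its users.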
Read the Full Story.