The Government Needs Fast Data: Why Is the Federal Reserve Making World-Altering Decisions on Stale Data?


Back in May of this year, the Federal Reserve was deciding whether to hike interest rates yet again. Evercore ISI strategists said in a note that, “The absence of any such preparation [for a raise] is the signal and gives us additional confidence that the Fed is not going to hike in June absent a very big surprise in the remaining data, though we should expect a hawkish pause.”

Well, they were right. The Federal Reserve ultimately decided to hold its key interest rate at about 5% after raising it at ten consecutive meetings. This raises an important question: Should there ever be “very big surprises” (or any surprises, for that matter) in the data on which the Fed bases these critical decisions?

In my opinion, the answer is no. There should never be a question of making an incorrect economic decision for lack of data, because the right data is indeed available. But the truth is, the Federal Reserve has been basing most of its decisions on stale, outdated data.

Why? The Fed uses a measure of core inflation to make its most important decisions, and that measure is derived from surveys conducted by the Bureau of Labor Statistics. The Fed may also have access to some privileged information the public isn’t privy to, but surveys, by their nature, take a while to administer. By the time the data is processed and cleaned up, it’s essentially already a month old.

Everyone can agree that having faster, more up-to-date data would be ideal in this situation. But the path to getting there isn’t linear: It’ll require some tradeoffs, taking a hard look at flaws in current processes, and a significant shift in mindset that the Fed may not be ready for. 

Here are some things to consider:

Fast vs. accurate: We need to find a happy medium

At some point, the Fed will need to decide whether it’s worth trying a new strategy of using fast, imperfect data in place of the data generated by traditional survey methods. The latter may offer more statistical control, but it becomes stale quickly.

Making the switch to using faster data will require a paradigm shift: Survey data has been the gold standard for decades at this point, and many people find comfort in its perceived accuracy. However, any data can fall prey to biases.  

Survey data isn’t a silver bullet

There’s a commonly held belief that surveys are conducted very carefully and adjusted for biases, while fast data from digital sources can never be truly representative. While this may be the case some of the time, survey biases are a well-documented phenomenon. No solution is perfect; the difference is that the problems associated with survey data have existed for decades, and people have grown comfortable with them. When confronted with the issues posed by modern methods, however, people are much more risk-averse.

In my mind, the Fed’s proclivity toward survey data has a lot to do with the fact that most people working within the organization are economists, not computer scientists, developers, or data scientists (who are more accustomed to working with other data sources). While there’s a wealth of theoretical knowledge in this space, there’s also a lack of data engineering and data science talent, which may soon need to change. 

A cultural shift needs to occur

We need a way to balance both accuracy and forward momentum. What might this look like? To start, it would be great to see organizations like the U.S. Census Bureau, the Bureau of Labor Statistics, and the Bureau of Economic Analysis (BEA) release more experimental economic trackers. We’re already starting to see this here and there: For example, the BEA released a tracker that monitors consumer spending.

Traditionally, these agencies have been very conservative in their approach to data, understandably shying away from methods that might produce inaccurate results. But in doing so, they’ve been holding themselves to an impossibly high bar at the cost of speed. They may be forced to reconsider this approach soon, though. For years, there’s been a steady decline in federal survey response rates. How can the government collect accurate economic data if businesses and other entities aren’t readily providing it? 

When it comes down to it, we’ve become accustomed to methodologies that have existed for decades because we’re comfortable with their level of error. But by continuing to rely solely on these methods, we may actually end up incurring more error as things like response rates continue to fall. We need to stay open to the possibility that relying on faster, external data sources might be the necessary next step to making more sound economic decisions. 

About the Author

Alex Izydorczyk is the founder and CEO of Cybersyn, the data-as-a-service company making the world’s economic data available to businesses, governments, and entrepreneurs on Snowflake Marketplace. With more than seven years of experience leading the data science team at Coatue, a $70 billion investment manager, Alex brings a wealth of knowledge and expertise to the table. As the architect of Coatue’s data science practice, he led a team of over 40 people in leveraging external data to drive investment decisions. Alex’s background in private equity data infrastructure also includes an investment in Snowflake. His passion for real-time economic data led him to start Cybersyn in 2022. 

