Don't Predict the Future With the Past
May 8, 2014
In a recent article, NYU professors Gary Marcus and Ernest Davis shed light on some of the dangerous assumptions we make about big data. One of them: making future bets based on the past while the measurement tool itself is changing.
The example they gave was Google Flu Trends, which predicts influenza outbreaks in 25 countries around the world. Launched in 2008, it became particularly popular during the 2009 H1N1 flu pandemic, when it predicted flu outbreaks faster than the CDC could.
Google Flu Trends struck fear and awe into the hearts of those watching the colors on the map deepen from green to red. How magical that search data could be harnessed for purposes like these. The only problem: in the past couple of years, it’s been more wrong than right.
From 2011 to 2013, Google Flu Trends overestimated the prevalence of the flu compared with CDC records of actual doctor visits. There are several reasons why this happened, one of which is that GFT's algorithm depends on Google Search: another algorithm that is constantly changing.
A March 2014 article in the journal Science outlines some of these issues. For example, Google began suggesting searches based on the searcher's keywords, which may have nudged people looking up generic symptoms like "fever" toward flu-related searches instead, artificially inflating the number of flu queries.
Using the same algorithm from 2008 to make predictions about flu outbreaks assumed that Google's search algorithms and user search behavior would stay roughly as they were in 2008, which they didn't. As a result, GFT recently updated its algorithm to try to improve its accuracy.

There's an auxiliary problem here too: most people using GFT never knew whether the algorithm was correct. Thousands of people probably told their friends where the flu would break out next, or believed the magnitude of an outbreak was worse than it was. But there was no accountability for GFT's predictions.
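To make that failure mode concrete, here's a toy sketch in Python. The numbers are entirely made up and this is not GFT's actual model; it just shows how a model calibrated on one era's search behavior starts overestimating once something like autosuggest inflates the searches it relies on.

```python
# Toy illustration of concept drift, NOT GFT's real algorithm.
# All quantities below are invented for the example.
import numpy as np

rng = np.random.default_rng(0)

# "2008 world": flu-related search volume tracks true flu prevalence.
true_prevalence = rng.uniform(1, 10, size=100)               # % of doctor visits
searches_2008 = 50 * true_prevalence + rng.normal(0, 10, 100)

# Calibrate the "2008 algorithm": predict prevalence from search volume.
slope, intercept = np.polyfit(searches_2008, true_prevalence, 1)

# "2012 world": autosuggest and changed user behavior inflate flu-related
# searches by 40% even though underlying prevalence is unchanged.
searches_2012 = 1.4 * (50 * true_prevalence) + rng.normal(0, 10, 100)
predicted = slope * searches_2012 + intercept

print(f"mean actual prevalence:    {true_prevalence.mean():.1f}%")
print(f"mean predicted prevalence: {predicted.mean():.1f}%")  # systematically too high
```

The model isn't wrong about 2008; it's wrong in assuming 2012 still looks like 2008.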
Both of these big-data problems (making predictions while the landscape itself is evolving, and the lack of accountability) pose dangers to data-fueled predictions about startups as well. Perhaps nowhere are algorithms for success more volatile than in technology-based startups. The way we used the internet in 2000 is not how we used it in 2010. The way startups raised money in 2005 is not how they raise money in 2014.
We don’t know what will make the next startup successful based on what’s worked in the past, because the standards for success change so rapidly.
Almost by definition, the “next big thing” is going to be something we aren’t prepared for. Many people have written about this subject, including Chris Dixon and Dustin Curtis.
But even answers to questions like "what makes a good founder" might be changing as we speak. Having a highly technical founder may not be as important for certain types of companies anymore. Being a young founder used to be a liability until investors like Y Combinator took a chance on them. Even the classic founder archetype who's "a little bit crazy" has come into question as more and more startups begin to resemble small businesses.
Then we have the problem of metrics. Social metrics like Twitter mentions or Facebook followers, used to measure a startup's momentum, didn't even exist before Twitter and Facebook did. We don't know which metrics will best measure the next generation of startups, because the tools that will measure them may not exist yet. Revenue probably sounded like a reliable metric 20 years ago, but now it's possible for a company to have zero revenue and be valued at over $1B.
And finally, the role of luck in predicting successful startups gives us an excuse that keeps data-driven decision makers from being held accountable. If a startup defies the algorithm, we just say it got lucky (and indeed, many billion-dollar companies were outliers when they first started). Most people won't hold these algorithms accountable or check whether the startups they predicted would be successful actually became so. The long waiting period from founding to exit (say, seven years) means that although people might pay attention to a wildly successful company throughout that trajectory, the ones that fail will be forgotten in six months.
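That check isn't hard to run; it's just rarely run. As a hypothetical sketch (the startups, probabilities, and outcomes below are invented), here is how one could score old "this startup will succeed" predictions once the outcomes are in, using a Brier score:

```python
# Hypothetical accountability check: score past predictions against outcomes.
# Startup names, probabilities, and outcomes are invented for illustration.

predictions = {            # predicted probability of success at founding
    "startup_a": 0.90,
    "startup_b": 0.75,
    "startup_c": 0.60,
}
outcomes = {               # 1 = succeeded (e.g., exited), 0 = failed, years later
    "startup_a": 0,
    "startup_b": 1,
    "startup_c": 0,
}

# Brier score: mean squared error between predicted probability and outcome.
# 0.0 is perfect; always guessing 50/50 scores 0.25.
brier = sum((predictions[s] - outcomes[s]) ** 2 for s in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")
```

Anyone publishing startup predictions could publish a number like this alongside them; the fact that almost no one does is the accountability gap.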
In such a young industry, data-driven predictions about technology startups may become more accurate as we accumulate more data points, but we also need to remain cautious, understanding that the ground is moving beneath our feet even as we try to build upon it.