With the help of Harvard, Google has created a model to find unsafe restaurants more efficiently than traditional methods.
Credit: Google data can be used to find the source of food poisoning. Photo: wildpixel/Getty Images

Determining the cause of food poisoning can be surprisingly tricky. Sure, something about that sandwich may have tasted suspicious, but the fact remains that most of us eat three meals a day, plus snacks, and some types of food poisoning can take days if not weeks to present themselves. So though, yes, the obvious culprit might be the answer (Why did I eat that raw chicken I found in a dumpster?), other times, the cause can be tough to pinpoint.

But technology is hoping to help battle foodborne illness. Back in January, we saw how research from Columbia University demonstrated that Yelp could help track down unsafe restaurants. And now Google and Harvard’s T.H. Chan School of Public Health believe they have a computer model that’s better at identifying the sources of food poisoning than traditional methods.

The study, which was conducted in Chicago and Las Vegas, applied machine learning to a mix of anonymous search and location data from Google users to try to match words related to foodborne illnesses with restaurants those users had recently visited. For instance, searches for terms like “stomach cramps” would be matched with location data showing where people who were searching that term had been eating. Once correlations had been found, health inspectors were sent to these restaurants. Researchers then compared the inspections suggested by the computer model to those chosen based on the usual method of consumer complaints. The computer model won out: In Chicago, 52.1 percent of the 71 inspections prompted by the algorithm resulted in a restaurant being deemed unsafe versus 39.4 percent of inspections spawned by complaints.
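To make the matching step concrete, here is a minimal, hypothetical sketch in Python of the kind of logic described above: pair symptom-related searches with restaurants the same (anonymized) user visited shortly beforehand, then count matches per restaurant. The term list, function names, and the seven-day window are illustrative assumptions, not details from the study.

```python
from datetime import datetime, timedelta

# Illustrative symptom terms; the study's actual vocabulary is not public.
SYMPTOM_TERMS = {"stomach cramps", "food poisoning", "nausea"}

def flag_restaurants(searches, visits, window_days=7):
    """Count, per restaurant, how many symptom searches occurred within
    `window_days` after a visit by the same user.

    searches: list of (user_id, search_term, timestamp)
    visits:   list of (user_id, restaurant, timestamp)
    Returns a dict {restaurant: match_count}; higher counts would
    suggest candidates for inspection.
    """
    counts = {}
    for user, term, t_search in searches:
        if term not in SYMPTOM_TERMS:
            continue
        for v_user, restaurant, t_visit in visits:
            # Match only visits by the same user that precede the search
            # by at most the chosen window.
            if v_user != user:
                continue
            if timedelta(0) <= t_search - t_visit <= timedelta(days=window_days):
                counts[restaurant] = counts.get(restaurant, 0) + 1
    return counts

# Example: a "stomach cramps" search two days after eating at Diner A
# matches; a visit a month earlier does not.
searches = [("u1", "stomach cramps", datetime(2017, 5, 3))]
visits = [
    ("u1", "Diner A", datetime(2017, 5, 1)),
    ("u1", "Cafe B", datetime(2017, 4, 1)),
]
print(flag_restaurants(searches, visits))
```

The real system, of course, works at far greater scale and with privacy-preserving aggregation; this sketch only shows the shape of the search-to-visit correlation.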

Additionally, in both Chicago and Las Vegas, the computer model beat routine inspections by a landslide: 52.3 percent of restaurants pinpointed by Google and Harvard had issues, whereas routine inspections flagged just 22.7 percent of restaurants as unsafe.

But potentially the most revealing finding of the study was that, as hinted at in the opening, 38 percent of the time the likely cause of food poisoning wasn’t the restaurant people had visited most recently, hammering home the point that the most obvious cause isn’t always the correct one.

“In this study, we have just scratched the surface of what is possible in the realm of machine-learned epidemiology,” Evgeniy Gabrilovich, senior staff research scientist at Google and a co-author of the study, stated. “I like the analogy to the work of Dr. John Snow, the father of modern epidemiology, who in 1854 had to go door to door in Central London, asking people where they took their water from to find the source of a cholera outbreak. Today, we can use online data to make epidemiological observations in near real-time, with the potential for significantly improving public health in a timely and cost-efficient manner.”

Overall, researchers suggest that these sorts of models – which performed better in precision, scale, and turnaround time – could not only supplement existing inspection methods at health departments, but could also be used by restaurants to monitor their own food safety. That seems like a much safer bet than Googling “Why are all of my customers getting sick?”