There are few things social media users love more than flooding their feeds with photos of food. Yet we seldom use these images for much more than a quick scroll on our cellphones.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) believe that analyzing photos like these could help us learn recipes and better understand people’s eating habits. In a new paper with the Qatar Computing Research Institute (QCRI), the team trained an artificial intelligence system called Pic2Recipe that looks at a photo of food, predicts its ingredients, and suggests similar recipes.
“In computer vision, food is mostly neglected because we don’t have the large-scale datasets needed to make predictions,” says Yusuf Aytar, an MIT postdoc who co-wrote a paper about the system with MIT Professor Antonio Torralba. “But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences.”
Increasingly, players in the food industry are embracing artificial intelligence (AI) to better understand the dynamics of flavour, aroma and other factors that go into making a food product a success.
Earlier this year, IBM became a surprise entrant to the food sector, announcing a partnership with seasonings maker McCormick to “explore flavour territories more quickly and efficiently using AI to learn and predict new flavor combinations”, drawing on millions of data points.
New York-based Analytical Flavour Systems uses AI to create a model, or “gastrograph”, of flavour, aroma and texture to predict consumer preferences for food and beverage products.
The platform, which recently raised US$4 million (S$5 million) in funding, aims to help companies “create better, more targeted and healthy products for consumers”, according to founder Jason Cohen.
It is not clear how much funding is going into AI food ventures, although overall food-technology investment amounted to US$16.9 billion last year, according to data from investment platform AgTech Funder.