Keeping track of what you eat has proven to be a pretty effective method of aiding weight loss. Studies show that food diaries not only help people manage their daily caloric intake, but also make them more aware of what they’re putting in their bodies. Unfortunately, noting down your every meal can be tedious and time-consuming. But what if you could do it just by taking a photo of your food? That’s exactly what Lose It, a food-tracking app, is trying to do with a brand new feature called Snap It. Using a combination of machine learning and Lose It’s own vast database, the app aspires to figure out what you’re eating from your photo alone.
Now, the feature is still in beta, so it’s not perfect. For one thing, it’s not fully automated: you can’t just take a photo and have the app know exactly what’s in the food. But for what it is, Snap It comes pretty close. You take a photo of what you’re eating, and the app analyzes the image and spits out a list of suggestions of what it thinks it is. Pick the option that fits best, and you’ll be brought to a screen where you can add more details, like whether that piece of fried chicken was a thigh or a breast and how much of it you ate. If there are multiple foods on the plate, or if the app just didn’t guess the food correctly, you can also enter it manually via the Add Food button at the bottom of the photo.
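In pseudocode terms, the semi-automated flow looks something like the sketch below. This is purely illustrative; the function names, tag-matching logic and food database are invented for the example and are not Lose It’s actual implementation, which uses a trained image classifier rather than simple tag overlap.

```python
# Hypothetical sketch of a Snap It-style flow: the app proposes ranked
# guesses from a photo, and the user confirms one and adds details.
# All names, tags and foods here are made up for illustration.

def suggest_foods(photo_tags, database, top_n=3):
    """Rank known foods by how many of the photo's detected tags they match."""
    scored = []
    for food, tags in database.items():
        overlap = len(photo_tags & tags)
        if overlap:
            scored.append((food, overlap))
    # Best match first; break ties alphabetically for a stable list.
    scored.sort(key=lambda pair: (-pair[1], pair[0]))
    return [food for food, _ in scored[:top_n]]

def log_entry(chosen_food, details):
    """The user picks one suggestion and fills in the rest by hand."""
    return {"food": chosen_food, **details}

database = {
    "Fried Chicken": {"fried", "chicken", "breaded"},
    "Chicken Thigh": {"chicken", "meat"},
    "Pork Chop":     {"meat", "breaded"},
}
photo_tags = {"fried", "chicken", "breaded"}  # pretend classifier output

suggestions = suggest_foods(photo_tags, database)
entry = log_entry(suggestions[0], {"cut": "thigh", "portion": "1 piece"})
```

The point of the design is visible even in this toy version: the app only has to get the right answer *somewhere* in the ranked list, because the user supplies the final confirmation and the details.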
I tried out the app for a week, and while it didn’t always recognize the foods I ate, it did well enough that I was still impressed. I found that the list of suggestions based on the photos almost always brought up at least one correct answer. When I took a photo of a bucket of fried chicken, for example, the very first suggestion was “Fried Chicken,” followed by “Chicken Thigh” and “Pork Chop.” Sure, that last one wasn’t right, but it was still a pretty good guess. The same happened when I took a photo of strawberries — it got it right the first time.
Where it got a little tricky was with photos of multiple foods. I snapped an image of a plate of chicken, collard greens and mashed potatoes; the app could spot the chicken and the mashed potatoes, but not the greens. In a photo of fried rice, spinach and chicken, though, it was able to recognize all three instantly. In yet another picture of a culotte steak drenched in a cheese-based sauce, the app was pretty stumped as to what the sauce was, but did recognize that there was a steak. And it missed the arugula and tomatoes that were underneath the steak altogether, because, of course, they weren’t visible to the camera.
It’s issues like this that make food tracking via photography such an inexact science. A photo of a bowl of curry won’t tell you exactly what kind of vegetables and spices are in it, and just looking at a salad dressing won’t tell you whether it has any sugar. “That’s why we’re doing this semi-automated to start,” says CEO Charles Teague. “The idea that you could look at a picture and instantly know what it is, it just wasn’t going to work all the time.”
With the assumption that it was never going to be 100 percent accurate, the goal of the Snap It feature, at least for now, is simply to make it easier to log your diet. And I have to say I found that to be true in my case. Prior to using the app, I wasn’t a fan of keeping a food diary precisely because it seemed like such a hassle. But using the camera to snap my food, with at least a little bit of automation behind it, made things easy enough that I found myself logging my diet all the time.
That, Teague says, is the point of Lose It in the first place. The company started around 2008 with only around 50,000 foods in its database. Now, it has millions of entries. Recently, Lose It added the ability to add foods by scanning a bar code. It even has location services to see if you’re within walking distance of a restaurant it recognizes — usually a chain — and when you go to make an entry in the app, it’ll instantly suggest the kinds of foods you can get at that restaurant. The next step is machine learning, and though the Snap It feature is still a little rough around the edges, it certainly has promise.
“I think our strategy is great because when we start getting all these photos of food, it becomes a dataset we can use,” Teague says. “We’ll have photo data, food data and eventually, location data. There’ll be a lot more context around the user that can make the app a lot smarter.” So even if a photo might not indicate that a dish is a curry, for example, the fact that it was taken in the vicinity of an Indian restaurant might teach the app to at least suggest it as a possibility. “It could be the combination of the photo and the location that could reveal very specifically what it is.”
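The photo-plus-location idea Teague describes amounts to fusing two weak signals. A rough sketch of how that might work is below; the dish names, scores and the simple multiply-and-renormalize rule are my own illustration of the general idea, not anything Lose It has disclosed about its system.

```python
# Toy illustration of combining photo evidence with a location-based
# prior, as in Teague's curry-near-an-Indian-restaurant example.
# All scores and dish names are invented for the sketch.

def combine(photo_scores, location_prior):
    """Weight the photo classifier's scores by a location prior, then renormalize."""
    fused = {
        dish: photo_scores[dish] * location_prior.get(dish, 0.01)
        for dish in photo_scores
    }
    total = sum(fused.values())
    return {dish: p / total for dish, p in fused.items()}

# The photo alone is ambiguous between three saucy dishes...
photo_scores = {"curry": 0.30, "stew": 0.40, "chili": 0.30}
# ...but the photo was taken near an Indian restaurant.
location_prior = {"curry": 0.80, "stew": 0.10, "chili": 0.10}

fused = combine(photo_scores, location_prior)
best = max(fused, key=fused.get)
```

Even though “stew” scores highest from the photo alone, the location prior tips the fused result decisively toward “curry” — which is exactly the kind of context-driven suggestion Teague is describing.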
The food identification might be semi-automated right now, but Teague assures me that he’s pretty ambitious about what he thinks the technology will eventually be able to do. “Our expectations are that this will generate huge amounts of data that we can use to continue training and improving the machine learning,” he says. “That’s going to drive more accuracy in what we recognize, and the ability to recognize even more things.”
“Each photo that you log will become a piece of data that we use to train the next generation of the app,” he says. “We might even be able to estimate serving sizes.”