
Google trained its AI camera with help from pro photographers

When Google unveiled its $249 Clips camera back in October 2017, it was easy to question the company's motives. Lifelogging cameras weren't a new idea, nor had they been particularly successful, and given the rise in smartphone imaging and video quality, it was a tough sell to trust a wearable camera to capture important moments automatically.

With Clips expected to debut in the coming weeks, Google has penned a blog post (first detailed by The Verge) explaining how it trained its algorithms to identify the best shots. To do that, its AI needed to learn from something or someone, so Google called in photography experts from various backgrounds and supplied its model with some of the best photography available.

“We ended up discovering—through trial and error and a healthy dose of luck—a treasure trove of expertise in the form of a documentary filmmaker, a photojournalist, and a fine arts photographer,” said Josh Lovejoy, Senior Interaction Designer at Google. “Together, we began gathering footage from people on the team and trying to answer the question, ‘What makes a memorable moment?'”

Some of that learning comes down to principles you may have picked up while getting to grips with a new smartphone camera or point-and-shoot. Understanding focus, particularly depth of field, and the rule of thirds are key, but so are some more "common sense" suggestions. Everybody knows to keep fingers out of the shot and not to make quick movements, but machine learning algorithms have no such understanding.

“We needed to train models on what bad looked like,” said Lovejoy. “By ruling out the stuff the camera wouldn’t need to waste energy processing (because no one would find value in it), the overall baseline quality of captured clips rose significantly.”
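The filtering Lovejoy describes — ruling out footage nobody would want before spending energy on it — is essentially a negative-example filter. A minimal sketch in Python, with hypothetical frame features and thresholds that are purely illustrative and not drawn from Google's actual pipeline:

```python
# Hedged sketch: filter out "bad" frames (blurry, occluded, shaky) so the
# camera only spends processing effort on potentially valuable clips.
# All feature names and threshold values below are assumptions.

def is_discardable(frame):
    """Return True for frames no one would find value in."""
    return (
        frame.get("blur", 0.0) > 0.7          # heavily blurred
        or frame.get("lens_occluded", False)  # e.g. a finger over the lens
        or frame.get("motion", 0.0) > 0.8     # too much camera shake
    )

def filter_frames(frames):
    """Keep only frames worth evaluating further."""
    return [f for f in frames if not is_discardable(f)]

frames = [
    {"blur": 0.1, "motion": 0.2},            # sharp and stable -> keep
    {"blur": 0.9, "motion": 0.1},            # blurry -> discard
    {"blur": 0.2, "lens_occluded": True},    # lens covered -> discard
]
print(filter_frames(frames))  # only the first frame survives
```

In practice the "discardable" decision would come from a trained model rather than hand-set thresholds, but the effect is the same: raising the baseline quality of what gets kept by cheaply rejecting the obvious failures first.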

Google admits that while it trained its AI to appreciate “stability, sharpness, and framing,” Clips won’t always get it right. It can ensure that it’s framed a shot well and has a family member in focus, but it won’t know that the big shiny ring on someone’s finger is what everyone will want to see.

“Success with Clips isn’t just about keeps, deletes, clicks, and edits (though those are important),” Lovejoy notes. “It’s about authorship, co-learning, and adaptation over time. We really hope users go out and play with it.”

Via: The Verge

Source: Google Design
