Machine Learning to Play a Role in Google Glass Future

Imagine you’re attending your college graduation ceremony while wearing a Google Glass device to record the festivities. A typical graduation can stretch into an all-day affair, so the resulting video is considerable not only in size (many gigabytes) but also in length. How would you go about summarizing the “egocentric” video content you recorded?

A group of researchers at the University of Texas at Austin developed a technique that uses machine learning to automatically analyze recorded video and assemble a short “story” from the footage. The method, called “story-driven” video summarization, takes a very long video and automatically condenses it into a series of very short clips, or a sequence of stills, that convey the essence of the story. Machine learning techniques teach the system to “score” the significance of objects in view based on egocentric cues, such as how often an object appears at the center of the frame, which indicates where the wearer’s attention is focused.
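To make the idea concrete, here is a minimal, hypothetical sketch in Python of how such an egocentric “importance score” might be computed: each detected object accumulates credit whenever its bounding box sits near the center of the frame, so frequently centered objects rise to the top. The function names and toy detection data are illustrative assumptions, not the researchers’ actual implementation.

```python
# Sketch only: score objects by how often they appear near the frame center,
# a stand-in for the egocentric cues described in the article.
from collections import defaultdict

def center_score(box, frame_w, frame_h):
    """Return 1.0 when a bounding box is centered in the frame,
    falling toward 0.0 as it moves to the edges.
    box = (x_min, y_min, x_max, y_max)."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    dx = abs(cx - frame_w / 2) / (frame_w / 2)
    dy = abs(cy - frame_h / 2) / (frame_h / 2)
    return max(0.0, 1.0 - (dx + dy) / 2)

def score_objects(detections, frame_w=1280, frame_h=720):
    """Accumulate a per-object significance score across all frames.
    detections: list of (frame_index, object_label, bounding_box)."""
    scores = defaultdict(float)
    for _, label, box in detections:
        scores[label] += center_score(box, frame_w, frame_h)
    return scores

# Toy egocentric detections (hypothetical): (frame, label, box)
detections = [
    (0, "diploma", (560, 300, 720, 420)),  # near the center -> high score
    (1, "diploma", (580, 310, 740, 430)),
    (2, "chair",   (10, 600, 120, 700)),   # edge of frame -> low score
    (3, "stage",   (400, 200, 900, 500)),
]

scores = score_objects(detections)
for label, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {s:.2f}")
```

In a real summarization pipeline, scores like these would feed a selection step that keeps only the clips containing the highest-scoring objects.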

Read the Full Story.

 
