In the talk below, Recursive Deep Learning for Modeling Compositional and Grounded Meaning, Richard Socher, founder of MetaMind, describes deep learning algorithms that learn representations of language useful for solving a variety of complex language tasks. He focuses on three projects: (i) contextual sentiment analysis (e.g. an algorithm that actually learns what is positive in the sentence “The Android phone is better than the iPhone”); (ii) question answering to win trivia competitions (like IBM Watson’s Jeopardy system, but with a single neural network); (iii) multimodal sentence-image embeddings to find images that visualize sentences and vice versa (with a fun demo!). All three tasks are solved with a similar type of recursive neural network algorithm.
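The common thread across the three projects is one composition operation: a recursive neural network repeatedly merges two child vectors into a parent vector that lives in the same space, so phrases and whole sentences get representations just like words do. The sketch below illustrates that core step with NumPy; the dimensionality, random initialization, and toy word vectors are illustrative assumptions, not the actual models from the talk.

```python
import numpy as np

# Minimal sketch of the recursive-neural-network composition step:
# two child vectors are merged into a parent with one shared weight
# matrix and a nonlinearity, p = tanh(W [c1; c2] + b).
# All numbers here are toy values, not the talk's trained models.

rng = np.random.default_rng(0)
d = 4                                        # word-vector size (assumed)
W = rng.standard_normal((d, 2 * d)) * 0.1    # shared composition matrix
b = np.zeros(d)                              # bias

def compose(c1, c2):
    """Merge two child vectors into a parent vector of the same size."""
    return np.tanh(W @ np.concatenate([c1, c2]) + b)

# Toy vectors for "not" and "good"; a real model would learn these.
v_not = rng.standard_normal(d)
v_good = rng.standard_normal(d)

# Parent vector for the phrase "not good". Because it has the same
# dimensionality d, it can feed into further compositions up the parse
# tree, or into a sentiment classifier at any node.
p = compose(v_not, v_good)
```

Because every node's output matches the input dimensionality, the same weights apply at every level of the parse tree, which is what lets one network handle phrases of any length.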
Richard Socher obtained his PhD from Stanford, where he worked with Chris Manning and Andrew Ng. His research interests include machine learning for NLP and computer vision. He is interested in developing new deep learning models that learn useful features, capture compositional structure in multiple modalities, and perform well across different tasks. He was awarded the 2011 Yahoo! Key Scientific Challenges Award, the Distinguished Application Paper Award at ICML 2011, a Microsoft Research PhD Fellowship in 2012, and a 2013 “Magic Grant” from the Brown Institute for Media Innovation.