Interview: Andy Horng, Co-Founder and Head of AI, Cultivate


I recently caught up with Andy Horng, Co-Founder and Head of AI at Cultivate, to get a sense for the technology underlying the company’s AI-powered leadership development platform. NLP plays an important role, and the company has found the RoBERTa language model to deliver very good results. Andy has a background in data science and has worked as a software engineer building machine learning tools for legal document analysis and medical research. He holds a BS in Electrical Engineering/Computer Science and a BA in Cognitive Science, both from the University of California, Berkeley.

insideBIGDATA: The area of AI that Cultivate apparently uses is NLP. That’s a broad field, of course, but is the company using any of the latest language models like GPT, GPT-2, or the new GPT-3 (with zero-shot or few-shot learning)?

Andy Horng: First, some background: Cultivate is a highly scalable platform that provides employees of highly digital, distributed workplaces with their own personalized AI assistants. These assistants tune in to the specific collaboration dynamics of each user’s team, and to each user’s specific roles and relationships within it. With this knowledge, the assistants either coach managers to lead optimally or help individual contributors achieve maximal effectiveness. To do this, we’ve built a system that acts as a second set of eyes for the user in the digital space, scanning over their own emails, chat messages, and calendar events to pick up on conversational nuances and quantify each of their digital relationships. Note that Cultivate is opt-in and private for individual employees; it’s designed as an empowerment tool to strengthen workplace relationships.

Technically, this involves deploying a number of different text sequence classification models, not only to identify table stakes such as topic, sentiment, mood, and politeness, but also to detect various collaborative intents (committing to work, sharing completed work, scheduling a meeting, etc.) and personality tendencies.

We rely largely on fine-tuning pre-trained language models for specific classification tasks. We’ve found the BERT family of models (specifically RoBERTa) to strike a great balance between model size, inference speed, and fine-tuned performance.
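For readers who want a concrete picture, the sketch below shows the general shape of this kind of fine-tuning: adapting a pre-trained RoBERTa checkpoint to a text classification task. It assumes the Hugging Face transformers and datasets libraries; the intent labels and CSV files are hypothetical placeholders, not Cultivate’s actual taxonomy or pipeline.

```python
# Hedged sketch: fine-tune roberta-base as a text classifier.
# Labels and data files below are illustrative placeholders.
from datasets import load_dataset
from transformers import (
    RobertaForSequenceClassification,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Hypothetical collaborative-intent labels, standing in for classes like
# "committing to work" or "scheduling a meeting".
labels = ["commit_work", "share_work", "schedule_meeting", "other"]

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(labels)
)

# Any dataset with "text" and integer "label" columns would slot in here.
data = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="intent-classifier",
        num_train_epochs=3,
        per_device_train_batch_size=32,
    ),
    train_dataset=data["train"],
    eval_dataset=data["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
print(trainer.evaluate())
```

One appeal of this recipe is that each additional classifier is mostly a matter of swapping in a different label set and labeled dataset over the same pre-trained backbone.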

insideBIGDATA: What motivated the Cultivate technology early on … Word2vec, GloVe, or RNNs with seq2seq? Or is your tech based more on a transformer model like BERT, or CTRL?

Andy Horng: Prior to the advent of BERT, training text classifiers without large amounts of data was a much more arduous task. Our early models relied mainly on GloVe embeddings and bidirectional LSTMs. We also invested significant time in dataset development, i.e., building active learning processes and integrating various weak and programmatic supervision signals.
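For contrast with the RoBERTa recipe above, here is a minimal PyTorch sketch of that earlier style of model: a bidirectional LSTM classifier over frozen pre-trained word embeddings. The random matrix stands in for real 300-dimensional GloVe vectors, and the vocabulary size and label count are placeholders.

```python
# Hedged sketch of a pre-BERT text classifier: frozen GloVe-style
# embeddings feeding a bidirectional LSTM.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, embedding_matrix, hidden_dim=128, num_labels=4):
        super().__init__()
        # embedding_matrix: (vocab_size, 300) tensor of pre-trained vectors
        self.embedding = nn.Embedding.from_pretrained(embedding_matrix, freeze=True)
        self.lstm = nn.LSTM(
            input_size=embedding_matrix.size(1),
            hidden_size=hidden_dim,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)      # (batch, seq, 300)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (2, batch, hidden_dim)
        # Concatenate the final forward and backward hidden states.
        pooled = torch.cat([hidden[0], hidden[1]], dim=-1)
        return self.classifier(pooled)            # (batch, num_labels) logits

# Example with random vectors standing in for real GloVe embeddings:
fake_glove = torch.randn(10_000, 300)
model = BiLSTMClassifier(fake_glove)
logits = model(torch.randint(0, 10_000, (8, 32)))  # 8 sequences of length 32
```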

insideBIGDATA: What’s your training time, and what compute resources are you using? How many parameters are in your model?

Andy Horng: Compared to the resources poured into training the latest language models (e.g. GPT-3), our operations are relatively small-scale. The RoBERTa language model we often use contains ~125M parameters. We do our model training on Google Cloud GPUs (such as NVIDIA Tesla T4s or P100s). Fine-tuning takes anywhere from 20 minutes to 2 hours, depending on the task.
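That ~125M figure corresponds to the standard roberta-base checkpoint, which is easy to verify in a couple of lines (assuming the Hugging Face transformers library):

```python
# Count the parameters of the roberta-base checkpoint (~125M).
from transformers import RobertaModel

model = RobertaModel.from_pretrained("roberta-base")
n_params = sum(p.numel() for p in model.parameters())
print(f"roberta-base parameters: {n_params / 1e6:.0f}M")
```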

insideBIGDATA: Anything else you can share at a deep-dive level would be welcome.

Andy Horng: While our NLP technology is certainly interesting, it’s important to note that it is just one component of our machine learning and data science efforts. After extracting a large set of behavioral signals from digital conversations, we turn to interpreting those signals. We want to understand whether certain behaviors are expected or anomalous. We also want to understand how various behaviors are causally linked to upstream variables (user preferences and team norms) and downstream variables (team engagement and manager performance). We’ve developed time series models and structural causal models to answer these questions. These models pinpoint the specific behaviors to nudge users on, so that they can maximally improve work for themselves and those around them.
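To make the “expected vs. anomalous” idea concrete, here is a toy illustration (a sketch of the general idea, not Cultivate’s actual time series model): flag days where a behavioral signal, such as daily message volume with a teammate, deviates sharply from the user’s own rolling baseline.

```python
# Toy anomaly detector: rolling z-score over a daily behavioral signal.
# An illustrative stand-in for a real time series model.
import numpy as np

def anomalous_days(daily_counts, window=14, threshold=3.0):
    """Return day indices whose value lies more than `threshold` z-scores
    from the mean of the preceding `window` days."""
    counts = np.asarray(daily_counts, dtype=float)
    flagged = []
    for t in range(window, len(counts)):
        baseline = counts[t - window:t]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(counts[t] - mu) / sigma > threshold:
            flagged.append(t)
    return flagged

# Steady messaging with one sudden spike on day 20:
signal = [5, 6, 5, 4, 6, 5, 5, 6, 4, 5, 6, 5, 5, 4, 6, 5, 5, 6, 5, 5, 30]
print(anomalous_days(signal))  # -> [20]
```

A production system would also have to handle seasonality, sparse history, and per-relationship baselines, but the core principle of comparing behavior against a personal baseline is the same.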
