Training and deploying deep learning at scale

I recently joined the O’Reilly Data Show podcast to talk about the challenges of developing and deploying deep learning at scale. Quoting Ben Lorica from O’Reilly Media:

In this episode of the Data Show, I spoke with Ameet Talwalkar… We discussed using and deploying deep learning at scale. This is an empirical era for machine learning, and, as I noted in an earlier article, as successful as deep learning has been, our level of understanding of why it works so well is still lacking. In practice, machine learning engineers need to explore and experiment using different architectures and hyperparameters before they settle on a model that works for their specific use case. Training a single model usually involves big (labeled) data and big models; as such, exploring the space of possible model architectures and parameters can take days, weeks, or even months. Talwalkar has spent the last few years grappling with this problem as an academic researcher and as an entrepreneur. In this episode, he describes some of his related work on hyperparameter tuning, systems, and more.
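To make the hyperparameter exploration described above concrete, here is a minimal sketch of random search over a small search space. This is purely illustrative and not drawn from the podcast: the `train_and_evaluate` function is a hypothetical stand-in for a real training run, and the search space (learning rate and layer count) is an assumption for the example.

```python
import random

def train_and_evaluate(lr, num_layers):
    """Hypothetical stand-in: in practice this would train a model
    and return its validation accuracy (often hours per call)."""
    # Dummy objective that peaks near lr=1e-2 and num_layers=3.
    return 1.0 - abs(num_layers - 3) * 0.1 - abs(lr - 1e-2) * 5

def random_search(num_trials, seed=0):
    """Sample configurations at random and keep the best one."""
    rng = random.Random(seed)
    best_score, best_config = float("-inf"), None
    for _ in range(num_trials):
        config = {
            # Learning rates are typically sampled log-uniformly.
            "lr": 10 ** rng.uniform(-4, -1),
            "num_layers": rng.randint(1, 6),
        }
        score = train_and_evaluate(**config)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score

best_config, best_score = random_search(num_trials=50)
```

Because each call to `train_and_evaluate` can take hours for a real model, much of the research discussed in the episode concerns doing this search more efficiently than brute-force trial and error.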

Check out the podcast, courtesy of Ben Lorica at O’Reilly Media Inc.:

At Determined AI, we are working to tackle these fundamental problems in training and deploying deep learning applications at scale. Contact us to learn more.