Blogs

APR 01, 2021

Make your deep learning cluster more efficient with resource pools

By Angela Jiang

Use Determined’s “resource pools” feature to train deep learning models more efficiently in your cluster.

MAR 25, 2021

ALBERT on Determined: Distributed Training with Spot Instances

By Armand McQueen

How to use distributed training and spot instances to train ALBERT faster and save money at the same time.

FEB 25, 2021

AI chips in the real world: Interoperability, constraints, cost, energy efficiency, and models

By Evan Sparks

We recap CEO Evan Sparks’ appearance on the “Orchestrate All The Things” podcast, covering the importance of interoperability in AI and of developing hardware and software in tandem.

FEB 09, 2021

Managing ML Training Data with DVC and Determined

By Derrick Mwiti

Tracking machine learning data sets made easy with Data Version Control (DVC) and Determined.

FEB 03, 2021

Object Detection with Keras and Determined

By Samhita Alla

An in-depth tutorial on training and tuning object detection models using Keras and Determined.

JAN 28, 2021

Algorithmia and Determined: How to train and deploy deep learning models with the Algorithmia-Determined integration

By Hoang Phan

Training and deploying deep learning models is easier than ever with the Determined and Algorithmia integration.

DEC 29, 2020

Hyperparameter Search on FasterRCNN

By Dave Troiano, Angela Jiang

Training better object detection models with Determined AI and hyperparameter search.

DEC 21, 2020

First-Ever Determined Community Sync

By Vishnu Mohan

A recap of Determined’s first-ever community sync, a roundtable filled with deep learning use cases and product feedback.

NOV 30, 2020

Data Scientists Don't Care About Kubernetes

By Neil Conway, David Hershey

Kubernetes is revolutionary for software engineering but painful for deep learning model development.

NOV 24, 2020

Optimizing Horovod with Local Gradient Aggregation

By Aaron Harlap, Neil Conway

Announcing local gradient aggregation, a new performance optimization for open-source distributed training with Horovod!
