Takeshi Kevin Musgrave

PhD, Computer Science
Cornell University

I am a machine-learning developer advocate at Hewlett Packard Enterprise.

I recently graduated with a PhD focused on machine learning. During my PhD, I developed open-source software that has been used by thousands of researchers and engineers.

Previously, I worked as an intern at Facebook AI and Intel, as a teaching assistant at Cornell, and as a research assistant at McGill University. You can view my resume here.

My Open Source Code Projects

PyTorch Metric Learning

I published this library as an open-source project on GitHub, where it now has over 5,000 stars.

The purpose of this library is to simplify metric learning. Metric learning is a family of machine-learning techniques that learn an embedding space in which similar examples are close together and dissimilar examples are far apart; it is used in applications like image retrieval and natural language processing.

This library offers a unified interface for metric-learning losses, miners, and distance metrics. It includes code for measuring data-retrieval accuracy and for simplifying distributed training. It also includes an extensive test suite and thorough documentation.
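
Here is a minimal sketch of how the library is typically used; the toy tensors below stand in for the output of a real embedding model:

```python
import torch
from pytorch_metric_learning import distances, losses, miners

# Losses, miners, and distance metrics share one unified interface.
loss_func = losses.TripletMarginLoss(distance=distances.CosineSimilarity())
miner = miners.MultiSimilarityMiner()

# Toy batch standing in for the output of a real embedding model.
embeddings = torch.randn(32, 128, requires_grad=True)  # (batch_size, embedding_size)
labels = torch.randint(0, 10, (32,))                    # class label per embedding

hard_pairs = miner(embeddings, labels)            # mine informative pairs for this batch
loss = loss_func(embeddings, labels, hard_pairs)  # compute the loss on those pairs
loss.backward()
```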

PyTorch Adapt

I built this library for training and validating domain-adaptation models. Domain adaptation is a machine-learning technique for repurposing existing models so that they work in new domains. For this library, I designed a system of lazily evaluated hooks for efficiently combining algorithms that have differing data requirements. The library also includes an extensive test suite.
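
The hook idea can be illustrated with a small self-contained sketch. This is an illustration of the pattern rather than PyTorch Adapt's actual API: each hook declares the data it needs, and derived values are computed only when some hook actually asks for them, then cached for the rest of the chain.

```python
from typing import Callable, Dict, List

class LazyContext:
    """Holds raw inputs plus recipes for derived values. A derived value is
    computed only when a hook asks for it, and is then cached for later hooks."""

    def __init__(self, inputs: Dict[str, object], recipes: Dict[str, Callable]):
        self.data = dict(inputs)
        self.recipes = recipes

    def __getitem__(self, key):
        if key not in self.data:                      # lazy: compute on first access
            self.data[key] = self.recipes[key](self)
        return self.data[key]

class FeatureHook:
    """Only needs source features, so it never triggers the logits computation."""
    def __call__(self, ctx) -> Dict[str, float]:
        return {"src_feature_sum": sum(ctx["src_features"])}

class LogitsHook:
    """Needs logits, so the 'src_logits' recipe runs only if this hook is included."""
    def __call__(self, ctx) -> Dict[str, float]:
        return {"src_logit_sum": sum(ctx["src_logits"])}

def run(hooks: List, ctx: LazyContext) -> Dict[str, float]:
    outputs = {}
    for hook in hooks:
        outputs.update(hook(ctx))
    return outputs

# Toy usage: plain lists stand in for model outputs.
ctx = LazyContext(
    inputs={"src_features": [1.0, 2.0]},
    recipes={"src_logits": lambda c: [10 * x for x in c["src_features"]]},
)
print(run([FeatureHook()], ctx))                 # logits are never computed
print(run([FeatureHook(), LogitsHook()], ctx))   # logits computed once, then cached
```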

Powerful Benchmarker

This library contains tools I developed to facilitate experiment configuration, hyperparameter optimization, large-scale Slurm job launching, and data logging, visualization, and analysis.

My Research Papers

A Metric Learning Reality Check

Many metric-learning papers from 2016 to 2020 report great advances in accuracy, often more than doubling the performance of methods developed before 2010. However, when old and new methods are compared on a level playing field, they actually perform quite similarly. We confirm this in experiments that use a significantly improved evaluation methodology.

Evaluating the Evaluators

Three New Validators and a Large-Scale Benchmark Ranking for Unsupervised Domain Adaptation

Unsupervised domain adaptation (UDA) is a machine-learning sub-field that holds considerable potential, but this potential has not yet been realized. This is for two reasons:

The first problem is that most UDA papers do not evaluate their algorithms in an unsupervised setting. This experiment-design flaw yields misleading results. To correct this error, we evaluate UDA algorithms using validators that are specially designed to work in unsupervised settings.

The second problem is that the effectiveness of these validators in real-world applications is unknown. To address this, we conduct the largest empirical study of validators to date and determine which ones are most effective.

In addition to analyzing these two issues, we also introduce three new UDA validators, two of which achieve state-of-the-art performance in various settings. Surprisingly, our experiments also show that in many cases, the state-of-the-art is obtained by a simple baseline method.

My Music

Here is a video of me playing some Chopin.

And here is one of my own pieces.

See my YouTube channel for more.

Contact

Please use this form to contact me.

You can also find me on GitHub and LinkedIn.

Site by Jeff Musgrave