Knowledge Distillation with Helen Byrne
Knowledge Distillation is the podcast that brings together a mixture of experts from across the Artificial Intelligence community.
We talk to the world’s leading researchers about their experiences developing cutting-edge models as well as the technologists taking AI tools out of the lab and turning them into commercial products and services.
Knowledge Distillation also takes a critical look at the impact of artificial intelligence on society – opting for expert analysis instead of hysterical headlines.
We are committed to featuring at least 50% female voices on the podcast – elevating the many brilliant women working in AI.
Host Helen Byrne is a VP at the British AI compute systems maker Graphcore, where she leads the Solution Architects team, helping innovators build their AI solutions using Graphcore’s technology.
Helen previously led AI Field Engineering and worked in AI Research, tackling problems in distributed machine learning.
Before landing in Artificial Intelligence, Helen worked in FinTech and as a secondary school teacher. Her background is in mathematics and she has an MSc in Artificial Intelligence.
Knowledge Distillation is produced by Iain Mackenzie.
Papers of the Month with Charlie Blake, Research Engineer at Graphcore
Charlie Blake from Graphcore’s research team discusses their AI Papers of the Month for January 2024.
For a number of years, Graphcore’s research team has collated and shared an internal monthly review of the most consequential AI papers.
Now – for the first time – the research team is making this valuable resource public, to help the wider AI community keep up to date with the most exciting breakthroughs.
Papers of the Month for January 2024 (with some work from December 2023) include:
Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding
https://arxiv.org/abs/2312.05328
Authors: Talfan Evans, Shreya Pathak, Hamza Merzic, et al. (Google DeepMind, UCL)
Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws
https://arxiv.org/abs/2401.00448
Authors: Nikhil Sardana and Jonathan Frankle (MosaicML)
Analyzing and Improving the Training Dynamics of Diffusion Models
https://arxiv.org/abs/2312.02696
Authors: Tero Karras et al. (NVIDIA, Aalto University)
Solving olympiad geometry without human demonstrations
https://www.nature.com/articles/s41586-023-06747-5
Authors: Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He, and Thang Luong (Google DeepMind, New York University)
To read about January’s Papers of the Month, visit the Graphcore blog.
https://www.graphcore.ai/posts/great-teachers-and-beyond-chinchilla-papers-of-the-month-jan-2024