University of Southampton Institutional Repository

Knowledge distillation in wide neural networks: risk bound, data efficiency and imperfect teacher

Ji, Guangda and Zhu, Zhanxing (2020) Knowledge distillation in wide neural networks: risk bound, data efficiency and imperfect teacher. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F. and Lin, H. (eds.) Advances in Neural Information Processing Systems 33. Neural Information Processing Systems Foundation. 11 pp.

Record type: Conference or Workshop Item (Paper)

Abstract

Knowledge distillation is a strategy for training a student network under the guidance of the soft outputs of a teacher network. It has been a successful method for model compression and knowledge transfer. However, knowledge distillation currently lacks a convincing theoretical explanation. On the other hand, recent findings on the neural tangent kernel enable us to approximate a wide neural network by a linear model of the network's random features. In this paper, we theoretically analyze the knowledge distillation of a wide neural network. First, we provide a transfer risk bound for the linearized model of the network. Then we propose a metric of the task's training difficulty, called data inefficiency. Based on this metric, we show that for a perfect teacher, a high ratio of the teacher's soft labels can be beneficial. Finally, for the case of an imperfect teacher, we find that hard labels can correct the teacher's wrong predictions, which explains the practice of mixing hard and soft labels.
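For readers unfamiliar with the setting, the sketch below illustrates the standard knowledge-distillation objective the abstract refers to: a student trained on a mixture of hard (ground-truth) labels and the teacher's soft outputs. This is a generic PyTorch sketch of the common formulation, not the authors' linearized-model analysis; the mixing weight `alpha` and temperature `T` are illustrative hyperparameters, not values from the paper.

```python
# Minimal sketch of a distillation loss mixing hard and soft labels.
# Assumes standard PyTorch; `alpha` and `T` are hypothetical hyperparameters.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels, alpha=0.5, T=4.0):
    # Hard-label term: ordinary cross-entropy with the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, hard_labels)
    # Soft-label term: KL divergence between temperature-softened
    # teacher and student distributions (scaled by T^2, as is customary).
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # alpha sets the ratio of soft to hard supervision; the paper studies how
    # this ratio affects the student for perfect and imperfect teachers.
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```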

This record has no associated files available for download.

More information

Published date: 2020
Venue - Dates: Thirty-fourth Conference on Neural Information Processing Systems, virtual, 2020-12-06 - 2020-12-12

Identifiers

Local EPrints ID: 486051
URI: http://eprints.soton.ac.uk/id/eprint/486051
PURE UUID: 5c855adb-7e4c-4501-9235-777fd9c4bb3a

Catalogue record

Date deposited: 08 Jan 2024 17:34
Last modified: 17 Mar 2024 06:43

Contributors

Author: Guangda Ji
Author: Zhanxing Zhu
Editor: H. Larochelle
Editor: M. Ranzato
Editor: R. Hadsell
Editor: M.F. Balcan
Editor: H. Lin
