
On Compressing U-net Using Knowledge Distillation

By Karttikeya Mangalam and Mathieu Salzmann
We study the use of knowledge distillation to compress the U-net architecture. We show that, while standard distillation is not sufficient to reliably train a compressed U-net, introducing other regularization methods, such as batch normalization and class re-weighting, in knowledge distillation significantly improves the training process. This allows us to...
December 1, 2018
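The abstract does not spell out the training objective, but a common way to combine distillation with class re-weighting for a segmentation network such as U-net is sketched below. The temperature, mixing weight `alpha`, and per-class weights are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a distillation loss for training a compressed (student) U-net
# against a full-size (teacher) U-net. Hyperparameters here are assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      class_weights, temperature=4.0, alpha=0.5):
    """student_logits, teacher_logits: (N, C, H, W); targets: (N, H, W) int64."""
    # Soft-target term: KL divergence between temperature-softened
    # teacher and student per-pixel class distributions.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    soft_loss = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    soft_loss = soft_loss * temperature ** 2  # usual rescaling of the soft term

    # Hard-target term: class re-weighted cross-entropy on the ground-truth
    # masks, counteracting class imbalance in the segmentation labels.
    hard_loss = F.cross_entropy(student_logits, targets, weight=class_weights)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In practice the teacher logits would be computed with `torch.no_grad()` so that only the student receives gradients; batch normalization enters through the student architecture itself rather than the loss.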