A simple explanation of what knowledge distillation is
Knowledge distillation is a model compression technique where a small model (the "student") is trained to replicate the behavior of a large, complex model (the "teacher") or an ensemble of models.
The core idea is to train the student not only on the true target labels but, more importantly, on the teacher's soft predictions (class probabilities). These are produced by applying a softmax with a raised temperature parameter to the teacher's logits, which smooths the output distribution and conveys richer information about inter-class similarities than hard labels alone. The student learns to match these softened probabilities through a combined loss function, typically a weighted sum of a distillation term (e.g., the KL divergence between the softened student and teacher distributions) and the standard supervised cross-entropy on the true labels.
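Here is a minimal sketch of this combined loss in PyTorch (the framework is an assumption; the original does not name one). The `temperature` and `alpha` hyperparameters are illustrative values, not prescribed settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Weighted sum of a soft (teacher-matching) term and a hard (label) term."""
    # Soften both distributions with the same temperature T.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between the softened distributions; the T^2 factor
    # keeps its gradient magnitude comparable to the hard-label loss.
    soft_loss = F.kl_div(log_soft_student, soft_teacher,
                         reduction="batchmean") * (temperature ** 2)

    # Standard supervised cross-entropy on the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    return alpha * soft_loss + (1 - alpha) * hard_loss
```

In practice the teacher is frozen (run under `torch.no_grad()`), and `alpha` and `temperature` are tuned per task; higher temperatures expose more of the teacher's inter-class similarity structure.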
This technique makes it practical to deploy capable models on mobile devices and edge systems where compute and memory are constrained: the student significantly reduces model size and inference time while preserving much of the teacher's accuracy and generalization ability.