What is the principle of knowledge distillation?

Knowledge distillation is a technique that transfers knowledge from a large, complex model (teacher) to a smaller, simpler one (student). It achieves model compression or performance improvement by training the student to mimic the teacher's behavior.

The core principle is that the student learns to replicate the teacher's output distributions, in particular the softened output probabilities ("soft targets") produced by dividing the logits by a temperature T > 1 before the final softmax. These soft targets convey the teacher's learned inter-class similarities, information that hard labels alone cannot provide. Training is guided by a weighted loss that combines a distillation term (the KL divergence between the teacher's and student's soft targets) with the standard supervised cross-entropy loss on the ground-truth labels. The process requires access to training data and a pre-trained teacher model.
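As a concrete illustration, here is a minimal PyTorch-style sketch of the combined loss described above. The temperature `T` and mixing weight `alpha` are illustrative hyperparameters, not prescribed values, and the function assumes a standard classification setup with teacher and student logits already computed.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: teacher probabilities softened by temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    # Student's log-probabilities at the same temperature.
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # Distillation term: KL divergence between the soft distributions,
    # scaled by T^2 so its gradients stay comparable to the hard-label
    # term (as in Hinton et al., 2015).
    kd_loss = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T ** 2)
    # Standard supervised cross-entropy on the ground-truth hard labels.
    ce_loss = F.cross_entropy(student_logits, labels)
    # Weighted combination of the two objectives.
    return alpha * kd_loss + (1.0 - alpha) * ce_loss
```

In practice the student is trained by backpropagating this combined loss while the teacher's weights stay frozen; higher temperatures spread probability mass over more classes and emphasize the teacher's inter-class relationships.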

Knowledge distillation is widely used to produce compact, deployable models. It significantly reduces model size and computational demands while preserving much of the teacher's accuracy, enabling efficient inference on resource-constrained devices such as mobile phones and embedded systems. The distilled student model offers a practical balance between performance and efficiency in production environments.
