What is knowledge distillation
Knowledge distillation is a model compression technique in which a small, efficient student model learns to replicate the behavior of a larger, more complex teacher model. The goal is comparable performance at a fraction of the size and computational cost. The teacher transfers its knowledge primarily through soft targets: output probabilities softened with a higher temperature parameter, which reveal richer inter-class relationships than hard labels do. Key factors include selecting compatible architectures, tuning the distillation temperature, and balancing the distillation term (matching the teacher's softened outputs) against the standard supervised loss on hard labels.
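In practice, this combined objective is usually a weighted sum of a temperature-scaled KL-divergence term (student versus teacher distributions) and ordinary cross-entropy on the ground-truth labels. The following is a minimal sketch assuming PyTorch; the function name distillation_loss and the default values for temperature and alpha are illustrative choices, not prescribed by the text.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Standard distillation objective: soft-target KL term + hard-label CE term."""
    # Soften both distributions with the same temperature T > 1,
    # exposing the teacher's inter-class similarity structure.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)

    # Scale the KL term by T^2 so its gradient magnitude stays
    # comparable across temperatures (as in Hinton et al., 2015).
    soft_loss = F.kl_div(log_student, soft_targets,
                         reduction="batchmean") * temperature ** 2

    # Standard cross-entropy against the ground-truth hard labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # alpha balances mimicking the teacher against fitting the labels.
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

A higher temperature flattens the teacher's distribution so that near-zero probabilities on wrong-but-related classes still carry learning signal; alpha is typically tuned per task.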
This technique makes it possible to deploy powerful deep learning models on resource-constrained devices such as mobile phones and embedded systems. Its primary business value lies in significantly reducing model size and inference latency while maintaining high accuracy, which lowers operational costs and widens the range of viable deployments. It is crucial for efficient scaling and real-time AI applications.