Enterprise Applications

How many samples are needed for few-shot learning?

Few-shot learning typically requires only 1 to 5 labeled examples per class or category. Its core objective is achieving effective model performance with extremely limited annotated data. The precise number varies significantly and depends on factors such as task complexity, model architecture, and the base model's pretraining: simpler tasks might succeed with one sample per class, while complex tasks (like fine-grained image recognition) often need more. There is no single fixed number suitable for all contexts.

Leveraging large pretrained models via techniques like prompt engineering or adapter modules is generally essential. The diversity and representativeness of the provided examples are equally critical, as poorly chosen samples drastically reduce effectiveness.

Few-shot learning enables models to adapt quickly to new tasks at minimal annotation cost, unlocking value in scenarios where data collection is expensive or impractical (e.g., rare disease diagnosis, niche product classification). Implementers should define the task precisely, select a robust base model (such as a large language model), and provide a few highly representative examples per class. Typical starting benchmarks use 1, 2, 3, 5, or 10 examples.
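With a large language model, the prompt-engineering approach mentioned above amounts to packing the k labeled examples directly into the prompt. A minimal sketch is shown below; the sentiment task, example texts, and template format are illustrative assumptions, not part of any specific model's API.

```python
# Sketch: assembling a few-shot (k = 3) prompt for text classification.
# The task, examples, and template are hypothetical illustrations.

def build_few_shot_prompt(examples, query,
                          instruction="Classify the sentiment as positive or negative."):
    """Format k labeled examples plus an unlabeled query into one prompt string."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Label: {label}")
        lines.append("")  # blank line between examples
    lines.append(f"Text: {query}")
    lines.append("Label:")  # the model completes this final label
    return "\n".join(lines)

# Three diverse, representative examples per class, per the guidance above.
examples = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked within a week.", "negative"),
    ("Setup was quick and painless.", "positive"),
]

prompt = build_few_shot_prompt(examples, "Shipping took a month.")
print(prompt)
```

A practical workflow is to start with a small k (1 to 5), evaluate on a held-out set, and only add more examples if accuracy is insufficient, since longer prompts cost more tokens and can hit context limits.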
