Fine-tuning trains a pre-trained AI model on a labeled dataset specific to a target task or domain, adapting it to industry terminology and requirements. The process adjusts the model's parameters to better fit the target use case, such as understanding specialized vocabulary or meeting domain-specific needs.
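As a concrete illustration, a fine-tuning job can be started on Amazon Bedrock through its model-customization API. The sketch below uses boto3's create_model_customization_job; the job name, IAM role ARN, S3 URIs, and hyperparameter values are hypothetical placeholders, and the base model identifier is just one example of a customizable foundation model.

```python
# Minimal sketch: launching a fine-tuning job on Amazon Bedrock with boto3.
# All names, ARNs, and S3 URIs below are placeholders -- substitute your own.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_model_customization_job(
    jobName="terminology-finetune-job",                 # hypothetical job name
    customModelName="my-domain-tuned-model",            # name of the resulting custom model
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",  # placeholder role
    baseModelIdentifier="amazon.titan-text-express-v1", # example base foundation model
    customizationType="FINE_TUNING",                    # contrast: "CONTINUED_PRE_TRAINING" (Option D)
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},   # labeled prompt/completion pairs
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001", "batchSize": "1"},
)
print(response["jobArn"])  # track the customization job by its ARN
```

Note that the labeled training data (prompt/completion pairs in JSONL) is exactly what distinguishes fine-tuning from the other options discussed below.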
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Fine-tuning allows you to adapt a pre-trained foundation model to your specific use case by training it on a labeled dataset. This technique is commonly used to customize models forindustry-specific terminology, improving their accuracy for specialized tasks."
(Source: AWS Bedrock User Guide, Model Customization)
Detailed Explanation:
Option A: Data augmentation. Data augmentation generates additional or synthetic training examples to expand a dataset, typically by transforming existing samples (e.g., for image or text tasks). It does not specifically adapt models to industry terminology or requirements.
Option B: Fine-tuning. This is the correct answer. Fine-tuning trains a pre-trained model on a labeled dataset tailored to the target domain, enabling it to learn industry-specific terminology and requirements, as described in the question.
Option C: Model quantization. Model quantization reduces the numeric precision of a model's weights (e.g., from 32-bit floats to 8-bit integers) to optimize it for deployment, such as on edge devices; see the sketch after this list. It does not involve training on labeled datasets or adapting to industry terminology.
Option D: Continuous pre-training. Continuous (continued) pre-training extends a model's initial training on a large, typically unlabeled, general dataset. While it can improve general performance, it does not target industry requirements with a labeled dataset the way fine-tuning does.
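To make the contrast with Option C concrete, the sketch below illustrates the core idea of post-training quantization: mapping float32 weights to int8 with a scale factor. This is a simplified conceptual illustration, not an AWS API, and the weight values are made up.

```python
# Simplified sketch of weight quantization (Option C): compressing float32
# weights to int8 with a single scale factor. No labeled data or retraining
# is involved -- only the numeric precision of existing weights changes.
import numpy as np

weights = np.array([0.82, -1.37, 0.05, 2.41, -0.66], dtype=np.float32)  # made-up weights

scale = np.abs(weights).max() / 127.0                    # map largest magnitude to int8 range
quantized = np.round(weights / scale).astype(np.int8)    # 8-bit integer representation
dequantized = quantized.astype(np.float32) * scale       # approximate reconstruction

print(quantized)     # e.g. [ 43 -72   3 127 -35]
print(dequantized)   # close to the original weights, with small rounding error
```

Because quantization only rewrites existing parameters at lower precision, it cannot teach a model new industry terminology, which is why Option C is incorrect.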
References:
AWS Bedrock User Guide: Model Customization (https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html)
AWS AI Practitioner Learning Path: Module on Model Training and Customization
Amazon SageMaker Developer Guide: Fine-Tuning Models (https://docs.aws.amazon.com/sagemaker/latest/dg/algos.html)