
10.6 大语言模型:Transformers 与现代 NLP

从 Transformer 到 GPT:NLP 的范式转变

在上一节,我们从零实现了 Transformer 架构,理解了其数学原理。现在,让我们进入工业级应用——如何使用预训练的大语言模型解决实际问题。

💡 范式转变

传统 NLP:特征工程 + 任务特定模型

文本 → 词袋/TF-IDF → 逻辑回归/SVM → 预测

现代 NLP:预训练 + 微调

预训练(海量无标注数据)→ 大模型 → 微调(少量标注数据)→ 特定任务
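
作为对照,下面用 scikit-learn 给出传统流程的一个最小示例(词袋/TF-IDF + 逻辑回归),数据为随手构造的玩具样本,仅作示意:

python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 玩具数据:演示传统"特征工程 + 任务特定模型"的流程
train_texts = ["great movie", "terrible film", "loved it", "waste of time"]
train_labels = [1, 0, 1, 0]  # 1 = 正面, 0 = 负面

# 文本 → TF-IDF → 逻辑回归
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["what a great film"]))  # 在玩具数据上做一次预测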

本节我们将学习:

  1. Hugging Face Transformers 生态系统
  2. BERT vs GPT 架构对比
  3. Tokenization 技术(BPE、WordPiece)
  4. Fine-tuning 策略(全参数、LoRA、Prompt Tuning)
  5. Prompt Engineering 科学基础
  6. 实战:文本分类、问答系统、文本生成

Hugging Face Transformers:现代 NLP 的瑞士军刀

1. 生态系统概览

python
"""
Hugging Face 核心库:

1. transformers:预训练模型库
2. datasets:数据集库
3. tokenizers:高效分词器
4. accelerate:分布式训练
5. PEFT:高效微调方法
"""

# 安装
# pip install transformers datasets tokenizers accelerate peft

from transformers import (
    AutoTokenizer,
    AutoModel,
    AutoModelForSequenceClassification,
    AutoModelForCausalLM,
    pipeline
)

# 最简单的使用方式:pipeline
classifier = pipeline("sentiment-analysis")
result = classifier("I love this movie!")
print(result)
# [{'label': 'POSITIVE', 'score': 0.9998}]

# 文本生成
generator = pipeline("text-generation", model="gpt2")
result = generator("Once upon a time", max_length=30, num_return_sequences=1)
print(result[0]['generated_text'])

2. 三行代码实现情感分析

python
from transformers import pipeline

# 1. 加载预训练模型
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english"
)

# 2. 推理
texts = [
    "This movie is absolutely fantastic!",
    "Worst film I've ever seen.",
    "It was okay, nothing special."
]

# 3. 批量预测
results = classifier(texts)

for text, result in zip(texts, results):
    print(f"Text: {text}")
    print(f"Label: {result['label']}, Score: {result['score']:.4f}\n")

BERT vs GPT:编码器 vs 解码器

1. BERT:双向编码器

架构:Transformer Encoder

训练任务:

  • Masked Language Modeling (MLM)
  • Next Sentence Prediction (NSP)
python
from transformers import BertTokenizer, BertModel
import torch

# 加载预训练 BERT
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

# 输入文本
text = "The cat sat on the mat."

# 分词
inputs = tokenizer(text, return_tensors='pt')
print("输入 ID:", inputs['input_ids'])
print("Token 列表:", tokenizer.convert_ids_to_tokens(inputs['input_ids'][0]))

# 前向传播
with torch.no_grad():
    outputs = model(**inputs)

# 获取输出
last_hidden_state = outputs.last_hidden_state  # (1, seq_len, 768)
pooler_output = outputs.pooler_output  # (1, 768) - [CLS] 位置经 pooler(线性层 + tanh)后的向量

print(f"\n隐藏状态形状: {last_hidden_state.shape}")
print(f"池化输出形状: {pooler_output.shape}")

BERT 的特点

  • 双向上下文:每个词都能看到左右两侧的信息
  • 适用任务:分类、问答、命名实体识别
  • 不适合生成:因为是双向的,无法用于自回归生成
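
MLM 的效果可以用 fill-mask pipeline 直观感受:模型利用左右两侧的上下文预测被 [MASK] 遮盖的词。下面是一个简短示例(输出的候选词与分数以实际运行为准):

python
from transformers import pipeline

# 用 BERT 的 MLM 头做"完形填空"
unmasker = pipeline("fill-mask", model="bert-base-uncased")

results = unmasker("The cat sat on the [MASK].")
for r in results:
    print(f"{r['token_str']}: {r['score']:.4f}")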

2. GPT:自回归解码器

架构:Transformer Decoder(带 Causal Mask)

训练任务:Next Token Prediction
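
Next Token Prediction 的目标很直接:把序列整体右移一位作为标签,最小化交叉熵。下面用一个最小示例说明:当把 labels 设为 input_ids 时,GPT2LMHeadModel 会在内部完成移位并返回语言建模损失(具体数值以实际运行为准):

python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

inputs = tokenizer("The cat sat on the mat.", return_tensors='pt')

with torch.no_grad():
    # labels = input_ids:模型内部右移一位,计算 next-token 交叉熵
    outputs = model(**inputs, labels=inputs['input_ids'])

print(f"语言建模损失: {outputs.loss.item():.4f}")
print(f"困惑度(perplexity): {torch.exp(outputs.loss).item():.2f}")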

python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# 加载 GPT-2
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# 设置 pad_token
tokenizer.pad_token = tokenizer.eos_token

# 文本生成
prompt = "In a galaxy far, far away"
inputs = tokenizer(prompt, return_tensors='pt')

# 生成文本
with torch.no_grad():
    outputs = model.generate(
        inputs['input_ids'],
        max_length=50,
        num_return_sequences=3,
        temperature=0.8,
        top_k=50,
        top_p=0.95,
        do_sample=True
    )

# 解码生成的文本
print("生成的文本:\n")
for i, output in enumerate(outputs):
    text = tokenizer.decode(output, skip_special_tokens=True)
    print(f"{i+1}. {text}\n")

3. BERT vs GPT 对比表

| 特性 | BERT | GPT |
| --- | --- | --- |
| 架构 | Encoder-only | Decoder-only |
| 注意力类型 | 双向注意力 | Causal(单向)注意力 |
| 训练目标 | MLM + NSP | Next Token Prediction |
| 最佳应用 | 理解任务(分类、问答) | 生成任务(文本生成) |
| 输入-输出 | 全句 → 表示向量 | 前缀 → 续写 |
| 典型模型 | BERT, RoBERTa, ELECTRA | GPT-2, GPT-3, GPT-4 |

Tokenization:从文本到数字

1. 为什么需要 Tokenization?

神经网络只能处理数字,我们需要将文本转换为数字序列。

三种主流方法

  1. Word-level:每个词一个 ID(词表太大)
  2. Character-level:每个字符一个 ID(序列太长)
  3. Subword-level:介于词和字符之间(最佳平衡)
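
三种粒度的差异可以直接数一数 token 数:下面用空格切分近似 word-level,用字符列表近似 character-level,subword 则用 GPT-2 的 BPE 分词器,仅作对比示意:

python
from transformers import GPT2Tokenizer

text = "Tokenization balances vocabulary size and sequence length."

# Word-level:按空格近似切分
word_tokens = text.split()
# Character-level:逐字符
char_tokens = list(text)
# Subword-level:GPT-2 的 BPE 分词
bpe_tokens = GPT2Tokenizer.from_pretrained('gpt2').tokenize(text)

print(f"Word-level:      {len(word_tokens)} tokens")
print(f"Character-level: {len(char_tokens)} tokens")
print(f"Subword-level:   {len(bpe_tokens)} tokens")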

2. BPE(Byte Pair Encoding)

GPT 系列使用的分词算法。
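
在使用现成分词器之前,可以先用几行纯 Python 感受 BPE 的核心思想:反复统计相邻符号对的频率,把最高频的一对合并成新符号。下面是参照 Sennrich 等人论文思路的玩具实现,语料和合并次数均为随手设定,并非 GPT-2 的实际实现:

python
from collections import Counter

def get_pair_counts(vocab):
    """统计词表中所有相邻符号对的出现频率"""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    """把词表中所有相邻出现的 pair 合并成一个新符号"""
    a, b = pair
    return {word.replace(f"{a} {b}", f"{a}{b}"): freq for word, freq in vocab.items()}

# 玩具语料:词 → 频率,符号之间先用空格隔开
vocab = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}

for step in range(5):
    pairs = get_pair_counts(vocab)
    best = max(pairs, key=pairs.get)
    vocab = merge_pair(best, vocab)
    print(f"第 {step + 1} 次合并: {best} -> {''.join(best)}")

print("最终词表:", list(vocab.keys()))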

python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

text = "The quick brown fox jumps over the lazy dog."

# 分词
tokens = tokenizer.tokenize(text)
print("Tokens:", tokens)

# 转换为 ID
input_ids = tokenizer.encode(text)
print("Input IDs:", input_ids)

# 解码
decoded = tokenizer.decode(input_ids)
print("Decoded:", decoded)

# 查看词表大小
print(f"\n词表大小: {tokenizer.vocab_size}")

# 处理未知词
oov_text = "supercalifragilisticexpialidocious"
oov_tokens = tokenizer.tokenize(oov_text)
print(f"\nOOV 分词: {oov_tokens}")

3. WordPiece(BERT 使用)

python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

text = "unhappiness"

# 分词
tokens = tokenizer.tokenize(text)
print("WordPiece tokens:", tokens)
# 输出形如 ['un', '##hap', '##pi', '##ness'],## 表示续接前一个子词,具体切分以实际词表为准

# 特殊标记
special_tokens = ["[CLS]", "[SEP]", "[MASK]", "[PAD]", "[UNK]"]
print("\n特殊标记 IDs:")
for token in special_tokens:
    print(f"{token}: {tokenizer.convert_tokens_to_ids(token)}")

4. 自定义 Tokenizer

python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# 创建 BPE tokenizer
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])

# 预分词(按空格)
tokenizer.pre_tokenizer = Whitespace()

# 训练(需要提供文本文件列表)
# tokenizer.train(files=["corpus.txt"], trainer=trainer)

# 保存和加载
# tokenizer.save("my_tokenizer.json")
# tokenizer = Tokenizer.from_file("my_tokenizer.json")

Fine-tuning:让大模型适应特定任务

1. 完整的 Fine-tuning Pipeline

python
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer
)
from datasets import load_dataset
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# 1. 加载数据集
dataset = load_dataset("imdb")

# 查看数据
print(dataset['train'][0])

# 2. 加载 tokenizer 和模型
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=2  # 二分类
)

# 3. 数据预处理
def tokenize_function(examples):
    return tokenizer(
        examples['text'],
        padding='max_length',
        truncation=True,
        max_length=512
    )

# 应用到整个数据集
tokenized_datasets = dataset.map(tokenize_function, batched=True)

# 4. 创建小的训练/验证集(演示用)
small_train_dataset = tokenized_datasets['train'].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets['test'].shuffle(seed=42).select(range(1000))

# 5. 定义评估指标
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    accuracy = accuracy_score(labels, predictions)
    f1 = f1_score(labels, predictions)

    return {
        'accuracy': accuracy,
        'f1': f1
    }

# 6. 训练参数
training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    warmup_steps=500,
    weight_decay=0.01,
    logging_dir='./logs',
    logging_steps=100,
    evaluation_strategy='epoch',  # 新版 transformers 中该参数已改名为 eval_strategy
    save_strategy='epoch',
    load_best_model_at_end=True,
)

# 7. 创建 Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)

# 8. 训练
# trainer.train()

# 9. 评估
# results = trainer.evaluate()
# print(results)

# 10. 保存模型
# model.save_pretrained('./my_finetuned_model')
# tokenizer.save_pretrained('./my_finetuned_model')

2. 冻结部分层

python
# 冻结 BERT 的底层,只训练顶层
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2
)

# 冻结前 10 层
for name, param in model.named_parameters():
    if 'bert.encoder.layer' in name:
        layer_num = int(name.split('.')[3])
        if layer_num < 10:
            param.requires_grad = False

# 查看可训练参数
trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
total_params = sum(p.numel() for p in model.parameters())

print(f"可训练参数: {trainable_params:,} / {total_params:,} ({100*trainable_params/total_params:.2f}%)")

3. LoRA:高效微调

LoRA (Low-Rank Adaptation) 只训练低秩矩阵,大幅减少可训练参数。
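
LoRA 的思想可以用几行张量运算说明:冻结原始权重 W,用两个低秩矩阵 B、A 的乘积近似增量 ΔW,前向输出为 Wx + (α/r)·BAx,只训练 A 和 B。下面是与具体库无关的示意,维度与数值均为随手假设:

python
import torch

d, r, alpha = 768, 8, 32       # 隐藏维度、低秩维度、缩放系数(示例值)
W = torch.randn(d, d)          # 预训练权重,冻结不训练
A = torch.randn(r, d) * 0.01   # 低秩矩阵 A(可训练)
B = torch.zeros(d, r)          # 低秩矩阵 B(可训练,初始为 0,保证初始 ΔW = 0)

x = torch.randn(4, d)          # 一个 batch 的输入

# LoRA 前向:Wx + (alpha / r) * B A x
h = x @ W.T + (alpha / r) * (x @ A.T @ B.T)

full_params = W.numel()
lora_params = A.numel() + B.numel()
print(f"全量参数: {full_params:,},LoRA 参数: {lora_params:,} "
      f"({100 * lora_params / full_params:.2f}%)")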

python
from peft import get_peft_model, LoraConfig, TaskType

# 加载基础模型
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2
)

# LoRA 配置
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,  # 低秩维度
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query", "value"]  # 只在 attention 的 Q, V 上应用
)

# 应用 LoRA
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# 输出形如: trainable params: 294,912 || all params: ~110M || trainable%: < 1%(具体数值以实际运行为准)

# 训练(使用相同的 Trainer)
# trainer = Trainer(model=model, ...)
# trainer.train()

Prompt Engineering:与 LLM 对话的艺术

1. Zero-Shot Learning

python
from transformers import pipeline

# 使用大模型进行零样本分类
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "The new iPhone has an amazing camera and long battery life."
candidate_labels = ["technology", "sports", "politics", "entertainment"]

result = classifier(text, candidate_labels)
print(f"Text: {text}\n")
print("Predictions:")
for label, score in zip(result['labels'], result['scores']):
    print(f"  {label}: {score:.4f}")

2. Few-Shot Learning

python
# GPT 风格的 few-shot prompt
prompt = """
Classify the sentiment of the following reviews:

Review: "This movie was absolutely fantastic!"
Sentiment: Positive

Review: "Waste of time and money."
Sentiment: Negative

Review: "It was okay, nothing special."
Sentiment: Neutral

Review: "I loved every minute of it!"
Sentiment:
"""

generator = pipeline("text-generation", model="gpt2")
# 用 max_new_tokens 只限制新生成的 token 数,避免 max_length 被 prompt 本身占满
result = generator(prompt, max_new_tokens=5, num_return_sequences=1)
print(result[0]['generated_text'])

3. Prompt 设计原则

python
"""
好的 Prompt 应该:

1. 清晰明确
   ❌ "告诉我关于狗的事"
   ✅ "列出 5 个适合家庭饲养的狗品种,并说明每个品种的特点"

2. 提供上下文
   ❌ "翻译这个"
   ✅ "将以下英文技术文档翻译成简体中文,保持专业术语的准确性"

3. 使用示例(Few-shot)
   包含 2-3 个输入-输出示例

4. 指定输出格式
   "以 JSON 格式输出,包含 'name' 和 'description' 字段"

5. 设置约束
   "回答控制在 100 字以内"
   "只使用提供的上下文信息,不要编造"
"""

# 实用的 Prompt 模板
def create_classification_prompt(text, labels):
    """创建分类任务的 prompt"""
    prompt = f"""
Task: Classify the following text into one of these categories: {', '.join(labels)}

Text: {text}

Category:"""
    return prompt

# 示例
text = "The stock market crashed today, losing 500 points."
labels = ["business", "sports", "politics", "entertainment"]
prompt = create_classification_prompt(text, labels)
print(prompt)

实战项目 1:文本分类系统

python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
from typing import List, Dict
import torch

class TextClassifier:
    """
    通用文本分类器

    支持:
    - 多类别分类
    - 批量预测
    - 自动选择 GPU/CPU 设备
    """

    def __init__(self, model_name: str = "distilbert-base-uncased", num_labels: int = 2):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForSequenceClassification.from_pretrained(
            model_name,
            num_labels=num_labels
        )
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.model.to(self.device)

    def predict(self, texts: List[str], batch_size: int = 32) -> List[Dict]:
        """
        批量预测

        Args:
            texts: 待分类文本列表
            batch_size: 批量大小

        Returns:
            预测结果列表,每个元素包含 {label, score}
        """
        self.model.eval()
        results = []

        for i in range(0, len(texts), batch_size):
            batch_texts = texts[i:i+batch_size]

            # 分词
            inputs = self.tokenizer(
                batch_texts,
                padding=True,
                truncation=True,
                max_length=512,
                return_tensors='pt'
            ).to(self.device)

            # 推理
            with torch.no_grad():
                outputs = self.model(**inputs)
                logits = outputs.logits
                probabilities = torch.softmax(logits, dim=-1)

            # 解析结果
            for prob in probabilities:
                pred_label = prob.argmax().item()
                pred_score = prob.max().item()

                results.append({
                    'label': pred_label,
                    'score': pred_score
                })

        return results

    def fine_tune(self, train_texts: List[str], train_labels: List[int], epochs: int = 3):
        """
        微调模型

        Args:
            train_texts: 训练文本
            train_labels: 训练标签
            epochs: 训练轮数
        """
        from torch.utils.data import Dataset, DataLoader
        from torch.optim import AdamW

        # 自定义数据集
        class TextDataset(Dataset):
            def __init__(self, texts, labels, tokenizer):
                self.texts = texts
                self.labels = labels
                self.tokenizer = tokenizer

            def __len__(self):
                return len(self.texts)

            def __getitem__(self, idx):
                encoding = self.tokenizer(
                    self.texts[idx],
                    padding='max_length',
                    truncation=True,
                    max_length=512,
                    return_tensors='pt'
                )

                return {
                    'input_ids': encoding['input_ids'].flatten(),
                    'attention_mask': encoding['attention_mask'].flatten(),
                    'labels': torch.tensor(self.labels[idx])
                }

        # 创建数据加载器
        dataset = TextDataset(train_texts, train_labels, self.tokenizer)
        dataloader = DataLoader(dataset, batch_size=16, shuffle=True)

        # 优化器
        optimizer = AdamW(self.model.parameters(), lr=2e-5)

        # 训练循环
        self.model.train()
        for epoch in range(epochs):
            total_loss = 0

            for batch in dataloader:
                # 移动到设备
                input_ids = batch['input_ids'].to(self.device)
                attention_mask = batch['attention_mask'].to(self.device)
                labels = batch['labels'].to(self.device)

                # 前向传播
                outputs = self.model(
                    input_ids=input_ids,
                    attention_mask=attention_mask,
                    labels=labels
                )
                loss = outputs.loss

                # 反向传播
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

                total_loss += loss.item()

            avg_loss = total_loss / len(dataloader)
            print(f"Epoch {epoch+1}/{epochs}, Loss: {avg_loss:.4f}")

# 使用示例
classifier = TextClassifier()

texts = [
    "This product is amazing!",
    "Terrible experience, would not recommend.",
    "It's okay, nothing special."
]

results = classifier.predict(texts)
for text, result in zip(texts, results):
    print(f"Text: {text}")
    print(f"Label: {result['label']}, Score: {result['score']:.4f}\n")

实战项目 2:问答系统

python
from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering
import torch

class QASystem:
    """
    问答系统(基于 BERT / RoBERTa 类编码器)

    支持:
    - 抽取式问答(从给定文本中抽取答案)
    - 自动选择 GPU/CPU 设备
    - 置信度评估
    """

    def __init__(self, model_name: str = "deepset/roberta-base-squad2"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForQuestionAnswering.from_pretrained(model_name)
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.model.to(self.device)

    def answer(self, question: str, context: str):
        """
        回答问题

        Args:
            question: 问题
            context: 上下文(包含答案的文本)

        Returns:
            包含 answer、confidence、start、end 的字典
        """
        # 分词
        inputs = self.tokenizer(
            question,
            context,
            return_tensors='pt',
            truncation=True,
            max_length=512
        ).to(self.device)

        # 推理
        with torch.no_grad():
            outputs = self.model(**inputs)

        # 获取答案起始和结束位置
        start_logits = outputs.start_logits
        end_logits = outputs.end_logits

        # 找到最可能的答案
        start_idx = torch.argmax(start_logits)
        end_idx = torch.argmax(end_logits)

        # 提取答案
        answer_tokens = inputs['input_ids'][0][start_idx:end_idx+1]
        answer = self.tokenizer.decode(answer_tokens, skip_special_tokens=True)

        # 计算置信度
        start_score = torch.softmax(start_logits, dim=-1)[0, start_idx].item()
        end_score = torch.softmax(end_logits, dim=-1)[0, end_idx].item()
        confidence = (start_score + end_score) / 2

        return {
            'answer': answer,
            'confidence': confidence,
            'start': start_idx.item(),
            'end': end_idx.item()
        }

# 使用示例
qa_system = QASystem()

context = """
The Transformer architecture was introduced in the paper "Attention Is All You Need"
by Vaswani et al. in 2017. It revolutionized natural language processing by replacing
recurrent neural networks with self-attention mechanisms. The key innovation was the
ability to process all positions in the input sequence in parallel, making training
much faster than RNNs.
"""

questions = [
    "When was the Transformer introduced?",
    "Who introduced the Transformer?",
    "What did the Transformer replace?"
]

for question in questions:
    result = qa_system.answer(question, context)
    print(f"Q: {question}")
    print(f"A: {result['answer']}")
    print(f"Confidence: {result['confidence']:.4f}\n")

实战项目 3:文本生成

python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from typing import List
import torch

class TextGenerator:
    """
    文本生成器(基于 GPT-2)

    支持:
    - 续写文本
    - 控制生成长度、温度、top-k、top-p
    - 批量生成
    """

    def __init__(self, model_name: str = "gpt2"):
        self.tokenizer = GPT2Tokenizer.from_pretrained(model_name)
        self.model = GPT2LMHeadModel.from_pretrained(model_name)

        # 设置 pad_token
        self.tokenizer.pad_token = self.tokenizer.eos_token

        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.model.to(self.device)

    def generate(
        self,
        prompt: str,
        max_length: int = 100,
        num_return_sequences: int = 1,
        temperature: float = 1.0,
        top_k: int = 50,
        top_p: float = 0.95,
        do_sample: bool = True
    ) -> List[str]:
        """
        生成文本

        Args:
            prompt: 输入提示
            max_length: 最大长度
            num_return_sequences: 生成几个序列
            temperature: 温度(越高越随机)
            top_k: Top-K 采样
            top_p: Nucleus 采样
            do_sample: 是否采样(False 为贪婪解码)

        Returns:
            生成的文本列表
        """
        # 编码输入
        inputs = self.tokenizer(prompt, return_tensors='pt').to(self.device)

        # 生成
        with torch.no_grad():
            outputs = self.model.generate(
                inputs['input_ids'],
                max_length=max_length,
                num_return_sequences=num_return_sequences,
                temperature=temperature,
                top_k=top_k,
                top_p=top_p,
                do_sample=do_sample,
                pad_token_id=self.tokenizer.eos_token_id
            )

        # 解码
        generated_texts = []
        for output in outputs:
            text = self.tokenizer.decode(output, skip_special_tokens=True)
            generated_texts.append(text)

        return generated_texts

# 使用示例
generator = TextGenerator()

prompts = [
    "In the year 2050, artificial intelligence",
    "The secret to happiness is",
    "Once upon a time in a distant land"
]

for prompt in prompts:
    print(f"Prompt: {prompt}\n")

    # 贪婪解码(确定性)
    greedy = generator.generate(prompt, max_length=50, do_sample=False, num_return_sequences=1)
    print(f"Greedy: {greedy[0]}\n")

    # 高温度采样(更随机)
    creative = generator.generate(prompt, max_length=50, temperature=1.5, num_return_sequences=1)
    print(f"Creative: {creative[0]}\n")

    print("-" * 80 + "\n")

模型选择指南

1. 常用预训练模型

| 模型 | 参数量 | 任务类型 | 优势 | 推荐场景 |
| --- | --- | --- | --- | --- |
| BERT-base | 110M | 理解 | 双向上下文 | 分类、问答、NER |
| RoBERTa | 125M | 理解 | BERT 改进版 | 高精度分类 |
| DistilBERT | 66M | 理解 | 快速推理 | 资源受限环境 |
| GPT-2 | 124M-1.5B | 生成 | 流畅生成 | 文本续写 |
| T5 | 60M-11B | 任意 | 统一框架 | 多任务学习 |
| BART | 140M | 生成 | Seq2Seq | 摘要、翻译 |

2. 如何选择模型?

python
"""
决策树:

1. 任务类型?
   - 理解(分类、问答)→ BERT 系列
   - 生成(续写、对话)→ GPT 系列
   - Seq2Seq(翻译、摘要)→ T5/BART

2. 资源限制?
   - 计算资源充足 → 大模型(RoBERTa-large, GPT-2-large)
   - 资源受限 → 蒸馏模型(DistilBERT, DistilGPT-2)

3. 语言?
   - 英文 → 原始模型
   - 中文 → BERT-Chinese, GPT-Chinese
   - 多语言 → XLM-RoBERTa, mBERT

4. 领域?
   - 通用 → 预训练模型
   - 特定领域(医疗、法律)→ 领域适配模型(BioBERT, LegalBERT)
"""

小结

在本节中,我们学习了:

Hugging Face 生态系统

  • Transformers 库的使用
  • Pipeline API 快速开发

预训练模型

  • BERT vs GPT 架构对比
  • 模型选择指南

Tokenization

  • BPE、WordPiece 原理
  • 自定义 Tokenizer

Fine-tuning 策略

  • 全参数微调
  • 冻结层微调
  • LoRA 高效微调

Prompt Engineering

  • Zero-shot 和 Few-shot 学习
  • Prompt 设计原则

实战项目

  • 文本分类系统
  • 问答系统
  • 文本生成

练习题

基础题

  1. 使用 BERT 实现情感分类(IMDb 数据集)
  2. 用 GPT-2 生成给定主题的短故事
  3. 实现一个简单的问答系统

进阶题

  1. 对比 DistilBERT 和 BERT-base 在相同任务上的性能和推理速度
  2. 使用 LoRA 微调 GPT-2,控制生成风格
  3. 实现多标签文本分类

挑战题

  1. 构建一个 RAG (Retrieval-Augmented Generation) 系统
  2. 实现 Prompt Tuning(只训练 soft prompts)
  3. 部署 Hugging Face 模型到生产环境(FastAPI + Docker)

下一节:10.7 综合实战:构建端到端 AI 应用

在最后一节,我们将整合所有知识,构建一个完整的 AI 应用,包括数据处理、模型训练、部署和监控!
