Hugging Face models can be downloaded in three ways: the Transformers API, huggingface-cli, or manual download.
Method comparison
| Method | Best for | Advantages |
|---|---|---|
| Transformers API | Day-to-day development | Automatic download and caching |
| huggingface-cli | Bulk downloads | Resumable downloads, file filtering |
| Manual download | Restricted networks | Full, fine-grained control |
Transformers API

```python
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-chinese"

# Basic usage: downloads to the default cache on first use
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Custom cache path
model = AutoModel.from_pretrained(
    model_name,
    cache_dir="./my_cache",
    local_files_only=False,  # True = offline mode (use cached files only)
)
```
huggingface-cli
```bash
pip install huggingface-hub

# Download a full model
huggingface-cli download bert-base-chinese --local-dir ./bert-base-chinese

# Download only selected files
huggingface-cli download bert-base-chinese \
  --local-dir ./bert-base-chinese \
  --include "*.bin" "config.json"

# Log in for private models
huggingface-cli login
```
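The `--include` flag selects files by shell-style glob patterns. As a rough illustration of that filtering behavior (a sketch, not the CLI's actual implementation; `select_files` is a hypothetical helper), Python's `fnmatch` applies the same kind of matching:

```python
import fnmatch

def select_files(files, patterns):
    """Keep files matching any of the glob patterns (sketch of --include)."""
    return [f for f in files if any(fnmatch.fnmatch(f, p) for p in patterns)]

repo_files = ["config.json", "pytorch_model.bin", "vocab.txt", "tokenizer.json"]
print(select_files(repo_files, ["*.bin", "config.json"]))
# → ['config.json', 'pytorch_model.bin']
```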
Manual download
- Open the model page (e.g. https://huggingface.co/bert-base-chinese)
- Download the files you need from the "Files and versions" tab
- Load locally:

```python
model = AutoModel.from_pretrained("./bert-base-chinese", local_files_only=True)
```
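When downloading manually, each file on the Hub is also reachable at a predictable direct URL of the form `https://huggingface.co/<repo_id>/resolve/<revision>/<filename>`, which is convenient for `wget`/`curl` scripts. A small sketch (the `hub_file_url` helper is illustrative, not part of any library):

```python
def hub_file_url(repo_id, filename, revision="main"):
    # Direct-download link pattern used by the Hugging Face Hub
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

print(hub_file_url("bert-base-chinese", "config.json"))
# → https://huggingface.co/bert-base-chinese/resolve/main/config.json
```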
Key configuration

```bash
# Cache root (hub downloads default to ~/.cache/huggingface/hub)
export HF_HOME=/path/to/cache

# Proxy for faster or reachable downloads
export HTTP_PROXY=http://127.0.0.1:7890
export HTTPS_PROXY=http://127.0.0.1:7890
```
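`HF_HOME` moves the whole Hugging Face cache root; hub downloads then land under `<HF_HOME>/hub`. A minimal sketch of that resolution logic (the `hub_cache_dir` helper is hypothetical, mirroring the documented defaults):

```python
import os

def hub_cache_dir():
    # HF_HOME overrides the default ~/.cache/huggingface root;
    # model snapshots are stored under "<root>/hub"
    root = os.environ.get(
        "HF_HOME",
        os.path.join(os.path.expanduser("~"), ".cache", "huggingface"),
    )
    return os.path.join(root, "hub")
```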
```python
# Private models: pass an access token
model = AutoModel.from_pretrained(model_name, token="your_hf_token")
```
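Hard-coding a token is risky; reading it from the environment keeps it out of source control. A sketch (assumes the token is exported as `HF_TOKEN`, an environment variable the Hugging Face tooling also reads by default; `get_hf_token` is a hypothetical helper):

```python
import os

def get_hf_token():
    # Read the access token from the environment instead of hard-coding it
    return os.environ.get("HF_TOKEN")

# Usage (commented out to avoid a network call here):
# model = AutoModel.from_pretrained(model_name, token=get_hf_token())
```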