
Running a model with vLLM on macOS

1. Prerequisites

1. Xcode must be installed; for details see my post "Installing Xcode on macOS".
2. A specific Python version is required; this article uses Python 3.11.9. If you want to avoid conflicts with an existing Python installation, see "Managing Python versions with pyenv".
⚠️ Warning: the first model I tried was deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B. It starts successfully, but when the model is asked a question the output is badly broken, as shown below:

curl http://localhost:8000/v1/chat/completions   -H "Content-Type: application/json"   -d '{"model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B","messages": [{"role": "user", "content": "你好,请介绍一下人工智能"}],"temperature": 0.7}'

Output

{"id":"chatcmpl-7b903da989f343a3b0dd97276cc36ea7","object":"chat.completion","created":1751257361,"model":"deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B","choices":[{"index":0,"message":{"role":"assistant","reasoning_content":null,"content":"!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!","tool_calls":[]},"logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":9,"total_tokens":512,"completion_tokens":503,"prompt_tokens_details":null},"prompt_logprobs":null,"kv_transfer_params":null}(mageedu)

2. Installing vLLM

It is recommended to use uv (a very fast Python environment manager) to create and manage Python environments. Follow its documentation to install uv. Once uv is installed, you can create a new Python environment and install vLLM with the following commands:

uv venv --python 3.12 --seed
source .venv/bin/activate

This article uses a plain venv instead:

python -m venv mageedu
source mageedu/bin/activate
(mageedu)

After installing Xcode and the Command Line Tools (which include Apple Clang), run the following commands to build and install vLLM from source:

git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -r requirements/cpu.txt
pip install -e .
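
Once the build finishes, a quick sanity check from inside the virtual environment confirms the editable install is importable (a minimal sketch; it only prints the installed version):

# run inside the activated virtual environment (e.g. mageedu)
import vllm

print(vllm.__version__)   # prints the version of the source build installed with `pip install -e .`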

3. Running the model

VLLM_USE_MODELSCOPE=true vllm serve qwen/Qwen1.5-0.5B-Chat --max-model-len 512 --tensor-parallel-size 1

Parameter notes (a Python sketch follows these notes):
VLLM_USE_MODELSCOPE=true tells vLLM to download the model from ModelScope (Alibaba's model hub), which is the better choice inside mainland China; see the official vllm.envs reference.
--tensor-parallel-size 1 sets the tensor-parallelism degree; for details see the official Distributed Inference and Serving docs and my post on vLLM tensor parallelism (tensor-parallel-size) and pipeline parallelism (pipeline-parallel-size).
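
The same settings also exist on vLLM's offline Python API. The sketch below only shows where max_model_len and tensor_parallel_size map if you run the model in-process instead of behind the OpenAI-compatible server; setting VLLM_USE_MODELSCOPE before importing vllm mirrors the environment variable on the command line (a minimal sketch, not required for serving):

import os
os.environ["VLLM_USE_MODELSCOPE"] = "true"   # same effect as VLLM_USE_MODELSCOPE=true on the CLI

from vllm import LLM, SamplingParams

# mirrors: vllm serve qwen/Qwen1.5-0.5B-Chat --max-model-len 512 --tensor-parallel-size 1
llm = LLM(model="qwen/Qwen1.5-0.5B-Chat", max_model_len=512, tensor_parallel_size=1)
outputs = llm.generate(["你好,请介绍一下人工智能"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)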
Verify the model:

curl http://localhost:8000/v1/chat/completions   -H "Content-Type: application/json"   -d '{"model": "qwen/Qwen1.5-0.5B-Chat","messages": [{"role": "user", "content": "你好,请介绍一下人工智能"}],"max_tokens": 64}'

Output

{"id":"chatcmpl-868516cd305646c5acdf4bca071b0199","object":"chat.completion","created":1751265182,"model":"qwen/Qwen1.5-0.5B-Chat","choices":[{"index":0,"message":{"role":"assistant","reasoning_content":null,"content":"人工智能是一种模拟人类智能的技术,它可以自动完成一些需要人类智能的任务。它可以通过学习和模式识别等技术来改善机器人的表现,从而实现自动化操作,从而提高效率。","tool_calls":[]},"logprobs":null,"finish_reason":"stop","stop_reason":null}],"usage":{"prompt_tokens":23,"total_tokens":63,"completion_tokens":40,"prompt_tokens_details":null},"prompt_logprobs":null,"kv_transfer_params":null}(mageedu)

Next, try a more complex question:

curl http://localhost:8000/v1/chat/completions   -H "Content-Type: application/json"   -d '{"model": "qwen/Qwen1.5-0.5B-Chat","messages": [{"role": "user", "content": "黎曼几何的曲率说的是什么,是真的弯曲还是对距离的度量变化"}],"max_tokens": 64}'

Output

{"id":"chatcmpl-748999c7643745f097185978bc2b2dcd","object":"chat.completion","created":1751267603,"model":"qwen/Qwen1.5-0.5B-Chat","choices":[{"index":0,"message":{"role":"assistant","reasoning_content":null,"content":"黎曼几何中的曲率是一个数学概念,指的是函数在某点的切线与y轴的距离。在三维空间中,如果一个曲线在一点处的切线长度为ρ,那么这个点到曲线中心的距离就是ρ。\n\n黎曼曲率的真实性是通过测量曲线的曲率和","tool_calls":[]},"logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":37,"total_tokens":101,"completion_tokens":64,"prompt_tokens_details":null},"prompt_logprobs":null,"kv_transfer_params":null}(mageedu)

Note: the output is cut off abruptly (set aside for now whether the answer itself is reliable; more on that later). The finish_reason in the response is "length", which means the completion hit the max_tokens limit.
One way to avoid this truncation is to raise the token budget, e.g. "max_tokens": 128 or 256.

curl http://localhost:8000/v1/chat/completions   -H "Content-Type: application/json"   -d '{"model": "qwen/Qwen1.5-0.5B-Chat","messages": [{"role": "user", "content": "黎曼几何的曲率说的是什么,是真的弯曲还是对距离的度量变化"}],"max_tokens": 256}'

Output

{"id":"chatcmpl-af2d2e64212746b198a2364b0b0a28ad","object":"chat.completion","created":1751267778,"model":"qwen/Qwen1.5-0.5B-Chat","choices":[{"index":0,"message":{"role":"assistant","reasoning_content":null,"content":"黎曼几何中的曲率是指在三维空间中,某点到它的邻域(即两个或多个已知顶点)的夹角。在实数范围内,当点A与点B重合时,它们之间的夹角总是α/2;当点A与点C重合时,它们之间的夹角总是β/3;当点A与点D重合时,它们之间的夹角总是γ/4。\n\n这个曲率被定义为P = π / (r^2),其中r是两点之间的距离。因此,如果一个点A和它的所有邻域的长度之和等于它的弦长(π/4),那么我们可以说点A的曲率是对距离的度量变化。\n\n所以,根据黎曼几何的定义,我们可以得出结论:黎曼几何的曲率是对距离的度量变化。","tool_calls":[]},"logprobs":null,"finish_reason":"stop","stop_reason":null}],"usage":{"prompt_tokens":37,"total_tokens":219,"completion_tokens":182,"prompt_tokens_details":null},"prompt_logprobs":null,"kv_transfer_params":null}(mageedu)

4. Errors

Note: the model started here is deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B, but the specific model doesn't matter; what matters is how to handle this kind of error.
⚠️ Warning: the fix is to reduce --max-model-len to 16384. The troubleshooting text below came from a ChatGPT answer; it is not authoritative and is included only for reference because parts of it are useful.

VLLM_USE_MODELSCOPE=true vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B --tensor-parallel-size 1 --max-model-len 32768 
WARNING 06-30 11:49:25 [config.py:3297] Your device 'cpu' doesn't support torch.bfloat16. Falling back to torch.float16 for compatibility.
WARNING 06-30 11:49:25 [config.py:3348] Casting torch.bfloat16 to torch.float16.
tokenizer_config.json: 3.07kB [00:00, 10.0MB/s]
Traceback (most recent call last):
  File "/Users/zld/mageedu/bin/vllm", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/Users/zld/vllm/vllm/entrypoints/cli/main.py", line 65, in main
    args.dispatch_function(args)
  File "/Users/zld/vllm/vllm/entrypoints/cli/serve.py", line 55, in cmd
    uvloop.run(run_server(args))
  File "/Users/zld/mageedu/lib/python3.11/site-packages/uvloop/__init__.py", line 105, in run
    return runner.run(wrapper())
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/zld/.pyenv/versions/3.11.9/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/Users/zld/mageedu/lib/python3.11/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/Users/zld/vllm/vllm/entrypoints/openai/api_server.py", line 1424, in run_server
    await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
  File "/Users/zld/vllm/vllm/entrypoints/openai/api_server.py", line 1444, in run_server_worker
    async with build_async_engine_client(args, client_config) as engine_client:
  File "/Users/zld/.pyenv/versions/3.11.9/lib/python3.11/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/zld/vllm/vllm/entrypoints/openai/api_server.py", line 158, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/Users/zld/.pyenv/versions/3.11.9/lib/python3.11/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/zld/vllm/vllm/entrypoints/openai/api_server.py", line 180, in build_async_engine_client_from_engine_args
    vllm_config = engine_args.create_engine_config(usage_context=usage_context)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/zld/vllm/vllm/engine/arg_utils.py", line 1051, in create_engine_config
    model_config = self.create_model_config()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/zld/vllm/vllm/engine/arg_utils.py", line 941, in create_model_config
    return ModelConfig(
           ^^^^^^^^^^^^
  File "/Users/zld/mageedu/lib/python3.11/site-packages/pydantic/_internal/_dataclasses.py", line 123, in __init__
    s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
  Value error, User-specified max_model_len (32768) is greater than the derived max_model_len (max_position_embeddings=16384 or model_max_length=None in model's config.json). This may lead to incorrect model outputs or CUDA errors. To allow overriding this maximum, set the env var VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 [type=value_error, input_value=ArgsKwargs((), {'model': ...attention_dtype': None}), input_type=ArgsKwargs]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error

The key part of your error message is:

Value error, User-specified max_model_len (32768) is greater than the derived max_model_len (max_position_embeddings=16384 or model_max_length=None in model's config.json). 
This may lead to incorrect model outputs or CUDA errors. To allow overriding this maximum, set the env var VLLM_ALLOW_LONG_MAX_MODEL_LEN=1

4.1 What happened?

You started vLLM with --max-model-len 32768 (or 32768 in a config file), but the model's own maximum context length is 16384 (max_position_embeddings=16384). The requested value exceeds what the model was designed for, and vLLM's validation caught the conflict.
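
If you want to see where the derived limit comes from, you can read the model's config.json directly; a minimal check with transformers (installed as a vLLM dependency; point from_pretrained at the locally downloaded model directory if the Hub id is not reachable from your network):

from transformers import AutoConfig

# The derived max_model_len comes from fields such as max_position_embeddings in config.json.
cfg = AutoConfig.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
print(cfg.max_position_embeddings)   # the validation error above reports 16384 for this copy of the model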


4.2 Solutions

4.2.1 Option 1: force-allow via an environment variable
export VLLM_ALLOW_LONG_MAX_MODEL_LEN=1

Then start vLLM again; this skips the length check (but may produce abnormal or incorrect results).


4.2.2 Option 2: lower --max-model-len so it does not exceed the model's supported maximum

For example, change it to:

--max-model-len 16384

or smaller.


4.3 Additional note

  • The warning you saw at startup

    WARNING 06-30 11:49:25 [config.py:3297] Your device 'cpu' doesn't support torch.bfloat16. Falling back to torch.float16 for compatibility.
    

    appears because the CPU on macOS does not support bfloat16, so vLLM automatically falls back to float16; this is normal and does not affect startup.


4.4 Overall recommendation

If you genuinely need an extra-long context (>16384), set the environment variable to allow it; otherwise just lower the value to the model's maximum supported length.


If you tell me the full command you use to start vLLM, I can help rewrite it as an example.

