[LLM Study Notes] Building a Gradio Front-End Demo for Qwen-2.5-VL
Table of Contents
- 1. Wrapping the QA API
- 2. Making API requests
- 3. Building the Gradio app
- 3.1 Sending images and converting them to URLs
- The image-receiving server `received.py` (receives images from the client and saves them to a target directory)
- The image-sending client `send.py` (sends an image, wrapped as a reusable function)
- **With these two functions, one-way file transfer works, but the server-side images still need to be exposed as URLs**
- 3.2 Building the Gradio app
1. Wrapping the QA API

The API is served with Docker; the command is as follows:

```shell
docker run -it --name Qwen2.5-VL-32B-Instruct-AWQ --gpus '"device=5"' \
  -v /data:/data -p 5058:5058 --ipc=host \
  vllm/vllm-openai:latest \
  --model /data/models/Qwen2.5-VL-32B-Instruct-AWQ \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.7 \
  --served-model-name Qwen2.5-VL-32B-Instruct-AWQ \
  --port 5058
```
- `docker run -it`: creates and starts a container; `-i` keeps STDIN open and `-t` allocates a pseudo-terminal (interactive mode).
- `--name Qwen2.5-VL-32B-Instruct-AWQ`: assigns a name to the container.
- `--gpus '"device=5"'`: restricts the container to GPU device 5 (requires the NVIDIA Container Toolkit).
- `-v /data:/data`: mounts the host's /data directory into the container at /data, giving it access to the model files and other data.
- `-p 5058:5058`: maps host port 5058 to container port 5058 for API traffic.
- `--ipc=host`: shares the host's IPC namespace, improving inter-process communication inside the container (especially important for vLLM's multi-process communication).
- `vllm/vllm-openai:latest`: the base image, i.e. vLLM's OpenAI-compatible server.
- `--model /data/models/Qwen2.5-VL-32B-Instruct-AWQ`: the model path.
- `--max-num-batched-tokens 65536`: the maximum number of tokens per batch; this affects throughput (larger values allow bigger batches).
- `--gpu-memory-utilization 0.7`: caps GPU memory usage at 70% to avoid OOM (the remainder is left for temporary data).
- `--served-model-name Qwen2.5-VL-32B-Instruct-AWQ`: the model name the service exposes (API requests must use this name).
- `--port 5058`: the port the service listens on (matches `-p 5058:5058`).
2. Making API requests

Launching with Docker this way brings the API up directly; because the container is not started with bash, you do not land in a shell inside it. The service comes up at an address like:

http://10.0.0.0:5058

Replace this IP with the address of the machine you launched the container on. The full endpoint we will call is: http://10.199.197.0:5058/v1/chat/completions

This request format is the same one used by official LLM APIs such as Tongyi Qianwen and DeepSeek; under the hood the model is driven by Transformers, and vLLM is only an acceleration framework.
Our request body is:

```json
{
  "model": "Qwen2.5-VL-32B",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": [
      {"type": "image_url", "image_url": {"url": "http://10.0.74.58:8810/images/123321.png"}},
      {"type": "text", "text": "Recognize all the text in this image"}
    ]}
  ],
  "max_tokens": 1024,
  "stream": false
}
```
The request has two key parameters, `model` and `messages`:

- `model`: the `served-model-name` we set above; just fill it in.
- `messages`: the input messages. This one matters most, because it carries the conversation context; assembling it incorrectly can cause the model to hallucinate.
- `max_tokens`: the maximum number of tokens the model may generate in its reply.
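As a quick sketch, the request body above can be assembled with a small helper (the function name `build_chat_payload` is my own; the values are taken from the example):

```python
def build_chat_payload(model, image_url, prompt, max_tokens=1024, stream=False):
    """Assemble an OpenAI-compatible chat request body for Qwen2.5-VL."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": prompt},
                ],
            },
        ],
        "max_tokens": max_tokens,
        "stream": stream,
    }

payload = build_chat_payload(
    "Qwen2.5-VL-32B-Instruct-AWQ",          # must match --served-model-name
    "http://10.0.74.58:8810/images/123321.png",
    "Recognize all the text in this image",
)
```

Passing this dict as `json=payload` to `requests.post` reproduces the request body shown above.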
The response body looks like this:

```json
{
  "id": "chatcmpl-703ca6eb944b4444bf58e020f7d241cd",
  "object": "chat.completion",
  "created": 1745550523,
  "model": "Qwen2.5-VL-7B-Instruct",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "reasoning_content": null,
        "content": "This is a close-up photo of a person. The person has long dark hair and is wearing earrings and a necklace. The background is a light-colored wall, and the overall tone is soft.",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "stop",
      "stop_reason": null
    }
  ],
  "usage": {
    "prompt_tokens": 3025,
    "total_tokens": 3064,
    "completion_tokens": 39,
    "prompt_tokens_details": null
  },
  "prompt_logprobs": null
}
```
I won't go through every field; you can ask an LLM platform such as Wenxin Yiyan (ERNIE Bot) what each one means.
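Extracting the answer from that response takes only two lookups. Here is a minimal sketch against a trimmed copy of the example above:

```python
import json

# Trimmed copy of the example response above
raw = """{
  "model": "Qwen2.5-VL-7B-Instruct",
  "choices": [{"index": 0,
               "message": {"role": "assistant",
                           "content": "This is a close-up photo of a person."},
               "finish_reason": "stop"}],
  "usage": {"prompt_tokens": 3025, "total_tokens": 3064, "completion_tokens": 39}
}"""

resp = json.loads(raw)
answer = resp["choices"][0]["message"]["content"]  # the model's reply text
spent = resp["usage"]["completion_tokens"]         # tokens generated for the reply
print(answer, spent)
```

Note that `total_tokens` is simply `prompt_tokens + completion_tokens`.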
3. Building the Gradio app

3.1 Sending images and converting them to URLs

Because images in the request body must be in URL form, before building the app we need a small demo that converts the uploaded image. My approach is to first send the image to a target directory on the server, and then use Flask to expose every image in that directory as a URL.
The image-receiving server `received.py` is as follows (it accepts images from the client and saves them to a target directory):

```python
from flask import Flask, request
import os

app = Flask(__name__)

UPLOAD_FOLDER = '/home/aiadmin/lhp/images/static/'
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER

@app.route('/upload', methods=['POST'])
def upload_file():
    new_name = request.form.get("filename")
    if 'file' not in request.files:
        return "No file part", 400
    file = request.files['file']
    if file.filename == '':
        return "No selected file", 400
    if file:
        # Save the file to the target directory
        filepath = os.path.join(app.config['UPLOAD_FOLDER'], new_name)
        file.save(filepath)
        print(f"File saved to {filepath}")
        return f"File uploaded successfully to {filepath}", 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8811)
```
The image-sending client `send.py` is as follows (it sends an image; I wrapped it as a function):

```python
import requests
import os
import mimetypes
import uuid

def upload_image(file_path, server_url="http://10.0.74.58:8811/upload"):  # replace with your server's IP
    ext = os.path.splitext(file_path)[-1]
    new_name = f"{uuid.uuid4().hex}{ext}"  # random name to avoid collisions on the server
    mime = mimetypes.guess_type(file_path)[0] or 'image/jpeg'
    with open(file_path, 'rb') as f:
        files = {'file': (os.path.basename(file_path), f, mime)}
        response = requests.post(server_url, files=files, data={"filename": new_name})
    if response.status_code == 200:
        print("✅ Upload succeeded:", response.text)
    else:
        print("❌ Upload failed:", response.status_code, response.text)
    return new_name
```
**With these two functions, one-way file transfer works; but the server-side images still need to be exposed as URLs. The code is as follows:**

```python
from flask import Flask, send_from_directory
import os

app = Flask(__name__)

# Path to the static folder
IMAGE_FOLDER = 'static'

@app.route('/images/<path:filename>')
def serve_image(filename):
    return send_from_directory(IMAGE_FOLDER, filename)

if __name__ == '__main__':
    # Make sure the folder exists
    if not os.path.exists(IMAGE_FOLDER):
        os.makedirs(IMAGE_FOLDER)
    # Start the service
    app.run(host='0.0.0.0', port=8810)
```

This script needs a `static` folder in the same directory. If `static` contains an image `1.jpg`, it becomes reachable at http://10.0.74.58:8810/images/1.jpg.
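Putting the two halves together: once `upload_image` returns the random filename, the public URL follows the `/images/<filename>` route above. A tiny helper (the name `image_url` and the IP are illustrative):

```python
IMAGE_SERVER = "http://10.0.74.58:8810"  # replace with your own server's address

def image_url(filename: str) -> str:
    """Public URL for a file saved under static/, matching the /images/<path:filename> route."""
    return f"{IMAGE_SERVER}/images/{filename}"
```

So `image_url(upload_image("photo.jpg"))` yields exactly the URL to place into the request body's `image_url` field.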
3.2 Building the Gradio app

The code below supports streaming output, image understanding, and so on:

```python
import gradio as gr
import requests
import json
from sendImages import upload_image

API_URL = "http://10.199.197.8:5056/v1/chat/completions"
IMAGE_SERVER_URL = "http://10.211.74.58:8810/images"  # extracted as a config variable

def predict_stream(chatbot, history):
    if not history:
        yield chatbot
        return
    query = history[-1][0]
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    for q, a in history[:-1]:  # past turns only, not the current message
        user_msg = []
        if isinstance(q, tuple):
            image_path = q[0]
            try:
                new_name = upload_image(image_path)
                user_msg.append({
                    "type": "image_url",
                    "image_url": {"url": f"{IMAGE_SERVER_URL}/{new_name}"}
                })
            except Exception as e:
                chatbot[-1] = (query, f"[Image upload error] {str(e)}")
                yield chatbot
                return
        else:
            user_msg.append({"type": "text", "text": q})
        messages.append({"role": "user", "content": user_msg})
        if a is not None:
            messages.append({"role": "assistant", "content": a})
    # Append the current user message
    current_msg = []
    if isinstance(query, tuple):
        image_path = query[0]
        try:
            new_name = upload_image(image_path)
            current_msg.append({
                "type": "image_url",
                "image_url": {"url": f"{IMAGE_SERVER_URL}/{new_name}"}
            })
        except Exception as e:
            chatbot[-1] = (query, f"[Image upload error] {str(e)}")
            yield chatbot
            return
    else:
        current_msg.append({"type": "text", "text": query})
    messages.append({"role": "user", "content": current_msg})

    payload = {
        "model": "Qwen2.5-VL-7B-Instruct",
        "messages": messages,
        "stream": True
    }
    chatbot[-1] = (query, "")
    yield chatbot
    try:
        response = requests.post(API_URL, json=payload, stream=True, timeout=30)
        response.raise_for_status()  # raise on HTTP errors
        partial = ""
        for line in response.iter_lines():
            if line:
                try:
                    data = line.decode("utf-8")
                    if data.startswith("data:"):
                        data = data[5:].strip()
                    if data == "[DONE]":
                        break
                    json_data = json.loads(data)
                    delta = json_data["choices"][0]["delta"].get("content", "")
                    partial += delta
                    chatbot[-1] = (query, partial)
                    yield chatbot
                except Exception as parse_err:
                    print("Parse error:", parse_err)
                    chatbot[-1] = (query, f"[Parse error] {str(parse_err)}")
                    yield chatbot
                    return
    except requests.exceptions.RequestException as e:
        chatbot[-1] = (query, f"[Request error] {str(e)}")
        yield chatbot
    except Exception as e:
        chatbot[-1] = (query, f"[Error] {str(e)}")
        yield chatbot

def add_text(chatbot, task_history, text):
    chatbot = chatbot if chatbot else []
    task_history = task_history if task_history else []
    chatbot.append((text, None))
    task_history.append((text, None))
    return chatbot, task_history

def add_file(chatbot, history, file):
    chatbot = chatbot + [((file.name,), None)]
    history = history + [((file.name,), None)]
    return chatbot, history

def reset_user_input():
    return gr.update(value="")

def reset_state(chatbot, history):
    return [], []

def launch():
    with gr.Blocks() as demo:
        gr.Markdown("""<h2 align="center">Image & Text Understanding QA Tool</h2>""")
        chatbot = gr.Chatbot(label="Qwen2.5-VL", height=500, type='tuples')
        task_history = gr.State([])
        query = gr.Textbox(placeholder="Enter text...", lines=2, label="Text")
        with gr.Row():
            upload = gr.UploadButton("📁 Upload image", file_types=["image"])
            submit = gr.Button("🚀 Send")
            clear = gr.Button("🧹 Clear chat")
        submit.click(
            add_text,
            [chatbot, task_history, query],
            [chatbot, task_history]
        ).then(
            predict_stream,
            [chatbot, task_history],
            [chatbot],  # ⚠️ do not pass stream=True here
            show_progress=True
        ).then(
            reset_user_input,
            [],
            [query]
        )
        upload.upload(
            add_file,
            [chatbot, task_history, upload],
            [chatbot, task_history]
        )
        clear.click(
            reset_state,
            [chatbot, task_history],
            [chatbot, task_history],
        )
    demo.queue().launch(server_name="0.0.0.0", server_port=7860)

if __name__ == "__main__":
    launch()
```
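The SSE parsing buried inside `predict_stream` can be factored into a standalone helper, which makes it easy to test without a live server. This is a sketch of the same logic (the function name `parse_sse_line` is my own):

```python
import json

def parse_sse_line(raw: bytes):
    """Extract the content delta from one SSE line; returns None for blanks and [DONE]."""
    data = raw.decode("utf-8")
    if data.startswith("data:"):
        data = data[5:].strip()
    if not data or data == "[DONE]":
        return None
    chunk = json.loads(data)
    # A delta chunk may carry only a role change, so content defaults to ""
    return chunk["choices"][0]["delta"].get("content", "")
```

In `predict_stream` the loop would then reduce to accumulating `parse_sse_line(line)` results until it returns `None`.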