
An error when deploying a PyTorch model with Flask on an Alibaba Cloud Ubuntu server

Environment: Alibaba Cloud Ubuntu server, Flask, PyTorch (CPU-only build)

[2025-06-11 18:26:48,428] ERROR in app: Exception on /upload [POST]
Traceback (most recent call last):
  File "/home/myenv/lib/python3.10/site-packages/flask/app.py", line 1511, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/myenv/lib/python3.10/site-packages/flask/app.py", line 919, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/myenv/lib/python3.10/site-packages/flask/app.py", line 917, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/myenv/lib/python3.10/site-packages/flask/app.py", line 902, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/home/myenv/flasktest/tonguetest.py", line 45, in upload_file
    model.load_state_dict(torch.load(path,map_location='cpu'))  # load the model
  File "/home/myenv/lib/python3.10/site-packages/torch/serialization.py", line 1516, in load
    return _load(
  File "/home/myenv/lib/python3.10/site-packages/torch/serialization.py", line 2114, in _load
    result = unpickler.load()
  File "/home/myenv/lib/python3.10/site-packages/torch/_weights_only_unpickler.py", line 532, in load
    self.append(self.persistent_load(pid))
  File "/home/myenv/lib/python3.10/site-packages/torch/serialization.py", line 2078, in persistent_load
    typed_storage = load_tensor(
  File "/home/myenv/lib/python3.10/site-packages/torch/serialization.py", line 2044, in load_tensor
    wrap_storage = restore_location(storage, location)
  File "/home/myenv/lib/python3.10/site-packages/torch/serialization.py", line 698, in default_restore_location
    result = fn(storage, location)
  File "/home/myenv/lib/python3.10/site-packages/torch/serialization.py", line 636, in _deserialize
    device = _validate_device(location, backend_name)
  File "/home/myenv/lib/python3.10/site-packages/torch/serialization.py", line 605, in _validate_device
    raise RuntimeError(
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Cause

  1. The model was trained and saved on a GPU: training used device='cuda', so the saved checkpoint contains CUDA tensors.

  2. The current environment has no GPU: the model is being loaded on a CPU-only machine, but no map_location argument was passed to remap the storages.


Final fix

When saving the model, move it to the CPU first, which avoids any loading problems later:

# Save only the state_dict (tensors keep whatever device they are currently on)
torch.save(model.state_dict(), "your_model.pth")
# Or move the model to the CPU explicitly first, so the checkpoint
# contains only CPU storages (most portable)
torch.save(model.cpu().state_dict(), "your_model.pth")

After re-saving the weights from the CPU, the service runs without any further changes!
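The save-side fix can be checked end to end. A minimal sketch, using a stand-in nn.Linear in place of the real network (the model and file name here are illustrative, not from the original project):

```python
import torch
import torch.nn as nn

# Stand-in for the real network; any nn.Module behaves the same way.
model = nn.Linear(4, 2)

# Move to CPU before saving so the checkpoint contains only CPU storages.
torch.save(model.cpu().state_dict(), "your_model.pth")

# This load now succeeds on any machine; map_location='cpu' stays as a safeguard.
state = torch.load("your_model.pth", map_location='cpu')
model.load_state_dict(state)
print(all(t.device.type == "cpu" for t in state.values()))  # True
```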

Complete code for loading the model and serving predictions

from flask import Flask, request, redirect, render_template
import os
from werkzeug.utils import secure_filename
from torchvision import transforms
from PIL import Image
import torch
import torch.nn.functional as F
from torch.nn import Conv2d, MaxPool2d, Linear, Flatten
from torch import nn

# Class order must match the training folder order:
# classifidata/disease first, then classifidata/norm.
classes = ['disease', 'norm']
d = ["Your spleen/stomach and cervical spine may have problems; your heart is under high load, please rest",
     "Your body looks basically normal; keep it up and remember to rest"]

app = Flask(__name__)

# Upload folder configuration
UPLOAD_FOLDER = 'uploads/'
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
os.makedirs(UPLOAD_FOLDER, exist_ok=True)  # make sure the folder exists


def allowed_file(filename):
    return '.' in filename and \
        filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS


@app.route('/')
def index():
    return render_template('upload.html')  # render the upload page


@app.route('/upload', methods=['POST'])
def upload_file():
    if 'image' not in request.files:
        return redirect(request.url)
    file = request.files['image']
    if file.filename == '':
        return redirect(request.url)
    if file and allowed_file(file.filename):
        filename = secure_filename(file.filename)
        file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
        # Predict on the uploaded image
        model = Model()
        current_directory = os.path.dirname(os.path.abspath(__file__))
        path = os.path.join(current_directory, "mymodel.pth")
        model.load_state_dict(torch.load(path, map_location='cpu'))  # load the weights
        model.eval()  # switch the model to inference mode
        # convert('RGB') guards against grayscale/RGBA/GIF uploads,
        # which would otherwise break the 3-channel Normalize below
        img = Image.open(os.path.join(app.config['UPLOAD_FOLDER'], filename)).convert('RGB')
        trans = transforms.Compose([
            transforms.ToTensor(),
            transforms.Resize((32, 32)),
            transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
        ])
        img = trans(img)
        # The model expects a 4-D batch [batch_size, channels, height, width],
        # while a single image tensor is 3-D [channels, height, width],
        # so add a batch dimension -> [1, 3, 32, 32].
        img = img.unsqueeze(0)
        output = model(img)
        prob = F.softmax(output, dim=1)  # probabilities over the 2 classes
        print(prob)
        # dim=1: take the max over classes for each row, returning
        # the max value and its index
        value, predicted = torch.max(output.data, 1)
        pred_class = classes[predicted.item()]
        print(pred_class)
        dd = d[predicted.item()]
        return "Our final prediction is: %s" % dd
    return 'Invalid file type.'


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # Attribute names (covn*, liear2) are kept exactly as in training
        # so the saved state_dict keys still match.
        self.covn1 = Conv2d(in_channels=3, out_channels=32, kernel_size=5, padding=2)
        self.maxpool1 = MaxPool2d(kernel_size=2)
        self.covn2 = Conv2d(in_channels=32, out_channels=32, kernel_size=5, padding=2)
        self.maxpool2 = MaxPool2d(kernel_size=2)
        self.covn3 = Conv2d(in_channels=32, out_channels=64, kernel_size=5, padding=2)
        self.maxpool3 = MaxPool2d(kernel_size=2)
        self.flatten = Flatten()
        self.linear1 = Linear(in_features=1024, out_features=64)
        self.liear2 = Linear(in_features=64, out_features=2)

    def forward(self, x):
        x = self.covn1(x)
        x = self.maxpool1(x)
        x = self.covn2(x)
        x = self.maxpool2(x)
        x = self.covn3(x)
        x = self.maxpool3(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.liear2(x)
        return x


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
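One further improvement worth noting: the handler above reloads mymodel.pth on every request. Loading once at import time avoids that cost. A minimal sketch with a hypothetical stand-in module (TinyModel and mymodel_demo.pth are illustrative names, not from the original code):

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """Hypothetical stand-in for the Model class above."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# Simulate an existing checkpoint on disk.
torch.save(TinyModel().state_dict(), "mymodel_demo.pth")

# Load once at module import; every request then reuses the same weights.
model = TinyModel()
model.load_state_dict(torch.load("mymodel_demo.pth", map_location='cpu'))
model.eval()  # inference mode: disables dropout/batch-norm updates
print(model.training)  # False
```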

Note which of the several loading modes below you choose:

Fix at load time

When loading the model, specify the map_location argument explicitly to force the weights onto the CPU:

Method 1: map directly to the CPU

state_dict = torch.load("your_model.pth", map_location=torch.device('cpu'))  # then model.load_state_dict(state_dict)

Method 2: pick the device automatically (GPU if available, otherwise CPU)

state_dict = torch.load("your_model.pth", map_location=torch.device('cuda' if torch.cuda.is_available() else 'cpu'))

The identity mapping map_location=lambda storage, loc: storage also loads fine on a CPU-only machine, but note that it simply leaves every storage on the CPU where it was deserialized; it does not move anything onto the GPU.

Or simply:

state_dict = torch.load("your_model.pth", map_location='cpu')  # force everything onto the CPU
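All of these map_location forms can be exercised without a GPU. A sketch using a throwaway checkpoint file (demo_model.pth is illustrative):

```python
import torch

# Write a tiny checkpoint so the load calls below have something to read.
torch.save({"w": torch.zeros(3)}, "demo_model.pth")

# On a CPU-only machine, each of these forms yields CPU tensors:
a = torch.load("demo_model.pth", map_location=torch.device('cpu'))
b = torch.load("demo_model.pth", map_location='cpu')
c = torch.load("demo_model.pth", map_location=lambda storage, loc: storage)

print(a["w"].device, b["w"].device, c["w"].device)  # cpu cpu cpu
```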

 

Check whether CUDA is available

Run the following to check whether the environment supports CUDA:

import torch
print(torch.cuda.is_available())  # False means no usable GPU
print(torch.__version__)          # check the PyTorch version

 

Note the common scenarios:

  1. Colab/Jupyter Notebook: if you train in Colab, download the model, and then load it on a local CPU-only machine, you need map_location='cpu'.

  2. Train on a server → infer locally: after training on a GPU server, you must handle this when deploying to a production environment without a GPU.
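Both scenarios reduce to the same pattern: decide the device at load time, not at save time. A small helper sketch (load_checkpoint is a hypothetical name, not a PyTorch API):

```python
import torch

def load_checkpoint(path):
    """Load a state_dict onto whatever device this machine has (hypothetical helper)."""
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    return torch.load(path, map_location=device)

# Works identically on a GPU training server and a CPU-only deployment box.
torch.save({"w": torch.ones(2)}, "ckpt_demo.pth")
state = load_checkpoint("ckpt_demo.pth")
print(state["w"].device.type)  # 'cuda' on a GPU machine, 'cpu' otherwise
```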

 

