
Resumable Large-File Uploads: A Complete Implementation with Vue 2 and Spring Boot

Uploading large files is a common but challenging requirement in modern web applications. Traditional upload approaches perform poorly when the network is unstable or the files are large. This article walks through the implementation of a resumable large-file upload feature, combining a Vue 2 front end with a Spring Boot back end.

I. Problem Background and Challenges

Real-world projects frequently need to upload large files such as videos, design drawings, or database backups. These files can reach several gigabytes or more, and a traditional form upload runs into the following problems:

  1. Network instability: a dropped connection mid-upload throws away all progress
  2. Server load: a large upload occupies server resources for a long time
  3. User experience: the user cannot pause or resume, and progress is opaque
  4. Duplicate uploads: uploading the same file repeatedly wastes bandwidth and storage

II. Solution Overview

Our resumable upload solution is built on the following core techniques:

  1. File chunking: split the large file into fixed-size chunks (e.g. 2MB)
  2. Unique identification: use the file's MD5 hash as its unique ID (a sketch follows this list)
  3. Chunked upload: upload only the chunks the server is missing
  4. State tracking: record the uploaded chunk indexes in Redis
  5. Merge on completion: merge the chunks on the server once all of them have arrived
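
To make item 2 concrete, here is a minimal sketch of computing a file's MD5 on the front end, assuming the spark-md5 library; its incremental ArrayBuffer API hashes the file chunk by chunk, so the whole file never has to sit in memory at once. The function name computeFileMD5 is illustrative:

// Compute a file's MD5 incrementally, chunk by chunk (assumes spark-md5 is installed)
import SparkMD5 from 'spark-md5'

function computeFileMD5(file, chunkSize = 2 * 1024 * 1024) {
  return new Promise((resolve, reject) => {
    const spark = new SparkMD5.ArrayBuffer()
    const reader = new FileReader()
    const chunkCount = Math.ceil(file.size / chunkSize)
    let currentChunk = 0

    reader.onload = e => {
      spark.append(e.target.result)  // feed this chunk into the running hash
      currentChunk++
      if (currentChunk < chunkCount) {
        loadNext()
      } else {
        resolve(spark.end())         // end() returns the hex digest of the whole file
      }
    }
    reader.onerror = () => reject(reader.error)

    function loadNext() {
      const start = currentChunk * chunkSize
      reader.readAsArrayBuffer(file.slice(start, Math.min(start + chunkSize, file.size)))
    }
    loadNext()
  })
}

The resulting hex digest is what the front end sends as the md5 parameter in the check, chunk, and merge requests shown later.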

III. System Architecture

Front-End Architecture (Vue 2.6.10)

- File selection component
- MD5 computation module
- Chunk management module
- Upload control module (start/pause/resume/cancel)
- Progress display component

Back-End Architecture (Spring Boot)

- File status check endpoint
- Chunk upload endpoint
- Chunk merge endpoint
- Upload cancellation endpoint
- Redis storage service
- Scheduled cleanup task

IV. Core Implementation

1. Front-End Core Code

// Split the file into fixed-size chunks
createFileChunks() {
  if (!this.file) return
  this.uploadChunks = []
  const chunkCount = Math.ceil(this.file.size / CHUNK_SIZE)
  for (let i = 0; i < chunkCount; i++) {
    const start = i * CHUNK_SIZE
    const end = Math.min(start + CHUNK_SIZE, this.file.size)
    const chunk = this.file.slice(start, end)
    this.uploadChunks.push({
      index: i,
      chunk: chunk,
      uploaded: this.uploadedChunkIndexes.includes(i),
      retries: 0
    })
  }
},

// Upload a single chunk, retrying with exponential backoff
async uploadChunkWithRetry(chunk, maxRetries = 3) {
  try {
    await this.uploadChunk(chunk)
    chunk.uploaded = true
    this.uploadedSize += chunk.chunk.size
  } catch (error) {
    chunk.retries++
    if (chunk.retries <= maxRetries) {
      // back off exponentially between attempts: 1s, 2s, 4s, ...
      await new Promise(resolve => setTimeout(resolve, 1000 * Math.pow(2, chunk.retries - 1)))
      return this.uploadChunkWithRetry(chunk, maxRetries)
    } else {
      throw error
    }
  }
}
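
To show how these methods fit together, here is a minimal sketch of the overall flow against the back-end endpoints defined in the next section, using the browser's Fetch API. The startUpload name, the this.paused flag, and this.fileMD5 are illustrative assumptions rather than code from the component above:

// Illustrative upload flow: check server state, upload missing chunks, then merge.
// Assumes this.fileMD5 was computed beforehand (e.g. with spark-md5).
async startUpload() {
  // 1. Ask the server which chunks it already has
  const params = new URLSearchParams({
    md5: this.fileMD5, fileName: this.file.name, size: this.file.size
  })
  const check = await (await fetch(`/api/upload/check?${params}`)).json()
  if (check.uploaded) return   // identical file already on the server: instant success

  this.uploadedChunkIndexes = check.uploadedChunks || []
  this.createFileChunks()

  // 2. Upload only the chunks the server is missing; a pause button can set this.paused
  for (const chunk of this.uploadChunks.filter(c => !c.uploaded)) {
    if (this.paused) return
    await this.uploadChunkWithRetry(chunk)
  }

  // 3. Every chunk has arrived: ask the server to merge them
  await fetch('/api/upload/merge', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      md5: this.fileMD5,
      fileName: this.file.name,
      totalChunks: this.uploadChunks.length,
      fileSize: this.file.size
    })
  })
},

// A single chunk is sent as multipart/form-data
async uploadChunk(chunk) {
  const form = new FormData()
  form.append('file', chunk.chunk)
  form.append('chunkIndex', chunk.index)
  form.append('totalChunks', this.uploadChunks.length)
  form.append('md5', this.fileMD5)
  form.append('fileName', this.file.name)
  form.append('fileSize', this.file.size)
  const res = await fetch('/api/upload/chunk', { method: 'POST', body: form })
  if (!res.ok) throw new Error(`chunk ${chunk.index} failed`)
}

Chunks are uploaded sequentially here for clarity; section V.2 shows how to run several uploads concurrently.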

2. Back-End Core Code

2.1 Redis Configuration Class
@Configuration
@EnableCaching
public class RedisConfig {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        // Serialize/deserialize Redis values as JSON via Jackson
        Jackson2JsonRedisSerializer<Object> serializer = new Jackson2JsonRedisSerializer<>(Object.class);
        ObjectMapper mapper = new ObjectMapper();
        mapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        mapper.activateDefaultTyping(mapper.getPolymorphicTypeValidator(), ObjectMapper.DefaultTyping.NON_FINAL);
        serializer.setObjectMapper(mapper);
        template.setValueSerializer(serializer);
        template.setKeySerializer(new StringRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(serializer);
        template.afterPropertiesSet();
        return template;
    }

    @Bean
    public CacheManager cacheManager(RedisConnectionFactory factory) {
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofHours(2))   // cache entries expire after 2 hours
                .disableCachingNullValues();
        return RedisCacheManager.builder(factory).cacheDefaults(config).build();
    }
}
2.2 Request/Response Classes
@Data
public class CancelUploadRequest {
    private String md5;
}

@Data
public class CancelUploadResponse {
    private boolean success;
    private String message;
}

@Data
public class CheckFileResponse {
    private boolean uploaded;
    private List<Integer> uploadedChunks;
}

@Data
public class MergeChunksRequest {
    private String md5;
    private String fileName;
    private int totalChunks;
    private long fileSize;
}

@Data
public class MergeChunksResponse {
    private boolean success;
    private String message;
    private String filePath;
}

@Data
public class UploadChunkResponse {
    private boolean success;
    private String message;
}
2.3 Core Service Implementation
@Slf4j
@Service
public class FileUploadService {

    @Value("${file.upload.chunk-dir:/tmp/chunks/}")
    private String chunkDir;

    @Value("${file.upload.final-dir:/tmp/uploads/}")
    private String finalDir;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // Redis key prefixes
    private static final String UPLOAD_CHUNKS_KEY_PREFIX = "upload:chunks:";
    private static final String UPLOAD_INFO_KEY_PREFIX = "upload:info:";

    /**
     * Check the upload status of a file.
     */
    public CheckFileResponse checkFile(String md5, String fileName, long fileSize) {
        CheckFileResponse response = new CheckFileResponse();
        // If the complete file already exists, report it as uploaded (instant success)
        File finalFile = new File(finalDir + md5 + "/" + fileName);
        if (finalFile.exists() && finalFile.length() == fileSize) {
            response.setUploaded(true);
            return response;
        }
        // Read the set of already-uploaded chunk indexes from Redis
        Set<Object> uploadedChunks = redisTemplate.opsForSet().members(UPLOAD_CHUNKS_KEY_PREFIX + md5);
        if (uploadedChunks != null && !uploadedChunks.isEmpty()) {
            List<Integer> chunks = uploadedChunks.stream()
                    .map(obj -> Integer.parseInt(obj.toString()))
                    .sorted()
                    .collect(Collectors.toList());
            response.setUploadedChunks(chunks);
        } else {
            // No Redis record: fall back to scanning the chunk files on disk
            File chunkFolder = new File(chunkDir + md5);
            if (chunkFolder.exists() && chunkFolder.isDirectory()) {
                List<Integer> uploadedChunksList = Arrays.stream(chunkFolder.listFiles())
                        .filter(File::isFile)
                        .map(f -> {
                            try {
                                return Integer.parseInt(f.getName());
                            } catch (NumberFormatException e) {
                                return -1;
                            }
                        })
                        .filter(i -> i >= 0)
                        .sorted()
                        .collect(Collectors.toList());
                response.setUploadedChunks(uploadedChunksList);
                // Rebuild the Redis record from the disk state
                if (!uploadedChunksList.isEmpty()) {
                    String[] chunksArray = uploadedChunksList.stream()
                            .map(String::valueOf)
                            .toArray(String[]::new);
                    redisTemplate.opsForSet().add(UPLOAD_CHUNKS_KEY_PREFIX + md5, chunksArray);
                    // Expire the record after 24 hours
                    redisTemplate.expire(UPLOAD_CHUNKS_KEY_PREFIX + md5, Duration.ofHours(24));
                }
            }
        }
        // Save the file metadata to Redis
        if (fileSize > 0) {
            Map<String, Object> fileInfo = new HashMap<>();
            fileInfo.put("fileName", fileName);
            fileInfo.put("fileSize", fileSize);
            fileInfo.put("totalChunks", (int) Math.ceil((double) fileSize / (2 * 1024 * 1024)));
            fileInfo.put("lastUpdate", System.currentTimeMillis());
            redisTemplate.opsForHash().putAll(UPLOAD_INFO_KEY_PREFIX + md5, fileInfo);
            redisTemplate.expire(UPLOAD_INFO_KEY_PREFIX + md5, Duration.ofHours(24));
        }
        return response;
    }

    /**
     * Save one uploaded chunk.
     */
    public UploadChunkResponse uploadChunk(MultipartFile file, int chunkIndex,
                                           int totalChunks, String md5, String fileName, long fileSize) {
        UploadChunkResponse response = new UploadChunkResponse();
        try {
            // Create this file's chunk directory
            File chunkFolder = new File(chunkDir + md5);
            if (!chunkFolder.exists()) {
                chunkFolder.mkdirs();
            }
            // Persist the chunk, named by its index
            File chunkFile = new File(chunkFolder, String.valueOf(chunkIndex));
            file.transferTo(chunkFile);
            // Record the chunk index in Redis and refresh the TTL
            redisTemplate.opsForSet().add(UPLOAD_CHUNKS_KEY_PREFIX + md5, String.valueOf(chunkIndex));
            redisTemplate.expire(UPLOAD_CHUNKS_KEY_PREFIX + md5, Duration.ofHours(24));
            // Update the file metadata
            if (fileSize > 0) {
                Map<String, Object> fileInfo = new HashMap<>();
                fileInfo.put("fileName", fileName);
                fileInfo.put("fileSize", fileSize);
                fileInfo.put("totalChunks", totalChunks);
                fileInfo.put("lastUpdate", System.currentTimeMillis());
                redisTemplate.opsForHash().putAll(UPLOAD_INFO_KEY_PREFIX + md5, fileInfo);
                redisTemplate.expire(UPLOAD_INFO_KEY_PREFIX + md5, Duration.ofHours(24));
            }
            response.setSuccess(true);
            response.setMessage("Chunk uploaded successfully");
        } catch (IOException e) {
            response.setSuccess(false);
            response.setMessage("Chunk upload failed: " + e.getMessage());
        }
        return response;
    }

    /**
     * Merge all chunks into the final file.
     */
    public MergeChunksResponse mergeChunks(String md5, String fileName, int totalChunks, long fileSize) {
        MergeChunksResponse response = new MergeChunksResponse();
        try {
            File chunkFolder = new File(chunkDir + md5);
            if (!chunkFolder.exists()) {
                response.setSuccess(false);
                response.setMessage("Chunk folder does not exist");
                return response;
            }
            // Verify that all chunks are present
            File[] chunkFiles = chunkFolder.listFiles();
            if (chunkFiles == null || chunkFiles.length < totalChunks) {
                response.setSuccess(false);
                response.setMessage("Chunks are incomplete; cannot merge");
                return response;
            }
            // Create the final file
            File finalFile = new File(finalDir + md5 + "/" + fileName);
            File finalDirFile = finalFile.getParentFile();
            if (!finalDirFile.exists()) {
                finalDirFile.mkdirs();
            }
            // Append every chunk in index order
            try (RandomAccessFile randomAccessFile = new RandomAccessFile(finalFile, "rw")) {
                byte[] buffer = new byte[1024 * 1024]; // 1MB copy buffer
                int bytesRead;
                for (int i = 0; i < totalChunks; i++) {
                    File chunkFile = new File(chunkFolder, String.valueOf(i));
                    if (!chunkFile.exists()) {
                        response.setSuccess(false);
                        response.setMessage("Chunk " + i + " does not exist");
                        return response;
                    }
                    try (FileInputStream fis = new FileInputStream(chunkFile)) {
                        while ((bytesRead = fis.read(buffer)) != -1) {
                            randomAccessFile.write(buffer, 0, bytesRead);
                        }
                    }
                }
            }
            // Validate the merged size
            if (finalFile.length() != fileSize) {
                response.setSuccess(false);
                response.setMessage("Merged file size does not match");
                finalFile.delete();
                return response;
            }
            // Delete the temporary chunks and the Redis records
            deleteFolder(chunkFolder);
            redisTemplate.delete(UPLOAD_CHUNKS_KEY_PREFIX + md5);
            redisTemplate.delete(UPLOAD_INFO_KEY_PREFIX + md5);
            response.setSuccess(true);
            response.setMessage("File merged successfully");
            response.setFilePath(finalFile.getAbsolutePath());
        } catch (IOException e) {
            response.setSuccess(false);
            response.setMessage("File merge failed: " + e.getMessage());
        }
        return response;
    }

    /**
     * Cancel an in-progress upload.
     */
    public CancelUploadResponse cancelUpload(String md5) {
        CancelUploadResponse response = new CancelUploadResponse();
        try {
            // Remove the Redis records
            redisTemplate.delete(UPLOAD_CHUNKS_KEY_PREFIX + md5);
            redisTemplate.delete(UPLOAD_INFO_KEY_PREFIX + md5);
            // Delete the chunk files
            File chunkFolder = new File(chunkDir + md5);
            if (chunkFolder.exists()) {
                deleteFolder(chunkFolder);
            }
            response.setSuccess(true);
            response.setMessage("Upload cancelled");
        } catch (Exception e) {
            response.setSuccess(false);
            response.setMessage("Failed to cancel upload: " + e.getMessage());
        }
        return response;
    }

    /**
     * Clean up expired upload tasks.
     * Note: @Scheduled requires @EnableScheduling on a configuration class.
     */
    @Scheduled(cron = "0 0 2 * * ?") // runs every day at 2am
    public void cleanupExpiredUploads() {
        try {
            // Find uploads not updated in the last 24 hours
            long twentyFourHoursAgo = System.currentTimeMillis() - (24 * 60 * 60 * 1000);
            // KEYS blocks Redis and is O(N); prefer SCAN in production
            Set<String> keys = redisTemplate.keys(UPLOAD_INFO_KEY_PREFIX + "*");
            if (keys != null) {
                for (String key : keys) {
                    Long lastUpdate = (Long) redisTemplate.opsForHash().get(key, "lastUpdate");
                    if (lastUpdate != null && lastUpdate < twentyFourHoursAgo) {
                        String md5 = key.substring(UPLOAD_INFO_KEY_PREFIX.length());
                        // Remove the Redis records
                        redisTemplate.delete(key);
                        redisTemplate.delete(UPLOAD_CHUNKS_KEY_PREFIX + md5);
                        // Remove the chunk files
                        File chunkFolder = new File(chunkDir + md5);
                        if (chunkFolder.exists()) {
                            deleteFolder(chunkFolder);
                        }
                    }
                }
            }
        } catch (Exception e) {
            log.error("Failed to clean up expired uploads", e);
        }
    }

    /**
     * Recursively delete a folder.
     */
    private void deleteFolder(File folder) {
        if (folder.isDirectory()) {
            File[] files = folder.listFiles();
            if (files != null) {
                for (File file : files) {
                    deleteFolder(file);
                }
            }
        }
        folder.delete();
    }
}
2.4 REST Controller
@RestController
@RequestMapping("/api/upload")
public class FileUploadController {

    @Autowired
    private FileUploadService fileUploadService;

    @GetMapping("/check")
    public ResponseEntity<CheckFileResponse> checkFile(@RequestParam String md5,
                                                       @RequestParam String fileName,
                                                       @RequestParam(required = false) Long size) {
        CheckFileResponse response = fileUploadService.checkFile(md5, fileName, size != null ? size : 0);
        return ResponseEntity.ok(response);
    }

    @PostMapping("/chunk")
    public ResponseEntity<UploadChunkResponse> uploadChunk(@RequestParam("file") MultipartFile file,
                                                           @RequestParam int chunkIndex,
                                                           @RequestParam int totalChunks,
                                                           @RequestParam String md5,
                                                           @RequestParam String fileName,
                                                           @RequestParam(required = false) Long fileSize) {
        UploadChunkResponse response = fileUploadService.uploadChunk(
                file, chunkIndex, totalChunks, md5, fileName, fileSize != null ? fileSize : 0);
        return ResponseEntity.ok(response);
    }

    @PostMapping("/merge")
    public ResponseEntity<MergeChunksResponse> mergeChunks(@RequestBody MergeChunksRequest request) {
        MergeChunksResponse response = fileUploadService.mergeChunks(
                request.getMd5(), request.getFileName(), request.getTotalChunks(), request.getFileSize());
        return ResponseEntity.ok(response);
    }

    @PostMapping("/cancel")
    public ResponseEntity<CancelUploadResponse> cancelUpload(@RequestBody CancelUploadRequest request) {
        CancelUploadResponse response = fileUploadService.cancelUpload(request.getMd5());
        return ResponseEntity.ok(response);
    }
}
2.5 Configuration Recommendations

Add the following configuration to application.yml:

file:
  upload:
    chunk-dir: /tmp/chunks/
    final-dir: /tmp/uploads/

spring:
  servlet:
    multipart:
      max-file-size: 10GB
      max-request-size: 10GB

Because each request carries only a single chunk (about 2MB here), the multipart limits act as a generous safety ceiling; they just need to comfortably exceed the chunk size.

V. Performance Optimization Strategies

1. Redis Optimizations

  • Store chunk indexes in a Set (e.g. upload:chunks:<md5> holding {0, 1, 2, 5}) for fast lookup of which chunks have been uploaded
  • Set a 24-hour expiry so unfinished uploads are cleaned up automatically
  • Store file metadata in a Hash (e.g. upload:info:<md5> with fileName, fileSize, totalChunks, lastUpdate)

2. Concurrent Upload Control

// Cap concurrency so the browser doesn't exhaust connections and memory
const MAX_CONCURRENT_UPLOADS = 3
const activeUploads = []
const chunksToUpload = this.uploadChunks.filter(c => !c.uploaded)

for (const chunk of chunksToUpload) {
  // Pool is full: wait for one in-flight upload to settle
  if (activeUploads.length >= MAX_CONCURRENT_UPLOADS) {
    await Promise.race(activeUploads)
  }
  const uploadPromise = this.uploadChunkWithRetry(chunk)
    // Remove the settled promise from the pool, so Promise.race() always races live uploads
    .finally(() => activeUploads.splice(activeUploads.indexOf(uploadPromise), 1))
  activeUploads.push(uploadPromise)
}
// Wait for the last in-flight uploads to finish
await Promise.all(activeUploads)

3. Smart Retry Mechanism

  • Exponential backoff between retries
  • A cap on the maximum number of retries
  • Automatic retry on network errors (a generalized helper is sketched after this list)
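
As a generalization of the uploadChunkWithRetry method above, here is a minimal sketch of a reusable helper implementing these three points; the name withRetry, its defaults, and the jitter term are illustrative assumptions:

// Retry an async action with exponential backoff plus random jitter (hypothetical helper)
async function withRetry(action, { maxRetries = 3, baseDelayMs = 1000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await action()
    } catch (error) {
      if (attempt >= maxRetries) throw error   // give up after the last allowed retry
      // 1s, 2s, 4s, ... plus up to 250ms of jitter to avoid thundering-herd retries
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 250
      await new Promise(resolve => setTimeout(resolve, delay))
    }
  }
}

// Usage: await withRetry(() => this.uploadChunk(chunk))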

VI. Practical Recommendations

  1. Chunk size: tune it to the network environment and file size; 2-5MB is usually a good range
  2. MD5 computation: for very large files, hash in a Web Worker to keep the UI responsive (a sketch follows this list)
  3. Memory management: release Blob references for chunks that have finished uploading
  4. Security: add file-type validation, size limits, and access control
  5. Monitoring and logging: record success rates, durations, and other metrics to guide optimization
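
For item 2, here is a minimal sketch of moving the MD5 computation off the main thread with a Web Worker, again assuming spark-md5; the worker file name md5.worker.js and the script path are illustrative:

// md5.worker.js — runs off the main thread so the UI stays responsive
importScripts('spark-md5.min.js')   // assumed path to the spark-md5 build

self.onmessage = async e => {
  const { file, chunkSize } = e.data
  const spark = new SparkMD5.ArrayBuffer()
  for (let start = 0; start < file.size; start += chunkSize) {
    // Blob.arrayBuffer() is available inside workers in modern browsers
    const buffer = await file.slice(start, start + chunkSize).arrayBuffer()
    spark.append(buffer)
    self.postMessage({ progress: Math.min(1, (start + chunkSize) / file.size) })
  }
  self.postMessage({ md5: spark.end() })
}

On the main thread, the File object can be handed to the worker directly, since Files are structured-cloneable:

const worker = new Worker('md5.worker.js')
worker.postMessage({ file, chunkSize: 2 * 1024 * 1024 })
worker.onmessage = e => {
  if (e.data.md5) console.log('file MD5:', e.data.md5)
}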

VII. Summary

This article presented a complete resumable-upload solution for large files, combining a Vue 2 front end with a Spring Boot back end. Through file chunking, Redis-based state tracking, and a smart retry mechanism, it addresses the main pain points of large-file uploads. The solution has the following characteristics:

  1. High reliability: resumable uploads and retries keep the success rate high
  2. Strong performance: concurrent uploads and Redis caching improve throughput
  3. Good user experience: real-time progress feedback and upload controls
  4. Easy to extend: the modular design makes customization straightforward

The solution has been validated in real projects and reliably handles gigabyte-scale uploads, offering a solid technical reference for similar requirements. Developers can adjust its parameters and features to fit their own business scenarios and deliver the best upload experience.
