
AI Study Notes 33: Single-Object Tracking with OpenCV

This is an original article; please credit the original source when reposting.

I. Features

The goal is to track one specific target through a video.

OpenCV is used to track the chosen target across the video. You provide the path to a video file and the target's position in the first frame as (x, y, width, height); the program then follows that target automatically through the rest of the video.

II. Environment Setup

pip install opencv-contrib-python==3.4.13.47 -i https://pypi.tuna.tsinghua.edu.cn/simple

Other versions have not been tested here; try them yourself if needed.
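As a quick sanity check (a minimal sketch, assuming the contrib wheel above is installed), you can confirm that the tracker constructors this script relies on are actually present in your build:

import cv2

print(cv2.__version__)  # expect something like 3.4.13
# The script below assumes these contrib tracker constructors exist in this build.
for name in ("TrackerCSRT_create", "TrackerKCF_create", "TrackerMOSSE_create"):
    print(name, "available" if hasattr(cv2, name) else "MISSING")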

III. How It Works

1. Initialization

  • The program reads the first frame of the video to get the image dimensions
  • The target region (ROI, Region of Interest) is initialized from the user-supplied coordinates
  • The initial target region is saved as a template for later re-detection
  • A tracker is created and initialized, trying CSRT first, then KCF, then MOSSE (see the sketch after this list)
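A minimal sketch of that fallback order, assuming a build that still exposes the 3.4-style constructors (cv2.TrackerCSRT_create and friends); the helper name create_tracker is hypothetical:

import cv2

def create_tracker(frame, roi):
    """Try CSRT, then KCF, then MOSSE; return the first tracker that initializes."""
    candidates = [("CSRT", "TrackerCSRT_create"),
                  ("KCF", "TrackerKCF_create"),
                  ("MOSSE", "TrackerMOSSE_create")]
    for name, ctor in candidates:
        creator = getattr(cv2, ctor, None)  # constructor may be absent in some builds
        if creator is None:
            continue
        tracker = creator()
        if tracker.init(frame, roi):        # init() returns a bool in the 3.4.x API
            return name, tracker
    return None, None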

2. Main Tracking Loop

  • Read the video frame by frame
  • Update the target position with the tracker (a stripped-down sketch of this loop follows the list)
  • If tracking succeeds:
    • Draw the target's bounding box on the current frame
    • Update the position history
    • Periodically update the template library (used for re-detection)
    • Reset the failure counter
  • If tracking fails:
    • Start the re-detection mechanism
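The core of that loop is just one tracker.update() call per frame; a stripped-down sketch with drawing kept and re-detection omitted (the helper name track is hypothetical):

import cv2

def track(cap, tracker):
    failures = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        success, bbox = tracker.update(frame)   # (bool, (x, y, w, h))
        if success:
            failures = 0
            x, y, w, h = map(int, bbox)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        else:
            failures += 1                       # the full script triggers re-detection here
        cv2.imshow("Object Tracking", frame)
        if cv2.waitKey(30) & 0xFF == 27:        # ESC quits
            break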

3. Re-detection Mechanism

When tracking fails, the program uses several strategies to find the target again (a sketch of the expanding search follows the lists below):

Multi-level search strategy:
  1. Local search: search a region around the predicted position
  2. Expanded search: gradually widen the search region
  3. Full-image search: search the entire frame
Multiple detection methods:
  1. Template matching: match the saved templates against the current frame
  2. ORB feature matching: detect and match ORB features
  3. Full-image feature matching: run feature matching over the whole frame
Dynamic adjustment:
  • Scale the search region with the number of consecutive failures
  • Predict the target's likely position from its position history
  • Rotate through the different detection methods
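A hedged sketch of the expanding-search idea: the window around the last known position grows with each failed attempt and is matched against the saved templates. The helper name redetect_by_template and the exact constants are illustrative, mirroring the spirit of the full script rather than its exact values:

import cv2

def redetect_by_template(frame, last_box, templates, attempt):
    """Search an expanding window around the last known box via template matching."""
    H, W = frame.shape[:2]
    x, y, w, h = last_box
    margin = min(1000, 300 + 50 * attempt)          # grow the window with each failure
    x1, y1 = max(0, x + w // 2 - margin), max(0, y + h // 2 - margin)
    x2, y2 = min(W, x + w // 2 + margin), min(H, y + h // 2 + margin)
    window = frame[y1:y2, x1:x2]
    best = (0.0, None)
    for tpl in templates:
        if tpl.shape[0] > window.shape[0] or tpl.shape[1] > window.shape[1]:
            continue
        res = cv2.matchTemplate(window, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best[0]:
            best = (score, (x1 + loc[0], y1 + loc[1], tpl.shape[1], tpl.shape[0]))
    return best  # (confidence, candidate box or None)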

4. Template Management

The program maintains a template library (see the sketch after this list):

  • Several target snapshots taken at different moments are kept as templates
  • Templates are updated periodically to adapt to changes in the target's appearance
  • The number of templates is capped to limit memory use
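A minimal sketch of such a bounded template library; it uses collections.deque with maxlen instead of the manual pop(0) in the full script, and the class name TemplateLibrary is hypothetical:

from collections import deque
import cv2

class TemplateLibrary:
    def __init__(self, max_templates=5, min_new_similarity=0.7):
        self.templates = deque(maxlen=max_templates)   # oldest template drops out automatically
        self.min_new_similarity = min_new_similarity

    def maybe_add(self, patch):
        """Add the patch only if it differs enough from the stored templates."""
        if patch.size == 0 or patch.shape[0] <= 5 or patch.shape[1] <= 5:
            return False
        for tpl in self.templates:
            h1 = cv2.calcHist([patch], [0], None, [256], [0, 256])
            h2 = cv2.calcHist([tpl], [0], None, [256], [0, 256])
            cv2.normalize(h1, h1)
            cv2.normalize(h2, h2)
            if cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL) > self.min_new_similarity:
                return False   # too similar to an existing template
        self.templates.append(patch.copy())
        return True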

5. Motion Prediction

  • The target's past positions are recorded
  • An average motion vector is computed to predict the next position (see the sketch after this list)
  • The search region is re-centered on the predicted position
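A sketch of that prediction step, assuming history holds the most recent (x, y, w, h) boxes; the helper name predict_center is hypothetical:

import numpy as np

def predict_center(history, last_box, lookahead=5):
    """Extrapolate the target center along the average per-frame motion vector."""
    x, y, w, h = last_box
    if len(history) < 2:
        return (x + w // 2, y + h // 2)
    steps = np.diff(np.array([(b[0], b[1]) for b in history]), axis=0)
    dx, dy = steps.sum(axis=0)                       # accumulated motion over the history
    cx = x + w // 2 + int(dx * lookahead / len(history))
    cy = y + h // 2 + int(dy * lookahead / len(history))
    return (cx, cy)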

6. Failure Handling

  • A maximum number of consecutive failures is enforced
  • Several re-detection strategies are used in rotation (a minimal sketch follows)
  • If re-detection succeeds, the tracker is re-initialized
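The strategy rotation reduces to a counter and a modulo. This is a minimal sketch of the idea, not the exact bookkeeping in the full script (which rotates every 5 attempts among 3 modes); next_mode is a hypothetical helper:

MODES = ("template", "orb", "full_image")

def next_mode(attempt, modes=MODES, rotate_every=5):
    """Rotate the re-detection strategy every few failed attempts."""
    return modes[(attempt // rotate_every) % len(modes)]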

IV. Source Code

import cv2
import sys
import os
import numpy as np
import time
import math


def safe_roi(roi, img_width, img_height):
    """Clamp the ROI so it stays inside the image bounds."""
    x, y, w, h = roi
    x = max(0, x)
    y = max(0, y)
    w = min(w, img_width - x)
    h = min(h, img_height - y)
    w = max(0, w)
    h = max(0, h)
    return (x, y, w, h)


def adaptive_template_match(search_area, templates, scales=[0.4, 0.6, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0]):
    """Adaptive template matching with multiple templates and multiple scales."""
    best_val = -1
    best_loc = None          # initialized so the function never returns an unbound name
    best_scale = 1.0
    best_template_idx = 0
    for scale in scales:
        # Scale the search area
        if scale != 1.0:
            scaled_search = cv2.resize(search_area, None, fx=scale, fy=scale)
        else:
            scaled_search = search_area
        for idx, template in enumerate(templates):
            # The template must be smaller than the search area
            if template.shape[0] > scaled_search.shape[0] or template.shape[1] > scaled_search.shape[1]:
                continue
            try:
                # Template matching
                res = cv2.matchTemplate(scaled_search, template, cv2.TM_CCOEFF_NORMED)
                min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
                # Keep the best match
                if max_val > best_val:
                    best_val = max_val
                    best_loc = max_loc
                    best_scale = scale
                    best_template_idx = idx
            except cv2.error:
                continue
    return best_val, best_loc, best_scale, best_template_idx


def validate_detection(frame, candidate_roi, templates, min_similarity=0.3):
    """Check whether a detection candidate is plausible."""
    x, y, w, h = candidate_roi
    # Reject degenerate ROIs
    if w <= 5 or h <= 5:
        return False
    # Extract the candidate region
    candidate_img = frame[y:y+h, x:x+w]
    if candidate_img.size == 0 or candidate_img.shape[0] == 0 or candidate_img.shape[1] == 0:
        return False
    # Compare against every template
    max_similarity = 0
    for template in templates:
        try:
            # Resize the template to the candidate size
            resized_template = cv2.resize(template, (w, h))
            # Histogram similarity
            hist1 = cv2.calcHist([candidate_img], [0, 1, 2], None, [8, 8, 8], [0, 256, 0, 256, 0, 256])
            hist2 = cv2.calcHist([resized_template], [0, 1, 2], None, [8, 8, 8], [0, 256, 0, 256, 0, 256])
            cv2.normalize(hist1, hist1)
            cv2.normalize(hist2, hist2)
            similarity = cv2.compareHist(hist1, hist2, cv2.HISTCMP_CORREL)
            if similarity > max_similarity:
                max_similarity = similarity
            if similarity > min_similarity:
                return True
        except cv2.error:
            continue
    print(f"Histogram validation failed: max similarity={max_similarity:.2f}")
    return False


def contour_similarity(img1, template):
    """Compare two images by their dominant contour shapes."""
    try:
        # Convert to grayscale
        if len(img1.shape) == 3:
            gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
        else:
            gray1 = img1.copy()
        if len(template.shape) == 3:
            gray2 = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
        else:
            gray2 = template.copy()
        # Binarize
        _, thresh1 = cv2.threshold(gray1, 127, 255, cv2.THRESH_BINARY)
        _, thresh2 = cv2.threshold(gray2, 127, 255, cv2.THRESH_BINARY)
        # Find contours; findContours returns 3 values on OpenCV 3.x and 2 on 4.x, so take [-2]
        contours1 = cv2.findContours(thresh1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        contours2 = cv2.findContours(thresh2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        if not contours1 or not contours2:
            return 0.0
        # Use the largest contour in each image
        cnt1 = max(contours1, key=cv2.contourArea)
        cnt2 = max(contours2, key=cv2.contourArea)
        # matchShapes: smaller values mean more similar
        similarity = cv2.matchShapes(cnt1, cnt2, cv2.CONTOURS_MATCH_I2, 0.0)
        # Convert to a similarity score (larger means more similar)
        return 1.0 - min(similarity, 1.0)
    except Exception as e:
        print(f"Contour similarity error: {e}")
        return 0.0


def template_match_score(frame, roi, templates):
    """Score a candidate ROI by template matching against the template library."""
    x, y, w, h = roi
    if w <= 5 or h <= 5:
        return 0.0
    patch = frame[y:y+h, x:x+w]
    if patch.size == 0:
        return 0.0
    best_score = 0.0
    for template in templates:
        try:
            # Resize the template to the patch size
            resized_tpl = cv2.resize(template, (w, h))
            # Matching score
            result = cv2.matchTemplate(patch, resized_tpl, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, _ = cv2.minMaxLoc(result)
            if max_val > best_score:
                best_score = max_val
        except cv2.error:
            continue
    return best_score


def detect_with_orb(frame, templates, search_area_roi=None):
    """Detect the target with ORB feature matching."""
    if search_area_roi:
        x1, y1, x2, y2 = search_area_roi
        search_area = frame[y1:y2, x1:x2]
    else:
        search_area = frame
    if search_area.size == 0:
        return None
    # Initialize the ORB detector
    orb = cv2.ORB_create(nfeatures=2000)
    # Detect keypoints and descriptors in the search area
    kp_search, des_search = orb.detectAndCompute(search_area, None)
    if des_search is None or len(kp_search) < 10:
        return None
    best_match = None
    best_matches = 0
    for template in templates:
        # Detect keypoints and descriptors on the template
        kp_template, des_template = orb.detectAndCompute(template, None)
        if des_template is None or len(kp_template) < 5:
            continue
        # Brute-force matcher with Hamming distance
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        try:
            # Match descriptors
            matches = bf.match(des_template, des_search)
            matches = sorted(matches, key=lambda m: m.distance)
            # Only consider reasonably large match sets
            if len(matches) > 10:
                # Matched point coordinates
                src_pts = np.float32([kp_template[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
                dst_pts = np.float32([kp_search[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
                # Estimate a homography
                M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
                if M is not None:
                    # Project the template corners into the search area
                    h, w = template.shape[:2]
                    pts = np.float32([[0, 0], [0, h-1], [w-1, h-1], [w-1, 0]]).reshape(-1, 1, 2)
                    dst = cv2.perspectiveTransform(pts, M)
                    # Bounding box of the projected corners
                    xs = [p[0][0] for p in dst]
                    ys = [p[0][1] for p in dst]
                    x, y, w, h = int(min(xs)), int(min(ys)), int(max(xs) - min(xs)), int(max(ys) - min(ys))
                    if w > 5 and h > 5 and w < frame.shape[1] and h < frame.shape[0]:
                        # Match quality: many matches with small distances score higher
                        match_quality = len(matches) * (1.0 - np.mean([m.distance for m in matches[:10]]) / 100.0)
                        if match_quality > best_matches:
                            best_matches = match_quality
                            best_match = (x, y, w, h)
        except Exception as e:
            print(f"ORB matching error: {e}")
            continue
    if best_match and search_area_roi:
        # Map coordinates back to the full frame
        x, y, w, h = best_match
        best_match = (x + search_area_roi[0], y + search_area_roi[1], w, h)
    return best_match


def main(video_path, roi_coords):
    # Open the video file
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        print("Cannot open video file")
        return

    # Read the first frame to get the image size
    ret, frame = cap.read()
    if not ret:
        print("Cannot read the first frame")
        return
    height, width = frame.shape[:2]
    print(f"Video size: {width}x{height}")

    # Parse and validate the ROI coordinates
    try:
        if len(roi_coords) != 4:
            raise ValueError("Four values are required: x, y, width, height")
        x, y, w, h = map(int, roi_coords)
        print(f"Initial ROI: x={x}, y={y}, w={w}, h={h}")
        if w <= 0 or h <= 0:
            raise ValueError("ROI width and height must be positive")
        roi_box = safe_roi((x, y, w, h), width, height)
        if roi_box[2] <= 0 or roi_box[3] <= 0:
            raise ValueError(f"ROI is invalid after clamping: {roi_box}")
        print(f"Valid ROI: x={roi_box[0]}, y={roi_box[1]}, w={roi_box[2]}, h={roi_box[3]}")
    except Exception as e:
        print(f"ROI coordinate error: {e}")
        print(f"Make sure the ROI lies inside the image (0-{width}, 0-{height})")
        cap.release()
        return

    # Save the initial template for re-detection
    x0, y0, w0, h0 = roi_box
    initial_template = frame[y0:y0+h0, x0:x0+w0].copy()

    # Create the tracker
    tracker = None
    tracker_types = [
        ('CSRT', cv2.TrackerCSRT_create),
        ('KCF', cv2.TrackerKCF_create),
        ('MOSSE', cv2.TrackerMOSSE_create),
    ]

    # Rewind and re-read the first frame
    cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
    ret, frame = cap.read()

    # Try the trackers in priority order
    for tracker_name, tracker_creator in tracker_types:
        try:
            print(f"Trying the {tracker_name} tracker...")
            tracker = tracker_creator()
            success = tracker.init(frame, roi_box)
            if success:
                print(f"{tracker_name} tracker initialized")
                break
            else:
                print(f"{tracker_name} tracker initialization failed")
                tracker = None
        except Exception:
            print(f"Could not create the {tracker_name} tracker")
            tracker = None

    if tracker is None:
        print("Could not initialize any tracker")
        cap.release()
        return

    print("Start tracking...")

    # Create the display window
    cv2.namedWindow("Object Tracking", cv2.WINDOW_NORMAL)

    # Show the first frame with the initial ROI
    cv2.rectangle(frame, (roi_box[0], roi_box[1]),
                  (roi_box[0] + roi_box[2], roi_box[1] + roi_box[3]), (0, 255, 0), 2)
    cv2.putText(frame, "Press ESC to quit", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("Object Tracking", frame)
    cv2.waitKey(1000)

    frame_count = 0
    tracking_failures = 0
    max_failures = 200        # allow more failed frames
    reinit_threshold = 3      # start re-detection earlier
    last_known_position = roi_box
    reinit_attempts = 0
    last_success_time = time.time()

    # Template management
    templates = [initial_template]    # template library
    max_templates = 5                 # keep more templates
    template_update_interval = 5      # update templates more often

    # Motion prediction
    prev_positions = []
    max_history = 10

    # Re-detection state
    reinit_mode = 0   # 0: template matching, 1: ORB feature matching, 2: full-image search

    while True:
        ret, frame = cap.read()
        if not ret:
            print("Video finished")
            break
        frame_count += 1

        # Update the tracker
        success, bbox = tracker.update(frame)

        # Handle a successful update
        if success:
            x, y, w, h = [int(v) for v in bbox]
            safe_bbox = safe_roi((x, y, w, h), width, height)
            if safe_bbox[2] > 0 and safe_bbox[3] > 0:
                # Update the position history
                if len(prev_positions) >= max_history:
                    prev_positions.pop(0)
                prev_positions.append((x, y, w, h))

                cv2.rectangle(frame, (safe_bbox[0], safe_bbox[1]),
                              (safe_bbox[0] + safe_bbox[2], safe_bbox[1] + safe_bbox[3]), (0, 255, 0), 2)
                status_text = f"Tracking OK (frame {frame_count})"
                cv2.putText(frame, status_text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

                tracking_failures = 0
                last_known_position = safe_bbox
                reinit_attempts = 0
                last_success_time = time.time()
                reinit_mode = 0   # reset the re-detection mode

                # Periodically refresh the template library
                if frame_count % template_update_interval == 0:
                    # Current target patch
                    target_roi = frame[safe_bbox[1]:safe_bbox[1]+safe_bbox[3],
                                       safe_bbox[0]:safe_bbox[0]+safe_bbox[2]]
                    # Compare against the existing templates
                    min_similarity = 0.7
                    too_similar = False
                    for tpl in templates:
                        if target_roi.shape[0] > 5 and target_roi.shape[1] > 5:
                            # Histogram similarity
                            hist1 = cv2.calcHist([target_roi], [0], None, [256], [0, 256])
                            hist2 = cv2.calcHist([tpl], [0], None, [256], [0, 256])
                            cv2.normalize(hist1, hist1)
                            cv2.normalize(hist2, hist2)
                            similarity = cv2.compareHist(hist1, hist2, cv2.HISTCMP_CORREL)
                            if similarity > min_similarity:
                                too_similar = True
                                break
                    # Only add templates that look sufficiently different
                    if not too_similar:
                        # Drop the oldest template if the library is full
                        if len(templates) >= max_templates:
                            templates.pop(0)
                        # Add the new template
                        if target_roi.size > 0 and target_roi.shape[0] > 5 and target_roi.shape[1] > 5:
                            templates.append(target_roi.copy())
                            print(f"Template library updated, {len(templates)} templates")

        # Handle tracking failure
        if not success:
            tracking_failures += 1
            status_text = f"Tracking lost (frame {frame_count})"
            cv2.putText(frame, status_text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

            # Try to re-detect the target
            if tracking_failures >= reinit_threshold and reinit_attempts < 30:
                reinit_attempts += 1
                print(f"Trying to re-detect the target (attempt {reinit_attempts}, mode {reinit_mode})")

                # Dynamically grow the search margin
                base_margin = 300
                dynamic_margin = min(1000, base_margin + 50 * reinit_attempts)   # wider search range

                # Predict the search region from the position history
                if len(prev_positions) >= 2:
                    # Accumulate the motion vector
                    dx = 0
                    dy = 0
                    speeds = []
                    for i in range(1, len(prev_positions)):
                        dx_i = prev_positions[i][0] - prev_positions[i-1][0]
                        dy_i = prev_positions[i][1] - prev_positions[i-1][1]
                        dx += dx_i
                        dy += dy_i
                        speeds.append(np.sqrt(dx_i**2 + dy_i**2))
                    # Average speed
                    avg_speed = np.mean(speeds) if speeds else 0
                    # Predict forward along the average direction
                    predict_frames = min(20, 5 + reinit_attempts)   # frames to look ahead
                    predict_x = last_known_position[0] + int(dx * predict_frames / len(prev_positions))
                    predict_y = last_known_position[1] + int(dy * predict_frames / len(prev_positions))
                    # Scale the search range with the speed
                    speed_factor = min(3.0, 1.0 + avg_speed / 50.0)
                    dynamic_margin = int(min(1000, 300 * speed_factor + 50 * reinit_attempts))
                    # Clamp the prediction to the image
                    predict_x = max(0, min(width - 1, predict_x))
                    predict_y = max(0, min(height - 1, predict_y))
                    search_center = (predict_x, predict_y)
                else:
                    search_center = (last_known_position[0] + last_known_position[2] // 2,
                                     last_known_position[1] + last_known_position[3] // 2)

                # Search region bounds
                search_x1 = max(0, search_center[0] - dynamic_margin)
                search_y1 = max(0, search_center[1] - dynamic_margin)
                search_x2 = min(width, search_center[0] + dynamic_margin)
                search_y2 = min(height, search_center[1] + dynamic_margin)
                search_area_roi = (search_x1, search_y1, search_x2, search_y2)

                # Pick a method according to the re-detection mode
                candidate_roi = None

                # Mode 0: template matching
                if reinit_mode == 0:
                    if search_x2 > search_x1 and search_y2 > search_y1:
                        search_area = frame[search_y1:search_y2, search_x1:search_x2]
                        # Adaptive template matching
                        best_val, best_loc, best_scale, best_template_idx = adaptive_template_match(search_area, templates)
                        print(f"Template match: confidence={best_val:.2f}, scale={best_scale}, template={best_template_idx}")
                        # Dynamic match threshold, lowered with each attempt
                        match_threshold = max(0.35, 0.7 - 0.02 * reinit_attempts)
                        if best_val > match_threshold:
                            # Map the match back to frame coordinates
                            match_x = search_x1 + int(best_loc[0] / best_scale)
                            match_y = search_y1 + int(best_loc[1] / best_scale)
                            # Rescale the template size
                            scaled_w = int(w0 * (1.0 / best_scale))
                            scaled_h = int(h0 * (1.0 / best_scale))
                            # Build the candidate ROI
                            candidate_roi = (match_x, match_y, scaled_w, scaled_h)
                            print(f"Template match candidate: {candidate_roi}")

                # Mode 1: ORB feature matching
                if reinit_mode == 1 or (reinit_mode == 0 and candidate_roi is None):
                    print("Trying ORB feature matching...")
                    candidate_roi = detect_with_orb(frame, templates, (search_x1, search_y1, search_x2, search_y2))
                    if candidate_roi:
                        print(f"ORB found a candidate: {candidate_roi}")

                # Mode 2: full-image search
                if reinit_mode == 2 or (reinit_mode == 1 and candidate_roi is None):
                    print("Trying pyramid full-image search...")
                    # Image pyramid
                    pyramid_levels = 3
                    best_candidate = None
                    best_score = -1
                    for level in range(pyramid_levels):
                        scale = 1.0 / (2 ** level)
                        resized_frame = cv2.resize(frame, None, fx=scale, fy=scale)
                        # Search the downscaled image
                        candidate = detect_with_orb(resized_frame, templates)
                        if candidate:
                            # Scale back to original coordinates
                            x, y, w, h = candidate
                            candidate = (int(x/scale), int(y/scale), int(w/scale), int(h/scale))
                            # Score the candidate with template matching
                            score = template_match_score(frame, candidate, templates)
                            if score > best_score:
                                best_score = score
                                best_candidate = candidate
                    candidate_roi = best_candidate
                    if candidate_roi:
                        print(f"Pyramid search found a candidate: {candidate_roi}, score={best_score:.2f}")

                # Handle the detection result
                if candidate_roi:
                    safe_new_roi = safe_roi(candidate_roi, width, height)
                    if safe_new_roi[2] > 5 and safe_new_roi[3] > 5:
                        # Relax the validation threshold with each attempt
                        min_sim = max(0.25, 0.4 - 0.01 * reinit_attempts)
                        # Histogram validation
                        valid = validate_detection(frame, safe_new_roi, templates, min_sim)
                        # Fall back to contour-similarity validation
                        if not valid and len(templates) > 0:
                            template = templates[-1]   # use the newest template
                            candidate_img = frame[safe_new_roi[1]:safe_new_roi[1]+safe_new_roi[3],
                                                  safe_new_roi[0]:safe_new_roi[0]+safe_new_roi[2]]
                            if candidate_img.size > 0:
                                contour_sim = contour_similarity(candidate_img, template)
                                print(f"Contour similarity: {contour_sim:.2f}")
                                if contour_sim > 0.6:   # contour similarity threshold
                                    print(f"Contour validation passed: {contour_sim:.2f}")
                                    valid = True
                        if valid:
                            # Re-initialize a tracker on the new ROI
                            for tracker_name, tracker_creator in tracker_types:
                                try:
                                    new_tracker = tracker_creator()
                                    init_success = new_tracker.init(frame, safe_new_roi)
                                    if init_success:
                                        tracker = new_tracker
                                        tracking_failures = 0
                                        reinit_attempts = 0
                                        last_known_position = safe_new_roi
                                        print(f"Target re-detected! Using the {tracker_name} tracker")
                                        # Draw the re-detected region
                                        cv2.rectangle(frame, (safe_new_roi[0], safe_new_roi[1]),
                                                      (safe_new_roi[0] + safe_new_roi[2], safe_new_roi[1] + safe_new_roi[3]),
                                                      (255, 0, 0), 2)
                                        cv2.putText(frame, "Target re-detected!", (10, 60),
                                                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
                                        # Update the last success time
                                        last_success_time = time.time()
                                        break
                                except Exception:
                                    continue
                        else:
                            print("Detection candidate failed validation")

                # Draw the search region
                cv2.rectangle(frame, (search_x1, search_y1), (search_x2, search_y2), (0, 255, 255), 1)
                cv2.putText(frame, f"Search area (mode: {reinit_mode})", (search_x1, search_y1 - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 255), 1)

                # Rotate to the next re-detection mode
                if reinit_attempts % 5 == 0:
                    reinit_mode = (reinit_mode + 1) % 3
                    print(f"Switching re-detection mode to {reinit_mode}")

            if tracking_failures > max_failures:
                print(f"More than {max_failures} consecutive failures, stopping")
                break

        # Show the frame
        cv2.imshow("Object Tracking", frame)

        # ESC quits
        if cv2.waitKey(30) & 0xFF == 27:
            break

    # Release resources
    cap.release()
    cv2.destroyAllWindows()
    print(f"Tracking finished, processed {frame_count} frames")


def get_first_frame(video_path):
    """Extract and save the first frame of the video."""
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        print("Cannot open video file")
        return None
    ret, frame = cap.read()
    cap.release()
    if not ret:
        print("Cannot read the first frame of the video")
        return None
    output_path = "first_frame.jpg"
    cv2.imwrite(output_path, frame)
    print(f"Saved the first frame as: {output_path}")
    return output_path


if __name__ == "__main__":
    if len(sys.argv) < 6:
        print("Please provide a video path and the initial ROI coordinates")
        print("Usage: python tracker.py <video_path> <x> <y> <width> <height>")
        print("Example: python tracker.py video.mp4 100 50 200 150")
        # With only a video path, extract the first frame
        if len(sys.argv) == 2:
            video_path = sys.argv[1]
            if os.path.exists(video_path):
                first_frame = get_first_frame(video_path)
                if first_frame:
                    print(f"Open '{first_frame}' in an image viewer to pick the ROI coordinates")
    else:
        video_path = sys.argv[1]
        roi_coords = sys.argv[2:6]   # the four coordinate values
        # Make sure the video file exists
        if not os.path.exists(video_path):
            print(f"Error: video file '{video_path}' does not exist")
        else:
            main(video_path, roi_coords)

V. Testing

python .\02_tracker.py .\normal_video.mp4 185 375 70 70

Test result:

Testing showed that if the target disappears mid-video or becomes too small, it can no longer be detected; the algorithm still needs further optimization.

If anything here infringes on your rights, or if you need the complete code, please contact the author.
