
[ICLR 2022] How Much Can CLIP Benefit Vision-and-Language Tasks?

Paper link: pdf

The English is typed entirely by hand, summarizing and paraphrasing the original paper. Unavoidable spelling and grammar mistakes may appear; if you spot any, corrections in the comments are welcome! This post is more of a personal note, so read with care.

目录

1. Thoughts

2. Section-by-Section Reading

2.1. Abstract

2.2. Introduction

2.3. Background and Motivation

2.3.1. Motivation

2.4. CLIP-ViL

2.4.1. Visual Question Answering

2.4.2. Image Captioning

2.4.3. Vision-and-Language Navigation

2.5. Vision-and-Language Pre-training

2.5.1. CLIP-ViL_p

2.5.2. Experiments

2.6. Analysis

2.7. Conclusions

1. Thoughts

(1) A fairly simple paper; it feels like a series of tests of CLIP?

2. Section-by-Section Reading

2.1. Abstract

        ①Models pre-trained on large amounts of data yield better performance

        ②Two ways to use CLIP: plug it in and fine-tune directly, or combine it with V&L pre-training

2.2. Introduction

        ①Bottlenecks of vision-and-language (V&L) tasks: visual representations and scarce labeled data

        ②Most V&L tasks require complex reasoning, so a visual model cannot be used directly

        ③They define two scenarios (a minimal fine-tuning sketch follows the list):

CLIP-ViL: plug CLIP into direct task-specific fine-tuning
CLIP-ViL_p: integrate CLIP with V&L pre-training on image-text pairs, then transfer to downstream tasks
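To make the first scenario concrete, below is a minimal PyTorch sketch of the plug-and-fine-tune idea, assuming OpenAI's `clip` package and a hypothetical linear answer head; the pooled image embedding here stands in for the grid features the paper actually feeds into each task architecture.

```python
# Minimal sketch of CLIP-ViL's "plug and fine-tune" scenario.
# Assumptions: OpenAI's `clip` package; a hypothetical linear VQA head;
# the pooled image embedding stands in for the paper's grid features.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("RN50", device=device)
clip_model.float()  # train in fp32 for numerical stability

num_answers = 3129  # standard VQA v2 answer vocabulary size
head = torch.nn.Linear(1024, num_answers).to(device)  # RN50 embeds to 1024-d

# CLIP's visual encoder is fine-tuned jointly with the task head.
optimizer = torch.optim.Adam(
    list(clip_model.visual.parameters()) + list(head.parameters()), lr=1e-5
)

def train_step(images, answer_labels):
    feats = clip_model.encode_image(images)  # (B, 1024) pooled features
    loss = torch.nn.functional.cross_entropy(head(feats), answer_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```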

        ④Tasks: Visual Question Answering, Image Captioning, and Vision-and-Language Navigation

2.3. Background and Motivation

        ①Training stages:

visual encoder pretraining, alignment (optional), downstream task

        ②Different types of visual models:

region-based, network-based, and CLIP (contrastive)

2.3.1. Motivation

        ①In short, directly applying CLIP to various complex visual tasks gives only mediocre performance, so it needs small modifications

2.4. CLIP-ViL

2.4.1. Visual Question Answering

        ①Performance of models on the VQA v2.0 dataset:

2.4.2. Image Captioning

        ①Image captioning comparison on the COCO dataset:

2.4.3. Vision-and-Language Navigation

        ①Model performance on the Room-to-Room (R2R) dataset:

        ②Performance after replacing ResNet with CLIP:

2.5. Vision-and-Language Pre-training

2.5.1. CLIP-ViL_p

        ①For a text segment T, tokenize it into subwords \{w_{1},w_{2},...,w_{k}\}, where each subword is further embedded as the sum of its token, position, and segment embeddings, giving \{\textbf{w}_{1},\textbf{w}_{2},...,\textbf{w}_{k}\}

        ②Image I is embedded as visual features \{\textbf{v}_{1},\textbf{v}_{2},...,\textbf{v}_{m}\}

        ③Concatenate the two as \{\textbf{w}_{1},\textbf{w}_{2},...,\textbf{w}_{k},\textbf{v}_{1},\textbf{v}_{2},...,\textbf{v}_{m}\}

        ④Pre-training objectives: masked language modeling with a 15% mask ratio, image-text matching where the paired text is the correct sentence 50% of the time, and visual question answering
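A minimal sketch of steps ① - ④, assuming plain PyTorch embedding tables; all helper names are hypothetical, and the real model adds a Transformer and task heads on top of the concatenated sequence.

```python
# Minimal sketch of the CLIP-ViL_p input construction and objectives.
# All module/helper names here are hypothetical.
import random
import torch

def embed_text(token_ids, segment_id, tok_emb, pos_emb, seg_emb):
    # ① each subword embedding is the sum of its token, position,
    # and segment embeddings.
    ids = torch.tensor(token_ids)
    positions = torch.arange(len(token_ids))
    segments = torch.full((len(token_ids),), segment_id)
    return tok_emb(ids) + pos_emb(positions) + seg_emb(segments)  # (k, d)

def build_sequence(text_emb, image_emb):
    # ② image_emb holds the visual features (m, d); ③ concatenate.
    return torch.cat([text_emb, image_emb], dim=0)  # (k + m, d)

def mask_tokens(token_ids, mask_id, ratio=0.15):
    # ④ masked language modeling: mask 15% of the subwords.
    masked, labels = list(token_ids), [-100] * len(token_ids)
    for i in range(len(token_ids)):
        if random.random() < ratio:
            labels[i] = token_ids[i]
            masked[i] = mask_id
    return masked, labels

def sample_itm_pair(image, caption, random_caption):
    # ④ image-text matching: keep the true caption 50% of the time.
    if random.random() < 0.5:
        return image, caption, 1      # matched pair
    return image, random_caption, 0   # mismatched pair
```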

2.5.2. Experiments

        ①Two variants of CLIP as the visual encoder: CLIP-Res50 and CLIP-Res50x4

        ②Pre-training datasets: MS COCO Captions, Visual Genome Captions, VQA, GQA, and VG-QA

        ③Number of visual patches per image: 100

        ④Pre-training epochs: 20

        ⑤The pre-trained model is fine-tuned at the evaluation stage

        ⑥Downstream task datasets: VQA v2.0, visual entailment (SNLI-VE), and GQA

        ⑦Results:

2.6. Analysis

        ①Zero-shot performance of CLIP on the VQA v2.0 mini-eval split (a hedged probe sketch follows this list):

        ②Influence of V&L pre-training:

        ③Visualization of feature localization for different models:
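For ①, here is a hedged sketch of how such a zero-shot probe can be run: each candidate answer is scored by CLIP image-text similarity. The prompt template is my assumption, not necessarily the exact one used in the paper.

```python
# Hedged zero-shot VQA probe with CLIP: rank candidate answers by
# image-text similarity. The prompt template is an assumption.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

def zero_shot_vqa(pil_image, question, candidate_answers):
    image = preprocess(pil_image).unsqueeze(0).to(device)
    prompts = [f"question: {question} answer: {a}" for a in candidate_answers]
    tokens = clip.tokenize(prompts).to(device)
    with torch.no_grad():
        img = model.encode_image(image)
        txt = model.encode_text(tokens)
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        sims = (img @ txt.T).squeeze(0)  # cosine similarity per answer
    return candidate_answers[int(sims.argmax())]
```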

2.7. Conclusions

        ~
