[A Roundup of the 100 Most Influential Computer Vision Papers] From ResNet to AlexNet

2021-01-08 · 和訊網 (Hexun.com)

This article was first published on the WeChat public account 新智元. The views expressed are the author's own and do not represent the position of Hexun.com.

Compiled by 新智元 | Source: GitHub | Edited by the 新智元 editorial team

[Editor's note] Computer vision has advanced considerably in recent years and represents one of the most active frontiers of deep learning research. This article surveys the major milestones in computer vision from 2012 to 2017, focusing on papers and other practical resources, with links included. It covers more than one hundred papers, organized into areas such as ImageNet classification, object detection, object tracking, object recognition, image and language, and image generation. In February of this year, 新智元 introduced the 100 most-cited deep learning papers of the past five years, spanning ten sub-areas including optimization and training methods, unsupervised and generative models, convolutional network architectures, and image segmentation / object detection.

That list of the 100 most-cited deep learning papers is an open-source project on GitHub that any member of the community can contribute to. Through it we found another project, Deep Vision, a collection of computer vision resources that gathers the papers, books, and blogs that have most influenced the field in recent years. In its papers section, the curators likewise organize the material into areas such as ImageNet classification, object detection, object tracking, object recognition, image and language, and image generation.

Classic papers

ImageNet classification

Object detection

Object tracking

Low-level vision

Super-resolution

Other applications

Edge detection

Semantic segmentation

Visual attention and saliency

Object recognition

Human pose estimation

Understanding CNNs

Image and language

Image captioning

Video captioning

Question answering

Image generation

Above: a heat map generated from these papers, their authors, and their institutions.

ImageNet Classification

Image source: the AlexNet paper

Microsoft: ResNet

Paper: Deep Residual Learning for Image Recognition

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link (copy into a browser to open): http://arxiv.org/pdf/1512.03385v1.pdf
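
To make the core idea concrete: a residual block learns a correction F(x) that is added back onto its input through a shortcut connection, so the block outputs ReLU(F(x) + x). Below is a minimal sketch of such a block in PyTorch; the layer sizes, module names, and the identity-only shortcut are illustrative choices, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """A minimal 3x3-3x3 residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        identity = x                                # shortcut carries the input unchanged
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                        # residual addition
        return F.relu(out)

# quick shape check
block = BasicResidualBlock(64)
y = block(torch.randn(1, 64, 56, 56))
print(y.shape)                                      # torch.Size([1, 64, 56, 56])
```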

Microsoft: PReLU (Parametric Rectified Linear Units / weight initialization)

Paper: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link: http://arxiv.org/pdf/1502.01852.pdf
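
The activation proposed in this paper, PReLU, keeps positive inputs unchanged and scales negative inputs by a slope that is learned during training. A minimal NumPy sketch (the fixed slope value here is only an illustrative initialization; in the paper the slope is a learned, per-channel parameter):

```python
import numpy as np

def prelu(y, a=0.25):
    """PReLU: identity for positive inputs, slope a for negative ones.
    a = 0.25 is just an illustrative starting value; in the paper a is
    learned per channel by backpropagation."""
    return np.where(y > 0, y, a * y)

print(prelu(np.array([-2.0, -0.5, 0.0, 1.5])))   # [-0.5   -0.125  0.     1.5  ]
```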

Google: Batch Normalization

Paper: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Authors: Sergey Ioffe, Christian Szegedy

Link: http://arxiv.org/pdf/1502.03167.pdf
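
The training-time transform the paper introduces normalizes each feature over the current mini-batch and then applies a learned scale and shift; at test time, running averages of the batch statistics are used instead. A minimal NumPy sketch of the training-time computation (variable names are ours):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.
    x: (N, D) mini-batch; gamma, beta: (D,) learned parameters."""
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalized activations
    return gamma * x_hat + beta             # learned scale and shift

x = np.random.randn(32, 4)
out = batch_norm_train(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))   # ~0 and ~1 per feature
```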

Google: GoogLeNet

Paper: Going Deeper with Convolutions, CVPR 2015

Authors: Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich

Link: http://arxiv.org/pdf/1409.4842.pdf

Oxford: VGG-Net

Paper: Very Deep Convolutional Networks for Large-Scale Image Recognition, ICLR 2015

Authors: Karen Simonyan & Andrew Zisserman

Link: http://arxiv.org/pdf/1409.1556.pdf

AlexNet

Paper: ImageNet Classification with Deep Convolutional Neural Networks

Authors: Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton

Link: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

Object Detection

Image source: the Faster R-CNN paper

PVANET

Paper: PVANET: Deep but Lightweight Neural Networks for Real-time Object Detection

Authors: Kye-Hyeon Kim, Sanghoon Hong, Byungseok Roh, Yeongjae Cheon, Minje Park

Link: http://arxiv.org/pdf/1608.08021

NYU: OverFeat

Paper: OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks, ICLR 2014

Authors: Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun

Link: http://arxiv.org/pdf/1312.6229.pdf

Berkeley: R-CNN

Paper: Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, CVPR 2014

Authors: Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf

Microsoft: SPP

Paper: Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition, ECCV 2014

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link: http://arxiv.org/pdf/1406.4729.pdf

Microsoft: Fast R-CNN

Paper: Fast R-CNN

Author: Ross Girshick

Link: http://arxiv.org/pdf/1504.08083.pdf

Microsoft: Faster R-CNN

Paper: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

Authors: Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun

Link: http://arxiv.org/pdf/1506.01497.pdf

Oxford: R-CNN minus R

Paper: R-CNN minus R

Authors: Karel Lenc, Andrea Vedaldi

Link: http://arxiv.org/pdf/1506.06981.pdf

End-to-end people detection

Paper: End-to-end People Detection in Crowded Scenes

Authors: Russell Stewart, Mykhaylo Andriluka

Link: http://arxiv.org/pdf/1506.04878.pdf

Real-time object detection

Paper: You Only Look Once: Unified, Real-Time Object Detection

Authors: Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi

Link: http://arxiv.org/pdf/1506.02640.pdf

Inside-Outside Net

Paper: Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks

Authors: Sean Bell, C. Lawrence Zitnick, Kavita Bala, Ross Girshick

Link: http://arxiv.org/abs/1512.04143.pdf

Microsoft: ResNet

Paper: Deep Residual Learning for Image Recognition

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link: http://arxiv.org/pdf/1512.03385v1.pdf

R-FCN

Paper: R-FCN: Object Detection via Region-based Fully Convolutional Networks

Authors: Jifeng Dai, Yi Li, Kaiming He, Jian Sun

Link: http://arxiv.org/abs/1605.06409

SSD

Paper: SSD: Single Shot MultiBox Detector

Authors: Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg

Link: http://arxiv.org/pdf/1512.02325v2.pdf

Speed/accuracy trade-offs

Paper: Speed/accuracy trade-offs for modern convolutional object detectors

Authors: Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy

Link: http://arxiv.org/pdf/1611.10012v1.pdf
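
The detectors listed above (the R-CNN family, YOLO, SSD, and others) all score many overlapping candidate boxes and then suppress near-duplicates, typically with greedy non-maximum suppression driven by intersection-over-union. A minimal NumPy sketch of those two shared building blocks; the [x1, y1, x2, y2] box format and the 0.5 threshold are our assumptions, not any single paper's setting.

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union between one box and an array of boxes ([x1, y1, x2, y2])."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop others that overlap it too much."""
    order = np.argsort(scores)[::-1]          # indices sorted by descending score
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) < iou_thresh]
    return keep
```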

Object Tracking

Paper: Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network

Authors: Seunghoon Hong, Tackgeun You, Suha Kwak, Bohyung Han

arXiv:1502.06796

Paper: DeepTrack: Learning Discriminative Feature Representations by Convolutional Neural Networks for Visual Tracking

Authors: Hanxi Li, Yi Li and Fatih Porikli

Published in: BMVC, 2014

Paper: Learning a Deep Compact Image Representation for Visual Tracking

Authors: N Wang, DY Yeung

Published in: NIPS, 2013

Paper: Hierarchical Convolutional Features for Visual Tracking

Authors: Chao Ma, Jia-Bin Huang, Xiaokang Yang and Ming-Hsuan Yang

Published in: ICCV 2015

Paper: Visual Tracking with Fully Convolutional Networks

Authors: Lijun Wang, Wanli Ouyang, Xiaogang Wang, and Huchuan Lu

Published in: ICCV 2015

Paper: Learning Multi-Domain Convolutional Neural Networks for Visual Tracking

Authors: Hyeonseob Nam and Bohyung Han

Object Recognition

Paper: Weakly-Supervised Learning with Convolutional Neural Networks

Authors: Maxime Oquab, Leon Bottou, Ivan Laptev, Josef Sivic, CVPR, 2015

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Oquab_Is_Object_Localization_2015_CVPR_paper.pdf

FV-CNN

Paper: Deep Filter Banks for Texture Recognition and Segmentation

Authors: Mircea Cimpoi, Subhransu Maji, Andrea Vedaldi, CVPR, 2015

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Cimpoi_Deep_Filter_Banks_2015_CVPR_paper.pdf

Human Pose Estimation

Paper: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields

Authors: Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh, CVPR, 2017

Paper: DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation

Authors: Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele, CVPR, 2016

Paper: Convolutional Pose Machines

Authors: Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh, CVPR, 2016

Paper: Stacked Hourglass Networks for Human Pose Estimation

Authors: Alejandro Newell, Kaiyu Yang, and Jia Deng, ECCV, 2016

Paper: Flowing ConvNets for Human Pose Estimation in Videos

Authors: Tomas Pfister, James Charles, and Andrew Zisserman, ICCV, 2015

Paper: Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation

Authors: Jonathan J. Tompson, Arjun Jain, Yann LeCun, Christoph Bregler, NIPS, 2014

Understanding CNNs

Figure: from Aravindh Mahendran, Andrea Vedaldi, Understanding Deep Image Representations by Inverting Them, CVPR, 2015.

Paper: Understanding Image Representations by Measuring Their Equivariance and Equivalence

Authors: Karel Lenc, Andrea Vedaldi, CVPR, 2015

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Lenc_Understanding_Image_Representations_2015_CVPR_paper.pdf

Paper: Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Authors: Anh Nguyen, Jason Yosinski, Jeff Clune, CVPR, 2015

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.pdf

Paper: Understanding Deep Image Representations by Inverting Them

Authors: Aravindh Mahendran, Andrea Vedaldi, CVPR, 2015

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Mahendran_Understanding_Deep_Image_2015_CVPR_paper.pdf

Paper: Object Detectors Emerge in Deep Scene CNNs

Authors: Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba, ICLR, 2015

Link: http://arxiv.org/abs/1412.6856

Paper: Inverting Visual Representations with Convolutional Networks

Authors: Alexey Dosovitskiy, Thomas Brox, arXiv, 2015

Link: http://arxiv.org/abs/1506.02753

Paper: Visualizing and Understanding Convolutional Networks

Authors: Matthew Zeiler, Rob Fergus, ECCV, 2014

Link: http://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf
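
In the same spirit as the visualization and inversion papers above, one simple and widely used probe (not taken from any single paper in this list) is to look at the gradient of a class score with respect to the input pixels. A minimal PyTorch sketch; the tiny model here is only a placeholder for a trained classifier:

```python
import torch
import torch.nn as nn

# placeholder classifier; in practice you would load a trained network
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
model.eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)
score = model(image)[0, 3]                   # score of an arbitrary class (index 3)
score.backward()                             # backpropagate the score to the input pixels

saliency = image.grad.abs().max(dim=1)[0]    # per-pixel maximum over color channels
print(saliency.shape)                        # torch.Size([1, 224, 224])
```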

Image and Language

Image Captioning

Figure: from Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions, CVPR, 2015.

UCLA / Baidu

Explain Images with Multimodal Recurrent Neural Networks

Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille, arXiv:1410.1090

http://arxiv.org/pdf/1410.1090

Toronto

Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models

Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, arXiv:1411.2539.

http://arxiv.org/pdf/1411.2539

Berkeley

Long-term Recurrent Convolutional Networks for Visual Recognition and Description

Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, arXiv:1411.4389.

http://arxiv.org/pdf/1411.4389

Google

Show and Tell: A Neural Image Caption Generator

Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, arXiv:1411.4555.

http://arxiv.org/pdf/1411.4555
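
Show and Tell is the canonical example of the skeleton shared by most captioning systems in this section: a CNN encodes the image into a feature vector, and a recurrent network decodes that vector into a word sequence. The PyTorch sketch below shows only that skeleton; the toy encoder, vocabulary size, dimensions, and greedy word-by-word decoding are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    def __init__(self, vocab_size=1000, feat_dim=256, embed_dim=128, hidden_dim=256):
        super().__init__()
        # stand-in CNN encoder; real systems reuse a pretrained trunk (e.g. GoogLeNet/VGG)
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(16, feat_dim))
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image, max_len=10, start_token=1):
        feat = self.encoder(image)                           # (B, feat_dim) image code
        h = torch.zeros(image.size(0), self.rnn.hidden_size)
        c = torch.zeros_like(h)
        word = torch.full((image.size(0),), start_token, dtype=torch.long)
        caption = []
        for _ in range(max_len):                             # greedy decoding, word by word
            x = torch.cat([self.embed(word), feat], dim=1)
            h, c = self.rnn(x, (h, c))
            word = self.out(h).argmax(dim=1)
            caption.append(word)
        return torch.stack(caption, dim=1)                   # (B, max_len) token ids

tokens = TinyCaptioner()(torch.randn(2, 3, 64, 64))
print(tokens.shape)                                          # torch.Size([2, 10])
```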

Stanford

Deep Visual-Semantic Alignments for Generating Image Descriptions

Andrej Karpathy, Li Fei-Fei, CVPR, 2015.

Web:http://cs.stanford.edu/people/karpathy/deepimagesent/

Paper:http://cs.stanford.edu/people/karpathy/cvpr2015.pdf

UML / UT

Translating Videos to Natural Language Using Deep Recurrent Neural Networks

Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, NAACL-HLT, 2015.

http://arxiv.org/pdf/1412.4729

CMU / Microsoft

Learning a Recurrent Visual Representation for Image Caption Generation

Xinlei Chen, C. Lawrence Zitnick, arXiv:1411.5654.

Xinlei Chen, C. Lawrence Zitnick, Mind’s Eye: A Recurrent Visual Representation for Image Caption Generation, CVPR 2015

http://www.cs.cmu.edu/~xinleic/papers/cvpr15_rnn.pdf

Microsoft

From Captions to Visual Concepts and Back

Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, CVPR, 2015.

http://arxiv.org/pdf/1411.4952

Univ. Montreal / Univ. Toronto

Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio, arXiv:1502.03044 / ICML 2015

http://www.cs.toronto.edu/~zemel/documents/captionAttn.pdf

Idiap / EPFL / Facebook

Phrase-based Image Captioning

Remi Lebret, Pedro O. Pinheiro, Ronan Collobert, arXiv:1502.03671 / ICML 2015

http://arxiv.org/pdf/1502.03671

UCLA / Baidu

Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images

Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan L. Yuille, arXiv:1504.06692

http://arxiv.org/pdf/1504.06692

MS + Berkeley

Exploring Nearest Neighbor Approaches for Image Captioning

Jacob Devlin, Saurabh Gupta, Ross Girshick, Margaret Mitchell, C. Lawrence Zitnick, arXiv:1505.04467

http://arxiv.org/pdf/1505.04467.pdf

Language Models for Image Captioning: The Quirks and What Works

Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell, arXiv:1505.01809

http://arxiv.org/pdf/1505.01809.pdf

Adelaide

Image Captioning with an Intermediate Attributes Layer

Qi Wu, Chunhua Shen, Anton van den Hengel, Lingqiao Liu, Anthony Dick, arXiv:1506.01144

Tilburg

Learning language through pictures

Grzegorz Chrupala, Akos Kadar, Afra Alishahi, arXiv:1506.03694

Univ. Montreal

Describing Multimedia Content using Attention-based Encoder-Decoder Networks

Kyunghyun Cho, Aaron Courville, Yoshua Bengio, arXiv:1507.01053

Cornell

Image Representations and New Domains in Neural Image Captioning

Jack Hessel, Nicolas Savva, Michael J. Wilber, arXiv:1508.02091

MS + City Univ. of Hong Kong

Learning Query and Image Similarities with Ranking Canonical Correlation Analysis

Ting Yao, Tao Mei, and Chong-Wah Ngo, ICCV, 2015

Video Captioning

Berkeley

Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR, 2015.

UT / UML / Berkeley

Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729.

Microsoft

Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, Joint Modeling Embedding and Translation to Bridge Video and Language, arXiv:1505.01861.

UT / UML / Berkeley

Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence--Video to Text, arXiv:1505.00487.

Univ. Montreal / Sherbrooke

Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, Describing Videos by Exploiting Temporal Structure, arXiv:1502.08029

MPI / Berkeley

Anna Rohrbach, Marcus Rohrbach, Bernt Schiele, The Long-Short Story of Movie Description, arXiv:1506.01698

Univ. Toronto / MIT

Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, arXiv:1506.06724

Univ. Montreal

Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053

TAU / USC

Dotan Kaufman, Gil Levi, Tal Hassner, Lior Wolf, Temporal Tessellation for Video Annotation and Summarization, arXiv:1612.06950.

Image Generation

Convolutional / Recurrent Networks

Paper: Conditional Image Generation with PixelCNN Decoders

Authors: Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, Koray Kavukcuoglu

Paper: Learning to Generate Chairs with Convolutional Neural Networks

Authors: Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox

Published in: CVPR, 2015

Paper: DRAW: A Recurrent Neural Network For Image Generation

Authors: Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra

Published in: ICML, 2015

Adversarial Networks

Paper: Generative Adversarial Networks

Authors: Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio

Published in: NIPS, 2014
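
The adversarial game in this paper alternates two gradient steps: the discriminator learns to separate real samples from generated ones, and the generator learns to fool it. A minimal PyTorch sketch of one such alternating step on toy 2-D data; the tiny networks and the non-saturating generator loss are our simplifications, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 2) + 3.0               # toy "real" data
noise = torch.randn(32, 16)

# discriminator step: push real samples toward label 1, generated ones toward 0
fake = G(noise).detach()                      # do not backprop into G here
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# generator step: make D label fresh fakes as real (non-saturating loss)
g_loss = bce(D(G(noise)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(float(d_loss), float(g_loss))
```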

Paper: Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

Authors: Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus

Published in: NIPS, 2015

Paper: A Note on the Evaluation of Generative Models

Authors: Lucas Theis, Aäron van den Oord, Matthias Bethge

Published in: ICLR 2016

Paper: Variationally Auto-Encoded Deep Gaussian Processes

Authors: Zhenwen Dai, Andreas Damianou, Javier Gonzalez, Neil Lawrence

Published in: ICLR 2016

Paper: Generating Images from Captions with Attention

Authors: Elman Mansimov, Emilio Parisotto, Jimmy Ba, Ruslan Salakhutdinov

Published in: ICLR 2016

Paper: Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks

Author: Jost Tobias Springenberg

Published in: ICLR 2016

Paper: Censoring Representations with an Adversary

Authors: Harrison Edwards, Amos Storkey

Published in: ICLR 2016

Paper: Distributional Smoothing with Virtual Adversarial Training

Authors: Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii

Published in: ICLR 2016

Paper: Generative Visual Manipulation on the Natural Image Manifold

Authors: Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros

Published in: ECCV 2016

Paper: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

Authors: Alec Radford, Luke Metz, Soumith Chintala

Published in: ICLR 2016

Question Answering

Figure: from Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR, 2015 SUNw: Scene Understanding workshop.

Virginia Tech / Microsoft Research

Paper: VQA: Visual Question Answering, CVPR, 2015 SUNw: Scene Understanding workshop

Authors: Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh

MPI / Berkeley

Paper: Ask Your Neurons: A Neural-based Approach to Answering Questions about Images

Authors: Mateusz Malinowski, Marcus Rohrbach, Mario Fritz

Published: arXiv:1505.01121

Toronto

Paper: Image Question Answering: A Visual Semantic Embedding Model and a New Dataset

Authors: Mengye Ren, Ryan Kiros, Richard Zemel

Published: arXiv:1505.02074 / ICML 2015 deep learning workshop

Baidu / UCLA

Paper: Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering

Authors: Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu

Published: arXiv:1505.05612

POSTECH (Korea)

Paper: Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction

Authors: Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han

Published: arXiv:1511.05765

CMU / Microsoft Research

Paper: Stacked Attention Networks for Image Question Answering

Authors: Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola, 2015

Published: arXiv:1511.02274

MetaMind

Paper: Dynamic Memory Networks for Visual and Textual Question Answering

Authors: Caiming Xiong, Stephen Merity, and Richard Socher

Published: arXiv:1603.01417 (2016)

Seoul National University + NAVER

Paper: Multimodal Residual Learning for Visual QA

Authors: Jin-Hwa Kim, Sang-Woo Lee, Dong-Hyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang

Published: arXiv:1606.01455

UC Berkeley + Sony

Paper: Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding

Authors: Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach

Published: arXiv:1606.01847

POSTECH

Paper: Training Recurrent Answering Units with Joint Loss Minimization for VQA

Authors: Hyeonwoo Noh and Bohyung Han

Published: arXiv:1606.03647

Seoul National University + NAVER

Paper: Hadamard Product for Low-rank Bilinear Pooling

Authors: Jin-Hwa Kim, Kyoung Woon On, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang

Published: arXiv:1610.04325

Visual Attention and Saliency

Mr-CNN

Paper: Predicting Eye Fixations using Convolutional Neural Networks

Authors: Nian Liu, Junwei Han, Dingwen Zhang, Shifeng Wen, Tianming Liu

Published in: CVPR, 2015

Learning a sequential search for landmarks

Paper: Learning a Sequential Search for Landmarks

Authors: Saurabh Singh, Derek Hoiem, David Forsyth

Published in: CVPR, 2015

Multiple object recognition with visual attention

Paper: Multiple Object Recognition with Visual Attention

Authors: Jimmy Lei Ba, Volodymyr Mnih, Koray Kavukcuoglu

Published in: ICLR, 2015

Recurrent models of visual attention

Paper: Recurrent Models of Visual Attention

Authors: Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu

Published in: NIPS, 2014

Low-Level Vision

Super-Resolution

Iterative Image Reconstruction

Sven Behnke: Learning Iterative Image Reconstruction. IJCAI, 2001.

Sven Behnke: Learning Iterative Image Reconstruction in the Neural Abstraction Pyramid. International Journal of Computational Intelligence and Applications, vol. 1, no. 4, pp. 427-438, 2001.

Super-Resolution (SRCNN)

Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Learning a Deep Convolutional Network for Image Super-Resolution, ECCV, 2014.

Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Image Super-Resolution Using Deep Convolutional Networks, arXiv:1501.00092.
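
SRCNN itself is only three convolutional layers applied to an image that has already been upscaled (for example bicubically): patch extraction, non-linear mapping, and reconstruction. A minimal PyTorch sketch using the 9-1-5 kernel sizes and 64/32 filter counts reported for the base model; the padding is added here so output and input sizes match, which is our simplification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Three-layer super-resolution CNN applied to an already-upscaled image."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(),   # patch extraction
            nn.Conv2d(64, 32, kernel_size=1),                  nn.ReLU(),   # non-linear mapping
            nn.Conv2d(32, channels, kernel_size=5, padding=2),               # reconstruction
        )

    def forward(self, upscaled):
        return self.net(upscaled)

# the low-resolution input is first upscaled (e.g. bicubically) to the target size
lr = torch.rand(1, 1, 60, 60)
upscaled = F.interpolate(lr, scale_factor=2, mode="bicubic", align_corners=False)
print(SRCNN()(upscaled).shape)   # torch.Size([1, 1, 120, 120])
```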

Very Deep Super-Resolution

Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Accurate Image Super-Resolution Using Very Deep Convolutional Networks, arXiv:1511.04587, 2015.

Deeply-Recursive Convolutional Network

Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Deeply-Recursive Convolutional Network for Image Super-Resolution, arXiv:1511.04491, 2015.

Cascade-Sparse-Coding-Network

Zhaowen Wang, Ding Liu, Wei Han, Jianchao Yang and Thomas S. Huang, Deep Networks for Image Super-Resolution with Sparse Prior. ICCV, 2015.

Perceptual Losses for Super-Resolution

Justin Johnson, Alexandre Alahi, Li Fei-Fei, Perceptual Losses for Real-Time Style Transfer and Super-Resolution, arXiv:1603.08155, 2016.

SRGAN

Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi, Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, arXiv:1609.04802v3, 2016.

Other Applications

Optical Flow (FlowNet)

Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox, FlowNet: Learning Optical Flow with Convolutional Networks, arXiv:1504.06852.

Compression Artifacts Reduction

Chao Dong, Yubin Deng, Chen Change Loy, Xiaoou Tang, Compression Artifacts Reduction by a Deep Convolutional Network, arXiv:1504.06993.

Blur Removal

Christian J. Schuler, Michael Hirsch, Stefan Harmeling, Bernhard Schölkopf, Learning to Deblur, arXiv:1406.7444

Jian Sun, Wenfei Cao, Zongben Xu, Jean Ponce, Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal, CVPR, 2015

Image Deconvolution

Li Xu, Jimmy SJ. Ren, Ce Liu, Jiaya Jia, Deep Convolutional Neural Network for Image Deconvolution, NIPS, 2014.

Deep Edge-Aware Filter

Li Xu, Jimmy SJ. Ren, Qiong Yan, Renjie Liao, Jiaya Jia, Deep Edge-Aware Filters, ICML, 2015.

Computing the Stereo Matching Cost with a Convolutional Neural Network

Jure Žbontar, Yann LeCun, Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR, 2015.

Image Colorization

Richard Zhang, Phillip Isola, Alexei A. Efros, Colorful Image Colorization, ECCV, 2016

Feature Learning by Inpainting

Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros, Context Encoders: Feature Learning by Inpainting, CVPR, 2016

Edge Detection

Holistically-Nested Edge Detection

Saining Xie, Zhuowen Tu, Holistically-Nested Edge Detection, arXiv:1504.06375.

DeepEdge

Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.

DeepContour

Wei Shen, Xinggang Wang, Yan Wang, Xiang Bai, Zhijiang Zhang, DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection, CVPR, 2015.

Semantic Segmentation

Image source: the BoxSup paper

SEC: Seed, Expand and Constrain

Alexander Kolesnikov, Christoph Lampert, Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation, ECCV, 2016.

Adelaide

Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Efficient piecewise training of deep structured models for semantic segmentation, arXiv:1504.01013. (1st ranked in VOC2012)

Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Deeply Learning the Messages in Message Passing Inference, arXiv:1508.02108. (4th ranked in VOC2012)

Deep Parsing Network (DPN)

Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang, Semantic Image Segmentation via Deep Parsing Network, arXiv:1509.02634 / ICCV 2015 (2nd ranked in VOC 2012)

CentraleSuperBoundaries, INRIA

Iasonas Kokkinos, Surpassing Humans in Boundary Detection using Deep Learning, arXiv:1411.07386 (4th ranked in VOC 2012)

BoxSup

Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640. (6th ranked in VOC2012)

POSTECH

Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, Learning Deconvolution Network for Semantic Segmentation, arXiv:1505.04366. (7th ranked in VOC2012)

Seunghoon Hong, Hyeonwoo Noh, Bohyung Han, Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation, arXiv:1506.04924.

Seunghoon Hong, Junhyuk Oh, Bohyung Han, and Honglak Lee, Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network, arXiv:1512.07928

Conditional Random Fields as Recurrent Neural Networks

Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240. (8th ranked in VOC2012)

DeepLab

Liang-Chieh Chen, George Papandreou, Kevin Murphy, Alan L. Yuille, Weakly-and semi-supervised learning of a DCNN for semantic image segmentation, arXiv:1502.02734. (9th ranked in VOC2012)

Zoom-out

Mohammadreza Mostajabi, Payman Yadollahpour, Gregory Shakhnarovich, Feedforward Semantic Segmentation With Zoom-Out Features, CVPR, 2015

Joint Calibration

Holger Caesar, Jasper Uijlings, Vittorio Ferrari, Joint Calibration for Semantic Segmentation, arXiv:1507.01581.

Fully Convolutional Networks for Semantic Segmentation

Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.
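
The FCN idea can be stated compactly: replace the classifier's fully connected layers with 1x1 convolutions so the network emits a coarse per-class score map, then upsample that map back to the input resolution. A minimal PyTorch sketch with a toy backbone; the backbone, class count, and plain bilinear upsampling are placeholders, not the paper's VGG-based model with skip connections.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        # toy backbone that downsamples by 8x; real FCNs reuse a VGG/ResNet trunk
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)   # 1x1 conv scores each class

    def forward(self, x):
        scores = self.classifier(self.backbone(x))                     # coarse (H/8, W/8) score map
        return F.interpolate(scores, size=x.shape[2:], mode="bilinear",
                             align_corners=False)                      # upsample to input size

out = TinyFCN()(torch.randn(1, 3, 224, 224))
print(out.shape)   # torch.Size([1, 21, 224, 224])
```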

Hypercolumn

Bharath Hariharan, Pablo Arbelaez, Ross Girshick, Jitendra Malik, Hypercolumns for Object Segmentation and Fine-Grained Localization, CVPR, 2015.

Deep Hierarchical Parsing

Abhishek Sharma, Oncel Tuzel, David W. Jacobs, Deep Hierarchical Parsing for Semantic Segmentation, CVPR, 2015.

Learning Hierarchical Features for Scene Labeling

Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.

Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.

University of Cambridge

Vijay Badrinarayanan, Alex Kendall and Roberto Cipolla "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation." arXiv preprint arXiv:1511.00561, 2015.

Alex Kendall, Vijay Badrinarayanan and Roberto Cipolla "Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding." arXiv preprint arXiv:1511.02680, 2015.

Princeton

Fisher Yu, Vladlen Koltun, "Multi-Scale Context Aggregation by Dilated Convolutions", ICLR 2016

Univ. of Washington, Allen AI

Hamid Izadinia, Fereshteh Sadeghi, Santosh Kumar Divvala, Yejin Choi, Ali Farhadi, "Segment-Phrase Table for Semantic Segmentation, Visual Entailment and Paraphrasing", ICCV, 2015

INRIA

Iasonas Kokkinos, "Pusing the Boundaries of Boundary Detection Using deep Learning", ICLR 2016

UCSB

Niloufar Pourian, S. Karthikeyan, and B.S. Manjunath, "Weakly supervised graph based semantic segmentation by learning communities of image-parts", ICCV, 2015

Other Resources

Courses

· Deep vision

[Stanford] CS231n: Convolutional Neural Networks for Visual Recognition

[CUHK] ELEG 5040: Advanced Topics in Signal Processing (Introduction to Deep Learning)

· More deep learning courses

[Stanford] CS224d: Deep Learning for Natural Language Processing

[Oxford] Deep Learning by Prof. Nando de Freitas

[NYU] Deep Learning by Prof. Yann LeCun

Books

Free online books

Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville

Neural Networks and Deep Learning by Michael Nielsen

Deep Learning Tutorial by LISA lab, University of Montreal

Videos

Talks

Deep Learning, Self-Taught Learning and Unsupervised Feature Learning by Andrew Ng

Recent Developments in Deep Learning by Geoff Hinton

The Unreasonable Effectiveness of Deep Learning by Yann LeCun

Deep Learning of Representations by Yoshua Bengio

Software

Frameworks

· Tensorflow: An open source software library for numerical computation using data flow graph by Google [Web]

· Torch7: Deep learning library in Lua, used by Facebook and Google Deepmind [Web]

Torch-based deep learning libraries: [torchnet],

· Caffe: Deep learning framework by the BVLC [Web]

· Theano: Mathematical library in Python, maintained by LISA lab [Web]

Theano-based deep learning libraries: [Pylearn2], [Blocks], [Keras], [Lasagne]

· MatConvNet: CNNs for MATLAB [Web]

· MXNet: A flexible and efficient deep learning library for heterogeneous distributed systems with multi-language support [Web]

· Deepgaze: A computer vision library for human-computer interaction based on CNNs [Web]

Applications

· Adversarial training: Code and hyperparameters for the paper "Generative Adversarial Networks" [Web]

· Understanding and visualization: Source code for "Understanding Deep Image Representations by Inverting Them," CVPR, 2015 [Web]

· Semantic segmentation: Source code for the paper "Rich feature hierarchies for accurate object detection and semantic segmentation," CVPR, 2014 [Web]; source code for the paper "Fully Convolutional Networks for Semantic Segmentation," CVPR, 2015 [Web]

· Super-resolution: Image Super-Resolution for Anime-Style Art [Web]

· Edge detection: Source code for the paper "DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection," CVPR, 2015 [Web]; source code for the paper "Holistically-Nested Edge Detection," ICCV 2015 [Web]

Tutorials

· [CVPR 2014] Tutorial on Deep Learning in Computer Vision

· [CVPR 2015] Applied Deep Learning for Computer Vision with Torch

Blogs

· Deep down the rabbit hole: CVPR 2015 and beyond @ Tombone's Computer Vision Blog

· CVPR recap and where we're going @ Zoya Bylinskii (MIT PhD student)'s blog

· Facebook's AI Painting @ Wired

· Inceptionism: Going Deeper into Neural Networks @ Google Research

· Implementing Neural Networks

Source: the WeChat public account 新智元

