[Cover Feature] A Roundup: The Top 100 Most Influential Computer Vision Papers, from ResNet to AlexNet

2021-02-07 Major術業

[Source] GitHub

[Compiled by] 新智元 (ID: AI_era)



Computer vision has advanced rapidly in recent years and sits at the frontier of deep learning research. This article surveys the milestones in computer vision from 2012 to 2017, focusing on papers and other practical resources, with a link for each.


Deep Vision is a project that collects computer vision resources: the papers, books, and blogs that have most influenced the field in recent years. The papers are organized by topic, including ImageNet classification, object detection, object tracking, object recognition, image and language, and image generation.



ImageNet Classification

Object Detection

Object Tracking

Low-Level Vision

Edge Detection

Semantic Segmentation

Visual Attention and Saliency

Object Recognition

Human Pose Estimation

Understanding CNNs

Image and Language

Image Generation



Above is a heat map built from keywords drawn from these papers, their authors, and their institutions.



ImageNet Classification


Image source: the AlexNet paper


Paper: Deep Residual Learning for Image Recognition

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link: http://arxiv.org/pdf/1512.03385v1.pdf
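The paper's core idea fits in a few lines of plain Python. The sketch below is illustrative only (the toy scalar "layers" `linear` and `residual_block` are hypothetical names, not the paper's convolutional blocks): the block outputs F(x) + x, so with all residual weights at zero it degenerates to the identity, which is what makes very deep stacks trainable.

```python
def linear(w, b, x):
    """A toy 1-D 'layer': w*x + b (stands in for a conv layer)."""
    return w * x + b

def residual_block(x, w1=0.5, b1=0.0, w2=0.5, b2=0.0):
    # Two stacked toy layers compute the residual F(x) ...
    f = linear(w2, b2, linear(w1, b1, x))
    # ... and the identity shortcut adds the input back.
    return f + x

# With all residual weights at zero the block is exactly the identity.
print(residual_block(3.0, w1=0.0, b1=0.0, w2=0.0, b2=0.0))  # 3.0
```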


Paper: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link: https://arxiv.org/pdf/1502.01852.pdf
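The rectifier this paper generalizes, PReLU, is simple to state. A minimal sketch (the 0.25 default matches the paper's initialization; the function itself is illustrative):

```python
def prelu(x, a=0.25):
    """Parametric ReLU: identity for positive inputs, a learned
    slope `a` (initialized at 0.25 in the paper) for negative ones.
    With a=0 this reduces to the plain ReLU."""
    return x if x > 0 else a * x
```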


Paper: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Authors: Sergey Ioffe, Christian Szegedy

Link: https://arxiv.org/pdf/1502.03167.pdf
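The normalization step itself is simple. A minimal pure-Python sketch for a batch of scalars (the learned scale `gamma` and shift `beta` are the paper's parameters; the function name is ours, and it omits the running statistics used at inference time):

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch to zero mean / unit variance,
    then scale and shift by the learned gamma and beta."""
    m = sum(xs) / len(xs)                         # batch mean
    v = sum((x - m) ** 2 for x in xs) / len(xs)   # batch variance
    return [gamma * (x - m) / math.sqrt(v + eps) + beta for x in xs]

out = batch_norm([1.0, 2.0, 3.0])  # mean ~0, variance ~1
```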


Paper: Going Deeper with Convolutions, CVPR 2015

Authors: Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich

Link: https://arxiv.org/pdf/1409.4842.pdf


Paper: Very Deep Convolutional Networks for Large-Scale Image Recognition, ICLR 2015

Authors: Karen Simonyan & Andrew Zisserman

Link: https://arxiv.org/pdf/1409.1556.pdf


Paper: ImageNet Classification with Deep Convolutional Neural Networks

Authors: Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton

Link: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
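The "top-5 error" these ImageNet papers report counts a prediction as correct when the true label is among the model's five highest-scoring classes. A minimal sketch (the helper name is ours, not from any of the papers):

```python
def topk_correct(logits, label, k=5):
    """True if the ground-truth label is among the k highest scores.
    ImageNet's top-5 error is 1 minus this, averaged over images."""
    topk = sorted(range(len(logits)), key=lambda i: -logits[i])[:k]
    return label in topk
```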



Object Detection


Image source: the Faster R-CNN paper


Paper: PVANET: Deep but Lightweight Neural Networks for Real-time Object Detection

Authors: Kye-Hyeon Kim, Sanghoon Hong, Byungseok Roh, Yeongjae Cheon, Minje Park

Link: https://arxiv.org/pdf/1608.08021


Paper: OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks, ICLR 2014

Authors: Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun

Link: https://arxiv.org/pdf/1312.6229.pdf


Paper: Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, CVPR 2014

Authors: Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf


Paper: Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition, ECCV 2014

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link: https://arxiv.org/pdf/1406.4729.pdf


Paper: Fast R-CNN

Authors: Ross Girshick

Link: https://arxiv.org/pdf/1504.08083.pdf


Paper: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

Authors: Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun

Link: https://arxiv.org/pdf/1506.01497.pdf


Paper: R-CNN minus R

Authors: Karel Lenc, Andrea Vedaldi

Link: https://arxiv.org/pdf/1506.06981.pdf


Paper: End-to-end People Detection in Crowded Scenes

Authors: Russell Stewart, Mykhaylo Andriluka

Link: https://arxiv.org/pdf/1506.04878.pdf


Paper: You Only Look Once: Unified, Real-Time Object Detection

Authors: Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi

Link: https://arxiv.org/pdf/1506.02640.pdf


Paper: Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks

Authors: Sean Bell, C. Lawrence Zitnick, Kavita Bala, Ross Girshick

Link: https://arxiv.org/abs/1512.04143


Paper: Deep Residual Learning for Image Recognition

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link: http://arxiv.org/pdf/1512.03385v1.pdf


Paper: R-FCN: Object Detection via Region-based Fully Convolutional Networks

Authors: Jifeng Dai, Yi Li, Kaiming He, Jian Sun

Link: https://arxiv.org/abs/1605.06409


Paper: SSD: Single Shot MultiBox Detector

Authors: Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg

Link: https://arxiv.org/pdf/1512.02325v2.pdf


Paper: Speed/Accuracy Trade-offs for Modern Convolutional Object Detectors

Authors: Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy

Link: https://arxiv.org/pdf/1611.10012v1.pdf
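A concept shared by all the detectors above is the intersection-over-union (IoU) overlap measure, used both in evaluation and in the greedy non-maximum suppression (NMS) step that removes duplicate boxes. A minimal sketch (the helper names are ours; boxes are (x1, y1, x2, y2) tuples):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the best-scoring box,
    drop boxes overlapping it beyond `thresh`, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```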



Object Tracking


Paper: Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network

Authors: Seunghoon Hong, Tackgeun You, Suha Kwak, Bohyung Han

Link: arXiv:1502.06796


Paper: DeepTrack: Learning Discriminative Feature Representations by Convolutional Neural Networks for Visual Tracking

Authors: Hanxi Li, Yi Li and Fatih Porikli

Published: BMVC, 2014.


Paper: Learning a Deep Compact Image Representation for Visual Tracking

Authors: N. Wang, D. Y. Yeung

Published: NIPS, 2013.


Paper: Hierarchical Convolutional Features for Visual Tracking

Authors: Chao Ma, Jia-Bin Huang, Xiaokang Yang and Ming-Hsuan Yang

Published: ICCV 2015


Paper: Visual Tracking with Fully Convolutional Networks

Authors: Lijun Wang, Wanli Ouyang, Xiaogang Wang, and Huchuan Lu

Published: ICCV 2015


Paper: Learning Multi-Domain Convolutional Neural Networks for Visual Tracking

Authors: Hyeonseob Nam and Bohyung Han


Object Recognition


Paper: Is Object Localization for Free? Weakly-Supervised Learning with Convolutional Neural Networks

Authors: Maxime Oquab, Leon Bottou, Ivan Laptev, Josef Sivic, CVPR, 2015

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Oquab_Is_Object_Localization_2015_CVPR_paper.pdf


FV-CNN

Paper: Deep Filter Banks for Texture Recognition and Segmentation

Authors: Mircea Cimpoi, Subhransu Maji, Andrea Vedaldi, CVPR, 2015.

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Cimpoi_Deep_Filter_Banks_2015_CVPR_paper.pdf


Human Pose Estimation


Paper: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields

Authors: Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh, CVPR, 2017.


Paper: DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation

Authors: Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele, CVPR, 2016.


Paper: Convolutional Pose Machines

Authors: Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh, CVPR, 2016.


Paper: Stacked Hourglass Networks for Human Pose Estimation

Authors: Alejandro Newell, Kaiyu Yang, and Jia Deng, ECCV, 2016.


Paper: Flowing ConvNets for Human Pose Estimation in Videos

Authors: Tomas Pfister, James Charles, and Andrew Zisserman, ICCV, 2015.


Paper: Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation

Authors: Jonathan J. Tompson, Arjun Jain, Yann LeCun, Christoph Bregler, NIPS, 2014.



Understanding CNNs


Figure: from Aravindh Mahendran, Andrea Vedaldi, Understanding Deep Image Representations by Inverting Them, CVPR, 2015.


Paper: Understanding Image Representations by Measuring Their Equivariance and Equivalence

Authors: Karel Lenc, Andrea Vedaldi, CVPR, 2015.

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Lenc_Understanding_Image_Representations_2015_CVPR_paper.pdf


Paper: Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Authors: Anh Nguyen, Jason Yosinski, Jeff Clune, CVPR, 2015.

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.pdf


Paper: Understanding Deep Image Representations by Inverting Them

Authors: Aravindh Mahendran, Andrea Vedaldi, CVPR, 2015

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Mahendran_Understanding_Deep_Image_2015_CVPR_paper.pdf


Paper: Object Detectors Emerge in Deep Scene CNNs

Authors: Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba, ICLR, 2015.

Link: http://arxiv.org/abs/1412.6856


Paper: Inverting Visual Representations with Convolutional Networks

Authors: Alexey Dosovitskiy, Thomas Brox, arXiv, 2015.

Link: http://arxiv.org/abs/1506.02753


Paper: Visualizing and Understanding Convolutional Networks

Authors: Matthew Zeiler, Rob Fergus, ECCV, 2014.

Link: https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf



Image Captioning


Figure: from Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Description, CVPR, 2015.


UCLA / Baidu

Explain Images with Multimodal Recurrent Neural Networks

Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille, arXiv:1410.1090

http://arxiv.org/pdf/1410.1090


Toronto

Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models

Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, arXiv:1411.2539.

http://arxiv.org/pdf/1411.2539


Berkeley

Long-term Recurrent Convolutional Networks for Visual Recognition and Description

Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, arXiv:1411.4389.

http://arxiv.org/pdf/1411.4389


Google

Show and Tell: A Neural Image Caption Generator

Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, arXiv:1411.4555.

http://arxiv.org/pdf/1411.4555


Stanford

Deep Visual-Semantic Alignments for Generating Image Description

Andrej Karpathy, Li Fei-Fei, CVPR, 2015.

Web: http://cs.stanford.edu/people/karpathy/deepimagesent/

Paper: http://cs.stanford.edu/people/karpathy/cvpr2015.pdf


UML / UT

Translating Videos to Natural Language Using Deep Recurrent Neural Networks

Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, NAACL-HLT, 2015.

http://arxiv.org/pdf/1412.4729


CMU / Microsoft

Learning a Recurrent Visual Representation for Image Caption Generation

Xinlei Chen, C. Lawrence Zitnick, arXiv:1411.5654.

Xinlei Chen, C. Lawrence Zitnick, Mind's Eye: A Recurrent Visual Representation for Image Caption Generation, CVPR 2015

http://www.cs.cmu.edu/~xinleic/papers/cvpr15_rnn.pdf


Microsoft

From Captions to Visual Concepts and Back

Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, CVPR, 2015.

http://arxiv.org/pdf/1411.4952


Univ. Montreal / Univ. Toronto

Show, Attend, and Tell: Neural Image Caption Generation with Visual Attention

Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio, arXiv:1502.03044 / ICML 2015

http://www.cs.toronto.edu/~zemel/documents/captionAttn.pdf


Idiap / EPFL / Facebook

Phrase-based Image Captioning

Remi Lebret, Pedro O. Pinheiro, Ronan Collobert, arXiv:1502.03671 / ICML 2015

http://arxiv.org/pdf/1502.03671


UCLA / Baidu

Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images

Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan L. Yuille, arXiv:1504.06692

http://arxiv.org/pdf/1504.06692


MS + Berkeley

Exploring Nearest Neighbor Approaches for Image Captioning

Jacob Devlin, Saurabh Gupta, Ross Girshick, Margaret Mitchell, C. Lawrence Zitnick, arXiv:1505.04467

http://arxiv.org/pdf/1505.04467.pdf


Language Models for Image Captioning: The Quirks and What Works

Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell, arXiv:1505.01809

http://arxiv.org/pdf/1505.01809.pdf


Adelaide

Image Captioning with an Intermediate Attributes Layer

Qi Wu, Chunhua Shen, Anton van den Hengel, Lingqiao Liu, Anthony Dick, arXiv:1506.01144


Tilburg

Learning Language through Pictures

Grzegorz Chrupala, Akos Kadar, Afra Alishahi, arXiv:1506.03694


Univ. of Montreal

Describing Multimedia Content using Attention-based Encoder-Decoder Networks

Kyunghyun Cho, Aaron Courville, Yoshua Bengio, arXiv:1507.01053


Cornell

Image Representations and New Domains in Neural Image Captioning

Jack Hessel, Nicolas Savva, Michael J. Wilber, arXiv:1508.02091


MS + City Univ. of Hong Kong

Learning Query and Image Similarities with Ranking Canonical Correlation Analysis

Ting Yao, Tao Mei, and Chong-Wah Ngo, ICCV, 2015


Video Captioning


Berkeley

Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR, 2015.


UT / UML / Berkeley

Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729.


Microsoft

Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, Joint Modeling Embedding and Translation to Bridge Video and Language, arXiv:1505.01861.


UT / UML / Berkeley

Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence -- Video to Text, arXiv:1505.00487.


Univ. of Montreal / Sherbrooke

Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, Describing Videos by Exploiting Temporal Structure, arXiv:1502.08029


MPI / Berkeley

Anna Rohrbach, Marcus Rohrbach, Bernt Schiele, The Long-Short Story of Movie Description, arXiv:1506.01698


Univ. of Toronto / MIT

Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, arXiv:1506.06724


Univ. of Montreal

Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053


TAU / USC

Dotan Kaufman, Gil Levi, Tal Hassner, Lior Wolf, Temporal Tessellation for Video Annotation and Summarization, arXiv:1612.06950.



Image Generation


Convolutional / Recurrent Networks


Paper: Conditional Image Generation with PixelCNN Decoders

Authors: Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, Koray Kavukcuoglu


Paper: Learning to Generate Chairs with Convolutional Neural Networks

Authors: Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox

Published: CVPR, 2015.


Paper: DRAW: A Recurrent Neural Network For Image Generation

Authors: Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra

Published: ICML, 2015.


Adversarial Networks


Paper: Generative Adversarial Networks

Authors: Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio

Published: NIPS, 2014.
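The GAN objective from the paper can be written down directly. Below is an illustrative pure-Python sketch of the two losses (the function names are ours), including the "non-saturating" generator loss the paper recommends in practice:

```python
import math

def d_loss(d_real, d_fake):
    """Discriminator side of the GAN minimax game: maximize
    log D(x) + log(1 - D(G(z))), written here as a loss to minimize."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss_nonsat(d_fake):
    """Non-saturating generator loss used in practice:
    maximize log D(G(z)) instead of minimizing log(1 - D(G(z)))."""
    return -math.log(d_fake)

# An undecided discriminator (D = 0.5 everywhere) sits at 2*log(2);
# separating real from fake (e.g. D(x)=0.9, D(G(z))=0.1) lowers its loss.
```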


Paper: Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

Authors: Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus

Published: NIPS, 2015.


Paper: A Note on the Evaluation of Generative Models

Authors: Lucas Theis, Aäron van den Oord, Matthias Bethge

Published: ICLR 2016.


Paper: Variationally Auto-Encoded Deep Gaussian Processes

Authors: Zhenwen Dai, Andreas Damianou, Javier Gonzalez, Neil Lawrence

Published: ICLR 2016.


Paper: Generating Images from Captions with Attention

Authors: Elman Mansimov, Emilio Parisotto, Jimmy Ba, Ruslan Salakhutdinov

Published: ICLR 2016


Paper: Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks

Authors: Jost Tobias Springenberg

Published: ICLR 2016


Paper: Censoring Representations with an Adversary

Authors: Harrison Edwards, Amos Storkey

Published: ICLR 2016


Paper: Distributional Smoothing with Virtual Adversarial Training

Authors: Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii

Published: ICLR 2016


Paper: Generative Visual Manipulation on the Natural Image Manifold

Authors: Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros

Published: ECCV 2016.


Paper: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

Authors: Alec Radford, Luke Metz, Soumith Chintala

Published: ICLR 2016


Visual Question Answering


Figure: from Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR, 2015, SUNw: Scene Understanding workshop.


Virginia Tech / Microsoft Research


Paper: VQA: Visual Question Answering, CVPR, 2015, SUNw: Scene Understanding workshop.

Authors: Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh


MPI / Berkeley


Paper: Ask Your Neurons: A Neural-based Approach to Answering Questions about Images

Authors: Mateusz Malinowski, Marcus Rohrbach, Mario Fritz

Published: arXiv:1505.01121.


Toronto


Paper: Image Question Answering: A Visual Semantic Embedding Model and a New Dataset

Authors: Mengye Ren, Ryan Kiros, Richard Zemel

Published: arXiv:1505.02074 / ICML 2015 deep learning workshop.


Baidu / UCLA


Paper: Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering

Authors: Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu

Published: arXiv:1505.05612.


POSTECH (Korea)


Paper: Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction

Authors: Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han

Published: arXiv:1511.05765


CMU / Microsoft Research


Paper: Stacked Attention Networks for Image Question Answering

Authors: Yang, Z., He, X., Gao, J., Deng, L., & Smola, A. (2015)

Published: arXiv:1511.02274.


MetaMind


Paper: Dynamic Memory Networks for Visual and Textual Question Answering

Authors: Xiong, Caiming, Stephen Merity, and Richard Socher

Published: arXiv:1603.01417 (2016).


Seoul National University + NAVER


Paper: Multimodal Residual Learning for Visual QA

Authors: Jin-Hwa Kim, Sang-Woo Lee, Dong-Hyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang

Published: arXiv:1606.01455


UC Berkeley + Sony


Paper: Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding

Authors: Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach

Published: arXiv:1606.01847


POSTECH


Paper: Training Recurrent Answering Units with Joint Loss Minimization for VQA

Authors: Hyeonwoo Noh and Bohyung Han

Published: arXiv:1606.03647


Seoul National University + NAVER


Paper: Hadamard Product for Low-rank Bilinear Pooling

Authors: Jin-Hwa Kim, Kyoung Woon On, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang

Published: arXiv:1610.04325.




Visual Attention and Saliency


Mr-CNN

Paper: Predicting Eye Fixations using Convolutional Neural Networks

Authors: Nian Liu, Junwei Han, Dingwen Zhang, Shifeng Wen, Tianming Liu

Published: CVPR, 2015.


Paper: Learning a Sequential Search for Landmarks

Authors: Saurabh Singh, Derek Hoiem, David Forsyth

Published: CVPR, 2015.


Paper: Multiple Object Recognition with Visual Attention

Authors: Jimmy Lei Ba, Volodymyr Mnih, Koray Kavukcuoglu

Published: ICLR, 2015.


Paper: Recurrent Models of Visual Attention

Authors: Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu

Published: NIPS, 2014.


Super-Resolution


Iterative Image Reconstruction

Sven Behnke: Learning Iterative Image Reconstruction. IJCAI, 2001. 

Sven Behnke: Learning Iterative Image Reconstruction in the Neural Abstraction Pyramid. International Journal of Computational Intelligence and Applications, vol. 1, no. 4, pp. 427-438, 2001. 

Super-Resolution (SRCNN) 

Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Learning a Deep Convolutional Network for Image Super-Resolution, ECCV, 2014.

Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Image Super-Resolution Using Deep Convolutional Networks, arXiv:1501.00092.

Very Deep Super-Resolution

Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Accurate Image Super-Resolution Using Very Deep Convolutional Networks, arXiv:1511.04587, 2015. 

Deeply-Recursive Convolutional Network

Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Deeply-Recursive Convolutional Network for Image Super-Resolution, arXiv:1511.04491, 2015. 

Cascade-Sparse-Coding-Network

Zhaowen Wang, Ding Liu, Wei Han, Jianchao Yang and Thomas S. Huang, Deep Networks for Image Super-Resolution with Sparse Prior. ICCV, 2015.

Perceptual Losses for Super-Resolution

Justin Johnson, Alexandre Alahi, Li Fei-Fei, Perceptual Losses for Real-Time Style Transfer and Super-Resolution, arXiv:1603.08155, 2016.

SRGAN

Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi, Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, arXiv:1609.04802v3, 2016.


Other Applications


Optical Flow (FlowNet)

Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox, FlowNet: Learning Optical Flow with Convolutional Networks, arXiv:1504.06852.

Compression Artifacts Reduction 

Chao Dong, Yubin Deng, Chen Change Loy, Xiaoou Tang, Compression Artifacts Reduction by a Deep Convolutional Network, arXiv:1504.06993.

Blur Removal

Christian J. Schuler, Michael Hirsch, Stefan Harmeling, Bernhard Schölkopf, Learning to Deblur, arXiv:1406.7444

Jian Sun, Wenfei Cao, Zongben Xu, Jean Ponce, Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal, CVPR, 2015

Image Deconvolution

Li Xu, Jimmy SJ. Ren, Ce Liu, Jiaya Jia, Deep Convolutional Neural Network for Image Deconvolution, NIPS, 2014.

Deep Edge-Aware Filter

Li Xu, Jimmy SJ. Ren, Qiong Yan, Renjie Liao, Jiaya Jia, Deep Edge-Aware Filters, ICML, 2015.

Computing the Stereo Matching Cost with a Convolutional Neural Network

Jure Žbontar, Yann LeCun, Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR, 2015.

Colorful Image Colorization

Richard Zhang, Phillip Isola, Alexei A. Efros, Colorful Image Colorization, ECCV, 2016

Feature Learning by Inpainting

Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros, Context Encoders: Feature Learning by Inpainting, CVPR, 2016






Semantic Segmentation


Image source: the BoxSup paper


SEC: Seed, Expand and Constrain

Alexander Kolesnikov, Christoph Lampert, Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation, ECCV, 2016.

Adelaide

Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Efficient piecewise training of deep structured models for semantic segmentation, arXiv:1504.01013. (1st ranked in VOC2012)

Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Deeply Learning the Messages in Message Passing Inference, arXiv:1508.02108. (4th ranked in VOC2012)

Deep Parsing Network (DPN)

Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang, Semantic Image Segmentation via Deep Parsing Network, arXiv:1509.02634 / ICCV 2015 (2nd ranked in VOC 2012)

BoxSup 

Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640. (6th ranked in VOC2012)

POSTECH

Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, Learning Deconvolution Network for Semantic Segmentation, arXiv:1505.04366. (7th ranked in VOC2012)

Seunghoon Hong, Hyeonwoo Noh, Bohyung Han, Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation, arXiv:1506.04924. 

Seunghoon Hong,Junhyuk Oh, Bohyung Han, and Honglak Lee, Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network, arXiv:1512.07928

Conditional Random Fields as Recurrent Neural Networks 

Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240. (8th ranked in VOC2012)

DeepLab

Liang-Chieh Chen, George Papandreou, Kevin Murphy, Alan L. Yuille, Weakly-and semi-supervised learning of a DCNN for semantic image segmentation, arXiv:1502.02734. (9th ranked in VOC2012)

Zoom-out 

Mohammadreza Mostajabi, Payman Yadollahpour, Gregory Shakhnarovich, Feedforward Semantic Segmentation With Zoom-Out Features, CVPR, 2015

Joint Calibration

Holger Caesar, Jasper Uijlings, Vittorio Ferrari, Joint Calibration for Semantic Segmentation, arXiv:1507.01581.

Fully Convolutional Networks for Semantic Segmentation  

Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.
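The standard VOC-style metric behind the rankings quoted in this section is mean intersection-over-union over classes. A minimal sketch on flat label lists (the helper name is ours; real evaluations operate on full label maps):

```python
def mean_iou(pred, gt, num_classes):
    """Mean IoU over classes: for each class, intersection of predicted
    and ground-truth pixels over their union, averaged over the classes
    that appear. `pred` and `gt` are flat lists of per-pixel labels."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both prediction and truth
            ious.append(inter / union)
    return sum(ious) / len(ious)
```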

Hypercolumn

Bharath Hariharan, Pablo Arbelaez, Ross Girshick, Jitendra Malik, Hypercolumns for Object Segmentation and Fine-Grained Localization, CVPR, 2015.

Deep Hierarchical Parsing

Abhishek Sharma, Oncel Tuzel, David W. Jacobs, Deep Hierarchical Parsing for Semantic Segmentation, CVPR, 2015. 

Learning Hierarchical Features for Scene Labeling

Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.

Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.

University of Cambridge

Alex Kendall, Vijay Badrinarayanan and Roberto Cipolla "Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding." arXiv preprint arXiv:1511.02680, 2015.

Univ. of Washington, Allen AI

Hamid Izadinia, Fereshteh Sadeghi, Santosh Kumar Divvala, Yejin Choi, Ali Farhadi, "Segment-Phrase Table for Semantic Segmentation, Visual Entailment and Paraphrasing", ICCV, 2015

UCSB

Niloufar Pourian, S. Karthikeyan, and B.S. Manjunath, "Weakly supervised graph based semantic segmentation by learning communities of image-parts", ICCV, 2015

Courses


• Deep Vision


[Stanford] CS231n: Convolutional Neural Networks for Visual Recognition

[CUHK] ELEG 5040: Advanced Topics in Signal Processing (Introduction to Deep Learning)


• More deep learning courses


[Stanford] CS224d: Deep Learning for Natural Language Processing

[Oxford] Deep Learning by Prof. Nando de Freitas

[NYU] Deep Learning by Prof. Yann LeCun


Books


Free online books

◦ Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville

◦ Neural Networks and Deep Learning by Michael Nielsen

◦ Deep Learning Tutorial by the LISA lab, University of Montreal


Videos


Talks

◦ Deep Learning, Self-Taught Learning and Unsupervised Feature Learning by Andrew Ng

◦ Recent Developments in Deep Learning by Geoff Hinton

◦ The Unreasonable Effectiveness of Deep Learning by Yann LeCun

◦ Deep Learning of Representations by Yoshua Bengio


Software


Frameworks

• TensorFlow: An open-source software library for numerical computation using data flow graphs, by Google [Web]

• Torch7: Deep learning library in Lua, used by Facebook and Google DeepMind [Web]

◦ Torch-based deep learning libraries: [torchnet]

• Caffe: Deep learning framework by the BVLC [Web]

• Theano: Mathematical library in Python, maintained by the LISA lab [Web]

◦ Theano-based deep learning libraries: [Pylearn2], [Blocks], [Keras], [Lasagne]

• MatConvNet: CNNs for MATLAB [Web]

• MXNet: A flexible and efficient deep learning library for heterogeneous distributed systems with multi-language support [Web]

• Deepgaze: A computer vision library for human-computer interaction based on CNNs [Web]


Applications


• Adversarial training: Code and hyperparameters for the paper "Generative Adversarial Networks" [Web]

• Understanding and visualization: Source code for "Understanding Deep Image Representations by Inverting Them," CVPR, 2015. [Web]

• Semantic segmentation: Source code for the paper "Rich feature hierarchies for accurate object detection and semantic segmentation," CVPR, 2014. [Web]; Source code for the paper "Fully Convolutional Networks for Semantic Segmentation," CVPR, 2015. [Web]

• Super-resolution: Image Super-Resolution for Anime-Style Art [Web]

• Edge detection: Source code for the paper "DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection," CVPR, 2015. [Web]; Source code for the paper "Holistically-Nested Edge Detection," ICCV 2015. [Web]


Tutorials


• [CVPR 2014] Tutorial on Deep Learning in Computer Vision

• [CVPR 2015] Applied Deep Learning for Computer Vision with Torch


Blogs


• Deep down the rabbit hole: CVPR 2015 and beyond@Tombone's Computer Vision Blog

• CVPR recap and where we're going@Zoya Bylinskii (MIT PhD Student)'s Blog

• Facebook's AI Painting@Wired

• Inceptionism: Going Deeper into Neural Networks@Google Research

• Implementing Neural Networks




