[A Roundup of the Top 100 Papers That Shaped Computer Vision] From ResNet to AlexNet

2021-01-11 Hexun.com


Compiled by AI Era (Xinzhiyuan). Source: GitHub. Edited by the AI Era editorial team.

[AI Era Editor's Note] Computer vision has advanced rapidly in recent years and represents one of the frontiers of deep learning research. This article surveys the major milestones in computer vision from 2012 to 2017, focusing on papers and other practical resources, with links included. It covers more than a hundred papers, organized into directions such as ImageNet classification, object detection, object tracking, object recognition, images and language, and image generation. In February this year, AI Era introduced the 100 most-cited deep learning papers of the past five years, covering ten sub-areas including optimization/training methods, unsupervised/generative models, convolutional network models, and image segmentation/object detection.

That list of the 100 most-cited deep learning papers is an open-source project on GitHub to which any community member can contribute. Through it we found another project, Deep Vision, a collection of computer vision resources that gathers the papers, books, and blogs that have had the greatest impact on the field in recent years. In its papers section, the authors likewise organize the material into directions such as ImageNet classification, object detection, object tracking, object recognition, images and language, and image generation.

Classic Papers

ImageNet Classification

Object Detection

Object Tracking

Low-Level Vision

Super-Resolution

Other Applications

Edge Detection

Semantic Segmentation

Visual Attention and Saliency

Object Recognition

Human Pose Estimation

Understanding CNNs (principles and properties)

Images and Language

Image Captioning

Video Captioning

Question Answering

Image Generation

Above is a heat map built from these papers, their authors, and their institutions.

ImageNet Classification

Image source: the AlexNet paper

Microsoft ResNet

Paper: Deep Residual Learning for Image Recognition

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link (copy and paste into a browser to view): http://arxiv.org/pdf/1512.03385v1.pdf
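
The core idea of ResNet is the residual block: a small stack of convolutions learns a residual F(x) that is added back to its input through a shortcut connection, so very deep networks remain trainable. Below is a minimal sketch of a basic two-layer residual block in PyTorch; the class name and the simplifying assumption that input and output channel counts match are ours, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """Minimal sketch of a ResNet basic block: out = relu(F(x) + x)."""

    def __init__(self, channels):
        super().__init__()
        # Two 3x3 convolutions with batch norm, keeping the spatial size unchanged
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        residual = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
        return F.relu(residual + x)  # identity shortcut

# Example: a 64-channel feature map keeps its shape through the block
block = BasicResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```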

Microsoft PReLU (Parametric Rectified Linear Unit / weight initialization)

Paper: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link: http://arxiv.org/pdf/1502.01852.pdf
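
PReLU generalizes ReLU by learning a slope a for negative inputs, f(x) = max(0, x) + a·min(0, x); the same paper also introduces the "He" weight initialization. A minimal sketch using the standard PyTorch building blocks (nn.PReLU and nn.init.kaiming_normal_) follows; the layer sizes are arbitrary examples of ours.

```python
import torch
import torch.nn as nn

# PReLU: f(x) = max(0, x) + a * min(0, x), with a learned slope per channel
prelu = nn.PReLU(num_parameters=64, init=0.25)
x = torch.randn(1, 64, 8, 8)
print(prelu(x).shape)  # torch.Size([1, 64, 8, 8])

# "He" (Kaiming) initialization from the same paper, intended for ReLU-family
# activations: weights drawn from N(0, 2 / fan_in)
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
nn.init.kaiming_normal_(conv.weight, nonlinearity='relu')
```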

Google Batch Normalization

Paper: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Authors: Sergey Ioffe, Christian Szegedy

Link: http://arxiv.org/pdf/1502.03167.pdf
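
Batch normalization standardizes each channel over the mini-batch, x̂ = (x − μ_B) / sqrt(σ_B² + ε), then applies a learned scale γ and shift β. Below is a minimal NumPy sketch of the training-time computation for a batch of feature maps; the function and variable names are ours (at inference time, running averages of the batch statistics are used instead).

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Training-time batch norm over an NCHW batch: normalize per channel."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)   # per-channel batch mean
    var = x.var(axis=(0, 2, 3), keepdims=True)     # per-channel batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)        # standardize
    return gamma * x_hat + beta                    # learned scale and shift

x = np.random.randn(8, 16, 4, 4)                   # batch of 8 samples, 16 channels
gamma = np.ones((1, 16, 1, 1))
beta = np.zeros((1, 16, 1, 1))
out = batch_norm_train(x, gamma, beta)
print(out.mean(axis=(0, 2, 3)).round(3))           # ≈ 0 for every channel
```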

Google GoogLeNet

Paper: Going Deeper with Convolutions, CVPR 2015

Authors: Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich

Link: http://arxiv.org/pdf/1409.4842.pdf

Oxford VGG-Net

Paper: Very Deep Convolutional Networks for Large-Scale Image Recognition, ICLR 2015

Authors: Karen Simonyan & Andrew Zisserman

Link: http://arxiv.org/pdf/1409.1556.pdf

AlexNet

Paper: ImageNet Classification with Deep Convolutional Neural Networks

Authors: Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton

Link: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

Object Detection

Image source: the Faster R-CNN paper

PVANET

Paper: PVANET: Deep but Lightweight Neural Networks for Real-time Object Detection

Authors: Kye-Hyeon Kim, Sanghoon Hong, Byungseok Roh, Yeongjae Cheon, Minje Park

Link: http://arxiv.org/pdf/1608.08021

NYU OverFeat

Paper: OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks, ICLR 2014

Authors: Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun

Link: http://arxiv.org/pdf/1312.6229.pdf

Berkeley R-CNN

Paper: Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, CVPR 2014

Authors: Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf

Microsoft SPP

Paper: Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition, ECCV 2014

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link: http://arxiv.org/pdf/1406.4729.pdf

Microsoft Fast R-CNN

Paper: Fast R-CNN

Author: Ross Girshick

Link: http://arxiv.org/pdf/1504.08083.pdf

Microsoft Faster R-CNN

Paper: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

Authors: Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun

Link: http://arxiv.org/pdf/1506.01497.pdf

Oxford R-CNN minus R

Paper: R-CNN minus R

Authors: Karel Lenc, Andrea Vedaldi

Link: http://arxiv.org/pdf/1506.06981.pdf

End-to-End People Detection

Paper: End-to-end People Detection in Crowded Scenes

Authors: Russell Stewart, Mykhaylo Andriluka

Link: http://arxiv.org/pdf/1506.04878.pdf

Real-Time Object Detection

Paper: You Only Look Once: Unified, Real-Time Object Detection

Authors: Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi

Link: http://arxiv.org/pdf/1506.02640.pdf

Inside-Outside Net

Paper: Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks

Authors: Sean Bell, C. Lawrence Zitnick, Kavita Bala, Ross Girshick

Link: http://arxiv.org/abs/1512.04143.pdf

Microsoft ResNet

Paper: Deep Residual Learning for Image Recognition

Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Link: http://arxiv.org/pdf/1512.03385v1.pdf

R-FCN

Paper: R-FCN: Object Detection via Region-based Fully Convolutional Networks

Authors: Jifeng Dai, Yi Li, Kaiming He, Jian Sun

Link: http://arxiv.org/abs/1605.06409

SSD

Paper: SSD: Single Shot MultiBox Detector

Authors: Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg

Link: http://arxiv.org/pdf/1512.02325v2.pdf

Speed/Accuracy Trade-offs

Paper: Speed/accuracy trade-offs for modern convolutional object detectors

Authors: Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy

Link: http://arxiv.org/pdf/1611.10012v1.pdf
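
All of the detectors above produce scored bounding boxes that are normally post-processed with intersection-over-union (IoU) matching and non-maximum suppression (NMS). As a reference point, here is a minimal NumPy sketch of IoU and greedy NMS; the 0.5 threshold is an illustrative default of ours, not a value prescribed by any of these papers.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop heavily overlapping ones."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        order = np.array([i for i in rest if iou(boxes[best], boxes[i]) < iou_threshold])
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2] -- the second box is suppressed
```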

Object Tracking

Paper: Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network

Authors: Seunghoon Hong, Tackgeun You, Suha Kwak, Bohyung Han

Link: arXiv:1502.06796.

Paper: DeepTrack: Learning Discriminative Feature Representations by Convolutional Neural Networks for Visual Tracking

Authors: Hanxi Li, Yi Li and Fatih Porikli

Published in: BMVC, 2014.

Paper: Learning a Deep Compact Image Representation for Visual Tracking

Authors: N Wang, DY Yeung

Published in: NIPS, 2013.

Paper: Hierarchical Convolutional Features for Visual Tracking

Authors: Chao Ma, Jia-Bin Huang, Xiaokang Yang and Ming-Hsuan Yang

Published in: ICCV 2015

Paper: Visual Tracking with Fully Convolutional Networks

Authors: Lijun Wang, Wanli Ouyang, Xiaogang Wang, and Huchuan Lu

Published in: ICCV 2015

Paper: Learning Multi-Domain Convolutional Neural Networks for Visual Tracking

Authors: Hyeonseob Nam and Bohyung Han

Object Recognition

Paper: Weakly-supervised learning with convolutional neural networks

Authors: Maxime Oquab, Leon Bottou, Ivan Laptev, Josef Sivic, CVPR, 2015

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Oquab_Is_Object_Localization_2015_CVPR_paper.pdf

FV-CNN

Paper: Deep Filter Banks for Texture Recognition and Segmentation

Authors: Mircea Cimpoi, Subhransu Maji, Andrea Vedaldi, CVPR, 2015.

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Cimpoi_Deep_Filter_Banks_2015_CVPR_paper.pdf

Human Pose Estimation

Paper: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields

Authors: Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh, CVPR, 2017.

Paper: DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation

Authors: Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele, CVPR, 2016.

Paper: Convolutional Pose Machines

Authors: Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh, CVPR, 2016.

Paper: Stacked Hourglass Networks for Human Pose Estimation

Authors: Alejandro Newell, Kaiyu Yang, and Jia Deng, ECCV, 2016.

Paper: Flowing ConvNets for Human Pose Estimation in Videos

Authors: Tomas Pfister, James Charles, and Andrew Zisserman, ICCV, 2015.

Paper: Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation

Authors: Jonathan J. Tompson, Arjun Jain, Yann LeCun, Christoph Bregler, NIPS, 2014.

Understanding CNNs

Figure: (from Aravindh Mahendran, Andrea Vedaldi, Understanding Deep Image Representations by Inverting Them, CVPR, 2015.)

Paper: Understanding image representations by measuring their equivariance and equivalence

Authors: Karel Lenc, Andrea Vedaldi, CVPR, 2015.

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Lenc_Understanding_Image_Representations_2015_CVPR_paper.pdf

Paper: Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Authors: Anh Nguyen, Jason Yosinski, Jeff Clune, CVPR, 2015.

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.pdf

Paper: Understanding Deep Image Representations by Inverting Them

Authors: Aravindh Mahendran, Andrea Vedaldi, CVPR, 2015

Link: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Mahendran_Understanding_Deep_Image_2015_CVPR_paper.pdf

Paper: Object Detectors Emerge in Deep Scene CNNs

Authors: Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba, ICLR, 2015.

Link: http://arxiv.org/abs/1412.6856

Paper: Inverting Visual Representations with Convolutional Networks

Authors: Alexey Dosovitskiy, Thomas Brox, arXiv, 2015.

Link: http://arxiv.org/abs/1506.02753

Paper: Visualizing and Understanding Convolutional Networks

Authors: Matthew Zeiler, Rob Fergus, ECCV, 2014.

Link: http://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf

Images and Language

Image Captioning

Figure: (from Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions, CVPR, 2015.)

UCLA / Baidu

Explain Images with Multimodal Recurrent Neural Networks

Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille, arXiv:1410.1090

http://arxiv.org/pdf/1410.1090

Toronto

Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models

Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, arXiv:1411.2539.

http://arxiv.org/pdf/1411.2539

Berkeley

Long-term Recurrent Convolutional Networks for Visual Recognition and Description

Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, arXiv:1411.4389.

http://arxiv.org/pdf/1411.4389

Google

Show and Tell: A Neural Image Caption Generator

Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, arXiv:1411.4555.

http://arxiv.org/pdf/1411.4555

Stanford

Deep Visual-Semantic Alignments for Generating Image Descriptions

Andrej Karpathy, Li Fei-Fei, CVPR, 2015.

Web:http://cs.stanford.edu/people/karpathy/deepimagesent/

Paper:http://cs.stanford.edu/people/karpathy/cvpr2015.pdf

UML / UT

Translating Videos to Natural Language Using Deep Recurrent Neural Networks

Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, NAACL-HLT, 2015.

http://arxiv.org/pdf/1412.4729

CMU / Microsoft

Learning a Recurrent Visual Representation for Image Caption Generation

Xinlei Chen, C. Lawrence Zitnick, arXiv:1411.5654.

Xinlei Chen, C. Lawrence Zitnick, Mind’s Eye: A Recurrent Visual Representation for Image Caption Generation, CVPR 2015

http://www.cs.cmu.edu/~xinleic/papers/cvpr15_rnn.pdf

Microsoft

From Captions to Visual Concepts and Back

Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, CVPR, 2015.

http://arxiv.org/pdf/1411.4952

Univ. Montreal / Univ. Toronto

Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio, arXiv:1502.03044 / ICML 2015

http://www.cs.toronto.edu/~zemel/documents/captionAttn.pdf

Idiap / EPFL / Facebook

Phrase-based Image Captioning

Remi Lebret, Pedro O. Pinheiro, Ronan Collobert, arXiv:1502.03671 / ICML 2015

http://arxiv.org/pdf/1502.03671

UCLA / Baidu

Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images

Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan L. Yuille, arXiv:1504.06692

http://arxiv.org/pdf/1504.06692

MS + Berkeley

Exploring Nearest Neighbor Approaches for Image Captioning

Jacob Devlin, Saurabh Gupta, Ross Girshick, Margaret Mitchell, C. Lawrence Zitnick, arXiv:1505.04467

http://arxiv.org/pdf/1505.04467.pdf

Language Models for Image Captioning: The Quirks and What Works

Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell, arXiv:1505.01809

http://arxiv.org/pdf/1505.01809.pdf

Adelaide

Image Captioning with an Intermediate Attributes Layer

Qi Wu, Chunhua Shen, Anton van den Hengel, Lingqiao Liu, Anthony Dick, arXiv:1506.01144

Tilburg

Learning language through pictures

Grzegorz Chrupala, Akos Kadar, Afra Alishahi, arXiv:1506.03694

Univ. Montreal

Describing Multimedia Content using Attention-based Encoder-Decoder Networks

Kyunghyun Cho, Aaron Courville, Yoshua Bengio, arXiv:1507.01053

Cornell

Image Representations and New Domains in Neural Image Captioning

Jack Hessel, Nicolas Savva, Michael J. Wilber, arXiv:1508.02091

MS + City Univ. of HongKong

Learning Query and Image Similarities with Ranking Canonical Correlation Analysis

Ting Yao, Tao Mei, and Chong-Wah Ngo, ICCV, 2015

Video Captioning

Berkeley

Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR, 2015.

UT / UML / Berkeley

Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729.

Microsoft

Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, Joint Modeling Embedding and Translation to Bridge Video and Language, arXiv:1505.01861.

UT / UML / Berkeley

Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence--Video to Text, arXiv:1505.00487.

Univ. Montreal / Univ. Sherbrooke

Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, Describing Videos by Exploiting Temporal Structure, arXiv:1502.08029

MPI / Berkeley

Anna Rohrbach, Marcus Rohrbach, Bernt Schiele, The Long-Short Story of Movie Description, arXiv:1506.01698

Univ. Toronto / MIT

Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, arXiv:1506.06724

Univ. Montreal

Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053

TAU / USC

Dotan Kaufman, Gil Levi, Tal Hassner, Lior Wolf, Temporal Tessellation for Video Annotation and Summarization, arXiv:1612.06950.

Image Generation

Convolutional / Recurrent Networks

Paper: Conditional Image Generation with PixelCNN Decoders

Authors: Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, Koray Kavukcuoglu

Paper: Learning to Generate Chairs with Convolutional Neural Networks

Authors: Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox

Published in: CVPR, 2015.

Paper: DRAW: A Recurrent Neural Network For Image Generation

Authors: Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra

Published in: ICML, 2015.

Adversarial Networks

Paper: Generative Adversarial Networks

Authors: Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio

Published in: NIPS, 2014.
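
In a GAN, a generator G maps noise z to samples while a discriminator D is trained to tell real data from G(z); the two are optimized adversarially, min_G max_D E[log D(x)] + E[log(1 − D(G(z)))]. Below is a minimal, illustrative PyTorch sketch of one training step on toy 2-D vectors rather than images; the tiny MLPs, dimensions, and the non-saturating generator loss are our placeholder choices, not the exact setup of the paper.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator over 2-D data (illustrative sizes only)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, 2)   # stand-in for a batch of real data
z = torch.randn(64, 8)      # noise fed to the generator

# Discriminator step: push real samples toward label 1, generated ones toward 0
opt_d.zero_grad()
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(G(z).detach()), torch.zeros(64, 1))
loss_d.backward()
opt_d.step()

# Generator step: try to make D label fakes as real (non-saturating loss)
opt_g.zero_grad()
loss_g = bce(D(G(z)), torch.ones(64, 1))
loss_g.backward()
opt_g.step()
```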

Paper: Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

Authors: Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus

Published in: NIPS, 2015.

Paper: A Note on the Evaluation of Generative Models

Authors: Lucas Theis, Aäron van den Oord, Matthias Bethge

Published in: ICLR 2016.

Paper: Variationally Auto-Encoded Deep Gaussian Processes

Authors: Zhenwen Dai, Andreas Damianou, Javier Gonzalez, Neil Lawrence

Published in: ICLR 2016.

Paper: Generating Images from Captions with Attention

Authors: Elman Mansimov, Emilio Parisotto, Jimmy Ba, Ruslan Salakhutdinov

Published in: ICLR 2016

Paper: Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks

Author: Jost Tobias Springenberg

Published in: ICLR 2016

Paper: Censoring Representations with an Adversary

Authors: Harrison Edwards, Amos Storkey

Published in: ICLR 2016

Paper: Distributional Smoothing with Virtual Adversarial Training

Authors: Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii

Published in: ICLR 2016

Paper: Generative Visual Manipulation on the Natural Image Manifold

Authors: Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros

Published in: ECCV 2016.

Paper: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

Authors: Alec Radford, Luke Metz, Soumith Chintala

Published in: ICLR 2016

Question Answering

Figure: (from Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR, 2015 SUNw: Scene Understanding workshop)

Virginia Tech / Microsoft Research

Paper: VQA: Visual Question Answering, CVPR, 2015 SUNw: Scene Understanding workshop.

Authors: Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh

MPI / Berkeley

Paper: Ask Your Neurons: A Neural-based Approach to Answering Questions about Images

Authors: Mateusz Malinowski, Marcus Rohrbach, Mario Fritz

Published: arXiv:1505.01121.

Toronto

Paper: Image Question Answering: A Visual Semantic Embedding Model and a New Dataset

Authors: Mengye Ren, Ryan Kiros, Richard Zemel

Published: arXiv:1505.02074 / ICML 2015 deep learning workshop.

Baidu / UCLA

Paper: Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering

Authors: Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu

Published: arXiv:1505.05612.

POSTECH (Korea)

Paper: Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction

Authors: Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han

Published: arXiv:1511.05765

CMU / Microsoft Research

Paper: Stacked Attention Networks for Image Question Answering

Authors: Yang, Z., He, X., Gao, J., Deng, L., & Smola, A. (2015)

Published: arXiv:1511.02274.

MetaMind

Paper: Dynamic Memory Networks for Visual and Textual Question Answering

Authors: Xiong, Caiming, Stephen Merity, and Richard Socher

Published: arXiv:1603.01417 (2016).

Seoul National University + NAVER

Paper: Multimodal Residual Learning for Visual QA

Authors: Jin-Hwa Kim, Sang-Woo Lee, Dong-Hyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang

Published: arXiv:1606.01455

UC Berkeley + Sony

Paper: Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding

Authors: Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach

Published: arXiv:1606.01847

POSTECH

Paper: Training Recurrent Answering Units with Joint Loss Minimization for VQA

Authors: Hyeonwoo Noh and Bohyung Han

Published: arXiv:1606.03647

Seoul National University + NAVER

Paper: Hadamard Product for Low-rank Bilinear Pooling

Authors: Jin-Hwa Kim, Kyoung Woon On, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang

Published: arXiv:1610.04325.

Visual Attention and Saliency

Mr-CNN

Paper: Predicting Eye Fixations using Convolutional Neural Networks

Authors: Nian Liu, Junwei Han, Dingwen Zhang, Shifeng Wen, Tianming Liu

Published in: CVPR, 2015.

Learning a Sequential Search for Landmarks

Paper: Learning a Sequential Search for Landmarks

Authors: Saurabh Singh, Derek Hoiem, David Forsyth

Published in: CVPR, 2015.

Multiple Object Recognition with Visual Attention

Paper: Multiple Object Recognition with Visual Attention

Authors: Jimmy Lei Ba, Volodymyr Mnih, Koray Kavukcuoglu

Published in: ICLR, 2015.

Recurrent Models of Visual Attention

Paper: Recurrent Models of Visual Attention

Authors: Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu

Published in: NIPS, 2014.

Low-Level Vision

Super-Resolution

Iterative Image Reconstruction

Sven Behnke: Learning Iterative Image Reconstruction. IJCAI, 2001.

Sven Behnke: Learning Iterative Image Reconstruction in the Neural Abstraction Pyramid. International Journal of Computational Intelligence and Applications, vol. 1, no. 4, pp. 427-438, 2001.

Super-Resolution (SRCNN)

Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Learning a Deep Convolutional Network for Image Super-Resolution, ECCV, 2014.

Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Image Super-Resolution Using Deep Convolutional Networks, arXiv:1501.00092.
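
SRCNN upsamples a low-resolution image with bicubic interpolation and then refines it with just three convolutional layers (patch extraction, non-linear mapping, reconstruction). A minimal PyTorch sketch of that three-layer pipeline follows; the 9-1-5 kernel sizes and 64/32 channel counts follow a commonly cited SRCNN configuration, but treat the details here as illustrative rather than an exact reproduction of the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Three-layer super-resolution CNN applied to a bicubic-upsampled image."""

    def __init__(self, channels=1):
        super().__init__()
        self.extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)   # patch extraction
        self.map = nn.Conv2d(64, 32, kernel_size=1)                        # non-linear mapping
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, low_res, scale=2):
        # Upsample first, then let the network restore high-frequency detail
        x = F.interpolate(low_res, scale_factor=scale, mode='bicubic', align_corners=False)
        x = F.relu(self.extract(x))
        x = F.relu(self.map(x))
        return self.reconstruct(x)

sr = SRCNN()
print(sr(torch.randn(1, 1, 16, 16)).shape)  # torch.Size([1, 1, 32, 32])
```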

Very Deep Super-Resolution

Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Accurate Image Super-Resolution Using Very Deep Convolutional Networks, arXiv:1511.04587, 2015.

Deeply-Recursive Convolutional Network

Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Deeply-Recursive Convolutional Network for Image Super-Resolution, arXiv:1511.04491, 2015.

Cascade-Sparse-Coding-Network

Zhaowen Wang, Ding Liu, Wei Han, Jianchao Yang and Thomas S. Huang, Deep Networks for Image Super-Resolution with Sparse Prior. ICCV, 2015.

Perceptual Losses for Super-Resolution

Justin Johnson, Alexandre Alahi, Li Fei-Fei, Perceptual Losses for Real-Time Style Transfer and Super-Resolution, arXiv:1603.08155, 2016.

SRGAN

Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi, Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, arXiv:1609.04802v3, 2016.

Other Applications

Optical Flow (FlowNet)

Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox, FlowNet: Learning Optical Flow with Convolutional Networks, arXiv:1504.06852.

Compression Artifacts Reduction

Chao Dong, Yubin Deng, Chen Change Loy, Xiaoou Tang, Compression Artifacts Reduction by a Deep Convolutional Network, arXiv:1504.06993.

Blur Removal

Christian J. Schuler, Michael Hirsch, Stefan Harmeling, Bernhard Schölkopf, Learning to Deblur, arXiv:1406.7444

Jian Sun, Wenfei Cao, Zongben Xu, Jean Ponce, Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal, CVPR, 2015

Image Deconvolution

Li Xu, Jimmy SJ. Ren, Ce Liu, Jiaya Jia, Deep Convolutional Neural Network for Image Deconvolution, NIPS, 2014.

Deep Edge-Aware Filter

Li Xu, Jimmy SJ. Ren, Qiong Yan, Renjie Liao, Jiaya Jia, Deep Edge-Aware Filters, ICML, 2015.

Computing the Stereo Matching Cost with a Convolutional Neural Network

Jure Žbontar, Yann LeCun, Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR, 2015.

Colorful Image Colorization

Richard Zhang, Phillip Isola, Alexei A. Efros, Colorful Image Colorization, ECCV, 2016

Feature Learning by Inpainting

Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros, Context Encoders: Feature Learning by Inpainting, CVPR, 2016

Edge Detection

Holistically-Nested Edge Detection

Saining Xie, Zhuowen Tu, Holistically-Nested Edge Detection, arXiv:1504.06375.

DeepEdge

Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.

DeepContour

Wei Shen, Xinggang Wang, Yan Wang, Xiang Bai, Zhijiang Zhang, DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection, CVPR, 2015.

Semantic Segmentation

Image source: the BoxSup paper

SEC: Seed, Expand and Constrain

Alexander Kolesnikov, Christoph Lampert, Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation, ECCV, 2016.

Adelaide

Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Efficient piecewise training of deep structured models for semantic segmentation, arXiv:1504.01013. (1st ranked in VOC2012)

Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Deeply Learning the Messages in Message Passing Inference, arXiv:1508.02108. (4th ranked in VOC2012)

Deep Parsing Network (DPN)

Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang, Semantic Image Segmentation via Deep Parsing Network, arXiv:1509.02634 / ICCV 2015 (2nd ranked in VOC 2012)

CentraleSuperBoundaries, INRIA

Iasonas Kokkinos, Surpassing Humans in Boundary Detection using Deep Learning, arXiv:1411.07386 (4th ranked in VOC 2012)

BoxSup

Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640. (6th ranked in VOC2012)

POSTECH

Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, Learning Deconvolution Network for Semantic Segmentation, arXiv:1505.04366. (7th ranked in VOC2012)

Seunghoon Hong, Hyeonwoo Noh, Bohyung Han, Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation, arXiv:1506.04924.

Seunghoon Hong, Junhyuk Oh, Bohyung Han, and Honglak Lee, Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network, arXiv:1512.07928

Conditional Random Fields as Recurrent Neural Networks

Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240. (8th ranked in VOC2012)

DeepLab

Liang-Chieh Chen, George Papandreou, Kevin Murphy, Alan L. Yuille, Weakly-and semi-supervised learning of a DCNN for semantic image segmentation, arXiv:1502.02734. (9th ranked in VOC2012)

Zoom-out

Mohammadreza Mostajabi, Payman Yadollahpour, Gregory Shakhnarovich, Feedforward Semantic Segmentation With Zoom-Out Features, CVPR, 2015

Joint Calibration

Holger Caesar, Jasper Uijlings, Vittorio Ferrari, Joint Calibration for Semantic Segmentation, arXiv:1507.01581.

Fully Convolutional Networks for Semantic Segmentation

Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.
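
The FCN idea is to replace a classifier's fully connected layers with 1x1 convolutions so the network outputs a per-pixel class score map, which is then upsampled back to the input resolution. Here is a minimal PyTorch sketch of that pattern on top of a small convolutional stack; the toy backbone and the 21-class default are our placeholders, not the VGG-based models used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """Toy fully convolutional segmenter: downsample, 1x1-conv classify, upsample."""

    def __init__(self, num_classes=21):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for a pretrained backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)  # "fully conv" head

    def forward(self, x):
        h, w = x.shape[2:]
        scores = self.classifier(self.backbone(x))            # coarse per-class score map
        return F.interpolate(scores, size=(h, w), mode='bilinear', align_corners=False)

net = TinyFCN()
print(net(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 21, 64, 64])
```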

Hypercolumn

Bharath Hariharan, Pablo Arbelaez, Ross Girshick, Jitendra Malik, Hypercolumns for Object Segmentation and Fine-Grained Localization, CVPR, 2015.

Deep Hierarchical Parsing

Abhishek Sharma, Oncel Tuzel, David W. Jacobs, Deep Hierarchical Parsing for Semantic Segmentation, CVPR, 2015.

Learning Hierarchical Features for Scene Labeling

Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.

Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.

University of Cambridge

Vijay Badrinarayanan, Alex Kendall and Roberto Cipolla "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation." arXiv preprint arXiv:1511.00561, 2015.

Alex Kendall, Vijay Badrinarayanan and Roberto Cipolla "Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding." arXiv preprint arXiv:1511.02680, 2015.

Princeton

Fisher Yu, Vladlen Koltun, "Multi-Scale Context Aggregation by Dilated Convolutions", ICLR 2016

Univ. of Washington, Allen AI

Hamid Izadinia, Fereshteh Sadeghi, Santosh Kumar Divvala, Yejin Choi, Ali Farhadi, "Segment-Phrase Table for Semantic Segmentation, Visual Entailment and Paraphrasing", ICCV, 2015

INRIA

Iasonas Kokkinos, "Pusing the Boundaries of Boundary Detection Using deep Learning", ICLR 2016

UCSB

Niloufar Pourian, S. Karthikeyan, and B.S. Manjunath, "Weakly supervised graph based semantic segmentation by learning communities of image-parts", ICCV, 2015

Other Resources

Courses

· Deep Vision

[Stanford] CS231n: Convolutional Neural Networks for Visual Recognition

[CUHK] ELEG 5040: Advanced Topics in Signal Processing (Introduction to Deep Learning)

· More deep learning courses

[Stanford] CS224d: Deep Learning for Natural Language Processing

[Oxford] Deep Learning by Prof. Nando de Freitas

[NYU] Deep Learning by Prof. Yann LeCun

Books

Free online books

Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville

Neural Networks and Deep Learning by Michael Nielsen

Deep Learning Tutorial by LISA lab, University of Montreal

Videos

Talks

Deep Learning, Self-Taught Learning and Unsupervised Feature Learning By Andrew Ng

Recent Developments in Deep Learning By Geoff Hinton

The Unreasonable Effectiveness of Deep Learning by Yann LeCun

Deep Learning of Representations by Yoshua Bengio

Software

Frameworks

· Tensorflow: An open source software library for numerical computation using data flow graph by Google [Web]

· Torch7: Deep learning library in Lua, used by Facebook and Google Deepmind [Web]

Torch-based deep learning libraries: [torchnet],

· Caffe: Deep learning framework by the BVLC [Web]

· Theano: Mathematical library in Python, maintained by LISA lab [Web]

Theano-based deep learning libraries: [Pylearn2], [Blocks], [Keras], [Lasagne]

· MatConvNet: CNNs for MATLAB [Web]

· MXNet: A flexible and efficient deep learning library for heterogeneous distributed systems with multi-language support [Web]

· Deepgaze: A computer vision library for human-computer interaction based on CNNs [Web]

Applications

· Adversarial training: Code and hyperparameters for the paper "Generative Adversarial Networks" [Web]

· Understanding and visualization: Source code for "Understanding Deep Image Representations by Inverting Them," CVPR, 2015. [Web]

· Semantic segmentation: Source code for the paper "Rich feature hierarchies for accurate object detection and semantic segmentation," CVPR, 2014. [Web]; Source code for the paper "Fully Convolutional Networks for Semantic Segmentation," CVPR, 2015. [Web]

· Super-resolution: Image Super-Resolution for Anime-Style Art [Web]

· Edge detection: Source code for the paper "DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection," CVPR, 2015. [Web]; Source code for the paper "Holistically-Nested Edge Detection," ICCV 2015. [Web]

Tutorials

· [CVPR 2014] Tutorial on Deep Learning in Computer Vision

· [CVPR 2015] Applied Deep Learning for Computer Vision with Torch

Blogs

· Deep down the rabbit hole: CVPR 2015 and beyond@Tombone's Computer Vision Blog

· CVPR recap and where we're going@Zoya Bylinskii (MIT PhD Student)'s Blog

· Facebook's AI Painting@Wired

· Inceptionism: Going Deeper into Neural Networks@Google Research

· Implementing Neural networks

