Reposted from Zhihu. Author: 吳豔敏
Original link: https://zhuanlan.zhihu.com/p/130530891
Reposting without the author's permission is prohibited.
Reply "SLAM實驗室" to receive the full document (36 labs in total; this article covers 20).
1. Robotics Institute, Carnegie Mellon University (USA)
Research directions: robot perception and structure; service, transportation, manufacturing, and field robots
Institute homepage: https://www.ri.cmu.edu/
Field Robotics Center homepage: https://frc.ri.cmu.edu/
Publications: https://www.ri.cmu.edu/pubs/
👦Michael Kaess: personal homepage, Google Scholar
👦Sebastian Scherer: personal homepage, Google Scholar
📜Kaess M, Ranganathan A, Dellaert F. iSAM: Incremental smoothing and mapping[J]. IEEE Transactions on Robotics, 2008, 24(6): 1365-1378.
📜Hsiao M, Westman E, Zhang G, et al. Keyframe-based dense planar SLAM[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 5110-5117.
📜Kaess M. Simultaneous localization and mapping with infinite planes[C]//2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015: 4605-4611.
2. Existential Robotics Laboratory, University of California San Diego
Research directions: multi-modal environment understanding, semantic navigation, autonomous information acquisition
Lab homepage: https://existentialrobotics.org/index.html
Publications: https://existentialrobotics.org/pages/publications.html
👦Nikolay Atanasov: personal homepage, Google Scholar
📜Classic semantic SLAM paper: Bowman S L, Atanasov N, Daniilidis K, et al. Probabilistic data association for semantic SLAM[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 1722-1729.
📜Instance-level mesh models for localization and mapping: Feng Q, Meng Y, Shan M, et al. Localization and Mapping using Instance-specific Mesh Models[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4985-4991.
📜Event-camera-based VIO: Zihao Zhu A, Atanasov N, Daniilidis K. Event-based visual inertial odometry[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 5391-5399.
3. Robot Perception and Navigation Group (RPNG), University of Delaware
Research directions: SLAM, VINS, semantic localization and mapping, etc.
Lab homepage: https://sites.udel.edu/robot/
Publications: https://sites.udel.edu/robot/publications/
GitHub: https://github.com/rpng?page=2
📜Geneva P, Eckenhoff K, Lee W, et al. OpenVINS: A research platform for visual-inertial estimation[C]//IROS 2019 Workshop on Visual-Inertial Navigation: Challenges and Applications, Macau, China. 2019. (code: https://github.com/rpng/open_vins)
📜Huai Z, Huang G. Robocentric visual-inertial odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6319-6326. (code: https://github.com/rpng/R-VIO)
📜Zuo X, Geneva P, Yang Y, et al. Visual-Inertial Localization With Prior LiDAR Map Constraints[J]. IEEE Robotics and Automation Letters, 2019, 4(4): 3394-3401.
📜Zuo X, Ye W, Yang Y, et al. Multimodal localization: Stereo over LiDAR map[J]. Journal of Field Robotics, 2020. (Dr. Xingxing Zuo's Google Scholar)
👦Professor Guoquan Huang's homepage
4. Aerospace Controls Laboratory, MIT
Research directions: pose estimation and navigation, path planning, control and decision-making, machine learning and reinforcement learning
Lab homepage: http://acl.mit.edu/
Publications: http://acl.mit.edu/publications (the lab's theses can also be found here)
👦Professor Jonathan P. How: personal homepage, Google Scholar
👦Kasra Khosoussi (SLAM graph optimization): Google Scholar
📜Object-level SLAM: Mu B, Liu S Y, Paull L, et al. SLAM with objects using a nonparametric pose graph[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4602-4609. (code: https://github.com/BeipengMu/objectSLAM)
📜Object-based SLAM for navigation: Ok K, Liu K, Frey K, et al. Robust Object-based SLAM for High-speed Autonomous Navigation[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 669-675.
📜Graph optimization for SLAM: Khosoussi K, Giamou M, Sukhatme G, et al. Reliable Graphs for SLAM[J]. International Journal of Robotics Research (IJRR), 2019.
5. SPARK Lab, MIT
Research direction: environmental perception for mobile robots
Lab homepage: http://web.mit.edu/sparklab/
👦Professor Luca Carlone: personal homepage, Google Scholar
📜Classic SLAM survey: Cadena C, Carlone L, Carrillo H, et al. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age[J]. IEEE Transactions on Robotics, 2016, 32(6): 1309-1332.
📜On-manifold preintegration for VIO: Forster C, Carlone L, Dellaert F, et al. On-Manifold Preintegration for Real-Time Visual-Inertial Odometry[J]. IEEE Transactions on Robotics, 2016, 33(1): 1-21.
📜Open-source semantic SLAM: Rosinol A, Abate M, Chang Y, et al. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping[J]. arXiv preprint arXiv:1910.02490, 2019. (code: https://github.com/MIT-SPARK/Kimera)
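The on-manifold preintegration paper above integrates gyroscope measurements through the SO(3) exponential map rather than with Euler angles. As a rough, self-contained illustration of that idea in plain NumPy (a toy sketch, not the paper's actual implementation), Rodrigues' formula composes small incremental rotations while keeping the estimate a valid rotation matrix:

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: map a rotation vector w (radians) to a rotation matrix.

    This is the SO(3) exponential map used when integrating gyroscope
    measurements on the manifold, as in preintegration-style methods.
    """
    th = np.linalg.norm(w)
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])  # skew-symmetric matrix of w
    if th < 1e-10:
        return np.eye(3) + W  # first-order approximation near zero
    return (np.eye(3) + np.sin(th) / th * W
            + (1.0 - np.cos(th)) / th**2 * (W @ W))

# Integrate a constant body-rate gyro signal over many small steps.
dt, omega = 0.01, np.array([0.0, 0.0, 0.5])  # 0.5 rad/s about z
R = np.eye(3)
for _ in range(100):              # 1 second in total
    R = R @ so3_exp(omega * dt)
# After 1 s the accumulated rotation is ~0.5 rad about z,
# and R remains orthonormal (R^T R = I) by construction.
```

Because every increment is a true rotation matrix, no renormalization step is needed, which is one practical reason manifold-based integration is preferred over naive Euler-angle integration.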
6. Marine Robotics Group, MIT
Research direction: navigation and mapping for underwater and land mobile robots
Lab homepage: https://marinerobotics.mit.edu/ (part of the MIT Computer Science and Artificial Intelligence Laboratory)
👦Professor John Leonard: Google Scholar
Publications: https://marinerobotics.mit.edu/biblio
📜Object-based SLAM: Finman R, Paull L, Leonard J J. Toward object-based place recognition in dense RGB-D maps[C]//ICRA Workshop on Visual Place Recognition in Changing Environments, Seattle, WA. 2015.
📜Extending KinectFusion: Whelan T, Kaess M, Fallon M, et al. Kintinuous: Spatially extended KinectFusion[J]. 2012.
📜Probabilistic data association for semantic SLAM: Doherty K, Fourie D, Leonard J. Multimodal semantic SLAM with probabilistic data association[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 2419-2425.
7. MARS Lab, University of Minnesota
Research directions: visual, lidar, and inertial navigation systems; large-scale 3D modeling and localization on mobile devices
Lab homepage: http://mars.cs.umn.edu/index.php
Publications: http://mars.cs.umn.edu/publications.php
👦Stergios I. Roumeliotis: personal homepage, Google Scholar
📜VIO on mobile devices: Wu K, Ahmed A, Georgiou G A, et al. A Square Root Inverse Filter for Efficient Vision-aided Inertial Navigation on Mobile Devices[C]//Robotics: Science and Systems. 2015, 2. (project page: http://mars.cs.umn.edu/research/sriswf.php)
📜Large-scale semi-dense 3D mapping on mobile devices: Guo C X, Sartipi K, DuToit R C, et al. Resource-aware large-scale cooperative three-dimensional mapping using multiple mobile devices[J]. IEEE Transactions on Robotics, 2018, 34(5): 1349-1369. (project page: http://mars.cs.umn.edu/research/semi_dense_mapping.php)
📜VIO-related research: http://mars.cs.umn.edu/research/vins_overview.php
8. Kumar Robotics Lab, University of Pennsylvania
Research direction: autonomous micro aerial vehicles
Lab homepage: https://www.kumarrobotics.org/
Publications: https://www.kumarrobotics.org/publications/
Research videos: https://www.youtube.com/user/KumarLabPenn/videos
📜Semi-dense VIO for MAVs: Liu W, Loianno G, Mohta K, et al. Semi-Dense Visual-Inertial Odometry and Mapping for Quadrotors with SWAP Constraints[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 1-6.
📜Semantic data association: Liu X, Chen S W, Liu C, et al. Monocular Camera Based Fruit Counting and Mapping with Semantic Data Association[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 2296-2303.
9. Srikumar Ramalingam's group, School of Computing, University of Utah
Research directions: 3D reconstruction, semantic segmentation, visual SLAM, image-based localization, deep neural networks
👦Srikumar Ramalingam: personal homepage, Google Scholar
📜Point-plane SLAM: Taguchi Y, Jian Y D, Ramalingam S, et al. Point-plane SLAM for hand-held 3D sensors[C]//2013 IEEE International Conference on Robotics and Automation. IEEE, 2013: 5182-5189.
📜Point-and-line localization: Ramalingam S, Bouaziz S, Sturm P. Pose estimation using both points and lines for geo-localization[C]//2011 IEEE International Conference on Robotics and Automation. IEEE, 2011: 4716-4723. (video)
📜Hybrid 2D-3D localization: Ataer-Cansizoglu E, Taguchi Y, Ramalingam S. Pinpoint SLAM: A hybrid of 2D and 3D simultaneous localization and mapping for RGB-D sensors[C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 1300-1307. (video)
10. Frank Dellaert's group, Georgia Institute of Technology
Research directions: SLAM, spatio-temporal reconstruction from images
👦Frank Dellaert: personal homepage, Google Scholar
📜Factor graphs: Dellaert F. Factor graphs and GTSAM: A hands-on introduction[R]. Georgia Institute of Technology, 2012. (GTSAM code: http://borg.cc.gatech.edu/)
📜Multi-robot distributed SLAM: Cunningham A, Wurm K M, Burgard W, et al. Fully distributed scalable smoothing and mapping with robust multi-robot data association[C]//2012 IEEE International Conference on Robotics and Automation. IEEE, 2012: 1093-1100.
📜Choudhary S, Trevor A J B, Christensen H I, et al. SLAM with object discovery, modeling and mapping[C]//2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2014: 1018-1025.
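Dellaert's GTSAM tutorial above frames SLAM as inference on a factor graph: each prior, odometry, or loop-closure measurement becomes a factor constraining a few pose variables, and MAP estimation reduces to (iterated) least squares. A toy 1-D pose-graph sketch of that idea in plain NumPy (not GTSAM's actual API) looks like:

```python
import numpy as np

# Toy 1-D pose graph with three poses x0, x1, x2 on a line.
# Each factor contributes one row of A: a prior anchors x0, two
# odometry factors constrain consecutive poses, and a loop-closure
# factor constrains x2 relative to x0.
# prior: x0 = 0;  odometry: x1 - x0 = 1.0, x2 - x1 = 1.1;  loop: x2 - x0 = 2.0
A = np.array([
    [1.0, 0.0, 0.0],   # prior factor on x0
    [-1.0, 1.0, 0.0],  # odometry factor x0 -> x1
    [0.0, -1.0, 1.0],  # odometry factor x1 -> x2
    [-1.0, 0.0, 1.0],  # loop-closure factor x0 -> x2
])
b = np.array([0.0, 1.0, 1.1, 2.0])

# With Gaussian noise models, the MAP estimate is the least-squares
# solution of A x = b (the normal equations A^T A x = A^T b).
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # x ≈ [0, 0.967, 2.033]: the loop closure pulls x2 toward 2.0
```

Real systems such as GTSAM work with nonlinear factors over SE(3) poses and relinearize around the current estimate, but each Gauss-Newton step has exactly this sparse least-squares structure.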
11. IVALab, Georgia Institute of Technology
Research directions: visual SLAM, 3D reconstruction, multi-object tracking
👦Personal homepage, Google Scholar
📜Zhao Y, Smith J S, Karumanchi S H, et al. Closed-Loop Benchmarking of Stereo Visual-Inertial SLAM Systems: Understanding the Impact of Drift and Latency on Tracking Accuracy[J]. arXiv preprint arXiv:2003.01317, 2020.
📜Zhao Y, Vela P A. Good feature selection for least squares pose optimization in VO/VSLAM[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1183-1189. (code: https://github.com/ivalab/FullResults_GoodFeature)
📜Zhao Y, Vela P A. Good line cutting: Towards accurate pose tracking of line-assisted VO/VSLAM[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 516-531. (code: https://github.com/ivalab/GF_PL_SLAM)
12. Liam Paull's lab, Université de Montréal
Research directions: SLAM, uncertainty modeling
Lab homepage: http://montrealrobotics.ca/
👦Professor Liam Paull: personal homepage, Google Scholar
📜Mu B, Liu S Y, Paull L, et al. SLAM with objects using a nonparametric pose graph[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4602-4609. (code: https://github.com/BeipengMu/objectSLAM)
📜Murthy Jatavallabhula K, Iyer G, Paull L. gradSLAM: Dense SLAM meets Automatic Differentiation[J]. arXiv preprint arXiv:1910.10672, 2019. (code: https://github.com/montrealrobotics/gradSLAM)
13. IntRoLab, Université de Sherbrooke
Research direction: mobile robot hardware and software design
Lab homepage: https://introlab.3it.usherbrooke.ca/
📜Dense lidar-visual reconstruction: Labbé M, Michaud F. RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation[J]. Journal of Field Robotics, 2019, 36(2): 416-446.
14. Robotics and Perception Group, University of Zurich
Research directions: environment perception and navigation for mobile robots and drones, VISLAM, event cameras
Lab homepage: http://rpg.ifi.uzh.ch/index.html
Publications: http://rpg.ifi.uzh.ch/publications.html
GitHub: https://github.com/uzh-rpg
📜SVO: Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 15-22.
📜VO/VIO trajectory evaluation tool rpg_trajectory_evaluation: https://github.com/uzh-rpg/rpg_trajectory_evaluation
📜Event camera project page: http://rpg.ifi.uzh.ch/research_dvs.html
👦People: Davide Scaramuzza, Zichao Zhang
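Trajectory evaluation tools such as rpg_trajectory_evaluation above report metrics like the absolute trajectory error (ATE). The following is a simplified sketch of that metric, not the tool's actual code: rigidly align estimated positions to ground truth with a Kabsch/Umeyama-style SVD, then take the RMSE (the real tool additionally handles scale, time association, and relative-error statistics):

```python
import numpy as np

def ate_rmse(gt, est):
    """Absolute trajectory error after rigid (rotation + translation) alignment.

    gt, est: (N, 3) arrays of time-associated positions.
    """
    mu_g, mu_e = gt.mean(0), est.mean(0)
    # Kabsch: SVD of the cross-covariance gives the best-fit rotation
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    aligned = (R @ (est - mu_e).T).T + mu_g
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))

# Sanity check: a rotated and translated copy of a trajectory has ~zero ATE.
gt = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 1, 1]], float)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta), np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
est = (Rz @ gt.T).T + np.array([5.0, -2.0, 1.0])
print(ate_rmse(gt, est))  # ~0 up to floating-point error
```

Aligning before computing the error matters because a monocular or VIO estimate is only defined up to a global transform; without alignment the metric would mostly measure the arbitrary choice of initial frame.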
15. Computer Vision and Geometry Group, ETH Zurich
Research directions: localization, 3D reconstruction, semantic segmentation, robot vision
Lab homepage: http://www.cvg.ethz.ch/index.php
Publications: http://www.cvg.ethz.ch/publications/
📜Visual semantic odometry: Lianos K N, Schonberger J L, Pollefeys M, et al. VSO: Visual semantic odometry[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 234-250.
📜Visual semantic localization: Semantic visual localization, CVPR 2018
📜Large-scale outdoor mapping: Bârsan I A, Liu P, Pollefeys M, et al. Robust dense mapping for large-scale dynamic environments[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 7510-7517.
Code: https://github.com/AndreiBarsan/DynSLAM
Author's thesis: Barsan I A. Simultaneous localization and mapping in dynamic scenes[D]. ETH Zurich, Department of Computer Science, 2017.
👦Marc Pollefeys: personal homepage, Google Scholar
👦Johannes L. Schönberger: personal homepage, Google Scholar
16. Dyson Robotics Lab, Imperial College London
Research directions: robot vision, scene and object understanding, robot manipulation
Lab homepage: https://www.imperial.ac.uk/dyson-robotics-lab/
Publications: https://www.imperial.ac.uk/dyson-robotics-lab/publications/
Representative works: MonoSLAM, CodeSLAM, ElasticFusion, KinectFusion
📜ElasticFusion: Whelan T, Leutenegger S, Salas-Moreno R, et al. ElasticFusion: Dense SLAM without a pose graph[C]. Robotics: Science and Systems, 2015. (code: https://github.com/mp3guy/ElasticFusion)
📜SemanticFusion: McCormac J, Handa A, Davison A, et al. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4628-4635. (code: https://github.com/seaun163/semanticfusion)
📜CodeSLAM: Bloesch M, Czarnowski J, Clark R, et al. CodeSLAM: learning a compact, optimisable representation for dense visual SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 2560-2568.
👦Andrew Davison: Google Scholar
17. University of Oxford
Research directions: SLAM, object tracking, structure from motion, scene augmentation, mobile robot motion planning, navigation and mapping, etc.
Homepage: http://www.robots.ox.ac.uk/
Publications:
Representative works:
📜Klein G, Murray D. PTAM: Parallel tracking and mapping for small AR workspaces[C]//2007 6th IEEE and ACM international symposium on mixed and augmented reality. IEEE, 2007: 225-234.
📜RobotCar dataset: https://robotcar-dataset.robots.ox.ac.uk/
👦People (Google Scholar): David Murray, Maurice Fallon
Some PhD theses can be found at: https://ora.ox.ac.uk/
18. Computer Vision Group, Technical University of Munich
Research directions: 3D reconstruction, robot vision, deep learning, visual SLAM, etc.
Lab homepage: https://vision.in.tum.de/research/vslam
Publications: https://vision.in.tum.de/publications
Representative works: DSO, LDSO, LSD_SLAM, DVO_SLAM
📜DSO: Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(3): 611-625. (code: https://github.com/JakobEngel/dso)
📜LSD-SLAM: Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European Conference on Computer Vision. Springer, Cham, 2014: 834-849. (code: https://github.com/tum-vision/lsd_slam)
GitHub: https://github.com/tum-vision
👦Professor Daniel Cremers: personal homepage, Google Scholar
👦Jakob Engel (author of LSD-SLAM and DSO): personal homepage, Google Scholar
19. Embodied Vision Group, Max Planck Institute for Intelligent Systems
Research directions: autonomous environment understanding, navigation, and object manipulation for intelligent agents
Lab homepage: https://ev.is.tuebingen.mpg.de/
👦Head Jörg Stückler (formerly at TUM): personal homepage, Google Scholar
Publications: https://ev.is.tuebingen.mpg.de/publications
📜Kasyanov A, Engelmann F, Stückler J, et al. Keyframe-based visual-inertial online SLAM with relocalization[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 6662-6669.
📜Strecke M, Stuckler J. EM-Fusion: Dynamic Object-Level SLAM with Probabilistic Data Association[C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 5865-5874.
📜Usenko V, Demmel N, Schubert D, et al. Visual-Inertial Mapping with Non-Linear Factor Recovery[J]. IEEE Robotics and Automation Letters (RA-L), 2020, 5.
20. Autonomous Intelligent Systems Lab, University of Freiburg
Research directions: multi-robot navigation and collaboration, environment modeling and state estimation
Lab homepage: http://ais.informatik.uni-freiburg.de/index_en.php
Publications: http://ais.informatik.uni-freiburg.de/publications/index_en.php (theses can also be found here)
👦Wolfram Burgard: Google Scholar
Open datasets: http://aisdatasets.informatik.uni-freiburg.de/
📜RGB-D SLAM: Endres F, Hess J, Sturm J, et al. 3-D mapping with an RGB-D camera[J]. IEEE Transactions on Robotics, 2013, 30(1): 177-187. (code: https://github.com/felixendres/rgbdslam_v2)
📜SLAM across seasons: Naseer T, Ruhnke M, Stachniss C, et al. Robust visual SLAM across seasons[C]//2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015: 2529-2535.
📜PhD thesis: Robust Graph-Based Localization and Mapping, 2015
📜PhD thesis: Discovering and Leveraging Deep Multimodal Structure for Reliable Robot Perception and Localization, 2019
📜PhD thesis: Robot Localization and Mapping in Dynamic Environments, 2019