Amusi spotted a super-fast 3D object detection network: SFA3D, a real-time, accurate LiDAR-based 3D object detector that runs at up to 95 FPS on a GTX 1080 Ti. The code is now open source. Key features:
1. Fast training and inference
2. Anchor-free approach
3. No NMS
4. Support for distributed data parallel training

A demo is shown in the video.
Project link: https://github.com/maudzung/SFA3D
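The "no NMS" property comes from CenterNet-style keypoint detection: objects are predicted as peaks on a center heatmap, and a peak survives only if it is the maximum of its local neighborhood, so no box-overlap suppression is needed. A minimal NumPy sketch of that idea (illustrative only, not the repository's code; the function name and threshold are assumptions):

```python
import numpy as np

def find_peaks(heatmap, thresh=0.2):
    """Keep only cells that are the maximum of their 3x3 neighborhood
    and exceed a confidence threshold. This max-filter trick is what
    lets anchor-free detectors skip non-maximum suppression."""
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    # Stack all 9 shifted views of the map and take the elementwise max,
    # giving each cell the max over its 3x3 neighborhood.
    neigh = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    local_max = neigh.max(axis=0)
    keep = (heatmap == local_max) & (heatmap >= thresh)
    return np.argwhere(keep)  # (row, col) indices of detected centers

# Toy heatmap with two blobs; only the two peak cells survive.
hm = np.zeros((6, 6))
hm[1, 1] = 0.9; hm[1, 2] = 0.5
hm[4, 4] = 0.8; hm[3, 4] = 0.4
print(find_peaks(hm))  # -> [[1 1], [4 4]]
```

The same pattern is usually implemented on GPU with a single 3x3 max-pooling layer, which is why inference stays fast.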
# Super Fast and Accurate 3D Object Detection based on 3D LiDAR Point Clouds

## 1. Features
- Super fast and accurate 3D object detection based on LiDAR
- Fast training, fast inference
- Support distributed data parallel training
- Release pre-trained models

The technical details are described here.
Update 2020.09.06: Add ROS source code. The great work has been done by @AhmedARadwan. The implementation is here.
## Demonstration (on a single GTX 1080Ti)

## 2. Getting Started

### 2.1. Requirement

The instructions for setting up a virtual environment are here.
```shell
git clone https://github.com/maudzung/SFA3D.git SFA3D
cd SFA3D/
pip install .
```
### 2.2. Data Preparation

Download the 3D KITTI detection dataset from here.
The downloaded data includes:
- Velodyne point clouds (29 GB)
- Training labels of object data set (5 MB)
- Camera calibration matrices of object data set (16 MB)
- Left color images of object data set (12 GB) (for visualization purposes only)

Please make sure that you construct the source code & dataset directory structure as shown below.
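Each KITTI Velodyne scan is a flat binary file of float32 values, four per point (x, y, z, reflectance). A minimal NumPy sketch of reading one scan; the round-trip demo below uses a synthetic temporary file instead of real KITTI data:

```python
import os
import tempfile
import numpy as np

def load_velodyne_scan(path):
    """Read a KITTI Velodyne .bin file: a flat stream of float32
    values, 4 per point -> an (N, 4) array of x, y, z, reflectance."""
    scan = np.fromfile(path, dtype=np.float32)
    return scan.reshape(-1, 4)

# Round-trip demo with a synthetic 2-point scan.
pts = np.array([[1.0, 2.0, 3.0, 0.5],
                [4.0, 5.0, 6.0, 0.1]], dtype=np.float32)
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
    pts.tofile(f)            # write raw float32 bytes, no header
loaded = load_velodyne_scan(f.name)
os.remove(f.name)
print(loaded.shape)  # -> (2, 4)
```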
### 2.3. How to run

#### 2.3.1. Visualize the dataset

To visualize 3D point clouds with 3D boxes, execute:
```shell
cd sfa/data_process/
python kitti_dataset.py
```
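The network does not consume raw points: the point cloud is first discretized into a bird's-eye-view (BEV) map, as in the Complex-YOLO line of work referenced at the end, with per-cell height, intensity, and density channels. A hedged NumPy sketch of that encoding (grid size, ranges, and the log-64 density normalization are illustrative assumptions, not the repository's exact configuration):

```python
import numpy as np

def make_bev_map(points, x_range=(0, 50), y_range=(-25, 25), grid=0.5):
    """Discretize (N, 4) LiDAR points (x, y, z, intensity) into a
    BEV map with 3 channels: max height, max intensity, and a
    log-normalized point density per cell."""
    h = int((x_range[1] - x_range[0]) / grid)
    w = int((y_range[1] - y_range[0]) / grid)
    bev = np.zeros((3, h, w), dtype=np.float32)

    # Keep only points inside the chosen rectangle.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]
    xi = ((pts[:, 0] - x_range[0]) / grid).astype(int)
    yi = ((pts[:, 1] - y_range[0]) / grid).astype(int)

    for x, y, p in zip(xi, yi, pts):
        bev[0, x, y] = max(bev[0, x, y], p[2])  # height channel
        bev[1, x, y] = max(bev[1, x, y], p[3])  # intensity channel
        bev[2, x, y] += 1.0                     # raw point count
    # Normalize density: log(count + 1) / log(64), capped at 1.
    bev[2] = np.minimum(1.0, np.log(bev[2] + 1.0) / np.log(64.0))
    return bev
```

Once encoded this way, the 3D detection problem reduces to 2D keypoint detection on an image-like tensor, which is what makes the pipeline so fast.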
#### 2.3.2. Inference

The pre-trained model was pushed to this repo.
```shell
python test.py --gpu_idx 0 --peak_thresh 0.2
```
#### 2.3.3. Making demonstration

```shell
python demo_2_sides.py --gpu_idx 0 --peak_thresh 0.2
```

The data for the demonstration will be automatically downloaded by executing the above command.
#### 2.3.4. Training

##### 2.3.4.1. Single machine, single GPU

```shell
python train.py --gpu_idx 0
```
##### 2.3.4.2. Distributed Data Parallel Training

Single machine (node), multiple GPUs:

```shell
python train.py --multiprocessing-distributed --world-size 1 --rank 0 --batch_size 64 --num_workers 8
```

Two machines (two nodes), multiple GPUs. On the first machine:

```shell
python train.py --dist-url 'tcp://IP_OF_NODE1:FREEPORT' --multiprocessing-distributed --world-size 2 --rank 0 --batch_size 64 --num_workers 8
```

On the second machine:

```shell
python train.py --dist-url 'tcp://IP_OF_NODE2:FREEPORT' --multiprocessing-distributed --world-size 2 --rank 1 --batch_size 64 --num_workers 8
```
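In PyTorch's standard multiprocessing-distributed pattern, which these flags follow, `--world-size` counts nodes, `--rank` is the node index, and each spawned process derives a unique global rank from its local GPU index. The helper names below are hypothetical, just to show the arithmetic:

```python
def global_rank(node_rank, ngpus_per_node, gpu_idx):
    """Global process rank in the typical PyTorch multiprocessing-
    distributed setup: each node spawns one process per GPU, so
    global ranks are consecutive within a node."""
    return node_rank * ngpus_per_node + gpu_idx

def total_processes(n_nodes, ngpus_per_node):
    """After spawning, the effective world size is processes, not nodes."""
    return n_nodes * ngpus_per_node

# Two nodes with 4 GPUs each -> 8 processes with ranks 0..7.
print(total_processes(2, 4))  # -> 8
print(global_rank(1, 4, 2))   # -> 6 (3rd GPU on the 2nd node)
```

This is why both machines pass the same `--dist-url`: every process connects to node 1's address to rendezvous before training starts.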
#### Tensorboard

To track the training progress, go to the logs/ folder:

```shell
cd logs/<saved_fn>/tensorboard/
tensorboard --logdir=./
```

Then go to http://localhost:6006/

## Contact

If you think this work is useful, please give me a star!
If you find any errors or have any suggestions, please contact me (Email: nguyenmaudung93.kstn@gmail.com).
Thank you!

## Citation

```
@misc{Super-Fast-Accurate-3D-Object-Detection-PyTorch,
  author =       {Nguyen Mau Dung},
  title =        {{Super-Fast-Accurate-3D-Object-Detection-PyTorch}},
  howpublished = {\url{https://github.com/maudzung/Super-Fast-Accurate-3D-Object-Detection}},
  year =         {2020}
}
```
## References

[1] CenterNet: Objects as Points paper, PyTorch Implementation
[2] RTM3D: PyTorch Implementation
[3] Libra_R-CNN: PyTorch Implementation

The YOLO-based models with the same BEV maps input:
[4] Complex-YOLO: v4, v3, v2

3D LiDAR point pre-processing:
[5] VoxelNet: PyTorch Implementation

## Folder structure

${ROOT}
└── checkpoints/
├── fpn_resnet_18/
├── fpn_resnet_18_epoch_300.pth
└── dataset/
└── kitti/
├── ImageSets/
│ ├── test.txt
│ ├── train.txt
│ └── val.txt
├── training/
│ ├── image_2/ (left color camera)
│ ├── calib/
│ ├── label_2/
│ └── velodyne/
└── testing/
│ ├── image_2/ (left color camera)
│ ├── calib/
│ └── velodyne/
└── classes_names.txt
└── sfa/
├── config/
│ ├── train_config.py
│ └── kitti_config.py
├── data_process/
│ ├── kitti_dataloader.py
│ ├── kitti_dataset.py
│ └── kitti_data_utils.py
├── models/
│ ├── fpn_resnet.py
│ ├── resnet.py
│ └── model_utils.py
└── utils/
│ ├── demo_utils.py
│ ├── evaluation_utils.py
│ ├── logger.py
│ ├── misc.py
│ ├── torch_utils.py
│ ├── train_utils.py
│ └── visualization_utils.py
├── demo_2_sides.py
├── demo_front.py
├── test.py
└── train.py
├── README.md
└── requirements.txt