README.md (+1 −1)

@@ -186,7 +186,7 @@ Geometrically Constrained Keypoints in Real-Time](https://arxiv.org/abs/2006.130
 - [Pillar-based Object Detection for Autonomous Driving](https://arxiv.org/abs/2007.10323) <kbd>ECCV 2020</kbd>
 - [Fast and Accurate Recovery of Occluding Contours in Monocular Depth Estimation](https://arxiv.org/abs/1905.08598) <kbd>ICCV 2019 workshop</kbd> [indoor]
 - [InstanceMotSeg: Real-time Instance Motion Segmentation for Autonomous Driving](https://arxiv.org/abs/2008.07008) [motion segmentation]
-- [Monocular 3D Object Detection via Feature Domain Adaptation](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123540018.pdf) <kbd>ECCV 2020</kbd> [mono3D]
+- [DA-3Det: Monocular 3D Object Detection via Feature Domain Adaptation](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123540018.pdf) [[Notes](paper_notes/da_3det.md)] <kbd>ECCV 2020</kbd> [mono3D]
 - [RAR-Net: Reinforced Axial Refinement Network for Monocular 3D Object Detection](https://www.ecva.net/papers/eccv_2020/papers_ECCV/html/2822_ECCV_2020_paper.php) [[Notes](paper_notes/rarnet.md)] <kbd>ECCV 2020</kbd> [mono3D]
 - [Disambiguating Monocular Depth Estimation with a Single Transient](https://www.ecva.net/papers/eccv_2020/papers_ECCV/html/3668_ECCV_2020_paper.php) <kbd>ECCV 2020</kbd> [additional laser sensor, indoor depth]

paper_notes/da_3det.md (new file)

# [DA-3Det: Monocular 3D Object Detection via Feature Domain Adaptation](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123540018.pdf)

_August 2020_

tl;dr: Use domain adaptation to bridge the gap between pseudo-lidar and real lidar.

#### Overall impression

[DA-3Det](da_3det.md) uses a Siamese network that takes in both real-lidar and pseudo-lidar data. The difference between the two feature sets is penalized, so [DA-3Det](da_3det.md) learns a more general feature representation from pseudo-lidar.

A similar idea for bridging the gap between real lidar and pseudo-lidar has been explored in [RefinedMPL](refined_mpl.md), which proposes downsampling the dense pseudo-lidar points to mimic the sparsity of a real lidar point cloud.

#### Key ideas

- The paper also uses the [Frustum PointNet](frustum_pointnet.md) version of pseudo-lidar due to its simplicity in dealing with point clouds.
- Siamese network with a domain adaptation loss (L2 between features); see the sketch after this list.
- During training, real-lidar data is also used for feature domain adaptation. Only a single image is required at inference.
- Context-aware segmentation module: this is simply a pretrained segmentation module that is finetuned online.
- Pretraining improves performance compared to unsupervised training with random initialization.
- Domain adaptation is a useful technique that can be applied to mono --> stereo and stereo --> lidar.

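Below is a minimal PyTorch-style sketch of how I read the Siamese domain adaptation idea, not the authors' released code: a shared point-cloud encoder processes both the pseudo-lidar and the real-lidar points of an object, and an L2 penalty on the feature difference is added to the detection loss. The encoder layers, the 7-DoF box head, the point counts, and the loss weight `lambda_da` are illustrative placeholders.

```python
import torch
import torch.nn as nn

class SiameseDAHead(nn.Module):
    """Toy sketch of a shared (Siamese) point encoder with a feature-level
    domain adaptation loss. Assumptions only, not the paper's architecture."""

    def __init__(self, feat_dim=128):
        super().__init__()
        # Shared PointNet-like encoder: per-point MLP followed by max-pooling.
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, feat_dim, 1), nn.ReLU(),
        )
        self.box_head = nn.Linear(feat_dim, 7)  # placeholder 7-DoF box regression

    def encode(self, points):                     # points: (B, N, 3)
        x = self.encoder(points.transpose(1, 2))  # (B, C, N)
        return x.max(dim=2).values                # (B, C) global feature

    def forward(self, pseudo_pts, real_pts=None):
        f_pseudo = self.encode(pseudo_pts)
        boxes = self.box_head(f_pseudo)
        if real_pts is None:                      # inference: pseudo-lidar branch only
            return boxes, None
        f_real = self.encode(real_pts)            # training: real lidar in the second branch
        da_loss = ((f_pseudo - f_real) ** 2).mean()  # L2 penalty on the feature difference
        return boxes, da_loss

# Illustrative training step (shapes and weight are made up):
model = SiameseDAHead()
pseudo = torch.randn(4, 512, 3)  # pseudo-lidar points per object
real = torch.randn(4, 512, 3)    # real lidar points for the same objects
boxes, da_loss = model(pseudo, real)
lambda_da = 0.1                  # assumed loss weight
# total_loss = detection_loss(boxes, targets) + lambda_da * da_loss
```

At inference only the pseudo-lidar branch is run, which matches the note above that only a single image is required.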
#### Technical details

- Random sampling of lidar points for each object. For objects containing fewer lidar points than the sample size, sample with replacement (duplication); see the sketch after this list.

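A small NumPy sketch of the per-object sampling step as I read it; the target of 512 points and the function name are assumed placeholders.

```python
import numpy as np

def sample_object_points(points, num_samples=512, rng=None):
    """Sample a fixed number of points for one object.

    points: (N, 3) array of (pseudo-)lidar points inside the object box.
    When N < num_samples, sampling is done with replacement (duplication).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = points.shape[0]
    replace = n < num_samples            # duplicate points only if too few
    idx = rng.choice(n, size=num_samples, replace=replace)
    return points[idx]

# Example: an object with only 100 lidar points is padded up to 512 by duplication.
sparse_object = np.random.rand(100, 3)
sampled = sample_object_points(sparse_object)
assert sampled.shape == (512, 3)
```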
#### Notes

- Questions and notes on how to improve/revise the current work