
Conversation

@Kin-Zhang
Contributor

PR Summary

I directly copied the description from OpenSceneFlow/pull/5.

As HiMo: High-Speed Objects Motion Compensation in Point Cloud highlights, LiDAR point clouds often contain motion-induced distortions that degrade the accuracy of object appearances in the captured data.

📺 Check this 40s video clip for a visual explanation:
https://youtube.com/clip/UgkxXpEk6ef0nFTTDF7ikTtwae3dMfT0ycyl?si=xqWrntJiP4bThsd4

🛠️ What's Improved?

Previously, Argoverse 2 generated scene flow ground truth by expanding bounding boxes by a fixed 20 cm margin. Following HiMo, we now expand each bounding box based on the object's velocity relative to the ego-vehicle, leading to more accurate motion compensation.
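
As a rough sketch of the idea (the function name, the 0.1 s sweep duration, and the reuse of the old 20 cm value as a floor are illustrative assumptions, not the exact HiMo implementation):

import numpy as np

LIDAR_SWEEP_DURATION_S = 0.1  # assumed 10 Hz LiDAR: one full sweep every 0.1 s

def expansion_margin_m(rel_velocity_mps: np.ndarray, min_margin_m: float = 0.2) -> float:
    """Margin to pad a bounding box with: the distance the object travels
    relative to the ego-vehicle during one sweep, floored at the old 20 cm."""
    rel_speed = float(np.linalg.norm(rel_velocity_mps))
    return max(min_margin_m, rel_speed * LIDAR_SWEEP_DURATION_S)

# e.g. expansion_margin_m(np.array([20.0, 0.0, 0.0])) -> 2.0 m for an object
# moving 20 m/s relative to the ego-vehicle; a static object keeps 0.2 m.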

🔍 Before & After Visualization

Below is a comparison showing the improved bounding box expansion using HiMo:

[image: before/after comparison of the bounding box expansion]

Visualization from demo/train, scene id: 25e5c600-36fe-3245-9cc0-40ef91620c22

Comments: Feel free to remove/revise the comments in the code. Let me know if anything is unclear.

Testing

In order to ensure this PR works as intended, it is:

  • unit tested.
  • other or not applicable (additional detail/rationale required)

Compliance with Standards

As the author, I certify that this PR conforms to the following standards:

  • Code changes conform to PEP8 and docstrings conform to the Google Python style guide.
  • A well-written summary explains what was done and why it was done.
  • The PR is adequately tested and the testing details and links to external results are included.

@Kin-Zhang
Contributor Author

Kin-Zhang commented Mar 30, 2025

In case people are later interested in testing and the flow_label files: the script used to generate flow_label.feather is below (some keys differ from src/av2/evaluation/scene_flow/make_annotation_files.py).

Click here to expand the revised `make_annotation_files.py` code:
"""
Utility program for producing minimal annotation files used for evaluation on the val and test splits.
# example running command:
python src/av2/evaluation/scene_flow/make_annotation_files.py /home/kin/workspace/av2-api/tests/unit/test_data/sensor/val/7fab2350-7eaf-3b7e-a39d-6937a4c1bede /home/kin/workspace/av2-api/tests/unit /home/kin/data/av2/3d_scene_flow/val-masks.zip --name test_data --split val
"""

from pathlib import Path
from typing import Final, Tuple

import click
import numpy as np
import pandas as pd
from rich.progress import track

from av2.evaluation.scene_flow.utils import get_eval_point_mask, get_eval_subset
from av2.torch.data_loaders.scene_flow import SceneFlowDataloader
from av2.utils.typing import NDArrayBool, NDArrayByte, NDArrayNumber

CLOSE_DISTANCE_THRESHOLD: Final = 35.0  # points within ±35 m of the ego-vehicle in x and y are "close"


def write_annotation(
    category_indices: NDArrayByte,
    is_close: NDArrayBool,
    is_dynamic: NDArrayBool,
    is_valid: NDArrayBool,
    is_ground: NDArrayBool,
    flow: NDArrayNumber,
    sweep_uuid: Tuple[str, int],
    output_dir: Path,
) -> None:
    """Write an annotation file.

    Args:
        category_indices: Category label indices.
        is_close: Close (inside 70 meter box) labels.
        is_dynamic: Dynamic labels.
        is_valid: Valid flow labels.
        is_ground: Ground point labels.
        flow: Flow labels.
        sweep_uuid: Log id and timestamp_ns of the sweep.
        output_dir: Top level directory to store the output in.
    """
    output = pd.DataFrame(
        {
            "classes": category_indices.astype(np.uint8),
            "is_close": is_close.astype(bool),
            "dynamic": is_dynamic.astype(bool),
            "is_valid": is_valid.astype(bool),
            "is_ground_0": is_ground.astype(bool),
            "flow_tx_m": flow[:, 0].astype(np.float16),
            "flow_ty_m": flow[:, 1].astype(np.float16),
            "flow_tz_m": flow[:, 2].astype(np.float16),
        }
    )

    log_id, timestamp_ns = sweep_uuid

    output_subdir = output_dir / log_id
    output_subdir.mkdir(exist_ok=True)
    output_file = output_subdir / f"{timestamp_ns}.feather"
    output.to_feather(output_file)


def make_annotation_files(
    output_dir: str, mask_file: str, data_dir: str, name: str, split: str
) -> None:
    """Create annotation files for running the evaluation.

    Args:
        output_dir: Path to output directory.
        mask_file: Archive of submission masks.
        data_dir: Path to input data.
        name: Name of the dataset (e.g. av2).
        split: Split to make annotations for.

    Raises:
        ValueError: If the dataset does not have annotations.
    """
    data_loader = SceneFlowDataloader(Path(data_dir), name, split)

    output_root = Path(output_dir)
    output_root.mkdir(exist_ok=True)

    eval_inds = get_eval_subset(data_loader)
    for i in track(eval_inds):
        sweep_0, _, _, flow_labels = data_loader[i]
        if flow_labels is None:
            raise ValueError("Missing flow annotations!")

        # Use every point in the sweep instead of the official evaluation
        # mask, so flow labels are written for the full point cloud.
        # mask = get_eval_point_mask(sweep_0.sweep_uuid, Path(mask_file))
        mask = np.ones(len(sweep_0.lidar.as_tensor()), dtype=bool)

        flow = flow_labels.flow[mask].numpy().astype(np.float16)
        is_valid = flow_labels.is_valid[mask].numpy().astype(bool)
        category_indices = flow_labels.category_indices[mask].numpy().astype(np.uint8)
        is_dynamic = flow_labels.is_dynamic[mask].numpy().astype(bool)
        is_ground = sweep_0.is_ground[mask].numpy().astype(bool)

        pc = sweep_0.lidar.as_tensor()[mask, :3].numpy()
        is_close = np.logical_and.reduce(
            np.abs(pc[:, :2]) <= CLOSE_DISTANCE_THRESHOLD, axis=1
        ).astype(bool)

        write_annotation(
            category_indices,
            is_close,
            is_dynamic,
            is_valid,
            is_ground,
            flow,
            sweep_0.sweep_uuid,
            output_root,
        )


@click.command()
@click.argument("output_dir", type=str)
@click.argument("data_dir", type=str)
@click.argument("mask_file", type=str)
@click.option(
    "--name",
    type=str,
    help="the data should be located in <data_dir>/<name>/sensor/<split>",
    default="av2",
)
@click.option(
    "--split",
    help="the data should be located in <data_dir>/<name>/sensor/<split>",
    default="val",
    type=click.Choice(["test", "val"]),
)
def _make_annotation_files_entry(
    output_dir: str, mask_file: str, data_dir: str, name: str, split: str
) -> None:
    """Entry point for make_annotation_files."""
    make_annotation_files(output_dir, mask_file, data_dir, name, split)


if __name__ == "__main__":
    _make_annotation_files_entry()
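
To sanity-check the generated labels, a written file can be loaded back with pandas (the path below is a placeholder, not a real log id or timestamp):

import pandas as pd

# hypothetical output path: <output_dir>/<log_id>/<timestamp_ns>.feather
df = pd.read_feather("flow_labels/<log_id>/<timestamp_ns>.feather")
print(df.columns.tolist())
# expected: ['classes', 'is_close', 'dynamic', 'is_valid', 'is_ground_0',
#            'flow_tx_m', 'flow_ty_m', 'flow_tz_m']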

@Kin-Zhang
Contributor Author

I checked the error in the Actions run; I don't think it's caused by my commit:

Error: Failed to get ID token: Error message: Unable to get ACTIONS_ID_TOKEN_REQUEST_URL env variable

@benjaminrwilson Maybe you can check whether the TOKEN in this repo has expired. If anything is updated on main, I will merge from it again. Thanks for your time.
