
Running integration tests with mock hardware #1226


Open
URJala wants to merge 14 commits into main from mock_hardware_tests

Conversation

URJala (Collaborator) commented Jan 9, 2025

Added some launch arguments and some checks to avoid connecting to the dashboard interface when using mock hardware, as that will not work. Maybe not the most elegant, but it works for now. Currently the passthrough controller does not work at all with mock hardware, and is bypassed in the test when mock hardware is used. test_trajectory_scaled_aborts_on_violation fails, as the mock hardware doesn't abort. test_set_io also fails, as the controller can't verify that a pin has been set.
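A sketch of the kind of guard described above (the package and executable exist in the driver, but the exact argument plumbing here is an assumption, not the PR's code): start the dashboard client only when real hardware is in use.

from launch.conditions import UnlessCondition
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node

use_mock_hardware = LaunchConfiguration("use_mock_hardware")

# The dashboard interface only exists on a real robot (or ursim), so the
# client node is skipped entirely when mock hardware is requested.
dashboard_client_node = Node(
    package="ur_robot_driver",
    executable="dashboard_client",
    condition=UnlessCondition(use_mock_hardware),
)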


codecov bot commented Jan 9, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 19.10%. Comparing base (1b121b7) to head (8c02090).
Report is 427 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##            main    #1226       +/-   ##
==========================================
+ Coverage   3.59%   19.10%   +15.51%     
==========================================
  Files         13       33       +20     
  Lines        947     3428     +2481     
  Branches     152      414      +262     
==========================================
+ Hits          34      655      +621     
- Misses       843     2739     +1896     
+ Partials      70       34       -36     
Flag        Coverage Δ
unittests   19.10% <ø> (+15.51%) ⬆️

Flags with carried forward coverage won't be shown.


URJala linked an issue on Jan 10, 2025 that may be closed by this pull request
URJala force-pushed the mock_hardware_tests branch from 1c6e90d to 44cfe0b on March 14, 2025
URJala force-pushed the mock_hardware_tests branch from 0d259f8 to 1727bd1 on April 8, 2025
URJala marked this pull request as ready for review on April 8, 2025
URJala force-pushed the mock_hardware_tests branch from db57a95 to 72ac7ce on April 8, 2025
urfeex (Member) left a comment


I think it is a good idea to split up the robot_driver test into individual tests.

However, I think it would be beneficial to be able to specify mock hardware outside of the test parametrization. This way, we could run the mock hardware tests even when the integration tests flag is not set, which would effectively allow running some tests on the buildfarm as well.
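A minimal sketch of that idea (module paths and helper names are assumptions, not this repo's confirmed layout): a dedicated test file that always mocks the hardware, so the flag never appears in the parametrization and the file can run without the integration tests flag, e.g. on the buildfarm.

import launch_testing

# Hypothetical import; the PR keeps its launch helpers in a shared test module.
from test_common import generate_mock_hardware_test_description


@launch_testing.parametrize("tf_prefix", [(""), ("my_ur_")])
def generate_test_description(tf_prefix):
    # This file exclusively targets mock hardware, so no use_mock_hardware
    # parameter needs to be threaded through.
    return generate_mock_hardware_test_description(tf_prefix=tf_prefix)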

URJala added 13 commits April 22, 2025 10:34
Added some launch arguments and some checks to avoid connecting to the dashboard interface when using mock hardware, as that will not work. Maybe not the most elegant, but it works for now.
Currently the passthrough controller does not work at all with mock hardware, and is bypassed in the test when mock hardware is used.
test_trajectory_scaled_aborts_on_violation fails, as the mock hardware doesn't abort.
test_set_io also fails, as the controller can't verify that a pin has been set.
And into separate test cases.
Moved timeout for trajectory execution to common file
URJala force-pushed the mock_hardware_tests branch from ac702c4 to c488e21 on April 22, 2025
@@ -364,10 +366,47 @@ def generate_dashboard_test_description():
)


def generate_mock_hardware_test_description(

I think that doesn't require a use_mock_hardware parameter, right?

def generate_driver_test_description(
tf_prefix="",
initial_joint_controller="scaled_joint_trajectory_controller",
controller_spawner_timeout=TIMEOUT_WAIT_SERVICE_INITIAL,
use_mock_hardware="false",

Since you created a separate one, we don't need to add the parameter here.
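With the dedicated mock-hardware description in place, the signature above could simply stay as it was before this PR (the same lines minus the new parameter):

def generate_driver_test_description(
    tf_prefix="",
    initial_joint_controller="scaled_joint_trajectory_controller",
    controller_spawner_timeout=TIMEOUT_WAIT_SERVICE_INITIAL,
):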

@@ -300,7 +300,9 @@


<gpio name="${tf_prefix}get_robot_software_version">
<state_interface name="get_version_major"/>
<state_interface name="get_version_major">
<param name="initial_value">1</param>

In the config controller we test that it is not equal to 0. With this change, that would always be true, right?
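To paraphrase the concern as code (helper and field names are illustrative assumptions, not the exact test code): the check uses a non-zero major version as evidence that real data arrived from the robot, so seeding the mock state interface with 1 satisfies it unconditionally.

# Sketch of the existing assertion's shape:
version = self._configuration_controller_interface.get_robot_software_version()
self.assertNotEqual(version.major, 0)  # always passes once initial_value is 1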

Comment on lines +59 to +60
@launch_testing.parametrize("tf_prefix, use_mock_hardware", [("", "true"), ("my_ur_", "true")])
def generate_test_description(tf_prefix, use_mock_hardware):

I guess the use_mock_hardware parameter can be removed from that.

rclpy.init()
cls.node = Node("mock_hardware_test")
time.sleep(1)
cls.mock_hardware = use_mock_hardware == "true"

Suggested change (remove this line):
cls.mock_hardware = use_mock_hardware == "true"

Comment on lines +107 to +113
def test_start_scaled_jtc_controller(self):
self.assertTrue(
self._controller_manager_interface.switch_controller(
strictness=SwitchController.Request.BEST_EFFORT,
activate_controllers=["scaled_joint_trajectory_controller"],
).ok
)

When BEST_EFFORT is used, that will return ok as soon as the controller exists, so this doesn't verify that the controller was actually activated.
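A stricter variant would only report ok if the controller actually activates; a sketch reusing the same helper, with the standard controller_manager STRICT strictness:

def test_start_scaled_jtc_controller(self):
    self.assertTrue(
        self._controller_manager_interface.switch_controller(
            strictness=SwitchController.Request.STRICT,
            activate_controllers=["scaled_joint_trajectory_controller"],
        ).ok
    )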


def setUp(self):
time.sleep(1)
self.assertTrue(self._io_status_controller_interface.resend_robot_program().success)

I don't think that this makes a lot of sense here.
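If the resend is dropped for the mock hardware tests, setUp would reduce to a sketch like:

def setUp(self):
    # Mock hardware runs no URScript program, so there is nothing to resend.
    time.sleep(1)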

Comment on lines +172 to +198
def test_trajectory_scaled(self, tf_prefix):
"""Test robot movement."""
# Construct test trajectory
test_trajectory = [
(Duration(sec=6, nanosec=0), [0.0 for j in ROBOT_JOINTS]),
(Duration(sec=6, nanosec=500000000), [-1.0 for j in ROBOT_JOINTS]),
]

trajectory = JointTrajectory(
joint_names=[tf_prefix + joint for joint in ROBOT_JOINTS],
points=[
JointTrajectoryPoint(positions=test_pos, time_from_start=test_time)
for (test_time, test_pos) in test_trajectory
],
)

# Execute trajectory
logging.info("Sending goal for robot to follow")
goal_handle = self._scaled_follow_joint_trajectory.send_goal(trajectory=trajectory)
self.assertTrue(goal_handle.accepted)

# Verify execution
result = self._scaled_follow_joint_trajectory.get_result(
goal_handle,
TIMEOUT_EXECUTE_TRAJECTORY,
)
self.assertEqual(result.error_code, FollowJointTrajectory.Result.SUCCESSFUL)

That test doesn't make a lot of sense, since the mock hardware isn't scaling.

).ok
)

def test_trajectory(self, tf_prefix):

These are basically the same as the ones from the test_scaled_jtc.py test, so I think we should share the code.
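One way to share them (class and module names here are hypothetical): move the trajectory body into a mixin next to the shared constants mentioned in the commit list above, and let each test file drive it with its own action client.

from builtin_interfaces.msg import Duration
from control_msgs.action import FollowJointTrajectory
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint


class TrajectoryTestMixin:
    def run_trajectory_test(self, action_client, tf_prefix):
        """Send the shared test trajectory and assert it executes successfully."""
        test_trajectory = [
            (Duration(sec=6, nanosec=0), [0.0 for j in ROBOT_JOINTS]),
            (Duration(sec=6, nanosec=500000000), [-1.0 for j in ROBOT_JOINTS]),
        ]
        trajectory = JointTrajectory(
            joint_names=[tf_prefix + joint for joint in ROBOT_JOINTS],
            points=[
                JointTrajectoryPoint(positions=test_pos, time_from_start=test_time)
                for (test_time, test_pos) in test_trajectory
            ],
        )
        goal_handle = action_client.send_goal(trajectory=trajectory)
        self.assertTrue(goal_handle.accepted)
        result = action_client.get_result(goal_handle, TIMEOUT_EXECUTE_TRAJECTORY)
        self.assertEqual(result.error_code, FollowJointTrajectory.Result.SUCCESSFUL)

ROBOT_JOINTS and TIMEOUT_EXECUTE_TRAJECTORY would be imported from the common test module.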

Development

Successfully merging this pull request may close these issues.

Add test using mock_hardware
2 participants