Raspberry Pi 5, Ubuntu, ROS2 Jazzy Jalisco, iRobot Create 3 – all playing together

This will be a rolling post as I work through this, in the hopes that others also can find the help they need to put these things together. ROS can be a difficult mess to just get the initial plumbing up and running, before you even get to do the fun stuff. Compatibility, difficult documentation, and the niche nature of ROS means that often those who attempt to jump in get disappointed and stop before they can make the robot move. There will be times where this guide makes mistakes, skips parts, or generally is confusing, but I write it in the hopes that my ramble as I work through the latest LTS stable release of ROS2 on the latest Raspberry Pi hardware will help others get past these difficult early parts. I will do my best to offer full commands for things like moving/copying/creating/editing files in Linux, but the assumption is that you have at least some experience with these things. Please look up how to use nano as a command line text editor if you’re not familiar.

For those who don't know, the Raspberry Pi 5 only supports Ubuntu 23.10 onwards, because earlier releases lack kernel support for the Pi 5 hardware. This creates a complicated situation for those wanting to use the hardware with ROS2 Humble Hawksbill, which (easily) supports only Ubuntu 22.04.

The only clean and easily replicable way forward is to move to the newly released LTS version of ROS2: Jazzy Jalisco. It targets Ubuntu 24.04 LTS, which means we can finally fit these things together.

These are the broad steps to get rolling:
1. Image the SD card
Use the Raspberry Pi Imager tool to image Ubuntu 24.04 Server onto an SD card. You'll find this option in the "Other general-purpose OS" section of the menu, then Ubuntu, and finally "Ubuntu Server 24.04.x LTS (64-bit)". (We use Server because a desktop isn't needed; we don't want to eat up processing power and RAM rendering a desktop when SSH will suffice.)
IMPORTANT: make sure you use the options area when you're about to image to set up things like wifi and login information, and also enable SSH. This means your Pi will turn up on your network with credentials set and SSH ready to log in.
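Once the Pi boots and joins your wifi, you should be able to SSH straight in using whatever username and hostname you set in the imager options (the ones below are just placeholders):
ssh myuser@mypi.local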

2. Update the iRobot Create 3
You’ll need to follow the instructions provided by iRobot. This usually involves going to the iRobot Create 3 web page, downloading the latest firmware file, powering on the robot, waiting for it to create its own wifi access point, connecting to it, opening the default config website on it, uploading the firmware file, waiting for it to finish.
Update: It seems at the time of writing, that they officially only support up to ROS2 “Iron” (older stable release) but should still be compatible with “Jazzy” (latest long term support release).
https://iroboteducation.github.io/create3_docs/releases/i_0_0/
At this point I manually downloaded the Cyclone DDS version of the firmware and uploaded it to the robot, as this doesn't need a discovery server and should "just work" happily with Jazzy Jalisco. (Note that Jazzy's default middleware is still Fast DDS, so further down we'll explicitly switch both the Pi and the robot to CycloneDDS.)

3. Mount the Raspberry Pi 5 in the back case of the robot
This may involve 3D printing one of the many mounts for Raspberry pis to fit in the case, or doing whatever you see best to mount it safely. Be mindful of contact with metal etc.

4. Plugging the Raspberry Pi 5 into onboard power
There is a hidden USB C connector inside the back caddy of the robot, so with a short USB C to USB C cable you can power the Raspberry Pi and also run USB networking over it to control the robot.

5. Setting up the Raspberry Pi 5 to run networking over USB
This one was a little complex, as it wasn't immediately clear what was wrong, and there are mixed messages about the USB C port on the Raspberry Pi 5. Many say it's for power only, and various sources say it's not officially supported, but the USB C connector can carry data as well as power. Basically you have to load a kernel module to enable gadget ethernet over USB, then configure the new network interface with the right subnet to reach the robot.

First, add the module to load on boot:
echo "g_ether" | sudo tee -a /etc/modules
This configures the "g_ether" (gadget ethernet) module to load on boot. It will create a new network interface called "usb0" when the Pi is plugged into the robot.
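If you don't want to reboot for this to take effect, you can also load the module straight away:
sudo modprobe g_ether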

Next, add the network config for this new network connection:
sudo nano /etc/netplan/99-usb0.yaml
The contents:

network:
    version: 2
    ethernets:
        usb0:
            dhcp4: no
            addresses:
                - 192.168.186.3/24

(save and quit)

This creates a new config file called “99-usb0.yaml” in the /etc/netplan folder and puts the config for the new network interface in place. Notice the address/subnet? That’s because the iRobot Create uses address 192.168.186.2/24 by default. If your robot is configured differently, then change the address accordingly.

Apply the new netplan config:
sudo netplan apply

Check it worked:
ip addr show usb0
This should show your connection with an address assigned, and up.
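As an extra check, you should be able to ping the robot across the link (assuming it's powered on and still using its default address, mentioned below):
ping -c 3 192.168.186.2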

6. Installing ROS2
I won't spell this step out, as it's covered well here:
https://docs.ros.org/en/jazzy/Installation/Ubuntu-Install-Debs.html
I would however recommend keeping it to the base packages (sudo apt install ros-jazzy-ros-base), because you don't have a desktop installation of Ubuntu on the Pi; keep it to the basics, and connect from a laptop with ROS2 Jazzy installed to run any visualisation etc.
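For reference, at the time of writing the install boiled down to roughly the following on the Pi – treat the linked docs as the source of truth if anything has changed since:
sudo apt install software-properties-common curl
sudo add-apt-repository universe
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null
sudo apt update && sudo apt upgrade
sudo apt install ros-jazzy-ros-base ros-dev-tools
echo 'source /opt/ros/jazzy/setup.bash' >> ~/.bashrc
The last line sources the ROS2 environment in every new shell – without it, none of the ros2 commands below will be found.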

7. Set up the correct middleware for ROS2 and the iRobot Create
The middleware is what is used to pass messages between different components in ROS2. We have to have the robot using the same middleware as the Raspberry Pi in order for all of the components to talk to each other.

You should have installed the CycloneDDS version of the firmware in a previous step. Now we want to install and set up the matching middleware on the Raspberry Pi.

Run:
sudo apt install ros-jazzy-rmw-cyclonedds-cpp
Which will install the CycloneDDS middleware for ROS2
Then:
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
Which will tell ROS2 to use it.
echo 'export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp' >> ~/.bashrc
This makes it permanent so you don't have to export it at each login.
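You can double check it has taken effect in a new shell with:
printenv RMW_IMPLEMENTATION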

We then need to confirm that the robot is set up properly:
Go to the Application->Configuration menu of the robot’s webpage
Ensure that in “RMW_IMPLEMENTATION” setting it has “rmw_cyclonedds_cpp” selected
IMPORTANT: untick the “Enable Fast DDS discovery server?” setting (if you don’t it still appears to try and use FastDDS instead of CycloneDDS)
Press the menu item Application->RestartApplication to restart it. This should then have it turn up on discovery on the network for ROS2

Finally:
Run command ros2 topic list and you should see something like (probably way more):
/cmd_audio
/cmd_lightring
/cmd_vel
/cmd_vel_stamped
/parameter_events
/rosout

This means it’s visible and things are talking! If not, check you’ve done everything above, and ensured that the networking is up.

8. Have some fun testing manual driving
Because we have our /cmd_vel topic exposed now to ROS2, we have the ability to send commands to drive.

First we’ll need to install a ROS2 package which is a lightweight command line keyboard controller:
sudo apt install ros-jazzy-teleop-twist-keyboard

Then we’ll run it with standard arguments (expects /cmd_vel to exist etc):
ros2 run teleop_twist_keyboard teleop_twist_keyboard
(there are instructions on screen)

You should now be able to drive the robot around!
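If you'd rather test without the keyboard tool, you can also publish a single velocity command straight onto the topic, which should nudge the robot forward briefly:
ros2 topic pub --once /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.1, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}"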

9. Connect an RPLIDAR A1 on top and scan the room
A remote control robot is pretty boring – we want it to make its own decisions and drive itself, so we need some sensing of the environment and mapping to decide how and where to drive. This is called SLAM (Simultaneous Localisation And Mapping), and to get there we need spatial awareness.

We’re going to use a cheap and easy to get RPLIDAR A1 2D lidar to see with. I 3D printed a bracket to mount it on top in the middle of the robot to make it simple for now. Connect it to the Raspberry Pi with a USB cable.

We will now create a build environment, and grab the driver for this to build.

Create the ROS workspace to build from in the home directory:
mkdir -p ~/ros2_ws/src
Move to the src directory inside:
cd ~/ros2_ws/src
Clone the source code from the Slamtec github:
git clone https://github.com/Slamtec/sllidar_ros2.git
Move back to the top of the ROS workspace:
cd ~/ros2_ws
Grab the system dependencies this source code will want:
rosdep install --from-paths src --ignore-src -r -y
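(If rosdep complains that it hasn't been initialised yet, run sudo rosdep init followed by rosdep update, then try again.)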
Build the driver/utilities for the LIDAR:
colcon build --symlink-install
“Source” the build environment to overlay the current ROS2 system environment (allows for the new driver to be used in place, not having to install it systemwide):
source install/setup.bash

We’re ready to launch the ROS node for the lidar (this is for the defaults with my A1 LIDAR, if yours is different you will need a different launch file – my launch file is sllidar_a1_launch.py):
ros2 launch sllidar_ros2 sllidar_a1_launch.py serial_port:=/dev/ttyUSB0 serial_baudrate:=115200 frame_id:=laser

Let’s check that the topic for /scan exists with:
ros2 topic list

This is great – we appear to be running if you see it.

But it’s not much use unless we can actually see the output.
One method is to just run:
ros2 topic echo /scan
But you’ll be quickly overwhelmed with data – we humans need visuals!
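A gentler sanity check is to just measure how fast scans are arriving:
ros2 topic hz /scan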

ROS2 uses rviz2 as a tool to visualise data.

It's best you don't run this on the Raspberry Pi, so install the ROS2 Jazzy Desktop package onto your own laptop. This can be messy if your system isn't compatible, but let's leave that for you to follow the instructions and figure out. On my laptop running Ubuntu 22.04 it was messy, so I just ran up a virtual Ubuntu 24.04 desktop that I could install ROS2 Jazzy in.

Then we can run rviz2 to see the scan data from the /scan topic

Or can we? No…

We’ve created a new problem here, in that we are now jumping across multiple networks, and hoping that the middleware (CycloneDDS) will jump through these worlds with us with its multicasting. It unfortunately won’t.

We’ll have to set up cycloneDDS to unicast to particular endpoints or subnets instead using the only unifying point on the network that all parties can reach: the Raspberry Pi onboard the robot.

So we’ll configure ROS on both the Raspberry Pi, and the laptop/VM to unicast to definite endpoints instead of relying on network broadcasts for discoveries.

On the Raspberry Pi, we’ll create a new file in the home directory called cyclonedds.xml and put this in it (using nano or another command line text editor of choice):

<CycloneDDS>
    <Domain id="any">
        <General>
            <Interfaces>
                <NetworkInterface name="wlan0"/>
                <NetworkInterface name="usb0"/>
            </Interfaces>
        </General>
        <Discovery>
            <Peers>
             <Peer address="192.168.20.101"/>  <!-- Laptop/VM IP-->
             <Peer address="192.168.186.2"/>    <!--iRobot IP-->
            </Peers>
        </Discovery>
    </Domain>
</CycloneDDS>


And to export this as a system variable for ROS to find it type this at the commandline:
export CYCLONEDDS_URI=file://$HOME/cyclonedds.xml

And to make this persist across logins/reboots, add it to your bashrc file that is read/sourced each time you login:
echo 'export CYCLONEDDS_URI=file://$HOME/cyclonedds.xml' >> ~/.bashrc

This ensures that ROS2 on the Raspberry pi points both at the robot via its usb network link, and to the laptop/VM via the wifi network link.

Now we must do the same on the Laptop/VM to make it point back at the Raspberry Pi:

Again, we put the following in a file called cyclonedds.xml in the home directory (enp0s3 is the name of the network adaptor on mine, adjust yours accordingly by checking with “ip address” at the commandline on the laptop/VM):

<CycloneDDS>
    <Domain id="any">
        <General>
            <Interfaces>
                <NetworkInterface name="enp0s3"/>
            </Interfaces>
        </General>
        <Discovery>
            <Peers>
                <Peer address="192.168.20.117"/> <!-- The IP of the RPi5 -->
            </Peers>
        </Discovery>
    </Domain>
</CycloneDDS>

And again export this system variable, and add it to the bashrc of the laptop/VM:
export CYCLONEDDS_URI=file://$HOME/cyclonedds.xml

echo 'export CYCLONEDDS_URI=file://$HOME/cyclonedds.xml' >> ~/.bashrc
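One more gotcha: the ros2 command line tools cache discovery results in a background daemon, so after changing these environment variables it's worth restarting it on both machines before testing:
ros2 daemon stop
ros2 daemon start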

Now we can run the LIDAR driver on the Raspberry Pi:
ros2 launch sllidar_ros2 sllidar_a1_launch.py serial_port:=/dev/ttyUSB0 serial_baudrate:=115200 frame_id:=laser

Making sure that runs successfully, we then jump to our laptop/VM and try to look for published ROS topics made available by the LIDAR – this should be at minimum /scan:
ros2 topic list

If you’re lucky you’ll see a whole bunch more. I’m not a super expert on DDS messaging, but it seems to me like my Raspberry Pi is also acting as a relay, passing through the topics from the Robot itself, which is more than I had hoped for!

If you've been trying to make this work unsuccessfully up to this point, reboot both machines – you may have hanging processes or stuck topics on one of the instances that keep causing conflicts.

Now that we can see the topics turning up, we can finally run rviz2 on the laptop/VM.

Type at the commandline on the laptop/VM:
rviz2

You'll see a window open. First things first: because we just want to see the scan output initially, and we don't yet have a proper stack with a map, origin body etc, go to the "Fixed Frame" setting in the top left pane and change it from the default of "map" to "laser", which is the frame we're running the LIDAR driver with (remember we put "laser" as the "frame_id" in the command?).

Now we can press the “add” button, and go to “by topic” tab in the window that pops up, and you should see “/scan” in the available topics. Choose the “LaserScan” under that, and you should now see your scan output results from the LIDAR!

Take a breath – this is a good moment!

So we now have 2D LIDAR scan results flowing, we have topics and messaging passing around our robotics network, and we have the ability to drive the robot.

10. Build a map of a space to keep for future navigation

Now we are going to use a SLAM engine to gather a couple of things:
– Wheel odometry
– 2D Lidar Scan results

And try to create a map so that the robot in future can decide how to navigate places on its own. We will remote control the robot via SSH on the Raspberry Pi, and watch the map grow on screen with rviz2.

We’re going to use the built-in “SLAM Toolbox” in ROS2 as a general all-rounder for this.
Install it on the Raspberry Pi with:
sudo apt install ros-jazzy-slam-toolbox

But before we run it:
Previously we’d launched the LIDAR driver with:
ros2 launch sllidar_ros2 sllidar_a1_launch.py serial_port:=/dev/ttyUSB0 serial_baudrate:=115200 frame_id:=laser

But the frame_id is not linked to the base of the robot – it's off in its own world. So we will kill off that process and launch it instead with:
ros2 launch sllidar_ros2 sllidar_a1_launch.py serial_port:=/dev/ttyUSB0 serial_baudrate:=115200 frame_id:=base_link

Now this is slightly lazy, as ideally we have a topology set up that places the laser where it actually sits on the robot, but for now, let’s just treat the laser as the origin at the centre of the robot to make things easy. Later on we’ll build a proper model of the robot, with transforms putting the sensors where they actually live on the robot.
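(If you ever do want to keep the LIDAR publishing in its own "laser" frame instead, the usual quick fix is a static transform from base_link to laser – the offsets here are made-up placeholders, so measure your own mount:
ros2 run tf2_ros static_transform_publisher --x 0 --y 0 --z 0.1 --frame-id base_link --child-frame-id laser
But for now the lazy base_link approach above is fine.)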

Now it’s time to actually launch the SLAM Toolbox which will take all available sensor inputs (wheel odometry from the robot – /odom, distances in 360 degrees from the 2D LIDAR from /scan) and begin to build a map which will be available at topic /map:
ros2 launch slam_toolbox online_async_launch.py

Back to rviz2, if you set the fixed frame back to “map”, and add by topic the /map, you’ll now start to see the beginning of a simple map being built.

We’ll need to drive the robot around to be able to make it grow and refine, so in another terminal SSH to your Raspberry pi and run the remote control tool we used above to drive it around your room/home:
ros2 run teleop_twist_keyboard teleop_twist_keyboard

So it doesn't work? Yes, that's right. It will begin to show an occupancy map, but we're not actually going to get much sense out of it (and it will likely make a huge mess), as the odometry from the robot base isn't being transformed properly to work with SLAM against the base link of the body and everything else going on, and it needs filtering and fusion with other sensors to work properly.

QUICK STOP HERE: I’ve come back from the future here to point out that although we can see these things, they’re not working properly because the clock is slightly off on the iRobot Create (unsure why – probably remote servers it uses by default being slightly off), and messaging only works properly when everyone shares the same clock close enough to not cause alignment problems. This took me a while to figure out further down as my SLAM engine just couldn’t get it together.

So? We have to install something called chrony (a network time keeper) on the Raspberry Pi as it will be the master clock, and then reconfigure the iRobot Create to point to it for network time so that their clocks align closely.

Install the time server on the RPi:
sudo apt install chrony

Configure the timeserver using nano to edit the config:
sudo nano /etc/chrony/chrony.conf
Go to the bottom of the file and put:
allow 192.168.186.0/24
local stratum 10

This allows the iRobot subnet (the usb gadget link) to be able to access time from the RPi.
Restart the chrony service to read the config:
sudo /etc/init.d/chrony restart
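Later, once the robot has been pointed at the Pi (next step), you can confirm it is actually syncing from us with:
sudo chronyc clients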

Now we have to go to the web interface of the iRobot Create, go to the “beta features” menu, and click “edit ntp.conf”. All you need to have in here is:
server 192.168.186.1 prefer iburst minpoll 4 maxpoll 4

Be sure to restart the iRobot and give it some time to catch up its clock. It won't always happen immediately, as it doesn't like big time skips.

Now back to robot localisation:

We’re going to install a package:
sudo apt install ros-jazzy-robot-localization

Now we will create some preferences for the localisation. Let's put this in a new config folder under the ros2_ws folder in our home directory for neatness – call it ekf_odom.yaml (full path ~/ros2_ws/config/ekf_odom.yaml):
mkdir -p ~/ros2_ws/config
nano ~/ros2_ws/config/ekf_odom.yaml


This is my config file contents:

ekf_filter_node:
    ros__parameters:
        frequency: 30.0
        sensor_timeout: 1.0
        two_d_mode: true # Assuming Create 3 is 2D motion only
        odom0: /odom
        odom0_config: [true, true, false, # x, y, z
                       false, false, true, # roll, pitch, yaw
                       true, true, false, # vx, vy, vz
                       false, false, true, # vroll, vpitch, vyaw
                       false, false, false] # ax, ay, az
        odom0_differential: false
        odom0_relative: false
        imu0: /imu/data
        imu0_config: [false, false, false,
                      true, true, true,
                      false, false, false,
                      false, false, true,
                      true, true, true]
        imu0_differential: false
        publish_tf: true
        map_frame: map
        odom_frame: odom
        base_link_frame: base_link
        transform_time_offset: 0.1

Why EKF? Extended Kalman Filter – it takes noisy and imperfect inputs, weights them against others (eg wheel measurements vs IMU) and decides roughly where it thinks the robot must be (pose).

Let’s now launch our localization node using the preferences file above:
ros2 run robot_localization ekf_node --ros-args --params-file ~/ros2_ws/config/ekf_odom.yaml
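The ekf_node publishes its fused output on /odometry/filtered by default, so you can confirm it's alive with:
ros2 topic hz /odometry/filtered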

Great! So we have inputs being smoothed now, ready for a SLAM engine to process and confidently build a map.

The SLAM engine can run in two modes:
1. Mapping – creating a map
2. Localisation – using an existing map to figure out where we are

We don’t run in mapping mode constantly, as we could possibly get lost and make a messy map, so generally speaking we map, then use a nice clean map as a base for localising the robot (pose).

Let’s create a config file for the mapping mode. Create a new file in the config folder:
nano ~/ros2_ws/config/slam_toolbox_mapping_config.yaml

The contents of mine:

slam_toolbox:
    ros__parameters:
        solver_plugin: "solver_plugins::CeresSolver"
        solver_threads: 4 #max number compiled
        ceres_solver_options:
            linear_solver_type: "SPARSE_NORMAL_CHOLESKY"
        mode: "mapping"
        map_frame: "map"
        odom_frame: "odom" # <-- This matches what EKF publishes
        odom_topic: "/odometry/filtered"
        base_frame: "base_link"
        scan_frame: "base_link"
        scan_topic: "/scan"
        use_scan_matching: true
        use_scan_barycenter: false
        map_update_interval: 0.2
        resolution: 0.05 # Map resolution (meters per pixel)
        max_laser_range: 5.0 #12m is full range, reliability drops 5/6m
        update_min_d: 0.02 # Minimum movement before update (distance)
        update_min_a: 0.02 # Minimum rotation before update (radians)
        transform_publish_period: 0.05 # How often to publish tf transforms
        use_pose_extrapolator: true
        transform_timeout: 1.5
        tf_buffer_duration: 10.0 # seconds

You may want to tweak these for your purposes, but they work for me.

Let’s now launch the Slam Toolbox mapping node:
ros2 launch slam_toolbox online_async_launch.py slam_params_file:=$HOME/ros2_ws/config/slam_toolbox_mapping_config.yaml log_level:=debug

You’ll see some messages flowing by, but don’t be alarmed if you see some messages like:
Failed to compute odom pose
For just a few repeats. This is just everything being a little slow to load, and while mapping it may happen some more as the CPU on the RPi struggles to keep up. It's only a concern if you see it rolling non-stop – in that case you probably don't have a synchronised clock on the iRobot, as described above.

Launch rviz2 on your laptop/VM to see the beginnings of the map being built by adding the /map topic, as well as adding /pose to see where the SLAM engine thinks your robot is.

In another terminal window launch the keyboard remote control again, and carefully and slowly drive your iRobot Create around your space, watching the /map topic with rviz2 (above), and being careful to try and stay out of its vision. You may need a couple of goes before you get the hang of it. To clear the map just kill off the slam engine and relaunch (in case it gets messy etc)

Once you have a solid map happening and all of the walls around your space are mapped enough, close your keyboard remote control in the terminal, and make a couple of service calls to save the map and data.

Save the occupancy grid if you want to – we don’t seem to need this anymore (change the filename to whatever you’re mapping):
ros2 service call /slam_toolbox/save_map slam_toolbox/srv/SaveMap "{name: {data: 'apartment_map'}}"
(you should get a result of 0 – if not, then it didn’t save)

Save the serialised pose graph (this is important for localisation):
ros2 service call /slam_toolbox/serialize_map slam_toolbox/srv/SerializePoseGraph "{filename: 'apartment_map'}"

These will turn up in the folder relative to where slam_toolbox was launched in our other terminal. You'll want to move these maps somewhere you know, like a maps folder in our workspace:
mkdir -p ~/ros2_ws/maps
And from the folder in which the maps have been saved (there should be a few files named apartment_map, each with a different extension):
mv ./apartment_map.* ~/ros2_ws/maps/

That’s it – we have a solid map saved! Now quit the slam-toolbox (ctrl-c in the terminal running it).

We are now going to create a config file for the localisation mode only of slam_toolbox:
nano ~/ros2_ws/config/slam_toolbox_localisation_config.yaml

This is the contents of mine:

slam_toolbox:
    ros__parameters:
        solver_plugin: "solver_plugins::CeresSolver"
        solver_threads: 4 #max number compiled
        ceres_solver_options:
            linear_solver_type: "SPARSE_NORMAL_CHOLESKY"
        mode: "localization"
        map_file_name: "/home/jesse/ros2_ws/maps/apartment_map"
        map_frame: "map"
        odom_frame: "odom" # <-- This matches what EKF publishes
        odom_topic: "/odometry/filtered"
        base_frame: "base_link"
        scan_frame: "base_link"
        scan_topic: "/scan"
        use_scan_matching: true
        resolution: 0.05 # Map resolution (meters per pixel)
        max_laser_range: 5.0 # 12m is full range, reliability drops 5/6m
        update_min_d: 0.02 # Minimum movement before update (distance)
        update_min_a: 0.02 # Minimum rotation before update (radians)
        transform_publish_period: 0.05 # How often to publish tf transforms
        use_pose_extrapolator: true
        transform_timeout: 1.5
        tf_buffer_duration: 10.0  # seconds

Now we can launch a localisation only engine that doesn’t change the map, but still provides mapping services:
ros2 launch slam_toolbox localization_launch.py slam_params_file:=$HOME/ros2_ws/config/slam_toolbox_localisation_config.yaml

Not very impressive, huh? It only really works slowly, and only if you start from the same position each time (where the map started).

So we will launch a map server, host the map on a node, then use other tools to provide localisation, and navigation (route planning, control etc).

But… things are getting messy, and our config files and maps are just sitting in a folder under our ~/ros2_ws/ folder. It's time to neaten up and create a total robot package, because a robot isn't just the hardware, it's also the collection of software and configuration as a whole.

We’re going to create a new package for ROS2 under ~/ros2_ws/src/ that will begin to hold our configs, maps, and any special launch packages, so eventually it’ll all be in a single place.

Do:
cd ~/ros2_ws/src

ros2 pkg create --build-type ament_python myrobot --dependencies launch_ros rclpy
(of course change the name of “myrobot” if you want to name it something else)
This will create the directory structure in the src folder ready to begin adding your files.
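For reference, the freshly created package should look roughly like this (an ament_python skeleton – exact contents can vary slightly between releases):
~/ros2_ws/src/myrobot/
    package.xml
    setup.py
    setup.cfg
    resource/myrobot
    myrobot/__init__.py
    test/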

We’ll create a config folder inside our new structure:
mkdir ~/ros2_ws/src/myrobot/config

And move things we’ve maybe been keeping in other folders here (adjust to suit):
mv ~/ros2_ws/config/* ~/ros2_ws/src/myrobot/config/
mv ~/ros2_ws/maps/* ~/ros2_ws/src/myrobot/config/

For the rest of this blog I’ll assume that our configs are all in this new folder structure.

For the Navigation and localisation components, we’re going to need a bunch of configs and install some packages, as well as launch them:
(for the packages, you could also just install with sudo apt install ros-jazzy-navigation2 which would pull most if not all of the below)

1. AMCL (adaptive monte-carlo localiser – figures out where the robot is on the map):
Install package:
sudo apt install ros-jazzy-nav2-amcl
Config:
nano ~/ros2_ws/src/myrobot/config/amcl_params.yaml
Contents of my config file:

amcl:
    ros__parameters:
        use_sim_time: false
        scan_topic: "/scan"
        odom_topic: "/odometry/filtered" # use fused odometry
        odom_frame_id: "odom"
        base_frame_id: "base_link"
        global_frame_id: "map"
        update_min_d: 0.2
        update_min_a: 0.2
        min_particles: 500
        max_particles: 2000
        alpha1: 0.2
        alpha2: 0.2
        alpha3: 0.2
        alpha4: 0.2
        alpha5: 0.2
        laser_max_range: 5.0 # match your RPLIDAR range if limiting
        laser_min_range: 0.05
        pf_err: 0.05
        pf_z: 0.99
        resample_interval: 1
        transform_tolerance: 1.0


Launch:
ros2 run nav2_amcl amcl --ros-args --params-file ~/ros2_ws/src/myrobot/config/amcl_params.yaml

2. NAV2 map server (hosts the pre-built map we made earlier)
Install package:
sudo apt install ros-jazzy-nav2-map-server
Config:
nano ~/ros2_ws/src/myrobot/config/map_server_params.yaml
Contents of my config file:

map_server:
    ros__parameters:
        use_sim_time: false
        yaml_filename: "/home/jesse/ros2_ws/src/myrobot/config/apartment_map.yaml"

Launch:
ros2 run nav2_map_server map_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/map_server_params.yaml

3. NAV2 Planner (plans global path through map)

Install packages:
sudo apt install ros-jazzy-nav2-navfn-planner ros-jazzy-nav2-planner ros-jazzy-nav2-util
Config for global planner:

nano ~/ros2_ws/src/myrobot/config/planner_server_params.yaml
Contents of my planner config file:

planner_server:
    ros__parameters:
        odom_topic: "/odometry/filtered" # use fused odometry
        expected_planner_frequency: 1.0
        planner_plugins: ["GridBased"]
        GridBased:
            plugin: "nav2_navfn_planner::NavfnPlanner"

Config for the global costmap:
nano ~/ros2_ws/src/myrobot/config/global_costmap_params.yaml
Contents of my global costmap config file:

global_costmap:
  global_costmap:
    ros__parameters:
      global_frame: map
      robot_base_frame: base_link
      resolution: 0.05
      publish_frequency: 1.0        # slower is fine for global map
      update_frequency: 1.0
      always_send_full_costmap: true

      plugins: ["static_layer", "obstacle_layer", "inflation_layer"]

      # static layer picks up /map by default
      static_layer:
        plugin: "nav2_costmap_2d::StaticLayer"
        map_subscribe_transient_local: true
        subscribe_to_updates: true

      obstacle_layer:
        plugin: "nav2_costmap_2d::ObstacleLayer"
        enabled: true
        footprint_clearing_enabled: true
        # global planning in case of whole hallway block etc
        observation_sources: scan
        scan:
          topic: /scan
          data_type: LaserScan
          max_obstacle_height: 0.5
          clearing: true
          marking: true

      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
        enabled: true
        inflation_radius: 0.55
        cost_scaling_factor: 3.0

Launch:
ros2 run nav2_planner planner_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/planner_server_params.yaml --params-file ~/ros2_ws/src/myrobot/config/global_costmap_params.yaml
(notice we give 2 parameters here, one for the global planner, one for the config of the global costmap – they run together)

4. NAV2 Controller (drives the robot along the path)
Install packages:
sudo apt install ros-jazzy-nav2-controller ros-jazzy-dwb-core ros-jazzy-dwb-critics ros-jazzy-dwb-plugins
Config for controller itself:
nano ~/ros2_ws/src/myrobot/config/controller_server_params.yaml
Contents of my controller config file:

controller_server:
  ros__parameters:
    controller_plugins:
      - FollowPath

    goal_checker_plugins:
      - goal_checker
    current_goal_checker: "goal_checker"

    progress_checker_plugins:
      - progress_checker
    current_progress_checker: "progress_checker"

    goal_checker:
      plugin: "nav2_controller::SimpleGoalChecker"
      #stateful: true
      xy_goal_tolerance: 0.25
      yaw_goal_tolerance: 0.25

    progress_checker:
      plugin: "nav2_controller::SimpleProgressChecker"
      #required_movement_radius: 0.5
      #movement_time_allowance: 10.0

    odom_topic: "/odometry/filtered"
    use_sim_time: false
    controller_frequency: 10.0

    FollowPath:
      plugin: "dwb_core::DWBLocalPlanner"
      #debugging:
      debug_trajectory_details: true         # <--- Enables extra debug output about trajectory scoring
      publish_evaluation: true               # <--- Publishes the critic scores as a topic
      publish_trajectories: true             # <--- Publishes the candidate trajectories
      publish_local_plan: true
      publish_global_plan: true
      #debug_trajectory_details: false
      #velocity limits:
      trans_stopped_velocity: 0.0  # or 0.05 for smaller robots
      min_vel_x: 0.0
      max_vel_x: 0.2
      min_speed_xy: 0.0
      max_speed_xy: 0.2
      min_vel_theta: 0.0
      max_vel_theta: 3.0
      #acceleration/deceleration:
      acc_lim_x: 0.5   # quick turns
      acc_lim_theta: 1.0
      decel_lim_x: -1.5
      decel_lim_theta: -3.2
      #trajectory sampling:
      vx_samples: 20            # linear velocity samples
      vtheta_samples: 20        # angular velocity samples
      sim_time: 1.0                   # how far to simulate trajectories forward (sec)
      time_granularity: 0.1           # simulation timestep resolution
      #linear_granularity: 0.05        # position steps along the trajectory
      #angular_granularity: 0.025      # angular steps
      #stopping tolerances:
      #xy_goal_tolerance: 0.15   # within 15cm of goal is "good enough"
      #yaw_goal_tolerance: 0.2  # within ~11 degrees yaw alignment of goal
      # Behavior settings
      transform_tolerance: 0.3        # allow small TF delays
      short_circuit_trajectory_evaluation: false
      #prune_plan: true
      #forward_prune_distance: 1.0
      #include_last_point: true

      stateful: True

      critics:
        - RotateToGoal
        - ObstacleFootprint
        #- Oscillation
        #- BaseObstacle
        - GoalAlign
        - PathAlign
        - PathDist
        - GoalDist

      PathAlign:
        scale: 32.0
      GoalAlign:
        scale: 12.0
      PathDist:
        scale: 32.0
      GoalDist:
        scale: 24.0
      RotateToGoal:
        scale: 32.0
      ObstacleFootprint:
        scale: 0.5
      BaseObstacle:
        scale: 30.0

Config for local costmap (looks at sensors to determine locally if we can see obstacles, and make plans in case):
nano ~/ros2_ws/src/myrobot/config/local_costmap_params.yaml
Contents of my local costmap config file:

local_costmap:
  local_costmap:
    ros__parameters:
      global_frame: odom
      robot_base_frame: base_link
      rolling_window: true
      width: 3
      height: 3
      resolution: 0.05
      robot_radius: 0.17          # realistic!
      publish_frequency: 5.0
      update_frequency: 10.0
      always_send_full_costmap: true

      plugins: ["obstacle_layer", "inflation_layer"]

      obstacle_layer:
        plugin: nav2_costmap_2d::ObstacleLayer
        enabled: true
        observation_sources: scan
        scan:
          topic: /scan
          data_type: LaserScan
          max_obstacle_height: 0.5
          clearing: true
          marking: true

      inflation_layer:
        plugin: nav2_costmap_2d::InflationLayer
        enabled: true
        inflation_radius: 0.3
        cost_scaling_factor: 3.0

Launch:
ros2 run nav2_controller controller_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/controller_server_params.yaml --params-file ~/ros2_ws/src/myrobot/config/local_costmap_params.yaml
(notice we give 2 parameters here, one for the controller, one for the config of the local costmap – they run together)

5. NAV2 Behaviours Server (provides recovery behaviours like spin, wait, and backup – needed by following BT Navigator):
Install packages:
sudo apt install ros-jazzy-nav2-behaviors
Config:
nano ~/ros2_ws/src/myrobot/config/behavior_server_params.yaml
Contents of my config:

behavior_server:
  ros__parameters:
    costmap_topic: "/local_costmap/costmap_raw"
    footprint_topic: "/local_costmap/footprint"
    use_sim_time: false
    recovery_plugins: ["spin", "wait"]
    spin:
      plugin: "nav2_behaviors::Spin"
      simulate_ahead_time: 2.0
      time_allowance: 10.0      # max spin duration
      max_rotational_vel: 2.5   # matches controller
      min_rotational_vel: 0.5
      angular_dist_threshold: 0.25  # rad left to spin when we accept success

    backup:
      plugin: "nav2_behaviors::BackUp"
      simulate_ahead_time: 2.0
      time_allowance: 8.0
      backup_vel: -0.15
      backup_dist: 0.25

    wait:
      plugin: "nav2_behaviors::Wait"
      wait_duration: 5.0

Launch:
ros2 run nav2_behaviors behavior_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/behavior_server_params.yaml

6. NAV2 Behaviour Tree Navigator (coordinates behaviours as a tree to work through for complex behaviour)
Install packages:
sudo apt install ros-jazzy-nav2-bt-navigator ros-jazzy-nav2-behavior-tree
Config for the behaviour tree server:
nano ~/ros2_ws/src/myrobot/config/bt_navigator_params.yaml
Contents of my behaviour tree server config:

bt_navigator:
  ros__parameters:
    odom_topic: "/odometry/filtered"
    default_nav_to_pose_bt_xml: "/home/jesse/ros2_ws/src/myrobot/config/navigate_spin_only.xml"
    use_sim_time: false

Be sure to adjust the path above to the location of the following config:

Config for the behaviour tree itself:
nano ~/ros2_ws/src/myrobot/config/navigate_spin_only.xml
Contents of my behaviour tree config:

<root BTCPP_format="4" main_tree_to_execute="MainTree">
  <BehaviorTree ID="MainTree">
    <RecoveryNode number_of_retries="6" name="NavigateRecovery">
      <PipelineSequence name="NavigateWithReplanning">
        <ControllerSelector selected_controller="{selected_controller}" default_controller="FollowPath" topic_name="controller_selector"/>
        <PlannerSelector selected_planner="{selected_planner}" default_planner="GridBased" topic_name="planner_selector"/>
        <RateController hz="2.0">
          <RecoveryNode number_of_retries="1" name="ComputePathToPose">
            <Fallback>
              <ReactiveSequence>
                <Inverter>
                  <PathExpiringTimer seconds="10" path="{path}"/>
                </Inverter>
                <Inverter>
                  <GlobalUpdatedGoal/>
                </Inverter>
                <IsPathValid path="{path}"/>
              </ReactiveSequence>
              <ComputePathToPose goal="{goal}" path="{path}" planner_id="{selected_planner}" error_code_id="{compute_path_error_code}"/>
            </Fallback>
            <ClearEntireCostmap name="ClearGlobalCostmap-Context" service_name="global_costmap/clear_entirely_global_costmap"/>
          </RecoveryNode>
        </RateController>
        <RecoveryNode number_of_retries="1" name="FollowPath">
          <FollowPath path="{path}" controller_id="{selected_controller}" error_code_id="{follow_path_error_code}"/>
          <ClearEntireCostmap name="ClearLocalCostmap-Context" service_name="local_costmap/clear_entirely_local_costmap"/>
        </RecoveryNode>
      </PipelineSequence>
      <ReactiveFallback name="RecoveryFallback">
        <GoalUpdated/>
        <RoundRobin name="RecoveryActions">
          <Sequence name="ClearingActions">
            <ClearEntireCostmap name="ClearLocalCostmap-Subtree" service_name="local_costmap/clear_entirely_local_costmap"/>
            <ClearEntireCostmap name="ClearGlobalCostmap-Subtree" service_name="global_costmap/clear_entirely_global_costmap"/>
          </Sequence>
          <Spin spin_dist="1.57"/>
          <Wait wait_duration="5.0"/>
          <!-- <BackUp backup_dist="0.30" backup_speed="0.15"/>  CANCELLING BACKUP FOR NOW -->
        </RoundRobin>
      </ReactiveFallback>
    </RecoveryNode>
  </BehaviorTree>
</root>

Launch:
ros2 run nav2_bt_navigator bt_navigator --ros-args --params-file ~/ros2_ws/src/myrobot/config/bt_navigator_params.yaml

7. NAV2 lifecycle manager (coordinates starting and stopping of NAV2 components)**
None of the above NAV2 components, including AMCL, will actually fully start on their own; instead they wait in an unconfigured state until they are told to configure and activate, as part of the NAV2 lifecycle system. See the notes below this block on the lifecycle manager.
Install Packages:
sudo apt install ros-jazzy-nav2-lifecycle-manager
Config:
nano ~/ros2_ws/src/myrobot/config/lifecycle_manager_nav2.yaml
Contents of my config:

lifecycle_manager_navigation:
    ros__parameters:
        autostart: true # Automatically configures and activates below
        node_names:
          - map_server
          - amcl
          - planner_server
          - controller_server
          - behavior_server
          - bt_navigator
        use_sim_time: false

Launch:
ros2 run nav2_lifecycle_manager lifecycle_manager --ros-args --params-file ~/ros2_ws/src/myrobot/config/lifecycle_manager_nav2.yaml

**NOTES ON THE LIFECYCLE MANAGER:
You can initially start each of the above NAV2 components yourself, without using the lifecycle manager, to check that they run and are configured correctly – bring them up one at a time, in order, and activate them so you can see the logs.
For example if we want to configure and enable the AMCL server:
Run it like above:
ros2 run nav2_amcl amcl --ros-args --params-file ~/ros2_ws/src/myrobot/config/amcl_params.yaml
In another terminal set it to configure (the node name is the 4th argument above, "amcl"):
ros2 lifecycle set /amcl configure
(this reads the config file, and you’ll see errors if there is a problem loading)
Now set it to activate and start running:
ros2 lifecycle set /amcl activate
This way you can make sure each component runs properly before bundling it all together with the lifecycle manager which can get difficult.
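You can also query which lifecycle state a node is currently sitting in with:
ros2 lifecycle get /amcl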

It would be fun to just keep going here, but we are really starting to get messy, and starting each thing up is becoming a pain, so before we continue, lets:
1. test the individual components (including adding the NAV2 components)
2. Create a unified launch for all of these components to start and stop neatly, so we’re not continuing to chew up terminal windows.

Recap on tools we’re bringing up now that we already have a saved map:

1. EKF node to fuse and filter IMU and wheel odometry for better localisation:
ros2 run robot_localization ekf_node --ros-args --params-file ~/ros2_ws/src/myrobot/config/ekf_odom.yaml
(provides /odometry/filtered topic)

2. LIDAR driver for RPLIDAR A1 sensor:
ros2 launch sllidar_ros2 sllidar_a1_launch.py serial_port:=/dev/ttyUSB0 serial_baudrate:=115200 frame_id:=base_link
(provides /scan topic)

3. NAV2 map server with our existing map:
ros2 run nav2_map_server map_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/map_server_params.yaml
(provides /map topic)

4. AMCL localising server to know where we are:
ros2 run nav2_amcl amcl --ros-args --params-file ~/ros2_ws/src/myrobot/config/amcl_params.yaml

5. NAV2 planner server to plan routes
ros2 run nav2_planner planner_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/planner_server_params.yaml --params-file ~/ros2_ws/src/myrobot/config/global_costmap_params.yaml

6. NAV2 controller server to drive the robot on route
ros2 run nav2_controller controller_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/controller_server_params.yaml --params-file ~/ros2_ws/src/myrobot/config/local_costmap_params.yaml

7. NAV2 Behaviours Server (provides recovery behaviours like spin, wait, and backup – needed by following BT Navigator):
ros2 run nav2_behaviors behavior_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/behavior_server_params.yaml

8. NAV2 behaviour tree navigator to set behaviour
ros2 run nav2_bt_navigator bt_navigator --ros-args --params-file ~/ros2_ws/src/myrobot/config/bt_navigator_params.yaml

9. NAV2 lifecycle manager to activate and monitor the NAV2 components (if using yet)
ros2 run nav2_lifecycle_manager lifecycle_manager --ros-args --params-file ~/ros2_ws/src/myrobot/config/lifecycle_manager_nav2.yaml

Try testing each of the above. It's messy, isn't it? It's also very hard to manage, with a high risk of accidentally running two instances of a service if you're not careful. So we'll now create what is called a bringup package that does all of the above in a single command and dies gracefully when quitting.

We'll put a special launch file into our myrobot package.
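First create the launch folder, since ros2 pkg create doesn't make one for an ament_python package:
mkdir -p ~/ros2_ws/src/myrobot/launch
Then create the launch file itself: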
nano ~/ros2_ws/src/myrobot/launch/bringup_launch.py

Contents of the file:

from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node
from launch.actions import TimerAction
from ament_index_python.packages import get_package_share_directory
import os

def generate_launch_description():
	bringup_dir = get_package_share_directory('myrobot')
	ekf_params = bringup_dir + '/config/ekf_odom.yaml'
	map_params = bringup_dir + '/config/map_server_params.yaml'
	amcl_params = bringup_dir + '/config/amcl_params.yaml'
	controller_params = bringup_dir + '/config/controller_server_params.yaml'
	planner_params = bringup_dir + '/config/planner_server_params.yaml'
	local_costmap_params = bringup_dir + '/config/local_costmap_params.yaml'
	global_costmap_params = bringup_dir + '/config/global_costmap_params.yaml'
	behavior_params = bringup_dir + '/config/behavior_server_params.yaml'
	bt_params = bringup_dir + '/config/bt_navigator_params.yaml'

	#debugging:
	#print("Controller params file:", controller_params)

	# EKF node
	ekf_node = Node(
		package='robot_localization',
		executable='ekf_node',
		name='ekf_filter_node',
		output='screen',
		parameters=[ekf_params]
	)
	# LIDAR - using original launch file because it's a bit odd
	lidar_launch = IncludeLaunchDescription(
		PythonLaunchDescriptionSource(
            		os.path.join(get_package_share_directory('sllidar_ros2'), 'launch', 'sllidar_a1_launch.py')
        	),
		launch_arguments={
            		'serial_port': '/dev/ttyUSB0',
            		'serial_baudrate': '115200',
            		'frame_id': 'base_link',
            		#'inverted': 'false',
            		#'angle_compensate': 'true',
            		#'scan_mode': 'Sensitivity'
        	}.items()
	)
	# Map server
	map_server = Node(
		package='nav2_map_server',
		executable='map_server',
		name='map_server',
		output='screen',
		parameters=[map_params]
	)
	# AMCL localization
	amcl = Node(
		package='nav2_amcl',
		executable='amcl',
		name='amcl',
		output='screen',
		parameters=[amcl_params]
	)
	# Controller server (local costmap)
	controller_server = Node(
		package='nav2_controller',
		executable='controller_server',
		name='controller_server',
		output='screen',
		parameters=[controller_params, local_costmap_params],
		#arguments=['--ros-args', '--log-level', 'nav2_controller:=debug']
	)
	# Planner server (global costmap)
	planner_server = Node(
		package='nav2_planner',
		executable='planner_server',
		name='planner_server',
		output='screen',
		parameters=[planner_params, global_costmap_params]
	)
	# Behaviour server (recovery behaviours)
	behavior_server = Node(
		package='nav2_behaviors',
		executable='behavior_server',
		name='behavior_server',
		output='screen',
		parameters=[behavior_params]
	)
	# BT Navigator
	bt_navigator = Node(
		package='nav2_bt_navigator',
		executable='bt_navigator',
		name='bt_navigator',
		output='screen',
		parameters=[bt_params]
	)

	# Lifecycle Manager
	lifecycle_manager_node = Node(
		package='nav2_lifecycle_manager',
		executable='lifecycle_manager',
		name='lifecycle_manager_navigation',
		output='screen',
		parameters=[{
			'use_sim_time': False,
			'autostart': True,
			'node_names': [
				'map_server',
				'amcl',
				'behavior_server',
				'bt_navigator',
				'planner_server',
				'controller_server',
			]
		}]
	)

	lifecycle_manager = TimerAction(
		period=5.0,  # wait 5 seconds for other nodes
		actions=[lifecycle_manager_node]
	)

	return LaunchDescription([
		ekf_node,
		lidar_launch,
		map_server,
		amcl,
		planner_server,
		controller_server,
		behavior_server,
		bt_navigator,
		lifecycle_manager
	])

Now add/edit a file in your ~/ros2_ws/src/myrobot folder called setup.py (we’re running a python ament build). This is what the build system uses to put files in the right place etc:
nano ~/ros2_ws/src/myrobot/setup.py
Contents of mine:

from setuptools import find_packages, setup
import os
from glob import glob

package_name = 'myrobot'

setup(
    name=package_name,
    version='0.0.0',
    packages=find_packages(exclude=['test']),
    data_files=[
        ('share/ament_index/resource_index/packages',
            ['resource/' + package_name]),
        ('share/' + package_name, ['package.xml']),
        # THIS installs the launch files:
        (os.path.join('share', package_name, 'launch'), glob('launch/*.py')),
        # And this installs your config:
        (os.path.join('share', package_name, 'config'), glob('config/*.yaml')),
    ],
    install_requires=['setuptools'],
    zip_safe=True,
    maintainer='jesse',
    maintainer_email='jesse@cake.net.au',
    description='Basic package for my robot',
    license='TODO: License declaration',
    tests_require=['pytest'],
    entry_points={
        'console_scripts': [
        ],
    },
)

You should already have a package.xml in the folder from creating the package earlier on.

Now we can build the “myrobot” package ready for ros2 to call in standard commands:
Go to the root directory of the workspace:
cd ~/ros2_ws/
Run the build process:
colcon build --symlink-install
Source the workspace again:
source install/setup.bash

Now we can finally just call a single launch command to bring the whole lot up!
ros2 launch myrobot bringup_launch.py
(if you have problems, you can get debugging info from your launch with ros2 launch myrobot bringup_launch.py --debug )

Your control stack should now all launch, but it may appear to hang initially, as you need to provide an initial pose estimate using rviz2 on your laptop/VM – once you send this, everything else will come up neatly.

You should now be able to give 2D goals (using rviz2) by clicking on your map, and your robot will attempt to drive there. Of course there is lots of tuning to do, and lots more serious config to go (creating a model of the robot itself, where its wheels are, where the sensors are actually mounted etc), but for now, your robot can run!

To be continued…


This is a rolling blog of my journey on this particular project, so I’ll keep adding as I work through it.

Compiling and installing ROS Noetic and compiling raspicam-node for Raspberry Pi OS “buster” for accelerated camera capture

Long title, I know. But this was something that turned out to be surprisingly complex and took lots of troubleshooting steps to get right, so I thought I’d share.

So initially, why would I do this? raspicam-node is already available as binaries for Ubuntu, so why try and compile it for Raspberry Pi OS (Raspbian) Buster?

Well for one thing, with Raspberry Pi, we're still kinda stuck in the land of 32 bit if you want accelerated graphics, or at least accelerated video operations, because of the GPU hardware on the current Raspberry Pi offerings (RPi 3, 4 etc). I'm not completely across all the detail, but apparently right now we just have to accept this and move on.

So this means that if we were to follow through and install a nice 64 bit version of Ubuntu Server to run ROS on our Raspberry Pi, we wouldn’t be able to benefit from accelerated video bits and pieces, and instead rely on (quite slow) CPU operations, which would mean that any video would be quite slow.

So this means that if I want to use my Raspberry Pi, on a mobile robot, to capture stereo camera input to do mapping with, my only real option to make it fast enough to be useful for stereo vision mapping is to figure out how to build the fantastic raspicam-node by Ubiquity Robotics for 32 bit Raspbian.

Here’s the Raspicam_node source:
https://github.com/UbiquityRobotics/raspicam_node

I also got some great help to get started with ROS on Raspbian from :
https://varhowto.com/install-ros-noetic-raspberry-pi-4/

This guide is still a work in progress, and I will in the near future clean it up and edit it to improve it, so take this as a quick dump of info for now to help get you started before I forget.

Enjoy!

Download and image the Raspberry Pi OS 32 bit “buster” lite image onto an SD card and boot up your Raspberry Pi 4 (which is what I used here) with a screen attached (and keyboard if you’re just going to use it directly rather than SSH in). Any command starting with “sudo” may require your password, which the standard for the default “pi” user is “raspberry”.

Now we need to add the official ROS software sources to download from using the following command:

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu buster main" > /etc/apt/sources.list.d/ros-noetic.list'

Next step is to add the key for this server so that it will be accepted as a usable source with this command:

sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654

Now update the available packages so we can see the new sources:

sudo apt update

Make sure the whole system (including the kernel) is fully up to date:

sudo apt full-upgrade

Reboot the system into the updates:

sudo reboot

Install the required packages from the ROS sources:

sudo apt-get install -y python3-rosdep python3-rosinstall-generator python-wstool python3-rosinstall build-essential cmake

Initialise the ROS dependency tool (adds hidden files to your home directory):

sudo rosdep init

Update the ROS dependency tool:

rosdep update

Create a “catkin” workspace (catkin is the official build tool for ROS, so directories to hold the sources, build requirements and binaries built are called catkin workspaces) by simply creating a directory called “ros_catkin_ws” in your home directory:

mkdir ~/ros_catkin_ws

Change to this directory:

cd ~/ros_catkin_ws

Use the rosinstall_generator tool to get ready to flesh out the catkin workspace we created above. This basically sets up a special file that will be used to create all of the requirements needed to make a functional catkin workspace for ROS “Noetic” (wet here means released packages):

rosinstall_generator ros_comm --rosdistro noetic --deps --wet-only --tar > noetic-ros_comm-wet.rosinstall

This will initialise the sources for “Noetic” to be built in our catkin workspace:

wstool init src noetic-ros_comm-wet.rosinstall

ROS dependencies will be downloaded and put into the ./src directory of our workspace (required libraries etc) so we can build ROS:

rosdep install -y --from-paths src --ignore-src --rosdistro noetic -r --os=debian:buster

Compiling things takes lots of RAM, of which the Raspberry Pi has relatively little by today's standards, so to avoid accidentally bumping over the limit it's wise to increase the amount of swap space available to soak up any overruns.

First turn off the swap file:

sudo dphys-swapfile swapoff

Edit the swapfile configuration:

sudoedit /etc/dphys-swapfile

Edit the line in the file that says "CONF_SWAPSIZE" to equal 1024 (1GB):

CONF_SWAPSIZE=1024

Save and exit the nano file editor by pressing CTRL-O (O for ostrich) and hitting enter, then press CTRL-X

Setup the required new swap file:

sudo dphys-swapfile setup

Turn swapping back on with the new settings and file:

sudo dphys-swapfile swapon

Now let’s compile ROS Noetic (here I’ve used option -j3 which means use 3 simultaneous processes for compiling to speed things up, this uses more RAM and works the processor harder, but for me works fine for Raspberry Pi 4 with 2GB of ram, if this fails, try -j1):

sudo src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/noetic -j3 -DPYTHON_EXECUTABLE=/usr/bin/python3

The main build and installation of ROS Noetic is now finished. You’ll find your new compiled binaries are in /opt/ros/noetic/

Each time you use ROS you’ll need to source some bash terminal bits with the following command:

source /opt/ros/noetic/setup.bash

If this works you can put this at the end of your .bashrc file which will make bash load it every time you log in. Simply type:

nano ~/.bashrc

And you’ll be using the nano editor like above to see the contents. Scroll to the very bottom, and press enter for a new line and put the above “source” line into this file. Press ctrl-o and press enter to save. Then press ctrl-x to exit.

Try running ROS core to see if it runs to test your installation and bash source with:

roscore
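If it comes up cleanly it will print a master URI and sit there running. In another terminal (after sourcing the setup.bash again) you can double check things are alive with:
rostopic list
You should see at least /rosout listed. Press CTRL-C in the roscore terminal when you're done.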

Now we have a fully running ROS installation on the Raspberry Pi and have tested our ability to set up and compile a catkin workspace. So we can move ahead and use these tools to compile the raspicam_node tool to allow ROS to access the onboard Raspberry Pi camera.

Why do we want this? Well, apart from the amazingly fast interface the special camera port on the Raspberry Pi offers for a wide range of compatible cameras, we can also use camera boards like those from Arducam that combine two cameras side by side into a single camera source to produce a stereo image. And from a stereo camera source we can then pull 3D image data for things like SLAM mapping. Very useful for mobile computers like the Raspberry Pi!

Let’s add the repository for rosdep to understand the dependencies that are laid out in the raspicam source to compile (this is mainly regarding libraspberrypi-dev stuff, without this step, rosdep won’t know where to find the required libraries to build with) – we’ll use the nano editor to create a file:

sudo nano /etc/ros/rosdep/sources.list.d/30-ubiquity.list

Now inside the nano editor we will add this line:

yaml https://raw.githubusercontent.com/UbiquityRobotics/rosdep/master/raspberry-pi.yaml

Save the file in nano, then exit. Now we can run rosdep update to use this new source:

rosdep update

Now that rosdep has knowledge of where to find the stuff needed to build raspicam_node, let’s go and setup a new catkin workspace (perhaps not needed, but let’s do a fresh one just in case) – the “p” here creates the parent directory as well as we’re creating the “catkin_ws” directory in the user home, then the “src” directory underneath it:

mkdir -p ~/catkin_ws/src

Change into this new src subdirectory:

cd ~/catkin_ws/src

Let’s get the raspicam_node source code directly from their Github page:

git clone https://github.com/UbiquityRobotics/raspicam_node.git

Let’s move out of the “src” directory into the top of the new catkin workspace we created:

cd ~/catkin_ws

Let’s have ROS initialise for use the src directory and everything in it to use:

wstool init src

Use rosinstall_generator to set up what is needed in 4 different ways:

Step 1:

rosinstall_generator compressed_image_transport --rosdistro noetic --deps --wet-only --tar > compressed_image_transport-wet.rosinstall

Step 2:

rosinstall_generator camera_info_manager --rosdistro noetic --deps --wet-only --tar > camera_info_manager-wet.rosinstall

Step 3:

rosinstall_generator dynamic_reconfigure --rosdistro noetic --deps --wet-only --tar > dynamic_reconfigure-wet.rosinstall

Step 4:

rosinstall_generator diagnostics --rosdistro noetic --deps --wet-only --tar > diagnostics-wet.rosinstall

Merge these into the “src” directory with wstool in 5 steps:

Step 1:

wstool merge -t src compressed_image_transport-wet.rosinstall

Step 2:

wstool merge -t src camera_info_manager-wet.rosinstall

Step 3:

wstool merge -t src dynamic_reconfigure-wet.rosinstall

Step 4:

wstool merge -t src diagnostics-wet.rosinstall

Step 5:

wstool update -t src

Let’s make rosdep find all the dependencies required to now build all of this:

rosdep install --from-paths src --ignore-src --rosdistro noetic -y --os=debian:buster

Finally, we can now build raspicam_node:

sudo src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/noetic -j3 -DPYTHON_EXECUTABLE=/usr/bin/python3

raspicam_node is now built, and you will find the binaries in /opt/ros/noetic with the rest of ROS we built earlier. Before we can use it though, we must enable the camera port using:

sudo raspi-config

Look for interfaces, and camera – it will ask if you wish to enable it. If it doesn’t automatically reboot, reboot the Raspberry Pi yourself:

sudo reboot

Make sure after reboot that the default pi user (or whatever user you’re using) is added to the video group to access the camera:

sudo adduser pi video

With my Arducam module (probably not necessary for other modules), I had to make sure that the I2C module was added to the kernel options by editing the boot config:

sudo nano /boot/config.txt

Put in:

dtparam=i2c_vc=on

Save (ctrl-o, enter, ctrl-x) and reboot again:

sudo reboot

Test that the camera works directly using the built in Raspberry Pi camera tools:

raspistill -o temp.jpg

You should see an image from the camera on screen for a short moment. If so, success! Time to use the module in ROS!

Let’s source the bash setup file (this might need work below, we should probably only need to source what is in the /opt/ros/noetic directory):

source ~/catkin_ws/devel_isolated/setup.bash

Run roscore in the background:

roscore &

Launch raspicam_node with the built-in config for a v2 camera at 640×480 5fps (there are several built-in modes – simply type roslaunch raspicam_node and press TAB a couple of times to see the options). We are again pushing this process to run in the background by putting the "&" symbol at the end:

roslaunch raspicam_node camera_module_v2_640x480_5fps_autocapture.launch  &

Now let’s see how fast the update speed is in one of the Raspicam_node topics:

rostopic hz /raspicam_node/image/compressed

If you tried to run the above and got an error about calibration, do the following:

cp -r ~/catkin_ws/src/raspicam_node/camera_info ~/.ros

If you got no errors, and you’re seeing an update of how fast the updates are happening per second, then you’re up and running!

To stop the running processes above first press CTRL-C to kill off the rostopic command. This should now return you to a commandline. Now use the process management tools to bring those other 2 commands to the front to kill by typing:

fg

you’ll now see you can kill off the second process with CTRL-c, and then repeat to kill off the initial roscore.

Success! You can now use the Raspberry Pi camera for ROS in a nice fast way with a neat node.

This is not the end though, as for my Arducam, the image comes in as a side by side stereo image in a single image. It needs to be sliced in half in order for us to do stereo image processing. So I’m looking at using another node that does this job (depending on how fast it runs) or otherwise I’ll see if it’s possible to add the feature to raspicam_node itself so it’ll be a one-stop-shop for fast and cheap stereo image sourcing for 3D outcomes.

Stay tuned..