Linking WT32-SC01 Display Module with Home Assistant using Tasmota

There’s not a massive amount of information on how to use the new HASP-powered Tasmota GUI builder and the universal display driver it offers, so I thought I’d share my config so that it may help others.

Product page (currently): https://en.wireless-tag.com/product-item-25.html

This is a simple 4 button little wall controller for things in my kitchen, and I’ll revisit this with more complexity in future (adding things like screen dimming etc).

First you’ll need to install the firmware to your device. In a browser that supports Web Serial (such as Chrome) go to:
https://tasmota.github.io/install/

And install Tasmota32 LVGL by plugging in your device over USB and choosing the correct port. This takes about 2 minutes, and you will also set up your wifi and the name of the device along the way.

Once it is installed, browse to the IP address the device has been given on your network, go to Configuration->Other->Template, and put the following into the text box:

{"NAME":"WT32-SC01","GPIO":[6210,1,1,1,1,1,0,0,1,704,736,768,1,1,640,608,1,800,1024,992,0,1,1,1,0,0,0,0,1,1,1,1,1,0,0,1],"FLAG":0,"BASE":1}

Make sure to check the “Activate” box and hit save. This will set up the correct pins for the display, backlight, touchscreen etc.

With the new universal display driver, we no longer use the console to set it up, instead we have to upload a display.ini file which carries all of the config. Go to Tools->Manage File System and browse for your display.ini file, and press upload. You should see it listed on the panel after uploading. Here are the contents you need:

display.ini:

:H,ST7796,480,320,16,SPI,1,*,*,*,*,*,*,*,40
:S,2,1,1,0,40,20
:I
EF,3,03,80,02
CF,3,00,C1,30
ED,4,64,03,12,81
E8,3,85,00,78
CB,5,39,2C,00,34,02
F7,1,20
EA,2,00,00
C0,1,23
C1,1,10
C5,2,3e,28
C7,1,86
36,1,48
37,1,00
3A,1,55
B1,2,00,18
B6,3,08,82,27
F2,1,00
26,1,01
E0,0F,0F,31,2B,0C,0E,08,4E,F1,37,07,10,03,0E,09,00
E1,0F,00,0E,14,03,11,07,31,C1,48,08,0F,0C,31,36,0F
11,80
29,80
:o,28
:O,29
:A,2A,2B,2C
:R,36
:0,28,00,00,01
:1,88,00,00,02
:2,E8,00,00,03
:3,48,00,00,00
:i,20,21
:UTI,FT6336U,I1,38,*,*
RD A0
CP 02
RTF
RT
:UTT
RDM 00 16
MV 2 1
RT
:UTX
MV 3 2
RT
:UTY
MV 5 2
RT

Reboot your device and you should see the splash screen for Tasmota pop up and draw on screen as it boots.

Now that we have the display working properly, we are going to add a super simple gui with a title at the top, the time, wifi strength, and 4 buttons on screen to do things in Home Assistant.

We will do this with 2 files:
1. autoexec.be (runs on boot to set up various things and load the pages)
2. pages.jsonl (contains the details of what to draw on screen)

In my autoexec.be I first import the hasp and mqtt libraries, and then I define some actions that happen when pressing buttons (publishing to certain MQTT topics), then finally start running the GUI.

autoexec.be:

import haspmota
import mqtt

tasmota.add_rule("hasp#p1b1#event=down", / -> mqtt.publish("kitchenpanel/button/espresso", "pressed"))
tasmota.add_rule("hasp#p1b2#event=down", / -> mqtt.publish("kitchenpanel/button/toaster", "pressed"))
tasmota.add_rule("hasp#p1b3#event=down", / -> mqtt.publish("kitchenpanel/button/aircon", "pressed"))
tasmota.add_rule("hasp#p1b4#event=down", / -> mqtt.publish("kitchenpanel/button/neonc", "pressed"))

haspmota.start()

In my pages.jsonl I have 2 pages. Page 0 is a special page which is always shown in hasp, it’s like an overlay over everything else, and in mine I have a top bar with a title, a clock, and wifi signal strength. Page 1 is where I put my buttons which show in the middle. Both pages are shown simultaneously to make a single GUI.

pages.jsonl:

{"page":0,"comment":"---------- Upper stat line ----------"}

{"id":11,"obj":"label","x":0,"y":0,"w":480,"pad_right":90,"h":22,"bg_color":"#BD5B06","bg_opa":255,"radius":0,"border_side":0,"text":"Kitchen Control","text_font":"montserrat-20"}

{"id":15,"obj":"lv_wifi_arcs","x":450,"y":0,"w":29,"h":22,"radius":0,"border_side":0,"bg_color":"#000000","line_color":"#FFFFFF"}
{"id":16,"obj":"lv_clock","x":395,"y":3,"w":55,"h":16,"radius":0,"border_side":0}


{"page":1,"comment":"---------- Page 1 ----------"}
{"id":0,"bg_color":"#000000","bg_grad_color":"#000000","bg_grad_dir":1,"text_color":"#FFFFFF"}

{"id":1,"obj":"btn","x":35,"y":50,"w":200,"h":120,"text":"Espresso", "bg_color":"#BD5B06", "text_font":"robotocondensed-24"}
{"id":2,"obj":"btn","x":245,"y":50,"w":200,"h":120,"text":"Toaster", "bg_color":"#BD5B06", "text_font":"robotocondensed-24"}
{"id":3,"obj":"btn","x":35,"y":180,"w":200,"h":120,"text":"Air Conditioner", "bg_color":"#BD5B06", "text_font":"robotocondensed-24"}
{"id":4,"obj":"btn","x":245,"y":180,"w":200,"h":120,"text":"Neon C", "bg_color":"#BD5B06", "text_font":"robotocondensed-24"}

Make sure to upload these to Tasmota using the same method above, then reboot the device. You should now see the simple GUI being drawn on screen, as well as the buttons changing colour briefly when you press them.

But currently these buttons don’t connect to anything, as we’re not yet linked with our Home Assistant installation.

On your Tasmota interface webpage (the IP address of the device) go to Configuration->MQTT and fill in the important info:
Host: The IP or hostname of your MQTT broker (typically your Home Assistant host if you run the Mosquitto add-on)
Port: usually leave this as is (default 1883)
Client: usually leave as is (unique name)
User: username you connect your device to HA with
Password: password for this username
Topic: I like to name this the name of the device (“kitchenpanel” in this case)
Full Topic: Leave as is

Hit save

Head back to the main menu, and go to Tools->Console and type:
weblog 4
Which will show you full logging.

When you press a button on the device, you should now see messages like:

08:23:18.555 MQT: stat/kitchenpanel/RESULT = {"WebLog":4}
08:23:18.768 CFG: Saved, Count 104, Bytes 4096
08:23:20.447 TS : touched  x=169 y=107 gest=0x00 (raw x=213 y=169)
08:23:20.474 LVG: Refreshed 19095 pixels in 10 ms (1909 pix/ms)
08:23:20.485 MQT: kitchenpanel/button/espresso = pressed
08:23:20.489 HSP: publish {"hasp":{"p1b1":{"event":"down"}}}
08:23:20.517 LVG: Refreshed 19095 pixels in 10 ms (1909 pix/ms)
08:23:20.555 LVG: Refreshed 19095 pixels in 10 ms (1909 pix/ms)
08:23:20.564 TS : released x=169 y=107 (raw x=213 y=169)
08:23:20.570 BRY: GC from 24994 to 14905 bytes, objects freed 155/237 (in 2 ms) - slots from 649/655 to 228/594
08:23:20.592 LVG: Refreshed 19095 pixels in 10 ms (1909 pix/ms)
08:23:20.601 HSP: publish {"hasp":{"p1b1":{"event":"release"}}}
08:23:20.606 HSP: publish {"hasp":{"p1b1":{"event":"up"}}}

Notice the:
MQT: kitchenpanel/button/espresso = pressed
This means we’re sending a message of “pressed” to the topic “kitchenpanel/button/espresso” (I pressed the espresso button).

Now it’s time to plumb these messages over MQTT into an automation on Home Assistant so that they do things.

In HA, go to Settings->Automations&Scenes->Automations Tab and press “Create Automation” and “Create new automation”

Under the “When” header, press “add trigger”, type “MQTT”, and press the plus symbol. We want the topic to be kitchenpanel/button/+ which will catch all messages going to the button topics, so that everything is nicely grouped together. Put pressed in the payload box.

Under the “Then Do” header, press “add action”, type “Choose”, and press the plus symbol.
Press “add option” and type “Template”. Under the “Conditions” subheading press “add condition”, type “template”, and we’re going to add the first button condition.
Put this in the “value template”:
{{ trigger.topic == 'kitchenpanel/button/espresso' }}
to capture presses from only the espresso button.
Now we want an action under the action subheading. For mine, I’m adding a device-based action, so I’ve chosen the device “espresso” (which matches my device name) and chosen “Toggle Espresso”.
Do this for the other three value templates:
{{ trigger.topic == 'kitchenpanel/button/toaster' }}
{{ trigger.topic == 'kitchenpanel/button/aircon' }}
{{ trigger.topic == 'kitchenpanel/button/neonc' }}
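
If you want to test the automation before touching the panel itself, you can publish a matching message by hand from any machine that has the Mosquitto command line clients installed (the broker address and credentials below are placeholders; substitute your own):

mosquitto_pub -h 192.168.1.10 -u mqttuser -P mqttpass -t "kitchenpanel/button/espresso" -m "pressed"

If the automation is wired up correctly, your espresso device should toggle just as if the button had been pressed on the panel.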

Now you should have full pass through of all of your buttons to devices!

I’ll do more on this in the near future with automatic dimming of the screen, more flexible layouts, and symbols instead of text for the buttons.

There’s also a pretty nice case here:
https://www.printables.com/model/380229-wt32-sc01-case

Also, while we’re here, make sure to set your timezone so that the clock shows your local time. For me here in Melbourne, Australia, I set it through the Tasmota console with this:
Backlog0 Timezone 99; TimeStd 1,1,4,1,3,600; TimeDst 1,1,10,1,2,660

(your local timezone will require looking up here: https://tasmota.github.io/docs/Timezone-Table/ )

Raspberry Pi 5, Ubuntu, ROS2 Jazzy Jalisco, iRobot Create 3 – all playing together

This will be a rolling post as I work through this, in the hopes that others also can find the help they need to put these things together. ROS can be a difficult mess to just get the initial plumbing up and running, before you even get to do the fun stuff. Compatibility, difficult documentation, and the niche nature of ROS means that often those who attempt to jump in get disappointed and stop before they can make the robot move. There will be times where this guide makes mistakes, skips parts, or generally is confusing, but I write it in the hopes that my ramble as I work through the latest LTS stable release of ROS2 on the latest Raspberry Pi hardware will help others get past these difficult early parts. I will do my best to offer full commands for things like moving/copying/creating/editing files in Linux, but the assumption is that you have at least some experience with these things. Please look up how to use nano as a command line text editor if you’re not familiar.

For those who don’t know, the Raspberry Pi 5 only supports Ubuntu 23.10 onwards, because earlier kernels lack support for the Raspberry Pi 5 hardware. This creates a complex situation for those wanting to use the hardware with ROS2 Humble Hawksbill, as it only (easily) supports Ubuntu 22.04.

The only clean and easily replicable method here is to move to the newly released LTS version of ROS2: Jazzy Jalisco. It is built for Ubuntu 24.04 LTS, which means we can finally fit these things together.

These are the broad steps to get rolling:
1. Image the SD card
Use the Raspberry Pi Imager tool to image Ubuntu 24.04 Server onto an SD card. You’ll find this option in the “Other general-purpose OS” section of the menu, then Ubuntu, and finally “Ubuntu Server 24.04.x LTS (64 bit)”. (We use Server because a desktop is not needed; we don’t want to eat up all the processing power and RAM rendering a desktop, and SSH will suffice.)
IMPORTANT: make sure you use the options area when you’re about to image to set up things like wifi and login information, and also enable SSH. This means your Pi will turn up on your network with credentials set and SSH ready to log in.

2. Update the iRobot Create 3
You’ll need to follow the instructions provided by iRobot. This usually involves going to the iRobot Create 3 web page, downloading the latest firmware file, powering on the robot, waiting for it to create its own wifi access point, connecting to it, opening the default config website on it, uploading the firmware file, waiting for it to finish.
Update: It seems that, at the time of writing, they officially only support up to ROS2 “Iron” (an older stable release), but it should still be compatible with “Jazzy” (the latest long term support release).
https://iroboteducation.github.io/create3_docs/releases/i_0_0/
At this point I downloaded manually and uploaded to the robot the Cyclone DDS version of the firmware, as this didn’t need a discovery server, and should “just work” happily with Jazzy Jalisco (plus it seems Jazzy moving forward is using cyclone by default? Correct me if I’m wrong).

3. Mount the Raspberry Pi 5 in the back case of the robot
This may involve 3D printing one of the many mounts for Raspberry pis to fit in the case, or doing whatever you see best to mount it safely. Be mindful of contact with metal etc.

4. Plugging the Raspberry Pi 5 into onboard power
There is a hidden USB C connector inside the back caddy of the robot, so with a short USB C to USB C cable you can power the Raspberry Pi, as well as provide it with USB networking to control the robot

5. Setting up the Raspberry Pi 5 to run networking over USB
This one was a little complex, as it wasn’t immediately clear what was wrong, and there are mixed messages about the USB C port on the Raspberry Pi 5. Many say that it’s for power only, and various sources say data isn’t officially supported, but the USB C connector can carry data as well as power. Basically you have to load a kernel module to enable gadget ethernet over USB, then configure the new network interface onto the right subnet to reach the robot.

First, add the module to load on boot:
echo "g_ether" | sudo tee -a /etc/modules
This tells the system to load the “g_ether” (gadget ethernet) module on boot, which will create a new network connection called “usb0” when the Pi is plugged into the robot.

Next, add the network config for this new network connection:
sudo nano /etc/netplan/99-usb0.yaml
The contents:

network:
    version: 2
    ethernets:
        usb0:
            dhcp4: no
            addresses:
                - 192.168.186.3/24

(save and quit)

This creates a new config file called “99-usb0.yaml” in the /etc/netplan folder and puts the config for the new network interface in place. Notice the address/subnet? That’s because the iRobot Create uses address 192.168.186.2/24 by default. If your robot is configured differently, then change the address accordingly.

Apply the new netplan config:
sudo netplan apply

Check it worked:
ip addr show usb0
This should show your connection with an address assigned, and up.
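
Assuming your robot is powered on and still using its default address, a quick end-to-end check of the gadget link is to ping it from the Pi:

ping -c 3 192.168.186.2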

6. Installing ROS2
This step I won’t spell out, as it’s covered well here:
https://docs.ros.org/en/jazzy/Installation/Ubuntu-Install-Debs.html
I would however recommend keeping it to the base packages (sudo apt install ros-jazzy-ros-base): you don’t have a desktop installation of Ubuntu, so keep it to the basics and connect using a laptop with ROS2 Jazzy installed on it to run any visualisation etc.
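
The install guide covers this, but as a reminder: every shell needs the ROS2 environment sourced before any ros2 commands will work, and you can add it to your .bashrc so it happens automatically at login:

source /opt/ros/jazzy/setup.bash
echo 'source /opt/ros/jazzy/setup.bash' >> ~/.bashrc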

7. Set up the correct middleware for ROS2 and the iRobot Create
The middleware is what is used to pass messages between different components in ROS2. We have to have the robot using the same middleware as the Raspberry Pi in order for all of the components to talk to each other.

You should have installed the firmware with the cycloneDDS version in a previous step. Now we want to install and set up ROS2 on the Raspberry Pi.

Run:
sudo apt install ros-jazzy-rmw-cyclonedds-cpp
Which will install the CycloneDDS middleware for ROS2
Then:
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
Which will tell ROS2 to use it.
echo 'export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp' >> ~/.bashrc
This makes it permanent, so you don’t have to export it at each login.
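
To double check which middleware a fresh shell is actually picking up, something like this should do the trick (ros2 doctor ships with the standard CLI tools):

printenv RMW_IMPLEMENTATION
ros2 doctor --report | grep -i middleware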

We then need to confirm that the robot is set up properly:
Go to the Application->Configuration menu of the robot’s webpage
Ensure that in “RMW_IMPLEMENTATION” setting it has “rmw_cyclonedds_cpp” selected
IMPORTANT: untick the “Enable Fast DDS discovery server?” setting (if you don’t it still appears to try and use FastDDS instead of CycloneDDS)
Press the menu item Application->RestartApplication to restart it. This should then have it turn up on discovery on the network for ROS2

Finally:
Run command ros2 topic list and you should see something like (probably way more):
/cmd_audio
/cmd_lightring
/cmd_vel
/cmd_vel_stamped
/parameter_events
/rosout

This means it’s visible and things are talking! If not, check you’ve done everything above, and ensured that the networking is up.
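
Seeing the topic names only proves discovery is working; to confirm data is actually flowing from the robot, try echoing a single message from one of its topics (the Create 3 publishes /battery_state, for example):

ros2 topic echo /battery_state --once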

8. Have some fun testing manual driving
Because we have our /cmd_vel topic exposed now to ROS2, we have the ability to send commands to drive.

First we’ll need to install a ROS2 package which is a lightweight command line keyboard controller:
sudo apt install ros-jazzy-teleop-twist-keyboard

Then we’ll run it with standard arguments (expects /cmd_vel to exist etc):
ros2 run teleop_twist_keyboard teleop_twist_keyboard
(there are instructions on screen)

You should be now able to drive the robot around!
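
If the keyboard teleop gives you trouble, you can also nudge the robot with a single velocity command straight from the command line. This is a sketch assuming /cmd_vel on the Create 3 takes a plain geometry_msgs/Twist (the robot should stop again shortly after, as it only moves while commands keep arriving):

ros2 topic pub --once /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.1, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}"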

9. Connect an RPLIDAR A1 on top and scan the room
A remote control robot is pretty boring; we want it to make its own decisions and drive itself, so we need some sensing of the environment and mapping to be able to decide how and where to drive. This is called SLAM (Simultaneous Localisation and Mapping), and to get there we need spatial awareness.

We’re going to use a cheap and easy to get RPLIDAR A1 2D lidar to see with. I 3D printed a bracket to mount it on top in the middle of the robot to make it simple for now. Connect it to the Raspberry Pi with a USB cable.

We will now create a build environment, and grab the driver for this to build.

Create the ROS workspace to build from in the home directory:
mkdir -p ~/ros2_ws/src
Move to the src directory inside:
cd ~/ros2_ws/src
Clone the source code from the Slamtec github:
git clone https://github.com/Slamtec/sllidar_ros2.git
Move back to the top of the ROS workspace:
cd ~/ros2_ws
Grab the system dependencies this source code will want:
rosdep install --from-paths src --ignore-src -r -y
Build the driver/utilities for the LIDAR:
colcon build --symlink-install
“Source” the build environment to overlay the current ROS2 system environment (allows for the new driver to be used in place, not having to install it systemwide):
source install/setup.bash
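
Note that sourcing only applies to the current shell; if you don’t want to repeat it in every new terminal, you can append it to your .bashrc as well:

echo 'source ~/ros2_ws/install/setup.bash' >> ~/.bashrc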

We’re ready to launch the ROS node for the lidar (this is for the defaults with my A1 LIDAR, if yours is different you will need a different launch file – my launch file is sllidar_a1_launch.py):
ros2 launch sllidar_ros2 sllidar_a1_launch.py serial_port:=/dev/ttyUSB0 serial_baudrate:=115200 frame_id:=laser

Let’s check that the topic for /scan exists with:
ros2 topic list

If you see it, great – we appear to be running.

But it’s not much use unless we can actually see the output.
One method is to just run:
ros2 topic echo /scan
But you’ll be quickly overwhelmed with data – we humans need visuals!
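
A less overwhelming sanity check is to measure the publish rate rather than printing every message; the A1 should report a steady rate of a few hertz:

ros2 topic hz /scan

For an actual picture of the scan, though, we need a visualiser.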

ROS2 uses rviz2 as a tool to visualise data.

It’s best you don’t run this on the Raspberry Pi, so install the ROS2 Jazzy Desktop package onto your own laptop. This can be messy if your system isn’t compatible, but let’s leave that for you to follow the instructions and figure out. On my laptop running Ubuntu 22.04 it was messy, so I just decided to run up a virtual Ubuntu 24.04 desktop that I can install ROS2 Jazzy in.

Then we can run rviz2 to see the scan data from the /scan topic

Or can we? No…

We’ve created a new problem here, in that we are now jumping across multiple networks, and hoping that the middleware (CycloneDDS) will jump through these worlds with us with its multicasting. It unfortunately won’t.

We’ll have to set up cycloneDDS to unicast to particular endpoints or subnets instead using the only unifying point on the network that all parties can reach: the Raspberry Pi onboard the robot.

So we’ll configure ROS on both the Raspberry Pi, and the laptop/VM to unicast to definite endpoints instead of relying on network broadcasts for discoveries.

On the Raspberry Pi, we’ll create a new file in the home directory called cyclonedds.xml and put this in it (using nano or another command line text editor of choice):

<CycloneDDS>
    <Domain id="any">
        <General>
            <Interfaces>
                <NetworkInterface name="wlan0"/>
                <NetworkInterface name="usb0"/>
            </Interfaces>
        </General>
        <Discovery>
            <Peers>
             <Peer address="192.168.20.101"/>  <!-- Laptop/VM IP-->
             <Peer address="192.168.186.2"/>    <!--iRobot IP-->
            </Peers>
        </Discovery>
    </Domain>
</CycloneDDS>


And to export this as an environment variable so ROS can find it, type this at the commandline:
export CYCLONEDDS_URI=file://$HOME/cyclonedds.xml

And to make this persist across logins/reboots, add it to your bashrc file that is read/sourced each time you login:
echo 'export CYCLONEDDS_URI=file://$HOME/cyclonedds.xml' >> ~/.bashrc

This ensures that ROS2 on the Raspberry pi points both at the robot via its usb network link, and to the laptop/VM via the wifi network link.

Now we must do the same on the Laptop/VM to make it point back at the Raspberry Pi:

Again, we put the following in a file called cyclonedds.xml in the home directory (enp0s3 is the name of the network adaptor on mine, adjust yours accordingly by checking with “ip address” at the commandline on the laptop/VM):

<CycloneDDS>
    <Domain id="any">
        <General>
            <Interfaces>
                <NetworkInterface name="enp0s3"/>
            </Interfaces>
        </General>
        <Discovery>
            <Peers>
                <Peer address="192.168.20.117"/> <!-- The IP of the RPi5-->
            </Peers>
        </Discovery>
    </Domain>
</CycloneDDS>

And again export this system variable, and add it to the bashrc of the laptop/VM:
export CYCLONEDDS_URI=file://$HOME/cyclonedds.xml

echo 'export CYCLONEDDS_URI=file://$HOME/cyclonedds.xml' >> ~/.bashrc

Now we can run the LIDAR driver on the Raspberry Pi:
ros2 launch sllidar_ros2 sllidar_a1_launch.py serial_port:=/dev/ttyUSB0 serial_baudrate:=115200 frame_id:=laser

Making sure that runs successfully, we then jump to our laptop/VM and try to look for published ROS topics made available by the LIDAR – this should be at minimum /scan:
ros2 topic list

If you’re lucky you’ll see a whole bunch more. I’m not a super expert on DDS messaging, but it seems to me like my Raspberry Pi is also acting as a relay, passing through the topics from the Robot itself, which is more than I had hoped for!

If you’ve been trying to make this work unsuccessfully to this point, reboot both machines, you may have hanging processes, or topics stuck with one of the instances that keep causing conflicts.
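
Before reaching for a full reboot, it can also be enough to restart the ROS2 CLI daemon, which caches discovery results and can hold on to stale information:

ros2 daemon stop
ros2 daemon start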

Now that we can see the topics turning up, we can FINALLY run rviz2 on the laptop/VM.

Type at the commandline on the laptop/VM:
rviz2

You’ll see a window open. First things first: since we just want to see the scan output for now and don’t yet have a proper stack with a map, an origin body etc, find the “Fixed Frame” setting in the top left pane and change it from the default of “map” to “laser”, which is the frame we’re running the LIDAR driver with (remember we put “laser” at the end for the “frame_id” in the command?).

Now we can press the “add” button, and go to “by topic” tab in the window that pops up, and you should see “/scan” in the available topics. Choose the “LaserScan” under that, and you should now see your scan output results from the LIDAR!

Take a breath – this is a good moment!

So we now have 2D LIDAR scan results flowing, we have topics and messaging passing around our robotics network, and we have the ability to drive the robot.

10. Build a map of a space to keep for future navigation

Now we are going to use a SLAM engine to gather a couple of things:
– Wheel odometry
– 2D Lidar Scan results

And try to create a map so that the robot in future can decide how to navigate places on its own. We will remote control the robot via SSH on the Raspberry Pi, and watch the map grow on screen with rviz2.

We’re going to use the built-in “SLAM Toolbox” in ROS2 as a general all-rounder for this.
Install it on the Raspberry Pi with:
sudo apt install ros-jazzy-slam-toolbox

But before we run it:
Previously we’d launched the LIDAR driver with:
ros2 launch sllidar_ros2 sllidar_a1_launch.py serial_port:=/dev/ttyUSB0 serial_baudrate:=115200 frame_id:=laser

But the frame_id is not linked to the base of the robot; it’s off in its own world. So we will kill off that process and launch it instead with:
ros2 launch sllidar_ros2 sllidar_a1_launch.py serial_port:=/dev/ttyUSB0 serial_baudrate:=115200 frame_id:=base_link

Now this is slightly lazy, as ideally we have a topology set up that places the laser where it actually sits on the robot, but for now, let’s just treat the laser as the origin at the centre of the robot to make things easy. Later on we’ll build a proper model of the robot, with transforms putting the sensors where they actually live on the robot.

Now it’s time to actually launch the SLAM Toolbox which will take all available sensor inputs (wheel odometry from the robot – /odom, distances in 360 degrees from the 2D LIDAR from /scan) and begin to build a map which will be available at topic /map:
ros2 launch slam_toolbox online_async_launch.py

Back to rviz2, if you set the fixed frame back to “map”, and add by topic the /map, you’ll now start to see the beginning of a simple map being built.

We’ll need to drive the robot around to be able to make it grow and refine, so in another terminal SSH to your Raspberry pi and run the remote control tool we used above to drive it around your room/home:
ros2 run teleop_twist_keyboard teleop_twist_keyboard

So it doesn’t work? Yes, that’s right. It will begin to show an occupancy map, but we’re not actually going to get much sense out of it (it will probably make a huge mess), as the odometry from the robot base isn’t being transformed properly to work with SLAM and the base link of the body, and it needs some filtering with other sensors to provide a nice fusion that works properly.

QUICK STOP HERE: I’ve come back from the future to point out that although we can see these things, they’re not working properly because the clock is slightly off on the iRobot Create (unsure why – probably the remote time servers it uses by default being slightly off), and messaging only works properly when everyone shares a clock that is close enough not to cause alignment problems. This took me a while to figure out further down, as my SLAM engine just couldn’t get it together.

So? We have to install something called chrony (a network time keeper) on the Raspberry Pi as it will be the master clock, and then reconfigure the iRobot Create to point to it for network time so that their clocks align closely.

Install the time server on the RPi:
sudo apt install chrony

Configure the timeserver using nano to edit the config:
sudo nano /etc/chrony/chrony.conf
Go to the bottom of the file and put:
allow 192.168.186.0/24
local stratum 10

This allows the iRobot subnet (the usb gadget link) to be able to access time from the RPi.
Restart the chrony service to read the config:
sudo /etc/init.d/chrony restart

Now we have to go to the web interface of the iRobot Create, go to the “beta features” menu, and click “edit ntp.conf”. All you need in here is a single line pointing at the Pi’s usb0 address (192.168.186.3 in the netplan config above):
server 192.168.186.3 prefer iburst minpoll 4 maxpoll 4

Be sure to restart the iRobot and give it some time to catch up its clock. It won’t always happen immediately, as it doesn’t like big time skips.
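
You can check how the sync is going from the Raspberry Pi side: chronyc tracking shows the Pi’s own clock status, and the clients report (run as root) should eventually list the robot’s address once it starts polling us:

chronyc tracking
sudo chronyc clients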

Now back to robot localisation:

We’re going to install a package:
sudo apt install ros-jazzy-robot-localization

Now we will create some preferences for the localisation. Let’s put this in a new config folder under the ros2_ws folder in our home directory for neatness – call it ekf_odom.yaml (full path ~/ros2_ws/config/ekf_odom.yaml):
mkdir -p ~/ros2_ws/config
nano ~/ros2_ws/config/ekf_odom.yaml


This is my config file contents:

ekf_filter_node:
    ros__parameters:
        frequency: 30.0
        sensor_timeout: 1.0
        two_d_mode: true # Assuming Create 3 is 2D motion only
        odom0: /odom
        odom0_config: [true, true, false, # x, y, z
                       false, false, true, # roll, pitch, yaw
                       true, true, false, # vx, vy, vz
                       false, false, true, # vroll, vpitch, vyaw
                       false, false, false] # ax, ay, az
        odom0_differential: false
        odom0_relative: false
        imu0: /imu/data
        imu0_config: [false, false, false,
                      true, true, true,
                      false, false, false,
                      false, false, true,
                      true, true, true]
        imu0_differential: false
        publish_tf: true
        map_frame: map
        odom_frame: odom
        base_link_frame: base_link
        transform_time_offset: 0.1

Why EKF? Extended Kalman Filter – it takes noisy and imperfect inputs, weights them against others (eg wheel measurements vs IMU) and decides roughly where it thinks the robot must be (pose).

Let’s now launch our localization node using the preferences file above:
ros2 run robot_localization ekf_node --ros-args --params-file ~/ros2_ws/config/ekf_odom.yaml
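
To confirm the EKF node is actually producing output, echo a single message from its topic:

ros2 topic echo /odometry/filtered --once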

Great! So we have inputs being smoothed now, ready for a SLAM engine to process and confidently build a map.

The SLAM engine can run in two modes:
1. Mapping – creating a map
2. Localisation – using an existing map to figure out where we are

We don’t run in mapping mode constantly, as we could possibly get lost and make a messy map, so generally speaking we map, then use a nice clean map as a base for localising the robot (pose).

Let’s create a config file for the mapping mode. Create a new file in the config folder:
nano ~/ros2_ws/config/slam_toolbox_mapping_config.yaml

The contents of mine:

slam_toolbox:
    ros__parameters:
       solver_plugin: "solver_plugins::CeresSolver"
        solver_threads: 4 #max number compiled
        ceres_solver_options:
            linear_solver_type: "SPARSE_NORMAL_CHOLESKY"
        mode: "mapping"
        map_frame: "map"
        odom_frame: "odom" # <-- This matches what EKF publishes
        odom_topic: "/odometry/filtered"
        base_frame: "base_link"
        scan_frame: "base_link"
        scan_topic: "/scan"
        use_scan_matching: true
        use_scan_barycenter: false
        map_update_interval: 0.2
        resolution: 0.05 # Map resolution (meters per pixel)
        max_laser_range: 5.0 #12m is full range, reliability drops 5/6m
        update_min_d: 0.02 # Minimum movement before update (distance)
        update_min_a: 0.02 # Minimum rotation before update (radians)
        transform_publish_period: 0.05 # How often to publish tf transforms
        use_pose_extrapolator: true
        transform_timeout: 1.5
        tf_buffer_duration: 10.0 # seconds

You may want to tweak these for your purposes, but they work for me.

Let’s now launch the Slam Toolbox mapping node:
ros2 launch slam_toolbox online_async_launch.py slam_params_file:=$HOME/ros2_ws/config/slam_toolbox_mapping_config.yaml log_level:=debug

You’ll see some messages flowing by, but don’t be alarmed if you see some messages like:
Failed to compute odom pose
For just a few repeats. This is just everything being a little slow on loading, and while mapping this may happen some more as the CPU on the RPi struggles to keep up. It’s only a concern if you see it rolling non stop. In this case you probably don’t have a synchronised clock on the iRobot as described above.

Launch rviz2 on your laptop/VM to see the beginnings of the map being built by adding (by topic) /map, as well as adding /pose to see where the SLAM engine thinks your robot is.

In another terminal window launch the keyboard remote control again, and carefully and slowly drive your iRobot Create around your space, watching the /map topic with rviz2 (above), and being careful to try and stay out of its vision. You may need a couple of goes before you get the hang of it. To clear the map just kill off the slam engine and relaunch (in case it gets messy etc)

Once you have a solid map happening and all of the walls around your space are mapped enough, close your keyboard remote control in the terminal, and make a couple of service calls to save the map and data.

Save the occupancy grid if you want to – we don’t seem to need this anymore (change the filename to whatever you’re mapping):
ros2 service call /slam_toolbox/save_map slam_toolbox/srv/SaveMap "{name: {data: 'apartment_map'}}"
(you should get a result of 0 – if not, then it didn’t save)

Save the serialised pose graph (this is important for localisation):
ros2 service call /slam_toolbox/serialize_map slam_toolbox/srv/SerializePoseGraph "{filename: 'apartment_map'}"

These will turn up in the folder relative to where slam_toolbox was launched in our other terminal. You’ll want to move these maps to somewhere you know, like a maps folder in our workspace:
mkdir -p ~/ros2_ws/maps
And from the folder in which the maps have been saved (there should be several files named apartment_map with different extensions):
mv ./apartment_map.* ~/ros2_ws/maps/

That’s it – we have a solid map saved! Now quit the slam-toolbox (ctrl-c in the terminal running it).

We are now going to create a config file for the localisation mode only of slam_toolbox:
nano ~/ros2_ws/config/slam_toolbox_localisation_config.yaml

This is the contents of mine:

slam_toolbox:
    ros__parameters:
        solver_plugin: "solver_plugins::CeresSolver"
        solver_threads: 4 #max number compiled
        ceres_solver_options:
            linear_solver_type: "SPARSE_NORMAL_CHOLESKY"
        mode: "localization"
        map_file_name: "/home/jesse/ros2_ws/maps/apartment_map"
        map_frame: "map"
        odom_frame: "odom" # <-- This matches what EKF publishes
        odom_topic: "/odometry/filtered"
        base_frame: "base_link"
        scan_frame: "base_link"
        scan_topic: "/scan"
        use_scan_matching: true
        resolution: 0.05 # Map resolution (meters per pixel)
        max_laser_range: 5.0 # 12m is full range, reliability drops 5/6m
        update_min_d: 0.02 # Minimum movement before update (distance)
        update_min_a: 0.02 # Minimum rotation before update (radians)
        transform_publish_period: 0.05 # How often to publish tf transforms
        use_pose_extrapolator: true
        transform_timeout: 1.5
        tf_buffer_duration: 10.0  # seconds

Now we can launch a localisation only engine that doesn’t change the map, but still provides mapping services:
ros2 launch slam_toolbox localization_launch.py slam_params_file:=$HOME/ros2_ws/config/slam_toolbox_localisation_config.yaml

Not very impressive, huh? It only really works slowly, and only if you start from the same position each time (where the map started).

So we will launch a map server, host the map on a node, then use other tools to provide localisation, and navigation (route planning, control etc).

But… things are getting messy, and our config files and maps are just sitting in a folder under our ~/ros2_ws/ folder. It’s time to neaten up and create a total robot package, because a robot isn’t just the hardware; it’s the collection of software and configuration as a whole too.

We’re going to create a new package for ROS2 under ~/ros2_ws/src/ that will begin to hold our configs, maps, and any special launch packages, so eventually it’ll all be in a single place.

Do:
cd ~/ros2_ws/src

ros2 pkg create --build-type ament_python myrobot --dependencies launch_ros rclpy
(of course change the name of “myrobot” if you want to name it something else)
This will create the directory structure in the src folder ready to begin adding your files.

We’ll create a config folder inside our new structure:
mkdir ~/ros2_ws/src/myrobot/config

And move things we’ve maybe been keeping in other folders here (adjust to suit):
mv ~/ros2_ws/config/* ~/ros2_ws/src/myrobot/config/
mv ~/ros2_ws/maps/* ~/ros2_ws/src/myrobot/config/

For the rest of this blog I’ll assume that our configs are all in this new folder structure.

For the Navigation and localisation components, we’re going to need a bunch of configs and install some packages, as well as launch them:
(for the packages, you could also just install with sudo apt install ros-jazzy-navigation2 which would pull most if not all of the below)

1. AMCL (adaptive monte-carlo localiser – figures out where the robot is on the map):
Install package:
sudo apt install ros-jazzy-nav2-amcl
Config:
nano ~/ros2_ws/src/myrobot/config/amcl_params.yaml
Contents of my config file:

amcl:
    ros__parameters:
        use_sim_time: false
        scan_topic: "/scan"
       odom_topic: "/odometry/filtered" # use fused odometry
        odom_frame_id: "odom"
        base_frame_id: "base_link"
        global_frame_id: "map"
        update_min_d: 0.2
        update_min_a: 0.2
        min_particles: 500
        max_particles: 2000
        alpha1: 0.2
        alpha2: 0.2
        alpha3: 0.2
        alpha4: 0.2
        alpha5: 0.2
        laser_max_range: 5.0 # match your RPLIDAR range if limiting
        laser_min_range: 0.05
        pf_err: 0.05
        pf_z: 0.99
        resample_interval: 1
        transform_tolerance: 1.0


Launch:
ros2 run nav2_amcl amcl --ros-args --params-file ~/ros2_ws/src/myrobot/config/amcl_params.yaml

2. NAV2 map server (hosts the pre-built map we made earlier)
Install package:
sudo apt install ros-jazzy-nav2-map-server
Config:
nano ~/ros2_ws/src/myrobot/config/map_server_params.yaml
Contents of my config file:

map_server:
    ros__parameters:
        use_sim_time: false
        yaml_filename: "/home/jesse/ros2_ws/src/myrobot/config/apartment_map.yaml"

Launch:
ros2 run nav2_map_server map_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/map_server_params.yaml

3. NAV2 Planner (plans global path through map)

Install packages:
sudo apt install ros-jazzy-nav2-navfn-planner ros-jazzy-nav2-planner ros-jazzy-nav2-util
Config for global planner:

nano ~/ros2_ws/src/myrobot/config/planner_server_params.yaml
Contents of my planner config file:

planner_server:
    ros__parameters:
       odom_topic: "/odometry/filtered" # use fused odometry
        expected_planner_frequency: 1.0
        planner_plugins: ["GridBased"]
        GridBased:
            plugin: "nav2_navfn_planner::NavfnPlanner"

Config for the global costmap:
nano ~/ros2_ws/src/myrobot/config/global_costmap_params.yaml
Contents of my global costmap config file:

global_costmap:
  global_costmap:
    ros__parameters:
      global_frame: map
      robot_base_frame: base_link
      resolution: 0.05
      publish_frequency: 1.0        # slower is fine for global map
      update_frequency: 1.0
      always_send_full_costmap: true

      plugins: ["static_layer", "obstacle_layer", "inflation_layer"]

      # static layer picks up /map by default
      static_layer:
        plugin: "nav2_costmap_2d::StaticLayer"
        map_subscribe_transient_local: true
        subscribe_to_updates: true

      obstacle_layer:
        plugin: "nav2_costmap_2d::ObstacleLayer"
        enabled: true
        footprint_clearing_enabled: true
        # global planning in case of whole hallway block etc
        observation_sources: scan
        scan:
          topic: /scan
          data_type: LaserScan
          max_obstacle_height: 0.5
          clearing: true
          marking: true

      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
        enabled: true
        inflation_radius: 0.55
        cost_scaling_factor: 3.0

Launch:
ros2 run nav2_planner planner_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/planner_server_params.yaml --params-file ~/ros2_ws/src/myrobot/config/global_costmap_params.yaml
(notice we give 2 parameters here, one for the global planner, one for the config of the global costmap – they run together)

4. NAV2 Controller (drives the robot along the path)
Install packages:
sudo apt install ros-jazzy-nav2-controller ros-jazzy-dwb-core ros-jazzy-dwb-critics ros-jazzy-dwb-plugins
Config for controller itself:
nano ~/ros2_ws/src/myrobot/config/controller_server_params.yaml
Contents of my controller config file:

controller_server:
  ros__parameters:
    controller_plugins:
      - FollowPath

    goal_checker_plugins:
      - goal_checker
    current_goal_checker: "goal_checker"

    progress_checker_plugins:
      - progress_checker
    current_progress_checker: "progress_checker"

    goal_checker:
      plugin: "nav2_controller::SimpleGoalChecker"
      #stateful: true
      xy_goal_tolerance: 0.25
      yaw_goal_tolerance: 0.25

    progress_checker:
      plugin: "nav2_controller::SimpleProgressChecker"
      #required_movement_radius: 0.5
      #movement_time_allowance: 10.0

    odom_topic: "/odometry/filtered"
    use_sim_time: false
    controller_frequency: 10.0

    FollowPath:
      plugin: "dwb_core::DWBLocalPlanner"
      #debugging:
      debug_trajectory_details: true         # <--- Enables extra debug output about trajectory scoring
      publish_evaluation: true               # <--- Publishes the critic scores as a topic
      publish_trajectories: true             # <--- Publishes the candidate trajectories
      publish_local_plan: true
      publish_global_plan: true
      #debug_trajectory_details: false
      #velocity limits:
      trans_stopped_velocity: 0.0  # or 0.05 for smaller robots
      min_vel_x: 0.0
      max_vel_x: 0.2
      min_speed_xy: 0.0
      max_speed_xy: 0.2
      min_vel_theta: 0.0
      max_vel_theta: 3.0
      #acceleration/deceleration:
      acc_lim_x: 0.5   # quick turns
      acc_lim_theta: 1.0
      decel_lim_x: -1.5
      decel_lim_theta: -3.2
      #trajectory sampling:
      vx_samples: 20            # linear velocity samples
      vtheta_samples: 20        # angular velocity samples
      sim_time: 1.0                   # how far to simulate trajectories forward (sec)
      time_granularity: 0.1           # simulation timestep resolution
      #linear_granularity: 0.05        # position steps along the trajectory
      #angular_granularity: 0.025      # angular steps
      #stopping tolerances:
      #xy_goal_tolerance: 0.15   # within 15cm of goal is "good enough"
      #yaw_goal_tolerance: 0.2  # within ~11 degrees yaw alignment of goal
      # Behavior settings
      transform_tolerance: 0.3        # allow small TF delays
      short_circuit_trajectory_evaluation: false
      #prune_plan: true
      #forward_prune_distance: 1.0
      #include_last_point: true

      stateful: True

      critics:
        - RotateToGoal
        - ObstacleFootprint
        #- Oscillation
        #- BaseObstacle
        - GoalAlign
        - PathAlign
        - PathDist
        - GoalDist

      PathAlign:
        scale: 32.0
      GoalAlign:
        scale: 12.0
      PathDist:
        scale: 32.0
      GoalDist:
        scale: 24.0
      RotateToGoal:
        scale: 32.0
      ObstacleFootprint:
        scale: 0.5
      BaseObstacle:
        scale: 30.0

Config for local costmap (looks at sensors to determine locally if we can see obstacles, and make plans in case):
nano ~/ros2_ws/src/myrobot/config/local_costmap_params.yaml
Contents of my local costmap config file:

local_costmap:
  local_costmap:
    ros__parameters:
      global_frame: odom
      robot_base_frame: base_link
      rolling_window: true
      width: 3
      height: 3
      resolution: 0.05
      robot_radius: 0.17          # realistic!
      publish_frequency: 5.0
      update_frequency: 10.0
      always_send_full_costmap: true

      plugins: ["obstacle_layer", "inflation_layer"]

      obstacle_layer:
        plugin: nav2_costmap_2d::ObstacleLayer
        enabled: true
        observation_sources: scan
        scan:
          topic: /scan
          data_type: LaserScan
          max_obstacle_height: 0.5
          clearing: true
          marking: true

      inflation_layer:
        plugin: nav2_costmap_2d::InflationLayer
        enabled: true
        inflation_radius: 0.3
        cost_scaling_factor: 3.0

Launch:
ros2 run nav2_controller controller_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/controller_server_params.yaml --params-file ~/ros2_ws/src/myrobot/config/local_costmap_params.yaml
(notice we give 2 parameters here, one for the controller, one for the config of the local costmap – they run together)

5. NAV2 Behaviours Server (provides recovery behaviours like spin, wait, and backup – needed by following BT Navigator):
Install packages:
sudo apt install ros-jazzy-nav2-behaviors
Config:
nano ~/ros2_ws/src/myrobot/config/behavior_server_params.yaml
Contents of my config:

behavior_server:
  ros__parameters:
    costmap_topic: "/local_costmap/costmap_raw"
    footprint_topic: "/local_costmap/footprint"
    use_sim_time: false
    recovery_plugins: ["spin", "wait"]
    spin:
      plugin: "nav2_behaviors::Spin"
      simulate_ahead_time: 2.0
      time_allowance: 10.0      # max spin duration
      max_rotational_vel: 2.5   # matches controller
      min_rotational_vel: 0.5
      angular_dist_threshold: 0.25  # rad left to spin when we accept success

    backup:
      plugin: "nav2_behaviors::BackUp"
      simulate_ahead_time: 2.0
      time_allowance: 8.0
      backup_vel: -0.15
      backup_dist: 0.25

    wait:
      plugin: "nav2_behaviors::Wait"
      wait_duration: 5.0

Launch:
ros2 run nav2_behaviors behavior_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/behavior_server_params.yaml

6. NAV2 Behaviour Tree Navigator (coordinates behaviours as a tree to work through for complex behaviour)
Install packages:
sudo apt install ros-jazzy-nav2-bt-navigator ros-jazzy-nav2-behavior-tree
Config for the behaviour tree server:
nano ~/ros2_ws/src/myrobot/config/bt_navigator_params.yaml
Contents of my behaviour tree server config:

bt_navigator:
  ros__parameters:
    odom_topic: "/odometry/filtered"
    default_nav_to_pose_bt_xml: "/home/jesse/ros2_ws/src/myrobot/config/navigate_spin_only.xml"
    use_sim_time: false

Be sure to adjust the path above to the location of the following config:

Config for the behaviour tree itself:
nano ~/ros2_ws/src/myrobot/config/navigate_spin_only.xml
Contents of my behaviour tree config:

<root BTCPP_format="4" main_tree_to_execute="MainTree">
  <BehaviorTree ID="MainTree">
    <RecoveryNode number_of_retries="6" name="NavigateRecovery">
      <PipelineSequence name="NavigateWithReplanning">
        <ControllerSelector selected_controller="{selected_controller}" default_controller="FollowPath" topic_name="controller_selector"/>
        <PlannerSelector selected_planner="{selected_planner}" default_planner="GridBased" topic_name="planner_selector"/>
        <RateController hz="2.0">
          <RecoveryNode number_of_retries="1" name="ComputePathToPose">
            <Fallback>
              <ReactiveSequence>
                <Inverter>
                  <PathExpiringTimer seconds="10" path="{path}"/>
                </Inverter>
                <Inverter>
                  <GlobalUpdatedGoal/>
                </Inverter>
                <IsPathValid path="{path}"/>
              </ReactiveSequence>
              <ComputePathToPose goal="{goal}" path="{path}" planner_id="{selected_planner}" error_code_id="{compute_path_error_code}"/>
            </Fallback>
            <ClearEntireCostmap name="ClearGlobalCostmap-Context" service_name="global_costmap/clear_entirely_global_costmap"/>
          </RecoveryNode>
        </RateController>
        <RecoveryNode number_of_retries="1" name="FollowPath">
          <FollowPath path="{path}" controller_id="{selected_controller}" error_code_id="{follow_path_error_code}"/>
          <ClearEntireCostmap name="ClearLocalCostmap-Context" service_name="local_costmap/clear_entirely_local_costmap"/>
        </RecoveryNode>
      </PipelineSequence>
      <ReactiveFallback name="RecoveryFallback">
        <GoalUpdated/>
        <RoundRobin name="RecoveryActions">
          <Sequence name="ClearingActions">
            <ClearEntireCostmap name="ClearLocalCostmap-Subtree" service_name="local_costmap/clear_entirely_local_costmap"/>
            <ClearEntireCostmap name="ClearGlobalCostmap-Subtree" service_name="global_costmap/clear_entirely_global_costmap"/>
          </Sequence>
          <Spin spin_dist="1.57"/>
          <Wait wait_duration="5.0"/>
          <!-- <BackUp backup_dist="0.30" backup_speed="0.15"/>  CANCELLING BACKUP FOR NOW -->
        </RoundRobin>
      </ReactiveFallback>
    </RecoveryNode>
  </BehaviorTree>
</root>

Launch:
ros2 run nav2_bt_navigator bt_navigator --ros-args --params-file ~/ros2_ws/src/myrobot/config/bt_navigator_params.yaml

7. NAV2 lifecycle manager (coordinates starting and stopping of NAV2 components)**
None of the above NAV2 components, including AMCL, will actually fully start on their own; they wait in a deactivated state until they are told to configure and activate, as part of the complex NAV2 lifecycle ecosystem. See the note below this block on the lifecycle manager.
Install Packages:
sudo apt install ros-jazzy-nav2-lifecycle-manager
Config:
nano ~/ros2_ws/src/myrobot/config/lifecycle_manager_nav2.yaml
Contents of my config:

lifecycle_manager_navigation:
    ros__parameters:
        autostart: true # Automatically configures and activates below
        node_names:
          - map_server
          - amcl
          - planner_server
          - controller_server
          - behavior_server
          - bt_navigator
        use_sim_time: false

Launch:
ros2 run nav2_lifecycle_manager lifecycle_manager --ros-args --params-file ~/ros2_ws/src/myrobot/config/lifecycle_manager_nav2.yaml

**NOTES ON THE LIFECYCLE MANAGER:
You can manually start each of the above NAV2 components yourself, without the lifecycle manager, to check that they run and are configured correctly: bring them up one at a time in order and activate them manually so you can see the logs.
For example if we want to configure and enable the AMCL server:
Run it like above:
ros2 run nav2_amcl amcl --ros-args --params-file ~/ros2_ws/src/myrobot/config/amcl_params.yaml
In another terminal set it to configure (the node name is the 4th argument of the command above, “amcl”):
ros2 lifecycle set /amcl configure
(this reads the config file, and you’ll see errors if there is a problem loading)
Now set it to activate and start running:
ros2 lifecycle set /amcl activate
This way you can make sure each component runs properly before bundling it all together with the lifecycle manager which can get difficult.
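
The lifecycle tooling also lets you inspect where a node is up to, which is handy when something refuses to activate:

ros2 lifecycle get /amcl
ros2 lifecycle list /amcl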

It would be fun to just keep going here, but we are really starting to get messy, and starting each thing up is becoming a pain, so before we continue, let’s:
1. test the individual components (including adding the NAV2 components)
2. Create a unified launch for all of these components to start and stop neatly, so we’re not continuing to chew up terminal windows.

Recap on tools we’re bringing up now that we already have a saved map:

1. EKF node to fuse and filter IMU and wheel odometry for better localisation:
ros2 run robot_localization ekf_node --ros-args --params-file ~/ros2_ws/src/myrobot/config/ekf_odom.yaml
(provides /odometry/filtered topic)

2. LIDAR driver for RPLIDAR A1 sensor:
ros2 launch sllidar_ros2 sllidar_a1_launch.py serial_port:=/dev/ttyUSB0 serial_baudrate:=115200 frame_id:=base_link
(provides /scan topic)

3. NAV2 map server with our existing map:
ros2 run nav2_map_server map_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/map_server_params.yaml
(provides /map topic)

4. AMCL localising server to know where we are:
ros2 run nav2_amcl amcl --ros-args --params-file ~/ros2_ws/src/myrobot/config/amcl_params.yaml

5. NAV2 planner server to plan routes
ros2 run nav2_planner planner_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/planner_server_params.yaml --params-file ~/ros2_ws/src/myrobot/config/global_costmap_params.yaml

6. NAV2 controller server to drive the robot on route
ros2 run nav2_controller controller_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/controller_server_params.yaml --params-file ~/ros2_ws/src/myrobot/config/local_costmap_params.yaml

7. NAV2 Behaviours Server (provides recovery behaviours like spin, wait, and backup – needed by following BT Navigator):
ros2 run nav2_behaviors behavior_server --ros-args --params-file ~/ros2_ws/src/myrobot/config/behavior_server_params.yaml

8. NAV2 behaviour tree navigator to set behaviour
ros2 run nav2_bt_navigator bt_navigator --ros-args --params-file ~/ros2_ws/src/myrobot/config/bt_navigator_params.yaml

9. NAV2 lifecycle manager to activate and monitor the NAV2 components (if using yet)
ros2 run nav2_lifecycle_manager lifecycle_manager --ros-args --params-file ~/ros2_ws/src/myrobot/config/lifecycle_manager_nav2.yaml

Try testing each of the above. It’s messy, isn’t it? It’s also very hard to manage, with a high risk of accidentally running two instances of a service if you’re not careful. So we’ll now create what is called a bringup package that does all of the above in a single command and should die gracefully when quitting.

We’ll put a special launch file into our myrobot package (create the launch folder first if it doesn’t exist):
mkdir -p ~/ros2_ws/src/myrobot/launch
nano ~/ros2_ws/src/myrobot/launch/bringup_launch.py

Contents of the file:

from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node
from launch.actions import TimerAction
from ament_index_python.packages import get_package_share_directory
import os

def generate_launch_description():
	bringup_dir = get_package_share_directory('myrobot')
	ekf_params = bringup_dir + '/config/ekf_odom.yaml'
	map_params = bringup_dir + '/config/map_server_params.yaml'
	amcl_params = bringup_dir + '/config/amcl_params.yaml'
	controller_params = bringup_dir + '/config/controller_server_params.yaml'
	planner_params = bringup_dir + '/config/planner_server_params.yaml'
	local_costmap_params = bringup_dir + '/config/local_costmap_params.yaml'
	global_costmap_params = bringup_dir + '/config/global_costmap_params.yaml'
	behavior_params = bringup_dir + '/config/behavior_server_params.yaml'
	bt_params = bringup_dir + '/config/bt_navigator_params.yaml'

	#debugging:
	#print("Controller params file:", controller_params)

	# EKF node
	ekf_node = Node(
		package='robot_localization',
		executable='ekf_node',
		name='ekf_filter_node',
		output='screen',
		parameters=[ekf_params]
	)
	# LIDAR - using original launch file because it's a bit odd
	lidar_launch = IncludeLaunchDescription(
		PythonLaunchDescriptionSource(
            		os.path.join(get_package_share_directory('sllidar_ros2'), 'launch', 'sllidar_a1_launch.py')
        	),
		launch_arguments={
            		'serial_port': '/dev/ttyUSB0',
            		'serial_baudrate': '115200',
            		'frame_id': 'base_link',
            		#'inverted': 'false',
            		#'angle_compensate': 'true',
            		#'scan_mode': 'Sensitivity'
        	}.items()
	)
	# Map server
	map_server = Node(
		package='nav2_map_server',
		executable='map_server',
		name='map_server',
		output='screen',
		parameters=[map_params]
	)
	# AMCL localization
	amcl = Node(
		package='nav2_amcl',
		executable='amcl',
		name='amcl',
		output='screen',
		parameters=[amcl_params]
	)
	# Controller server (local costmap)
	controller_server = Node(
		package='nav2_controller',
		executable='controller_server',
		name='controller_server',
		output='screen',
		parameters=[controller_params, local_costmap_params],
		#arguments=['--ros-args', '--log-level', 'nav2_controller:=debug']
	)
	# Planner server (global costmap)
	planner_server = Node(
		package='nav2_planner',
		executable='planner_server',
		name='planner_server',
		output='screen',
		parameters=[planner_params, global_costmap_params]
	)
	# Behaviour server (recovery behaviours)
	behavior_server = Node(
		package='nav2_behaviors',
		executable='behavior_server',
		name='behavior_server',
		output='screen',
		parameters=[behavior_params]
	)
	# BT Navigator
	bt_navigator = Node(
		package='nav2_bt_navigator',
		executable='bt_navigator',
		name='bt_navigator',
		output='screen',
		parameters=[bt_params]
	)

	# Lifecycle Manager
	lifecycle_manager_node = Node(
		package='nav2_lifecycle_manager',
		executable='lifecycle_manager',
		name='lifecycle_manager_navigation',
		output='screen',
		parameters=[{
			'use_sim_time': False,
			'autostart': True,
			'node_names': [
				'map_server',
				'amcl',
				'behavior_server',
				'bt_navigator',
				'planner_server',
				'controller_server',
			]
		}]
	)

	lifecycle_manager = TimerAction(
		period=5.0,  # wait 5 seconds for other nodes
		actions=[lifecycle_manager_node]
	)

	return LaunchDescription([
		ekf_node,
		lidar_launch,
		map_server,
		amcl,
		planner_server,
		controller_server,
		behavior_server,
		bt_navigator,
		lifecycle_manager
	])

Now add/edit a file in your ~/ros2_ws/src/myrobot folder called setup.py (we’re running a python ament build). This is what the build system uses to put files in the right place etc:
nano ~/ros2_ws/src/myrobot/setup.py
Contents of mine:

from setuptools import find_packages, setup
import os
from glob import glob

package_name = 'myrobot'

setup(
    name=package_name,
    version='0.0.0',
    packages=find_packages(exclude=['test']),
    data_files=[
        ('share/ament_index/resource_index/packages',
            ['resource/' + package_name]),
        ('share/' + package_name, ['package.xml']),
        # THIS installs the launch files:
        (os.path.join('share', package_name, 'launch'), glob('launch/*.py')),
        # And this installs your config:
        (os.path.join('share', package_name, 'config'), glob('config/*.yaml')),
    ],
    install_requires=['setuptools'],
    zip_safe=True,
    maintainer='jesse',
    maintainer_email='jesse@cake.net.au',
    description='Basic package for my robot',
    license='TODO: License declaration',
    tests_require=['pytest'],
    entry_points={
        'console_scripts': [
        ],
    },
)

You should already have a package.xml in the folder from creating the package earlier on.

Now we can build the “myrobot” package ready for ros2 to call in standard commands:
Go to the root directory of the workspace:
cd ~/ros2_ws/
Run the build process:
colcon build --symlink-install
Source the workspace again:
source install/setup.bash

Now we can finally just call a single launch command to bring the whole lot up!
ros2 launch myrobot bringup_launch.py
(If you have problems, you can get debugging info from your launch with: ros2 launch myrobot bringup_launch.py --debug)

Your whole navigation stack should now launch, but it may appear to hang at first because you need to provide an initial pose estimate using rviz2 on your laptop/VM. Once you send this, everything else will come up neatly.

You should now be able to give 2D goals (using rviz2) by clicking on your map, and your robot will attempt to drive there. Of course there is lots of tuning to do, and lots more serious config to go (creating a model of the robot itself, where its wheels are, where the sensors are actually mounted etc), but for now, your robot can run!
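If you'd rather not click around in rviz2 every time, here's a rough sketch of doing the same two steps (initial pose, then a goal) from a small Python script. It assumes the nav2_simple_commander package is available for your ROS 2 distro (it ships with newer Nav2 releases), and the coordinates are just placeholders for my map:

#!/usr/bin/env python3
# Rough sketch only: set the initial pose and send one goal from a script
# instead of clicking in rviz2. Assumes nav2_simple_commander is installed
# and the stack above is already running. Coordinates are placeholders.
import time
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

def make_pose(navigator, x, y, w=1.0):
    pose = PoseStamped()
    pose.header.frame_id = 'map'
    pose.header.stamp = navigator.get_clock().now().to_msg()
    pose.pose.position.x = x
    pose.pose.position.y = y
    pose.pose.orientation.w = w
    return pose

rclpy.init()
navigator = BasicNavigator()

# Same job as the "2D Pose Estimate" button in rviz2
navigator.setInitialPose(make_pose(navigator, 0.0, 0.0))
navigator.waitUntilNav2Active()  # blocks until the lifecycle nodes are active

# Same job as the "2D Goal Pose" button in rviz2
navigator.goToPose(make_pose(navigator, 1.5, 0.5))
while not navigator.isTaskComplete():
    time.sleep(0.5)  # could also poll navigator.getFeedback() here
print('Result:', navigator.getResult())
rclpy.shutdown()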

To be continued…


This is a rolling blog of my journey on this particular project, so I’ll keep adding as I work through it.

Fix Ubuntu 20.04 windows not appearing

I have a few old laptops kicking around that I keep running because with an SSD they’re perfectly fine to keep running. My 10 year old (approx) Samsung laptop is one of these.

Recently I did a fresh install of Ubuntu 20.04 onto this laptop and everything just worked – except for the fact that certain windows, including the Gnome Control Center (gnome-control-center) would open (showing the icon on the left bar) but seemed to quickly move off screen to the right.

I puzzled over this for quite some time, trying all sorts of things including launching gnome-control-center from a terminal with the -v flag set to see if something was wrong.

I stumbled across a few people talking about windows being off screen from 2014, and methods to bring them back so I tried the following:

  1. Open the control centre so that you see the icon on the left bar, and the red dot next to it showing that it’s running.
  2. Press alt-TAB until you see it highlighted so it’s definitely the window in focus
  3. Hold down alt-F7 and keep holding it, and tap the left arrow. Don’t release these keys yet
  4. Your mouse cursor should disappear now
  5. When you move your mouse left, you should see the window appearing into view
  6. Release alt-F7 and click your mouse when the window is in the middle of the screen

So this will bring the window back, but next time you launch it, it will disappear again. So there’s another problem going on here.

I choose to use the open source Nouveau Xorg video drivers rather than the closed source (and often buggy on older machines) NVidia drivers, and this laptop has an NVidia card in it (Optimus era). It seems there is a problem, on this laptop at least, where it thinks there's another display connected to the video output port on the card when nothing is plugged in.

So while we have the control panel on screen, we’ll go to the “Screen Display” section on the left side, and on the right we’ll choose “Single Display” up the top.

You should now no longer get windows launching off screen or seemingly not launching.

Compiling and installing ROS Noetic and compiling raspicam-node for Raspberry Pi OS “buster” for accelerated camera capture

Long title, I know. But this was something that turned out to be surprisingly complex and took lots of troubleshooting steps to get right, so I thought I’d share.

So initially, why would I do this? raspicam-node is already available as binaries for Ubuntu, so why try and compile it for Raspberry Pi OS (Raspbian) Buster?

Well for one thing, with the Raspberry Pi we're still kinda stuck in the land of 32 bit if you want accelerated graphics, or at least accelerated video operations, because of the GPU hardware on the current Raspberry Pi offerings (RPi 3, 4 etc). I'm not completely across all the detail, but apparently right now we just have to accept this and move on.

So this means that if we were to follow through and install a nice 64 bit version of Ubuntu Server to run ROS on our Raspberry Pi, we wouldn’t be able to benefit from accelerated video bits and pieces, and instead rely on (quite slow) CPU operations, which would mean that any video would be quite slow.

So if I want to use my Raspberry Pi on a mobile robot to capture stereo camera input for mapping, my only real option to make it fast enough to be useful for stereo vision is to figure out how to build the fantastic raspicam_node by Ubiquity Robotics for 32 bit Raspbian.

Here’s the Raspicam_node source:
https://github.com/UbiquityRobotics/raspicam_node

I also got some great help to get started with ROS on Raspbian from :
https://varhowto.com/install-ros-noetic-raspberry-pi-4/

This guide is still a work in progress, and I will in the near future clean it up and edit it to improve it, so take this as a quick dump of info for now to help get you started before I forget.

Enjoy!

Download and image the Raspberry Pi OS 32 bit "buster" lite image onto an SD card and boot up your Raspberry Pi 4 (which is what I used here) with a screen attached (and a keyboard if you're going to use it directly rather than SSH in). Any command starting with "sudo" may require your password; the default password for the standard "pi" user is "raspberry".

Now we need to add the official ROS software sources to download from using the following command:

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu buster main" > /etc/apt/sources.list.d/ros-noetic.list'

Next step is to add the key for this server so that it will be accepted as a usable source with this command:

sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654

Now update the available packages so we can see the new sources:

sudo apt update

Make sure the whole system (including the kernel) is fully up to date:

sudo apt full-upgrade

Reboot the system into the updates:

sudo reboot

Install the required packages from the ROS sources:

sudo apt-get install -y python3-rosdep python3-rosinstall-generator python-wstool python3-rosinstall build-essential cmake

Initialise the ROS dependency tool (adds hidden files to your home directory):

sudo rosdep init

Update the ROS dependency tool:

rosdep update

Create a “catkin” workspace (catkin is the official build tool for ROS, so directories to hold the sources, build requirements and binaries built are called catkin workspaces) by simply creating a directory called “ros_catkin_ws” in your home directory:

mkdir ~/ros_catkin_ws

Change to this directory:

cd ~/ros_catkin_ws

Use the rosinstall_generator tool to get ready to flesh out the catkin workspace we created above. This basically sets up a special file that will be used to create all of the requirements needed to make a functional catkin workspace for ROS “Noetic” (wet here means released packages):

rosinstall_generator ros_comm --rosdistro noetic --deps --wet-only --tar > noetic-ros_comm-wet.rosinstall

This will initialise the sources for “Noetic” to be built in our catkin workspace:

wstool init src noetic-ros_comm-wet.rosinstall

ROS dependencies will now be downloaded and put into the ./src directory of our workspace (required libraries etc) so we can build ROS:

rosdep install -y --from-paths src --ignore-src --rosdistro noetic -r --os=debian:buster

Compiling things takes lots of RAM, of which the Raspberry Pi has relatively little by today's standards, so in order to not accidentally bump over the limit it's a wise idea to increase the size of the swap file available to soak up any overruns.

First turn off the swap file:

sudo dphys-swapfile swapoff

Edit the swapfile configuration:

sudoedit /etc/dphys-swapfile

Edit the line in the file that says "CONF_SWAPSIZE" so that it equals 1024 (1GB):

CONF_SWAPSIZE=1024

Save and exit the nano file editor by pressing CTRL-O (O for ostrich) and hitting enter, then press CTRL-X

Setup the required new swap file:

sudo dphys-swapfile setup

Turn swapping back on with the new settings and file:

sudo dphys-swapfile swapon

Now let's compile ROS Noetic. Here I've used the option -j3, which means use 3 simultaneous processes for compiling to speed things up; this uses more RAM and works the processor harder, but works fine for me on a Raspberry Pi 4 with 2GB of RAM. If the build fails, try -j1:

sudo src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/noetic -j3 -DPYTHON_EXECUTABLE=/usr/bin/python3

The main build and installation of ROS Noetic is now finished. You’ll find your new compiled binaries are in /opt/ros/noetic/

Each time you use ROS you’ll need to source some bash terminal bits with the following command:

source /opt/ros/noetic/setup.bash

If this works you can put this at the end of your .bashrc file which will make bash load it every time you log in. Simply type:

nano ~/.bashrc

And you’ll be using the nano editor like above to see the contents. Scroll to the very bottom, and press enter for a new line and put the above “source” line into this file. Press ctrl-o and press enter to save. Then press ctrl-x to exit.

Try running ROS core to see if it runs to test your installation and bash source with:

roscore

Now we have a fully running ROS installation on the Raspberry Pi and have tested our ability to set up and compile a catkin workspace. So we can move ahead and use these tools to compile the raspicam_node tool to allow ROS to access the onboard Raspberry Pi camera.

Why do we want this? Apart from the amazingly fast interface the Raspberry Pi's special camera port offers to a wide range of compatible cameras, we can also use camera boards like those available from Arducam that combine two cameras side by side into a single camera source, giving a stereo image. And from a stereo camera source we can pull 3D image data for things like SLAM mapping. Very useful for mobile computers like the Raspberry Pi!

Let's add the repository that lets rosdep understand the dependencies laid out in the raspicam source (this is mainly regarding the libraspberrypi-dev stuff; without this step, rosdep won't know where to find the required libraries to build with). We'll use the nano editor to create a file:

sudo nano /etc/ros/rosdep/sources.list.d/30-ubiquity.list

Now inside the nano editor we will add this line:

yaml https://raw.githubusercontent.com/UbiquityRobotics/rosdep/master/raspberry-pi.yaml

Save the file in nano, then exit. Now we can run rosdep update to use this new source:

rosdep update

Now that rosdep has knowledge of where to find the stuff needed to build raspicam_node, let's go and set up a new catkin workspace (perhaps not needed, but let's do a fresh one just in case). The "-p" here creates the parent directory as well: we're creating the "catkin_ws" directory in the user home, then the "src" directory underneath it:

mkdir -p ~/catkin_ws/src

Change into this new src subdirectory:

cd ~/catkin_ws/src

Let’s get the raspicam_node source code directly from their Github page:

git clone https://github.com/UbiquityRobotics/raspicam_node.git

Let’s move out of the “src” directory into the top of the new catkin workspace we created:

cd ~/catkin_ws

Let's have ROS initialise the src directory and everything in it for use:

wstool init src

Use rosinstall_generator to set up what is needed in 4 different ways:

Step 1:

rosinstall_generator compressed_image_transport --rosdistro noetic --deps --wet-only --tar > compressed_image_transport-wet.rosinstall

Step 2:

rosinstall_generator camera_info_manager --rosdistro noetic --deps --wet-only --tar > camera_info_manager-wet.rosinstall

Step 3:

rosinstall_generator dynamic_reconfigure --rosdistro noetic --deps --wet-only --tar > dynamic_reconfigure-wet.rosinstall

Step 4:

rosinstall_generator diagnostics --rosdistro noetic --deps --wet-only --tar > diagnostics-wet.rosinstall

Merge these into the “src” directory with wstool in 5 steps:

Step 1:

wstool merge -t src compressed_image_transport-wet.rosinstall

Step 2:

wstool merge -t src camera_info_manager-wet.rosinstall

Step 3:

wstool merge -t src dynamic_reconfigure-wet.rosinstall

Step 4:

wstool merge -t src diagnostics-wet.rosinstall

Step 5:

wstool update -t src

Let’s make rosdep find all the dependencies required to now build all of this:

rosdep install --from-paths src --ignore-src --rosdistro noetic -y --os=debian:buster

Finally, we can now build raspicam_node:

sudo src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/noetic -j3 -DPYTHON_EXECUTABLE=/usr/bin/python3

raspicam_node is now built, and you will find the binaries in /opt/ros/noetic with the rest of ROS we built earlier. Before we can use it though, we must enable the camera port using:

sudo raspi-config

Look for interfaces, and camera – it will ask if you wish to enable it. If it doesn’t automatically reboot, reboot the Raspberry Pi yourself:

sudo reboot

Make sure after reboot that the default pi user (or whatever user you’re using) is added to the video group to access the camera:

sudo adduser pi video

With my Arducam module (probably not necessary for other modules), I had to make sure that the I2C module was added to the kernel options by editing the boot config:

sudo nano /boot/config.txt

Put in:

dtparam=i2c_vc=on

Save (ctrl-o, enter, ctrl-x) and reboot again:

sudo reboot

Test that the camera works directly using the built in Raspberry Pi camera tools:

raspistill -o temp.jpg

You should see an image from the camera on screen for a short moment. If so, success! Time to use the module in ROS!

Let’s source the bash setup file (this might need work below, we should probably only need to source what is in the /opt/ros/noetic directory):

source ~/catkin_ws/devel_isolated/setup.bash

Run roscore in the background:

roscore &

Launch raspicam_node with the built-in config for a v2 camera at 640×480 5fps (there are several built-in modes; simply type roslaunch raspicam_node and press TAB a couple of times to see the options). We are again pushing this process to run in the background by putting the "&" symbol at the end:

roslaunch raspicam_node camera_module_v2_640x480_5fps_autocapture.launch  &

Now let’s see how fast the update speed is in one of the Raspicam_node topics:

rostopic hz /raspicam_node/image/compressed

If you tried to run the above and got an error about calibration, do the following:

cp -r ~/catkin_ws/src/raspicam_node/camera_info ~/.ros

If you got no errors, and you're seeing a readout of how many updates are happening per second, then you're up and running!
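If you'd rather watch the frame rate from Python instead of rostopic hz, here's a rough little rospy sketch that does the same job. The topic name is the one used above; the node name is just made up:

#!/usr/bin/env python3
# Quick and dirty alternative to "rostopic hz": count how many compressed
# frames arrive per second from raspicam_node. Topic name as used above,
# node name is just made up.
import rospy
from sensor_msgs.msg import CompressedImage

count = 0

def on_frame(msg):
    global count
    count += 1

def report(event):
    global count
    rospy.loginfo("%d frames in the last second", count)
    count = 0

rospy.init_node('raspicam_rate_check')
rospy.Subscriber('/raspicam_node/image/compressed', CompressedImage, on_frame)
rospy.Timer(rospy.Duration(1.0), report)
rospy.spin()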

To stop the running processes above first press CTRL-C to kill off the rostopic command. This should now return you to a commandline. Now use the process management tools to bring those other 2 commands to the front to kill by typing:

fg

you’ll now see you can kill off the second process with CTRL-c, and then repeat to kill off the initial roscore.

Success! You can now use the Raspberry Pi camera for ROS in a nice fast way with a neat node.

This is not the end though, as for my Arducam the image comes in as a single side-by-side stereo frame. It needs to be sliced in half in order for us to do stereo image processing. So I'm looking at using another node that does this job (depending on how fast it runs), or otherwise I'll see if it's possible to add the feature to raspicam_node itself so it'll be a one-stop-shop for fast and cheap stereo image sourcing for 3D outcomes.
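For what it's worth, the slicing itself is trivial. Here's a minimal standalone sketch with OpenCV and a saved side-by-side frame (the file name is just an example), splitting it into left and right halves ready for stereo processing:

#!/usr/bin/env python3
# Minimal sketch of the slicing step: load one saved side-by-side frame
# (file name is just an example) and split it into left/right halves.
import cv2

frame = cv2.imread('sbs_frame.jpg')
if frame is None:
    raise SystemExit('could not read sbs_frame.jpg')

height, width = frame.shape[:2]
half = width // 2

left = frame[:, :half]    # left camera view
right = frame[:, half:]   # right camera view

cv2.imwrite('left.jpg', left)
cv2.imwrite('right.jpg', right)
print('split %dx%d frame into two %dx%d halves' % (width, height, half, height))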

Stay tuned..

Tasmota on ESP8266 can speak

I’m a huge fan of synthetic voices. I love devices around me chattering away letting me know things so I don’t have to look at another screen and interpret what’s being displayed.

But the thing is, these voices don’t have to be great. In fact I prefer if they’re a little clunky and jagged as they realise the dream of the future I had as a kid growing up in the 80s. I thought that when the year 2000 rolled around, we’d have talking robots wandering our houses, our kitchen appliances would announce when they’re being turned on and off, and our houses would announce “night mode” as dusk rolled around. Unfortunately the world has turned out to be far more conservative in weirdness than I had hoped, so I realised I had to make this happen for myself.

I already have home automation happening in my home, built with Home Assistant at its core, and with a focus on locally-processed everything rather than relying on cloud based services like Google and Amazon offer. This has allowed me the freedom to get as weird as I want, and to make the look and feel exactly what I want.

Part of this process has been to reflash every smart bulb and smart switch I use with the amazing open source Tasmota. It allows for truly locally processed and linked devices, that don’t need an external service, just your local Home Assistant controller. In my case I have Home Assistant running on a tiny Raspberry Pi 4 upstairs on the wall.

Tasmota is breathtaking in complexity and ability. It can adapt to almost every smart device and is constantly being expanded, and yet still fits on the super tiny and super cheap ESP8266 and ESP32 chips that are found in almost every smart iot device on the market (and of course you can buy them standalone for your own builds).

Recently I was forced to compile Tasmota from sources to enable some built in functions that aren’t enabled in the default binary builds (for a kitchen control interface requiring a multiplexor chip). While I was doing this I stumbled across some very promising libraries that were in the source code for audio and “SAM” text to speech. My heart skipped a beat.

For those not in the know, “SAM” (or Software Automated Mouth) was a program for the ancient Commodore 64 computer, that allowed for some of the earliest domestic speech synthesis. It’s very recognisable as it was in so many things from movies, to music, to TV, as well as being every 80s kid’s dream. A computer that can talk!

Turns out, this software was ported to a C library by Sebastian Macke and put up on GitHub some time ago, and then adapted to run on microcontrollers by Earle F. Philhower, III. (especially the ESP8266). This meant you could already make this happen if you wrote your own code from scratch and use the library on ESP8266, but somewhere along the way it was added to Tasmota. I couldn’t find documentation for it, but there it was, hiding away, along with commands to make your Tasmota speak.

I quickly realised, though, that in order to perform this trick, I’d need to also buy an I2S IC/board and amplifier as the audio output library relied on I2S which is a simple audio interfacing specification. Being that I wanted to use this voice inside my doorbell button, I didn’t want to spend the money, or make the doorbell button that large to fit all of this.

That’s where I did some digging and found that the ESP8266audio library had a mode where it could roughly bit-bang audio out of the RX pin of the ESP board. From this output, you could make a very simple amplifier with 2 basic transistors to drive a speaker at an audible volume.

Unfortunately, Tasmota source code didn’t have this ability yet, so I set about forking the source code, modifying it, and merging it back (pull request) to Tasmota’s team to add this ability.

The nimble team have already merged this into the Tasmota Development branch so it’s ready to use, but you’ll need to compile it yourself. I won’t go into setting up an IDE for Tasmota compilation from source as that’s been covered quite well by other people including in the readme for Tasmota itself (I recommend the Atom + PlatformIO method):

https://github.com/arendst/Tasmota

Make sure you clone the Development branch (as at 10th Feb 2021) – it’ll move into the main releases at some point.

In order to enable audio output for Tasmota without I2S hardware, you'll need to add the following to your "tasmota/user_config_override.h" file:

#ifndef USE_I2S_AUDIO
#define USE_I2S_AUDIO
#endif

#ifdef USE_I2S_EXTERNAL_DAC
#undef USE_I2S_EXTERNAL_DAC
#endif

#ifndef USE_I2S_NO_DAC
#define USE_I2S_NO_DAC
#endif

This allows you to enable audio, override the default (to use an external I2S DAC board), and enable the use of direct output.

But before we do anything more, we definitely need to hook up at least one transistor to the output from the ESP chip as you definitely cannot drive a speaker directly (it’ll also probably burn out the chip, or the pin on the chip trying to do so). For the following I assume that you’re running your ESP board from 5V to its 5v/USB input so that it regulates its required 3.3v onboard. We’ll use some of this 5V to feed the transistor and in turn the speaker.

You’ll need:
1 x 2N3904 transistor (NPN type, driven by positive voltage, but switching the negative)
1 x 1k resistor
1 x 3w or so speaker (nothing under 4 ohms)

When driving the audio output with this method, it will always come out of the RX pin of the ESP board. So when I say audio output, I mean the RX pin.

  1. Connect the resistor between the RX pin and the base of the transistor (middle leg).
  2. Connect the collector of your transistor (right pin of transistor with flat face side facing you) to the negative side of your speaker
  3. Connect the positive side of your speaker to 5 volts
  4. Connect the emitter of your transistor (left pin of transistor with flat face side facing you) to the ground or negative from the 5V supply, or the ground of your ESP board.

This is a very basic single transistor amplifier. This is what’s outlined on the ESP8266audio library page here:

https://github.com/earlephilhower/ESP8266Audio

Yes, the output can be a little rough, and yes, if you use some of the other capabilities like playback of files or playing web radio stations (which is actually pretty cool), they will sound pretty rough with a whistle over the top, but the SAM voice sounds just the same as it originally did.

So we’ve uploaded our custom-compiled Tasmota binary to the board, how can we make it speak? Well documentation is thin (I’ll contribute some to the Tasmota project to help out of course), but you only need to issue the following at the console of Tasmota:


I2SSay(text goes here)

If you’ve played with old speech synthesizers before, you’ll know that they don’t always pronounce words correctly, so you’ll need to craft words at times to sound the way they’re supposed to. For example the word “house” can sound a little strange, so I use the word “howse”. Sometimes adding an h after vowels in words can help too. It’s all up to experimentation.

So it can speak when we issue commands at the console of Tasmota now, but that’s not super useful yet. We want automation!

I use Home Assistant, combined with the MQTT integration for my Tasmota linked automation, so it’s quite easy to issue anything that can be done at the console in Tasmota as an MQTT message.

In whatever script or automation you’re building in Home Assistant, all you need to do is add action type “Call Service” with the service being “mqtt.publish”, and the service data as:

payload: (hello I am home assistant. I am pleased to meet you!)
topic: cmnd/speakboy/I2SSay

You’ll see in the topic above that my Tasmota device has “device name” in config -> other config set to “speakboy”. The payload is simply what you want to say, surrounded by brackets. You can of course put substitution into play to drop in current weather conditions, or variables or whatever you want using Home Assistant methods, as long as it comes out as something that SAM can say.
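If you want to trigger it from something other than Home Assistant, here's a minimal Python sketch that publishes the same command straight to MQTT using paho-mqtt. The broker address and credentials are placeholders; "speakboy" is the device topic from above:

#!/usr/bin/env python3
# Minimal sketch: publish the same I2SSay command straight to MQTT with
# paho-mqtt, bypassing Home Assistant. Broker address and credentials are
# placeholders; "speakboy" is the device topic used above.
import paho.mqtt.publish as publish

publish.single(
    topic='cmnd/speakboy/I2SSay',
    payload='(hello I am a python script. I am pleased to meet you!)',
    hostname='192.168.8.10',  # your MQTT broker
    auth={'username': 'mqttuser', 'password': 'mqttpass'},  # drop if not needed
)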

You may find in your case, like mine, that audio output wasn’t high enough in volume for your purposes. I’m using mine as a doorbell announcer (at the button end, to speak to visitors while they wait for me to run down the stairs for the door) so there is road noise to compete with.

The first step is to try the gain control. It is set at 10 by default, but I found a balance between loudness and distortion to be at 20. Simply issue the command in the console in Tasmota:

I2SGain 20

If we also want to improve the speaker, mount it in a hole in a hollow box or cavity, or even a short length of PVC pipe glued to the back. The back pressure will give the speaker more oomph, as well as allowing some more resonance.

If it still isn’t loud enough we can go further with another transistor. It’s quite easy to use a suitable PNP transistor in combination with the already explained NPN transistor to amplify that current even higher for the speaker.

I'm using a BC559 PNP transistor for the purpose. By modifying the simple amplifier we built above, we can get more current to the speaker:

  1. Disconnect the speaker, connect the collector on the 2N3904 to 5V
  2. Disconnect the emitter of the 2N3904 from ground and connect it instead to the base of the BC559 (middle pin)
  3. Connect the Collector of the BC559 (left pin when facing the flat front) to ground/negative.
  4. Connect the Emitter of the BC559 (right pin when facing the flat front) to the negative of the speaker.
  5. Connect the positive of the speaker to 5V

It should now be much, MUCH louder, but just make sure you're not overdoing it by feeling the transistors. They shouldn't be getting hot.

A quick note here: Never connect this to an actual amplifier. It’s switched DC voltage, not variable AC which is what audio is. It’s also WAY too high for line level audio, at around 5 times the gain. Bad things will happen to the amplifier, and if they don’t, it’ll also sound terrible!

So there you have it. Tasmota speaking everywhere all the time! Get in touch if you have problems or comments – always happy to help!

Keeping a 2006 Roomba Discovery running in 2021: adventures in patience

It was 2008, I was very excited and I had just brought home my first commercial domestic robot: the Roomba Discovery 4220.

It was second-hand from an eBay seller who claimed it just needed a new battery. After getting one, cleaning it up and having it scuttle around my house and workshop cleaning, I tried to put it onto its home base to charge. This of course didn't go well, and when I attempted to wake it up the next day, it was dead flat despite charging for over 12 hours.

It was the dreaded burned-out U2/U4 MOSFET transistors, which are badly under-rated for the current and heat they have to handle when charging the battery. For some time I charged the battery with an external charger and popped it back in to make him clean, but eventually I had to tackle the problem.

At the time, there wasn’t a huge amount of info about this problem, so the advice from many at the time was to just replace these two tiny transistors with an equivalent match. It was difficult, and I wasn’t super across surface-mount components but I managed to change both out with the same replacement. The advice was to just make sure it never ran completely flat, or pre-charge the battery for a bit before putting it in the robot to charge, and the transistors shouldn’t burn out again.

Of course Roombas would sometimes get stuck somewhere for long periods of time, and it only took a couple of years before it burned them out again with a flat battery after it was wedged under a couch all night.

For the next 5 years I charged the battery externally, and this dance went on until I put it away. A few months ago I took it apart again, and on reassembly I put the vacuum and side brush plugs the wrong way around, which burned out their driver transistors. I'd had enough.

The Onboard Charging Fix:

I was determined this year, and with the advice of those who had solved the problem on the Robot Reviews website forum years earlier, I set about fitting huge MOSFET transistors that should never overheat or blow no matter the state of the battery. Instead of using tiny surface-mount components, I sourced much larger TO-220 form factor transistors: FQP27P06 (rated for 60V 27A, way WAY higher than will ever be experienced by the bot). I found space above the battery compartment where they would fit inside the plastic top shell and set about gluing them first to small pieces of flat aluminium (to act as small heatsinks) and then gluing these two assemblies to the plastic case.

I carefully removed the mainboard, taking photos to ensure I got the leads back into the right sockets afterwards (some connectors are identical, and swapping them will burn out the transistors that drive things like the vacuum and side brush). I then carefully de-soldered the U2 and U4 transistors from the board (they're on opposite sides). I like to snip the legs off with super sharp tiny side-cutters, and then heat each body to remove it, to avoid pulling the pads off the board.

Using appropriate thickness wire I then ran the 3 pads that were connected to each of the legs out to the transistors I had glued in place (making sure to match the specs sheets for gate, drain, and source). I made sure to use enough to move the mainboard around, but not so much that it’s hard to coil it back inside the robot (maybe 5cm or so?).

Without the case on, I plugged the Roomba directly into the power supply and bingo – the transistors got warm and it was charging. Or so I thought, as they quickly cooled back down and I wasn’t so sure. So how can we know what the robot is actually doing?

Learning to speak robot:

Turns out the Roombas all have a great serial interface called SCI that has been around since the first models. It’s pretty well documented, but the most useful thing I’ve found is simply the feedback you can get from it charging with highly detailed info about battery voltage and charge.

But to do so, we need a method of plugging this interface into our computer. Computers use RS232 for their serial (or USB serial) which is -12 to +12 volts. The Roomba however runs at TTL levels, which is 0V to 5V signalling. It would probably do damage to simply try and directly plug this in. So we need to make a cable and a method to plug in.

First up the cable is a mini-din8 (technically the roomba has mini-din7 but din8 will plug into it, and din7 is hard to find). I found in a box of old cables for Apple macs there was a mini din8 that was for Appletalk between machines in the 80s/90s. I was lucky, but if you can’t find this you can look for these through suppliers, or even whole cables on ebay.

We’ll cut one end off and strip the wires carefully apart, and strip the insulation from their tips. Using a multimeter we need to find the TX, RX and Ground pins. When looking at the male connector on the other end of the cable, turn the connector so that the single notch is upright, and the flat part of the connector is at the top. The pins are numbered starting at the bottom left, and from left to right. So the bottom row is 1,2, next row up is 3,4,5, top row is 6,7,8. Use your multimeter on continuity mode (where it will beep to show connection between the two leads) and carefully pick your way through the pins and match these with the coloured leads coming out of the freshly stripped area. On my lead yellow was TX, red was RX, and blue and purple were ground.

Now we need to interface with the computer. I always have on hand for other projects the handy little FTDI Basic boards from Sparkfun. These are great because you can plug in TTL level devices to interface via USB.

So with my FTDI Basic board at the ready, I simply soldered little pins onto the ends of the leads we identified earlier, plugged the TX line into the RX of the FTDI, then the RX into the TX of the FTDI, and the two ground leads linked together into the GND of the FTDI.

You’ll then need to use a serial program to show the output from the robot. I use Ubuntu Linux on my computer so I used GTKTerm with the serial port settings of:
port: /dev/ttyUSB0 (this could change depending on what you have plugged in)
baud rate: 57600
parity: none
bits: 8
stop bits: 1
flow control: none

Bingo – you should now be receiving data from the Roomba when it's plugged into the charger. It'll report every 1 second what's happening. The important piece of info here is the charge rate. It should be something around 1500mA when fast charging, something like 280mA when slowing down, and maybe 100mA when trickle charging. If you see negative numbers, you haven't fixed your transistor problem properly and the robot is discharging.
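If you'd rather log this with a script than sit watching GTKTerm, here's a tiny Python sketch using pyserial that just prints whatever the Roomba reports while charging. It assumes the adapter shows up as /dev/ttyUSB0 and that pyserial is installed:

#!/usr/bin/env python3
# Same job as GTKTerm, in a few lines: open the FTDI adapter at 57600 8N1
# and print whatever the Roomba reports while charging. Assumes pyserial
# is installed and the adapter appears as /dev/ttyUSB0.
import serial

port = serial.Serial('/dev/ttyUSB0', baudrate=57600, timeout=2)
try:
    while True:
        line = port.readline().decode('ascii', errors='replace').strip()
        if line:
            print(line)  # watch the charge current here (negative = discharging!)
except KeyboardInterrupt:
    pass
finally:
    port.close()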

A first charge can sometimes be something like 16 hours as it attempts to recondition the battery, but as mine was externally charged, I simply unplugged and ran the roomba for a short bit before plugging in again to snap it out of this mode and charge fast. I wouldn’t recommend leaving the roomba alone to charge overnight until you’re sure it’s safe and happy as you could cause a fire if something is wrong and the battery overcharges.

It should just charge on the home base right?

So I excitedly unplugged the charger from the Roomba, and plugged in the home base, then put the Roomba back on the home base, and….. not much. Exercising patience…

So what was up with the home base? Plugging in my serial interface from above showed me that when on the home base the charge rate was in negative numbers. It was actually discharging while on it.

It turns out the home base also has a switching MOSFET transistor inside that turns on the power to the pads only when the Roomba has made contact – it was the exact same type that had failed in the bot, so I replaced it in the same way also, squeezing the bigger transistor to the bottom of the case with glue. This time, the home base worked (be careful on assembly and disassembly, there are screws in 8 places underneath pads and foam).

So it can charge now for the first time since 2010 or so.

But the vacuum fan and side brush is running all the time?

From previous adventures in assembly and disassembly, I’d mixed some of the connectors up and burned some regular bipolar junction transistors (BJT) out, meaning they were shorted (switched on) all the time. Obviously not ideal.

I had to locate the transistors in question: Q35, Q36, Q17

Then replace them with something similar: BC337

BUT: make sure you follow the data sheets, the legs of the new transistors were reversed to the original (originally SS8050), so these needed to be flipped.

Soldered together and reassembled, with the dance complete, the vacuum and side brush finally stay off, and they start when you start the robot up. Excellent.

Put it back together: pull it back apart

I assembled the casing following all of this testing, and…. the vacuum motor won’t run. Why? A multimeter doesn’t show any power coming from the prongs on the side.

Let’s take it apart again.

On closer inspection we find that all of this plugging and unplugging has made the poor little connectors quite loose, and tracing back the vacuum lead to the mainboard shows that when in test mode for the vacuum (that’s a whole other story to get there) wiggling the lead starts the vacuum.

This is the same kind of connector you’ll find on all of these kinds of things. I’ve found them in my other Neato XV-21 robots, when their wheels start to misbehave and drive erratically.

It’s a simple fix. There isn’t anything as drastic as corrosion, it’s simply the internal prongs in the connector have bent apart and no longer squeeze the pins when plugged in.

All we need to do here is gently use a sewing pin to lift the super micro-tiny plastic tab on the side of the connector for each pin, and gently slide the pin out of the connector by pulling the cable slowly. DO THIS ONE AT A TIME SO YOU DON'T MIX THEM UP. Even take photos so you don't accidentally reverse the polarity. This takes some practice and skill so take your time. When you have the connector out, use tiny pliers to squeeze the tiny prongs back together, but don't be rough. The last thing you want to do is try and make a new connector.

Slide them back together and plug it back in. You should notice straight away that it’s now quite tight.

Test the other connectors to feel if they feel tight. If they feel loose it’s better to do this maintenance now than later.

So, does it work now?

Yes. Yes it does. OH MY GOD IT WORKS. And it works very well.

Of course normal maintenance now applies, and for this model that’s the usual Roomba brush deck clearing and cleaning, wheel and cliff sensor blowing out with air, and troubleshooting when you see odd things happen (like circle dances etc). They can be a little rougher than newer models, and sometimes docking with the home base can take a couple of goes, but they still clean very well and do it reliably.

The biggest thing though with this model of Roomba is that the front wheel is a non-swivel castor, which can be rough on the little wheel, so I thoroughly recommend cleaning the wheel, making sure it spins easily, and tightly pulling electrical tape around it. Winding it around a few times means it has a protective layer that you can replace from time to time so that the wheel itself doesn't grind off. If you're cleaning concrete like mine does in my workshop, definitely paint the floor with glossy concrete paint, because otherwise you're going to just sand that wheel off.

Let’s keep these things working for as long as we can. It’s something that can reduce so much waste, but also can be another cleaning buddy to keep your lungs healthy indoors. If you don’t want yours, don’t throw it away, offer it for very cheap in online trading websites, or give it to someone who will put it to use again.

Reviving Chumby Classics to connect to Home Assistant

I absolutely love the ability to create weird and wonderful things for smart homes and find it frustrating that many efforts are just about recreating standard things to be smart. This is our chance to get weird people!

I’ve continued down the rabbit hole of my style of smart home and have joined some original Chumby Classics (the beanbag shaped devices from 2008 or so) up to my “Home Assistant” based smart home system.

Much of the Chumby excitement that was pretty great 12 years ago has faded away, but I’m still keen on the little fellas and have 3 of them around my house.

Of course some time ago the company was sold, and things got a little wonky, and though I'm thankful for the people keeping up the online service, I miss the days of things feeling more active and useful, and with the standard firmware there weren't really any methods to link these devices with my Home Assistant.

I found on github that “phineasthecat” has ported the most recent (V34) Zurks offline firmware to be compatible with the Chumby Classic (the official Zurks firmware is only compatible with classic models up to v21). This is great because there were many things introduced after v21, and I personally mostly love the classics with their beanbag shape.

https://github.com/phineasthecat/zurks-offline-firmware-classic

UPDATE: I’ve forked this work into my own repo for now with the changes I’ve outlined below until a time I can hear back about fixing the bugs with the original author, otherwise I’ll just keep working on my fork instead:

https://github.com/JesseCake/zurks-offline-firmware-classic

There are some problems with this firmware though, and it doesn’t work in its current state. I spent yesterday tinkering with it and managed to fix a couple of bugs and make it run on my Chumby Classic. I’ve submitted an issue on github so hopefully that person is still active, otherwise I might fork it and keep developing from there on my own.

If you want to use it, make sure you use a nice fast USB drive as this kind of thing just doesn’t suit $2 sticks. I use Patriot XT usb thumbdrives (unsure if they’re still current, I have a few of them) for this job.

Here are the main 2 issues with the firmware that you can fix yourself to make it run on your chumby classic:

  1. The “tmp” folder is missing, so it won’t work properly. Simply add this folder to the root of your usb
  2. There is an error in the way that it uses a swap file in the startup scripts, so it’ll painfully slowly create the swap, but won’t go on to use it in subsequent reboots. Go to: https://github.com/phineasthecat/zurks-offline-firmware-classic/issues/4 to see how to fix this. You’ll just need to edit the “debugchumby” file with a text editor.

The first boot scripts actually create the swap file. It’s 500MB though and I assume the Chumby Classic is USB v1 because it took so so long to do this job. It was so long that I gave up and created the swap myself onto the thumbdrive using my computer (Linux computer). I used the command from the script to create using a terminal (bash) window (whilst in the directory of the thumbdrive):

dd if=/dev/zero of=./.swap bs=1 count=0 seek=512M

Then when you put it into the chumby, it should speak to you, have no errors, and still take a while to start but will get there. Subsequent reboots will be faster.

Make sure you still follow their directions though, and follow their recommendation on updating the SSL of the chumby base firmware with the provided fix.

Something not explained anywhere is that this offline firmware does not wipe the onboard chumby firmware, and the USB has to remain in the Chumby to keep working. It boots and runs off the thumbdrive as an active filesystem.

So why would I be so keen on this? Well the amazing work of the original firmware hackers has meant that many of the built in functions of the chumby become accessible through a web interface (http://ip.of.your.chumby/index.html) as well as scripts you can directly access to automate it. I’m keen on home automation and use Home Assistant extensively around the house and my workshop. I love making reminders for myself so I don’t get too into projects and forget to feed my ducks or cat, and normal alarm clocks on phones are boring, so I have a megaphone and 1940s industrial bell wake me up.

Now using the html triggered scripts I can have the Chumbys join the fun and they can use text to speech, MP3s stored on the usb drive, as well as visual cues to show me things.

Here’s an example html script already built in to play any kind of remote stream (here playing my favourite internet radio station Shirley and Spinoza) – yes I know I have a weird IP range at home:
http://192.168.8.164/cgi-bin/zmote_play.sh?http://s2.radio.co:80/sec5fa6199/listen

Here’s another where I can make it use built in text to speech to say whatever I need it to:
http://192.168.8.164/cgi-bin/speak.pl?action=say&words=hello%20person

There are heaps of these functions built in, even to turn the screen on and off, change widgets etc etc.
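Because these are all plain HTTP endpoints, you can also poke them from a few lines of Python. Here's a rough sketch using requests that wraps the speak and play scripts shown above (the IP is my example Chumby, and quote() takes care of turning spaces into %20):

#!/usr/bin/env python3
# Rough sketch: hit the same cgi-bin scripts from Python with requests.
# quote() turns spaces into %20 for you. The IP is the example Chumby above.
import requests
from urllib.parse import quote

CHUMBY = 'http://192.168.8.164/cgi-bin'

def chumby_say(words):
    requests.get('%s/speak.pl?action=say&words=%s' % (CHUMBY, quote(words)), timeout=10)

def chumby_play(stream_url):
    requests.get('%s/zmote_play.sh?%s' % (CHUMBY, stream_url), timeout=10)

chumby_say('hello person, the kettle has boiled')
chumby_play('http://s2.radio.co:80/sec5fa6199/listen')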

In Home Assistant, you just need to use the RESTful stuff to trigger it (it just needs to access the HTML links to trigger them on the Chumby). I may not be doing this in the most graceful way, but it was late and I was admittedly a few drinks in.. here’s some basics with the black chumby I got working (I also have an espresso coloured one and a grey one):

  1. Add to your configuration.yaml (note the YAML indentation matters):
    rest_command:
      blackchumby:
        url: "http://192.168.8.164/cgi-bin/{{ urly }}"
  2. reload your core or whole HA (unsure which reloads configuration.yaml)
  3. Create a script named whatever you want (mine will be an alarm that speaks "good morning", and starts playing my favourite internet radio quietly, but slowly increasing in volume)
  4. for the first in the sequence we’ll speak “good morning”:
    call a service: this service will be (from above) “rest_command.blackchumby”
    put in the service data box: urly: "speak.pl?action=say&words=Good%20morning"
    the "%20" is a space, I haven't created a neat way to filter spaces and make them %20 yet
    Here is the raw yaml:
    data:
      urly: "speak.pl?action=say&words=Good%20morning"
    service: rest_command.blackchumby
  5. give some delay of a few seconds at least between each command, so we’re not overloading the chumby
  6. do the same kind of call service but with a command of “custom/setvol.sh?25” to set the volume nice and quiet
  7. short delay
  8. do the same kind of call service but with a command of “zmote_play.sh?http://s2.radio.co:80/sec5fa6199/listen” – this will start playing the web radio station using the built in player that can still be controlled on screen
  9. delay of 30 seconds before it gets louder
  10. command of “custom/setvol.sh?50” to get louder
  11. delay of a few minutes before full volume
  12. command of “custom/setvol.sh?100” for full volume

At this point you could manually turn off the stream or if you want something else to stop the music, you could use “zmote_play.sh?stop” – which isn’t actually a stop command, but the file doesn’t exist so it’ll stop playing. I’m sure there’s a more elegant way.

If you want to change the screen brightness there are more scripts and even turning off the light settings, they’re all in the thumbdrive of this firmware under /lighty/cgi-bin/ as well as /lighty/cgi-bin/custom

I recommend checking it all out. Some is a little rough, and I've also added my own script to turn the screen back on after it's been off, which is just a copy of the off.sh script echoing a dim level of 0 instead of 2.

When I go to bed now, I press a button, and along with all of my house lights, my chumbys around the house turn their screens off. I love a completely dark house!

My next steps? I'd love to keep working on this, as development appears to have dropped off a cliff in 2014, but I'm just not sure about the build environment for Chumby. Does anyone have any idea how these packages were built? I would love to update the built-in DLNA server to a later version of the software so it can act as an endpoint, letting my Home Assistant and devices stream their music and sounds to it as needed without needing to preload sounds onto the USB, and use other voices, though I do love the TTS voice onboard this firmware. (The built-in DLNA server has problems with the scripts, and even when started it is only able to choose music from a remote server, not be streamed to.)

Funny bit of trivia, it sounds like the voice is the same voice as the robot that serves Rick butter in Rick and Morty.

Hit me up if you need any help – I love these little guys, and think they're still worth hacking on. I also think we can take them further along with us.

Transferring files between Windows 3.11 for Workgroups and Ubuntu Linux 20.04

I have an interest in vintage computers, and enjoy making them functional again to explore their operating systems and how they worked. I’m not a purist though, and the first thing I do is replace the noisy failing hard drive with some form of SD card with adaptor or similar for solid state for easy backing up etc.

Part of the difficulty in starting fresh with an empty drive on these older machines is actually getting an operating system installed in 2020. Most floppies I have are failing now, and finding new working floppies is getting hard, as well as the wear and tear of constantly imaging whole floppies with installation media etc. I try to keep this part to a minimum for the base OS, and then use alternate means to transfer files.

My usual go to is to get the machine network connected in some way, usually using an ancient ethernet card or device. You would assume that from here it’s all smooth sailing, however this can sometimes multiply the problems as in these times, communication protocols have moved on making it hard to interlink with them.

I really should probably just bite the bullet and set up a small FTP server on the network using a Raspberry Pi or something similar, but I haven’t done that yet, plus sometimes FTP transfers start to bring in other problems in the way you transfer etc.

For a recent resurrection of a 486 DX2 66MHz machine I managed to work my way through installing MS-DOS 6.22, followed by Windows for Workgroups 3.11 on top of that. I then made sure to install the network card drivers in Windows 3.11 as well as the TCP32b driver, and added the TCP/IP protocol to the network card in the network control panel (removing the IPX/SPX protocol while I was there). I made sure to enter manual IP settings for my network, or you could hope that DHCP will work (I never trust DHCP on old machines; things can get screwy when troubleshooting). Windows will want to reboot after that, and you should be all set to transfer files via Windows shares over TCP/IP.

I assumed it would be as easy as firing up file sharing, and accessing windows drives on the network to transfer the hundreds of MB of games and utilities I’m keen on putting on there, but since Windows XP, the windows file sharing protocols have been updated, and older insecure protocols like those used in Windows 3.11 no longer work.

This is where I would usually use Ubuntu Linux, which is my main operating system, to open a Samba share to do this job, but it, too, has moved on, and by default on Ubuntu 20.04 the version of Samba will no longer talk easily to Windows 3.11.

After much head scratching and walking between the laptop and the 486, I figured out that I had to allow Samba to speak the correct version of the SMB protocol. By default it will only speak much later versions, whereas we need to enable SMB v1 for poor old Windows 3.11 to connect and talk.

Whether you are hosting the share on the Linux machine, or accessing the shares to push the files onto the Windows 3.11 machine as a client, the Samba config will affect it all.

In your /etc/samba/smb.conf file in the [global] area, I’ve put:

netbios name = samsung
lanman auth = yes
client lanman auth = yes
ntlm auth = yes
client min protocol = CORE

This allowed my laptop running linux to use the GUI tools in Ubuntu 20.04 to access the old Windows share.

netbios name is optional, but I gave it a simple under 8 character name as my laptop’s hostname is longer and I thought that may affect things.

lanman auth looks like it’s no longer working, but I put it anyway, along with client lanman auth.

ntlm auth is probably not needed but is in my config for other things.

client min protocol = CORE is what does the magic, lowering the minimum version of SMB protocol to old fashioned basics for the windows machine.

Make sure you restart Samba and the NMBD daemons or simply reboot your Linux machine and you can then follow on here:

On my 486, I then went to File Manager, created a folder named "shared" and used the menus to share it with no password. I found using a password was messy as there didn't seem to be a username associated with it, and I couldn't connect no matter what I did. Windows may want you to "log on"; for me I'd set up a simple username and password during installation, which is part of enabling network sharing in Windows 3.11.

On my Ubuntu laptop I then opened the file browser, and pressed the “+ other locations” button on the lower left. In this window I went to the bottom of the window, where the “Connect to server” area is and entered “smb://486dos/shared” where 486dos is the name I gave my windows machine while setting it up, and shared is the name I gave to the shared folder I was sharing in Windows 3.11.

By magic, you should find that you can now transfer files to the old machine! *

* It’s not super straight forward however. There are a few quirks:

I found transferring lots of things at once would pop up some errors about overwriting files. This could be some kind of bug. I found it better to transfer single zip files and unzip them on the older machine rather than copy folders containing multiple files. Also remember you're using a machine restricted to 8 character file names, so keep names to the old 8.3 format, and try to avoid fancy characters.

I recommend finding the last working version of WinZip for Windows 3.11 and installing it, so the process becomes: move the zip file to the Windows network share, then unzip it there into its final location.
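If you end up pushing a lot of zips across, here's a rough little Python helper that uses smbclient (so it relies on the smb.conf changes above, and on smbclient being installed) to put files onto the share one at a time. The host and share names are the ones from this post:

#!/usr/bin/env python3
# Rough helper: push zip files to the Windows 3.11 share one at a time
# (which avoided the overwrite errors I saw). Relies on smbclient being
# installed and the smb.conf changes above; host/share names are the ones
# used in this post. Keep file names 8.3 friendly!
import subprocess
import sys

SHARE = '//486dos/shared'

def push(path):
    subprocess.run(['smbclient', SHARE, '-N', '-c', 'put %s' % path], check=True)

if __name__ == '__main__':
    for zip_file in sys.argv[1:]:
        push(zip_file)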

So far so good, and happy I can now transfer the files to the massive 8GB SD card working as an IDE drive in the machine. Which was another headache I’ll put in another post.

Making Tasmota lights turn on urgently

I’m a huge user of Home Assistant and Tasmota open source firmware for ESP8266 based devices. It has allowed me to set up quite a nice smart home setup including light bulbs without using external services.

If you’re like me though, and sometimes just urgently need a light to turn on and for some reason the controller isn’t responding, or something has broken in your fiddling, then this rule is quite handy.

Using the powerful Tasmota Rules framework I’ve set up a rule to make certain lights turn on if I flick the original power switch off then on.

Simply go to your Tasmota console for the light you’d like to add this rule and put:

Rule1 ON Power1#Boot DO backlog delay 1; power on; ct 430; dimmer 100; ENDON

This will, on boot-up, turn the light on at full brightness with a pretty warm colour temperature. Of course this is for my lights that have adjustable colour temperature, so you may need to adjust it for your lights as needed.
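If you have a handful of bulbs to set up, you don't have to paste this into each console by hand. Here's a rough Python sketch that pushes the same rule over Tasmota's HTTP API (http://<ip>/cm?cmnd=...); the IP addresses are placeholders, and if you've set a WebPassword you'll need to add user/password parameters too:

#!/usr/bin/env python3
# Rough sketch: push the same rule to several bulbs in one go via Tasmota's
# HTTP API (http://<ip>/cm?cmnd=...). The IPs are placeholders; add user and
# password parameters if you've set a WebPassword on your devices.
import requests

RULE = 'Rule1 ON Power1#Boot DO backlog delay 1; power on; ct 430; dimmer 100; ENDON'
BULBS = ['192.168.8.50', '192.168.8.51', '192.168.8.52']

for ip in BULBS:
    for cmnd in (RULE, 'Rule1 1'):  # define the rule, then make sure it's enabled
        r = requests.get('http://%s/cm' % ip, params={'cmnd': cmnd}, timeout=5)
        print(ip, cmnd.split()[0], r.status_code)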

I can think of times where maybe there's a fire upstairs where the HA Raspberry Pi is set up and the controller is offline. You need light, right away. Or maybe your room controller has gone offline for some reason, or your wifi access point has died.

I’ll update this if I find a better method, as I’m worried it needs some more conditions (ie I don’t want it turning on with a system restart etc) but it’s good for now!

A better way to configure Cura to slice objects for your Makerbot Replicator 2 3D printer

**Update** This method has been proven up to Cura 4.10 on Ubuntu Linux. If you’re having problems, first check that it’s a Replicator 2 (I haven’t tested a 2X with the heated bed), then check that your PLA material info is set to printing at 230 degrees C, then double check that you’ve followed all of the instructions directly, skipping no steps (essential parts are the “r2” profile addition, and the GCode for start and stop). Also this method may require that you adjust your bed height on the fly while printing the first layer to get it just squashing onto the plate, but not blocking the nozzle.

I’ve posted previously about using Makerbot Replicator 2 3D printers with Cura, which involved hacking at the X3GWriter plugin, but was frankly a little hacky, and starts to cause problems when you update etc.

With more time on my hands now I've had a closer look and spoken to the author of the X3GWriter plugin. It turns out that the printer definition in Cura passes metadata to the plugins you use, and that his X3GWriter plugin watches for the "machine_x3g_variant" value. When we modify the standard printer definition for the Replicator 1 that comes with Cura, it still passes "r1" to the X3GWriter plugin, which makes it take on values for the Replicator 1 and of course results in incorrect print scaling. For a Replicator 2 we actually want "r2". Makes sense.

So if you’ve been trying to use Cura on your Replicator 2, and getting things that are the wrong size, you’ll need to create or modify your profile for your printer.

Ideally, Cura would come with a Replicator 2 profile, which I’ll put time in to submit to the maintainers via github once I can understand how their provided profiles work, but for now here’s my little how to:

I’m using Cura 4.6 for my example, and this is specifically for the Replicator 2 – you may need to modify some things to make the 2X work

I also assume that you’ve installed the X3GWriter plugin already in Cura’s “marketplace”

1. Open Cura, and add a new printer. Click on non-networked printer, and select “Makerbot Replicator”

2. Once you’ve added this printer, rename the printer to something like “Makerbot Replicator 2” (doesn’t matter what, it won’t affect anything), and go to “machine settings” for this new printer.

3. Make the GCode flavour “Makerbot”, enable origin at center, disable heated bed, set the build plate shape to rectangular, and make the dimensions the following:

   x width = 225mm

   y depth = 145mm

   z height = 150mm

Here are my printer settings:

[Screenshot: printer machine settings in Cura]
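
If you’re curious what those GUI values correspond to under the hood, Cura stores them as overrides using its standard setting names in a definition-changes cfg file. Here is a rough sketch; the key names are Cura’s standard ones, but treat the exact file layout as my assumption and let the GUI do the actual writing:

[values]
; build volume and flavour from step 3 above, expressed with Cura's standard setting keys
machine_width = 225
machine_depth = 145
machine_height = 150
machine_shape = rectangular
machine_center_is_zero = True
machine_heated_bed = False
machine_gcode_flavor = Makerbot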

4. We’ll also check settings for “extruder 1”. The standard nozzle size is 0.4mm, and the compatible material diameter is 1.75mm.
Here are my extruder settings:

[Screenshot: extruder 1 settings in Cura]

5. Add the custom GCode to the printer settings. This is necessary because, for some reason, heated bed info is sent by default, which makes the printer stop straight away. You can look up what each line means and tweak it as needed (maybe you want the bed to drop lower at the end, etc.).

Contents of my start GCode:

; -- start of START GCODE --
M73 P0 (enable build progress)
;M103 (disable RPM)
;G21 (set units to mm)
M92 X88.8 Y88.8 Z400 E101 ; sets steps per mm for replicator
G90 (set positioning to absolute)
(**** begin homing ****)
G162 X Y F4000 (home XY axes maximum)
G161 Z F3500 (home Z axis minimum)
G92 Z-5 (set Z to -5)
G1 Z0.0 (move Z to "0")
G161 Z F100 (home Z axis minimum)
M132 X Y Z A B (Recall stored home offsets for XYZAB axis)
(**** end homing ****)
G92 X147 Y66 Z5
G1 X105 Y-60 Z10 F4000.0 (move to waiting position)
G130 X0 Y0 A0 B0 (Set Stepper motor Vref to lower value while heating)
G130 X127 Y127 A127 B127 (Set Stepper motor Vref to defaults)
G0 X105 Y-60 (Position Nozzle)
G0 Z0.6     (Position Height)
; -- end of START GCODE --

Contents of my end GCode:

; -- start of END GCODE --
G92 Z0 (redefine the current Z position as zero)
G1 Z10 F400 (move the build plate down 10mm to clear the print)
M18 (disable stepper motors)
M104 S0 T0 (turn off the extruder heater)
M73 P100 (end build progress)
G162 X Y F3000 (home XY axes to maximum)
M18 (disable stepper motors)
; -- end of END GCODE --

Here’s what it should now look like in your printer settings (the GCode settings are of course longer than the box, so they scroll; don’t copy them directly from this image):

[Screenshot: machine settings with the custom start and end GCode filled in]

So we now have the printer defined, but it’s missing the important piece of the puzzle which is the metadata to pass along to the X3GWriter plugin so that we get an X3G file suited for the Replicator 2.

6. Let’s manually edit the printer definition file. Close Cura before continuing. I’m using Ubuntu Linux, so my printer definition file is in:
/home/username/.local/share/cura/4.6/machine_instances/MakerbotReplicator2.global.cfg

I use nano, but any text editor (even GNOME’s gedit) will be fine for editing this file.
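
For example, opening it with nano looks like this (swap in your own username, and note the filename follows whatever you named the printer):

nano /home/username/.local/share/cura/4.6/machine_instances/MakerbotReplicator2.global.cfg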

If you’re on Windows, try a system-wide search for the location (sorry, I don’t know where it lives in Windows).

We are looking for the heading “[metadata]”, and anywhere under this heading block we’re going to put “machine_x3g_variant = r2”. For example, here’s what mine looks like (some details will be different for yours):

[general]
version = 4
name = MakerBotReplicator2
id = MakerBotReplicator2

[metadata]
setting_version = 13
machine_x3g_variant = r2
type = machine
group_id = 993612c3-052e-42e2-bb6b-c5c6b2617912

[containers]
0 = MakerBotReplicator #2_user
1 = empty_quality_changes
2 = empty_intent
3 = normal
4 = empty_material
5 = empty_variant
6 = MakerBotReplicator #2_settings #2
7 = makerbotreplicator

Notice where “machine_x3g_variant = r2” is?

Save this file where it is, and reopen Cura.
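
If you want to double-check that the edit stuck before launching Cura, a quick grep against the same path works (again, swap in your own username):

grep machine_x3g_variant /home/username/.local/share/cura/4.6/machine_instances/MakerbotReplicator2.global.cfg

You should see the machine_x3g_variant = r2 line echoed back.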

That should be it. You’ll be able to directly choose your printer in the normal way, choose your settings and object, and export. If you find it doesn’t successfully create a file, there’s something up with your config, so double check any syntax problems etc.

You can also check Cura’s error output in (again, I’m on Linux):

/home/username/.local/share/cura/stderr.log

So if you tweak GCode and tinker with those bits and pieces, you can see if X3GWriter is unhappy about any of it.
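
For example, to watch the log live while you slice and export (same Linux path convention as above):

tail -f /home/username/.local/share/cura/stderr.log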

Quick note about filament: I should mention that I use PLA filament, and I set the nozzle (under the material profile settings) to print at 230 degrees Celsius, because that’s what I find works well. Going much lower than 220 degrees (Cura seems to default to 200!) tends to jam the nozzle. It could be that my filament needs this, or that the head temperature is always slightly off on these printers, but that’s what works for me, and it could be the cause of problems I’m asked about where the head doesn’t seem to be extruding. Worth checking.


Happy printing!