Fix Ubuntu 20.04 windows not appearing

I have a few old laptops kicking around that I keep in service because, with an SSD fitted, they’re perfectly fine to keep using. My roughly 10 year old Samsung laptop is one of these.

Recently I did a fresh install of Ubuntu 20.04 onto this laptop and everything just worked – except for the fact that certain windows, including the Gnome Control Center (gnome-control-center) would open (showing the icon on the left bar) but seemed to quickly move off screen to the right.

I puzzled over this for quite some time, trying all sorts of things including launching gnome-control-center from a terminal with the -v flag set to see if something was wrong.

I stumbled across a few people talking about windows being off screen (back in 2014), and methods to bring them back, so I tried the following:

  1. Open the control centre so that you see the icon on the left bar, and the red dot next to it showing that it’s running.
  2. Press alt-TAB until you see it highlighted so it’s definitely the window in focus
  3. Hold down alt-F7 and keep holding it, and tap the left arrow. Don’t release these keys yet
  4. Your mouse cursor should disappear now
  5. When you move your mouse left, you should see the window appearing into view
  6. Release alt-F7 and click your mouse when the window is in the middle of the screen

So this will bring the window back, but next time you launch it, it will disappear again. So there’s another problem going on here.

I choose to use the open source Nouveau Xorg video drivers rather than the closed source (and often buggy on older machines) NVidia drivers, and this laptop has an NVidia card in it (Optimus era). It seems there is a problem, on this laptop at least, where it thinks there’s another display connected to the video output port on the card even when nothing is plugged in.
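If you want to confirm this from a terminal before heading into the GUI, xrandr will list what Xorg thinks is connected:

xrandr

If a phantom output shows up as connected, you can also try turning it off directly (the output name here is just an example – use whatever name xrandr reported on your machine):

xrandr --output HDMI-1 --off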

So while we have the control panel on screen, we’ll go to the “Screen Display” section on the left side, and on the right we’ll choose “Single Display” up the top.

You should now no longer get windows launching off screen or seemingly not launching.

Compiling and installing ROS Noetic and compiling raspicam-node for Raspberry Pi OS “buster” for accelerated camera capture

Long title, I know. But this was something that turned out to be surprisingly complex and took lots of troubleshooting steps to get right, so I thought I’d share.

So initially, why would I do this? raspicam-node is already available as binaries for Ubuntu, so why try and compile it for Raspberry Pi OS (Raspbian) Buster?

Well for one thing, with Raspberry Pi, we’re still kinda stuck in the land of 32 bit if you want accelerated graphics or at least accelerated video operations because of the GPU hardware on the current Raspberry Pi offerings (RPi 3, 4 etc). I’m not completely over all the detail, but apparently right now we just have to accept this and move on.

So this means that if we were to follow through and install a nice 64 bit version of Ubuntu Server to run ROS on our Raspberry Pi, we wouldn’t be able to benefit from the accelerated video bits and pieces, and would instead have to rely on CPU operations, which would make any video work quite slow.

So this means that if I want to use my Raspberry Pi, on a mobile robot, to capture stereo camera input to do mapping with, my only real option to make it fast enough to be useful for stereo vision mapping is to figure out how to build the fantastic raspicam-node by Ubiquity Robotics for 32 bit Raspbian.

Here’s the Raspicam_node source:
https://github.com/UbiquityRobotics/raspicam_node

I also got some great help to get started with ROS on Raspbian from:
https://varhowto.com/install-ros-noetic-raspberry-pi-4/

This guide is still a work in progress, and I will in the near future clean it up and edit it to improve it, so take this as a quick dump of info for now to help get you started before I forget.

Enjoy!

Download the Raspberry Pi OS 32 bit “buster” Lite image, write it onto an SD card, and boot up your Raspberry Pi 4 (which is what I used here) with a screen attached (and a keyboard if you’re going to use it directly rather than SSH in). Any command starting with “sudo” may require your password, which for the default “pi” user is “raspberry”.

Now we need to add the official ROS software sources to download from using the following command:

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu buster main" > /etc/apt/sources.list.d/ros-noetic.list'

Next step is to add the key for this server so that it will be accepted as a usable source with this command:

sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654

Now update the available packages so we can see the new sources:

sudo apt update

Make sure the whole system (including the kernel) is fully up to date:

sudo apt full-upgrade

Reboot the system into the updates:

sudo reboot

Install the required packages from the ROS sources:

sudo apt-get install -y python3-rosdep python3-rosinstall-generator python-wstool python3-rosinstall build-essential cmake

Initialise the ROS dependency tool (adds hidden files to your home directory):

sudo rosdep init

Update the ROS dependency tool:

rosdep update

Create a “catkin” workspace (catkin is the official build tool for ROS, so directories to hold the sources, build requirements and binaries built are called catkin workspaces) by simply creating a directory called “ros_catkin_ws” in your home directory:

mkdir ~/ros_catkin_ws

Change to this directory:

cd ~/ros_catkin_ws

Use the rosinstall_generator tool to get ready to flesh out the catkin workspace we created above. This basically sets up a special file that will be used to create all of the requirements needed to make a functional catkin workspace for ROS “Noetic” (wet here means released packages):

rosinstall_generator ros_comm --rosdistro noetic --deps --wet-only --tar > noetic-ros_comm-wet.rosinstall

This will initialise the sources for “Noetic” to be built in our catkin workspace:

wstool init src noetic-ros_comm-wet.rosinstall

ROS dependencies (required libraries etc) will now be downloaded and put into the ./src directory of our workspace so we can build ROS:

rosdep install -y --from-paths src --ignore-src --rosdistro noetic -r --os=debian:buster

Compiling things takes lots of RAM, of which the Raspberry Pi has relatively little by today’s standards, so to avoid ever accidentally bumping over the limit it’s a wise idea to increase the size of the swap file we have available to soak up any overruns.

First turn off the swap file:

sudo dphys-swapfile swapoff

Edit the swapfile configuration:

sudoedit /etc/dphys-swapfile

Edit the line in the file that says “CONF_SWAPSIZE” to equal 1024 (1GB):

CONF_SWAPSIZE=1024

Save and exit the nano file editor by pressing CTRL-O (O for ostrich) and hitting enter, then press CTRL-X

Setup the required new swap file:

sudo dphys-swapfile setup

Turn swapping back on with the new settings and file:

sudo dphys-swapfile swapon

Now let’s compile ROS Noetic. Here I’ve used the option -j3, which means use 3 simultaneous processes for compiling to speed things up; this uses more RAM and works the processor harder, but works fine for me on a Raspberry Pi 4 with 2GB of RAM. If this fails, try -j1:

sudo src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/noetic -j3 -DPYTHON_EXECUTABLE=/usr/bin/python3

The main build and installation of ROS Noetic is now finished. You’ll find your new compiled binaries are in /opt/ros/noetic/

Each time you use ROS you’ll need to source some bash terminal bits with the following command:

source /opt/ros/noetic/setup.bash

If this works you can put this at the end of your .bashrc file which will make bash load it every time you log in. Simply type:

nano ~/.bashrc

And you’ll be using the nano editor like above to see the contents. Scroll to the very bottom, and press enter for a new line and put the above “source” line into this file. Press ctrl-o and press enter to save. Then press ctrl-x to exit.
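If you’d rather not open an editor, appending the line from a terminal does the same thing:

echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc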

Try running ROS core to see if it runs to test your installation and bash source with:

roscore

Now we have a fully running ROS installation on Raspberry Pi and have tested our ability to set up and compile a catkin workspace. So we can move ahead and use these tools to compile the raspicam_node tool to allow ROS to access the onboard Raspberry Pi camera.

Why do we want this? Well, apart from the amazingly fast interface that the special camera port on the Raspberry Pi offers for a wide range of compatible cameras, we can also use camera boards like those available from Arducam that put two cameras side by side into a single camera source to give a stereo image. And we know that from a stereo camera source we can then pull 3D image data for things like SLAM mapping. Very useful for mobile computers like Raspberry Pi!

Let’s add the repository for rosdep to understand the dependencies that are laid out in the raspicam source to compile (this is mainly regarding libraspberrypi-dev stuff, without this step, rosdep won’t know where to find the required libraries to build with) – we’ll use the nano editor to create a file:

sudo nano /etc/ros/rosdep/sources.list.d/30-ubiquity.list

Now inside the nano editor we will add this line:

yaml https://raw.githubusercontent.com/UbiquityRobotics/rosdep/master/raspberry-pi.yaml

Save the file in nano, then exit. Now we can run rosdep update to use this new source:

rosdep update

Now that rosdep has knowledge of where to find the stuff needed to build raspicam_node, let’s go and set up a new catkin workspace (perhaps not needed, but let’s do a fresh one just in case) – the “-p” here creates the parent directory as well, since we’re creating the “catkin_ws” directory in the user home and then the “src” directory underneath it:

mkdir -p ~/catkin_ws/src

Change into this new src subdirectory:

cd ~/catkin_ws/src

Let’s get the raspicam_node source code directly from their Github page:

git clone https://github.com/UbiquityRobotics/raspicam_node.git

Let’s move out of the “src” directory into the top of the new catkin workspace we created:

cd ~/catkin_ws

Let’s have ROS initialise the src directory and everything in it for use:

wstool init src

Use rosinstall_generator to set up what is needed in 4 different ways:

Step 1:

rosinstall_generator compressed_image_transport --rosdistro noetic --deps --wet-only --tar > compressed_image_transport-wet.rosinstall

Step 2:

rosinstall_generator camera_info_manager --rosdistro noetic --deps --wet-only --tar > camera_info_manager-wet.rosinstall

Step 3:

rosinstall_generator dynamic_reconfigure --rosdistro noetic --deps --wet-only --tar > dynamic_reconfigure-wet.rosinstall

Step 4:

rosinstall_generator diagnostics --rosdistro noetic --deps --wet-only --tar > diagnostics-wet.rosinstall

Merge these into the “src” directory with wstool in 5 steps:

Step 1:

wstool merge -t src compressed_image_transport-wet.rosinstall

Step 2:

wstool merge -t src camera_info_manager-wet.rosinstall

Step 3:

wstool merge -t src dynamic_reconfigure-wet.rosinstall

Step 4:

wstool merge -t src diagnostics-wet.rosinstall

Step 5:

wstool update -t src

Let’s make rosdep find all the dependencies required to now build all of this:

rosdep install --from-paths src --ignore-src --rosdistro noetic -y --os=debian:buster

Finally, we can now build raspicam_node:

sudo src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/noetic -j3 -DPYTHON_EXECUTABLE=/usr/bin/python3

raspicam_node is now built, and you will find the binaries in /opt/ros/noetic with the rest of ROS we built earlier. Before we can use it though, we must enable the camera port using:

sudo raspi-config

Look for interfaces, and camera – it will ask if you wish to enable it. If it doesn’t automatically reboot, reboot the Raspberry Pi yourself:

sudo reboot

Make sure after reboot that the default pi user (or whatever user you’re using) is added to the video group to access the camera:

sudo adduser pi video

With my Arducam module (probably not necessary for other modules), I had to make sure that the I2C module was added to the kernel options by editing the boot config:

sudo nano /boot/config.txt

Put in:

dtparam=i2c_vc=on

Save (ctrl-o, enter, ctrl-x) and reboot again:

sudo reboot

Test that the camera works directly using the built in Raspberry Pi camera tools:

raspistill -o temp.jpg

You should see an image from the camera on screen for a short moment. If so, success! Time to use the module in ROS!

Let’s source the bash setup file (this might need work below, we should probably only need to source what is in the /opt/ros/noetic directory):

source ~/catkin_ws/devel_isolated/setup.bash

Run roscore in the background:

roscore &

Launch the raspicam_node with the built-in config for a v2 camera at 640×480 5fps (there are several built-in modes, simply type roslaunch raspicam_node and press TAB a couple of times to see the options). We are again pushing this process to run in the background by putting the “&” symbol at the end:

roslaunch raspicam_node camera_module_v2_640x480_5fps_autocapture.launch  &

Now let’s see how fast the update speed is in one of the Raspicam_node topics:

rostopic hz /raspicam_node/image/compressed

If you tried to run the above and got an error about calibration, do the following:

cp -r ~/catkin_ws/src/raspicam_node/camera_info ~/.ros

If you got no errors, and you’re seeing an update of how fast the updates are happening per second, then you’re up and running!

To stop the running processes above first press CTRL-C to kill off the rostopic command. This should now return you to a commandline. Now use the process management tools to bring those other 2 commands to the front to kill by typing:

fg

You’ll now see you can kill off the second process with CTRL-C, and then repeat to kill off the initial roscore.

Success! You can now use the Raspberry Pi camera for ROS in a nice fast way with a neat node.

This is not the end though, as for my Arducam, the image comes in as a side by side stereo image in a single image. It needs to be sliced in half in order for us to do stereo image processing. So I’m looking at using another node that does this job (depending on how fast it runs) or otherwise I’ll see if it’s possible to add the feature to raspicam_node itself so it’ll be a one-stop-shop for fast and cheap stereo image sourcing for 3D outcomes.
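In the meantime, if you want to experiment with the splitting yourself, the idea is simple: subscribe to the combined image, cut it down the middle, and republish the left and right halves. Here’s a rough Python/rospy sketch of that idea – the node name, topic names, and the assumption that the combined image is published as a raw (uncompressed) image are all mine, so treat it as a starting point rather than a finished node:

#!/usr/bin/env python3
# Rough sketch: split a side-by-side stereo image into left/right halves.
# Topic names and the raw-image assumption are mine - adjust for your setup.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def callback(msg):
    # Convert the ROS image into an OpenCV array (height x width x channels)
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    half = frame.shape[1] // 2
    left = frame[:, :half].copy()    # left half of the combined frame
    right = frame[:, half:].copy()   # right half of the combined frame
    # Republish each half, keeping the original timestamp/frame info
    left_msg = bridge.cv2_to_imgmsg(left, encoding='bgr8')
    right_msg = bridge.cv2_to_imgmsg(right, encoding='bgr8')
    left_msg.header = msg.header
    right_msg.header = msg.header
    left_pub.publish(left_msg)
    right_pub.publish(right_msg)

rospy.init_node('stereo_splitter')
left_pub = rospy.Publisher('/stereo/left/image_raw', Image, queue_size=1)
right_pub = rospy.Publisher('/stereo/right/image_raw', Image, queue_size=1)
rospy.Subscriber('/raspicam_node/image', Image, callback, queue_size=1)
rospy.spin()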

Stay tuned..

Tasmota on ESP8266 can speak

I’m a huge fan of synthetic voices. I love devices around me chattering away letting me know things so I don’t have to look at another screen and interpret what’s being displayed.

But the thing is, these voices don’t have to be great. In fact I prefer if they’re a little clunky and jagged as they realise the dream of the future I had as a kid growing up in the 80s. I thought that when the year 2000 rolled around, we’d have talking robots wandering our houses, our kitchen appliances would announce when they’re being turned on and off, and our houses would announce “night mode” as dusk rolled around. Unfortunately the world has turned out to be far more conservative in weirdness than I had hoped, so I realised I had to make this happen for myself.

I already have home automation happening in my home, built with Home Assistant at its core, and with a focus on locally-processed everything rather than relying on cloud based services like Google and Amazon offer. This has allowed me the freedom to get as weird as I want, and to make the look and feel exactly what I want.

Part of this process has been to reflash every smart bulb and smart switch I use with the amazing open source Tasmota. It allows for truly locally processed and linked devices, that don’t need an external service, just your local Home Assistant controller. In my case I have Home Assistant running on a tiny Raspberry Pi 4 upstairs on the wall.

Tasmota is breathtaking in complexity and ability. It can adapt to almost every smart device and is constantly being expanded, and yet still fits on the super tiny and super cheap ESP8266 and ESP32 chips that are found in almost every smart iot device on the market (and of course you can buy them standalone for your own builds).

Recently I was forced to compile Tasmota from sources to enable some built in functions that aren’t enabled in the default binary builds (for a kitchen control interface requiring a multiplexor chip). While I was doing this I stumbled across some very promising libraries that were in the source code for audio and “SAM” text to speech. My heart skipped a beat.

For those not in the know, “SAM” (or Software Automated Mouth) was a program for the ancient Commodore 64 computer, that allowed for some of the earliest domestic speech synthesis. It’s very recognisable as it was in so many things from movies, to music, to TV, as well as being every 80s kid’s dream. A computer that can talk!

Turns out, this software was ported to a C library by Sebastian Macke and put up on GitHub some time ago, and then adapted to run on microcontrollers (especially the ESP8266) by Earle F. Philhower, III. This meant you could already make this happen if you wrote your own code from scratch and used the library on the ESP8266, but somewhere along the way it was added to Tasmota. I couldn’t find documentation for it, but there it was, hiding away, along with commands to make your Tasmota speak.

I quickly realised, though, that in order to perform this trick, I’d need to also buy an I2S IC/board and amplifier as the audio output library relied on I2S which is a simple audio interfacing specification. Being that I wanted to use this voice inside my doorbell button, I didn’t want to spend the money, or make the doorbell button that large to fit all of this.

That’s where I did some digging and found that the ESP8266audio library had a mode where it could roughly bit-bang audio out of the RX pin of the ESP board. From this output, you could make a very simple amplifier with 2 basic transistors to drive a speaker at an audible volume.

Unfortunately, Tasmota source code didn’t have this ability yet, so I set about forking the source code, modifying it, and merging it back (pull request) to Tasmota’s team to add this ability.

The nimble team have already merged this into the Tasmota Development branch so it’s ready to use, but you’ll need to compile it yourself. I won’t go into setting up an IDE for Tasmota compilation from source as that’s been covered quite well by other people including in the readme for Tasmota itself (I recommend the Atom + PlatformIO method):

https://github.com/arendst/Tasmota

Make sure you clone the Development branch (as at 10th Feb 2021) – it’ll move into the main releases at some point.

In order to enable audio output for Tasmota without I2S hardware, you’ll need to add the following to your “tasmota/user_config_override.h” file:

#ifndef USE_I2S_AUDIO
#define USE_I2S_AUDIO
#endif

#ifdef USE_I2S_EXTERNAL_DAC
#undef USE_I2S_EXTERNAL_DAC
#endif

#ifndef USE_I2S_NO_DAC
#define USE_I2S_NO_DAC
#endif

This allows you to enable audio, override the default (to use an external I2S DAC board), and enable the use of direct output.

But before we do anything more, we need to hook up at least one transistor to the output from the ESP chip, as you definitely cannot drive a speaker directly (you’ll probably burn out the chip, or the pin on the chip, trying to do so). For the following I assume that you’re running your ESP board from 5V to its 5V/USB input so that it regulates its required 3.3V onboard. We’ll use some of this 5V to feed the transistor and in turn the speaker.

You’ll need:
1 x 2N3904 transistor (NPN type, driven by positive voltage, but switching the negative)
1 x 1k resistor
1 x 3w or so speaker (nothing under 4 ohms)

When driving the audio output with this method, it will always come out of the RX pin of the ESP board. So when I say audio output, I mean the RX pin.

  1. Connect the resistor between the RX pin and the base of the transistor (middle leg).
  2. Connect the collector of your transistor (right pin of transistor with flat face side facing you) to the negative side of your speaker
  3. Connect the positive side of your speaker to 5 volts
  4. Connect the emitter of your transistor (left pin of transistor with flat face side facing you) to the Ground or negative from the 5V supply, or the ground of your ESP board.

This is a very basic single transistor amplifier. This is what’s outlined on the ESP8266audio library page here:

https://github.com/earlephilhower/ESP8266Audio

Yes, the output can be a little rough, and yes, if you went to use some of the other capabilities like playback of files or playing web radio stations (which is actually pretty cool), they would sound pretty rough with a whistle over the top, but the SAM voice sounds just the same as it originally did.

So we’ve uploaded our custom-compiled Tasmota binary to the board, how can we make it speak? Well documentation is thin (I’ll contribute some to the Tasmota project to help out of course), but you only need to issue the following at the console of Tasmota:


I2SSay(text goes here)

If you’ve played with old speech synthesizers before, you’ll know that they don’t always pronounce words correctly, so you’ll need to craft words at times to sound the way they’re supposed to. For example the word “house” can sound a little strange, so I use the word “howse”. Sometimes adding an h after vowels in words can help too. It’s all up to experimentation.

So it can speak when we issue commands at the console of Tasmota now, but that’s not super useful yet. We want automation!

I use Home Assistant, combined with the MQTT integration for my Tasmota linked automation, so it’s quite easy to issue anything that can be done at the console in Tasmota as an MQTT message.

In whatever script or automation you’re building in Home Assistant, all you need to do is add action type “Call Service” with the service being “mqtt.publish”, and the service data as:

payload: (hello I am home assistant. I am pleased to meet you!)
topic: cmnd/speakboy/I2SSay

You’ll see in the topic above that my Tasmota device has “device name” in config -> other config set to “speakboy”. The payload is simply what you want to say, surrounded by brackets. You can of course put substitution into play to drop in current weather conditions, or variables or whatever you want using Home Assistant methods, as long as it comes out as something that SAM can say.
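For reference, here’s roughly what that action looks like as raw YAML in the script/automation editor, using my device name and example phrase from above (yours will differ):

service: mqtt.publish
data:
  topic: cmnd/speakboy/I2SSay
  payload: (hello I am home assistant. I am pleased to meet you!)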

You may find in your case, like mine, that audio output wasn’t high enough in volume for your purposes. I’m using mine as a doorbell announcer (at the button end, to speak to visitors while they wait for me to run down the stairs for the door) so there is road noise to compete with.

The first step is to try the gain control. It is set at 10 by default, but I found a balance between loudness and distortion to be at 20. Simply issue the command in the console in Tasmota:

I2SGain 20

If we also want to improve the speaker, mount it in a hole in a hollow box or cavity, or even a short length of pvc pipe glued to the back. The back pressure will give the speaker more ooph, as well as allowing some more resonance.

If it still isn’t loud enough we can go further with another transistor. It’s quite easy to use a suitable PNP transistor in combination with the already explained NPN transistor to amplify that current even higher for the speaker.

I’m using a BC559 PNP transistor for the purpose. By modifying the simple amplifier we built above, we can get more current to the speaker:

  1. Disconnect the speaker, connect the collector on the 2N3904 to 5V
  2. Disconnect the emitter of the 2N3904 from ground and connect it instead to the base of the BC559 (middle pin)
  3. Connect the Collector of the BC559 (left pin when facing the flat front) to ground/negative.
  4. Connect the Emitter of the BC559 (right pin when facing the flat front) to the negative of the speaker.
  5. Connect the positive of the speaker to 5V

It should now be much, MUCH louder, but just make sure you’re not overdoing it by feeling the transistors. They shouldn’t be getting hot.

A quick note here: Never connect this to an actual amplifier. It’s switched DC voltage, not variable AC which is what audio is. It’s also WAY too high for line level audio, at around 5 times the gain. Bad things will happen to the amplifier, and if they don’t, it’ll also sound terrible!

So there you have it. Tasmota speaking everywhere all the time! Get in touch if you have problems or comments – always happy to help!

Keeping a 2006 Roomba Discovery running in 2021: adventures in patience

It was 2008, I was very excited and I had just brought home my first commercial domestic robot: the Roomba Discovery 4220.

It was second-hand from an eBay seller who claimed it just needed a new battery, but after getting one, cleaning it up, and having it scuttle around my house and workshop cleaning, I tried to put it onto its home base to charge. This of course didn’t go well, and when I attempted to wake it up the next day, it was dead flat despite charging for over 12 hours.

It was the dreaded burned-out U2/U4 MOSFET transistors that were very underrated for the current and heat they would handle to charge the battery. For some time I charged the battery with an external charger and popped it back in to make him clean, but eventually I had to tackle the problem.

At the time, there wasn’t a huge amount of info about this problem, so the advice from many at the time was to just replace these two tiny transistors with an equivalent match. It was difficult, and I wasn’t super across surface-mount components but I managed to change both out with the same replacement. The advice was to just make sure it never ran completely flat, or pre-charge the battery for a bit before putting it in the robot to charge, and the transistors shouldn’t burn out again.

Of course Roombas would sometimes get stuck somewhere for long periods of time, and it only took a couple of years before it burned them out again with a flat battery after it was wedged under a couch all night.

For the next 5 years I charged the battery externally, and this dance went on until I’d had enough and put it away – until a few months ago, when, taking it apart, I put the vacuum and side brush plugs the wrong way around on reassembly, which burned out their transistors too. I’d had enough.

The Onboard Charging Fix:

I was determined this year, and with the advice of those who had solved the problem on the Robot Reviews website forum years earlier, I set about fitting huge MOSFET transistors that should never overheat or blow no matter the state of the battery. Instead of using tiny surface-mount components, I sourced much larger TO-220 form factor transistors: FQP27P06 (rated for 60V 27A, way WAY higher than will ever be experienced by the bot). I found space above the battery compartment where they would fit inside the plastic top shell and set about gluing them first to small pieces of flat aluminium (to act as small heatsinks) and then gluing these two assemblies to the plastic case.

I carefully removed the mainboard, taking photos to ensure I’d get the leads back into the right sockets afterwards (some leads share the same style of socket, and mixing them up will burn out the transistors that drive things like the vacuum and side brush). I carefully de-soldered the U2 and U4 transistors from the board (they’re on opposite sides). I like to snip the legs off them using super sharp tiny side-cutters, and then heat the body of each to remove it, to avoid pulling the pads off the board.

Using appropriate thickness wire I then ran the 3 pads that were connected to each of the legs out to the transistors I had glued in place (making sure to match the spec sheets for gate, drain, and source). I made sure to use enough wire to move the mainboard around, but not so much that it’s hard to coil it back inside the robot (maybe 5cm or so?).

Without the case on, I plugged the Roomba directly into the power supply and bingo – the transistors got warm and it was charging. Or so I thought, as they quickly cooled back down and I wasn’t so sure. So how can we know what the robot is actually doing?

Learning to speak robot:

Turns out the Roombas all have a great serial interface called SCI that has been around since the first models. It’s pretty well documented, but the most useful thing I’ve found is simply the feedback you can get from it charging with highly detailed info about battery voltage and charge.

But to do so, we need a method of plugging this interface into our computer. Computers use RS232 for their serial (or USB serial) which is -12 to +12 volts. The Roomba however runs at TTL levels, which is 0V to 5V signalling. It would probably do damage to simply try and plug this straight in. So we need to make a cable and a way to plug in.

First up, the cable: it’s a mini-DIN 8 (technically the Roomba has a mini-DIN 7 socket, but a DIN 8 will plug into it, and DIN 7 is hard to find). In a box of old Apple Mac cables I found a mini-DIN 8 that was used for AppleTalk between machines in the 80s/90s. I was lucky, but if you can’t find one of these you can look through suppliers, or even find whole cables on eBay.

We’ll cut one end off and strip the wires carefully apart, and strip the insulation from their tips. Using a multimeter we need to find the TX, RX and Ground pins. When looking at the male connector on the other end of the cable, turn the connector so that the single notch is upright, and the flat part of the connector is at the top. The pins are numbered starting at the bottom left, and from left to right. So the bottom row is 1,2, next row up is 3,4,5, top row is 6,7,8. Use your multimeter on continuity mode (where it will beep to show connection between the two leads) and carefully pick your way through the pins and match these with the coloured leads coming out of the freshly stripped area. On my lead yellow was TX, red was RX, and blue and purple were ground.

Now we need to interface with the computer. I always have on hand for other projects the handy little FTDI Basic boards from Sparkfun. These are great because you can plug in TTL level devices to interface via USB.

So with my FTDI Basic board at the ready, I simply soldered little pins onto the ends of the leads we identified earlier, plugged the TX line into the RX of the FTDI, then the RX into the TX of the FTDI, and the two ground leads linked together into the GND of the FTDI.

You’ll then need to use a serial program to show the output from the robot. I use Ubuntu Linux on my computer so I used GTKTerm with the serial port settings of:
port: /dev/ttyUSB0 (this could change depending on what you have plugged in)
baud rate: 57600
parity: none
bits: 8
stop bits: 1
flow control: none

Bingo – you should now be receiving data from the Roomba when it’s plugged into the charger. It’ll report what’s happening every second. The important piece of info here is the charge rate. It should be something around 1500mA when fast charging, something like 280mA when slowing down, and maybe 100mA when trickle charging. If you see negative numbers, you haven’t fixed your transistor problem properly and the robot is discharging.
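If you’d rather log this from a script than sit watching GTKTerm (handy for capturing the numbers over a whole charge), here’s a minimal Python sketch using pyserial with the same settings as above – the port path is just whatever yours shows up as:

#!/usr/bin/env python3
# Minimal Roomba SCI charge logger sketch: 57600 baud, 8N1 (pyserial defaults).
import serial  # pip install pyserial

with serial.Serial('/dev/ttyUSB0', 57600, timeout=2) as port:
    while True:
        line = port.readline().decode('ascii', errors='replace').strip()
        if line:
            print(line)  # each line is the Roomba's once-per-second status report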

A first charge can sometimes be something like 16 hours as it attempts to recondition the battery, but as mine was externally charged, I simply unplugged and ran the roomba for a short bit before plugging in again to snap it out of this mode and charge fast. I wouldn’t recommend leaving the roomba alone to charge overnight until you’re sure it’s safe and happy as you could cause a fire if something is wrong and the battery overcharges.

It should just charge on the home base right?

So I excitedly unplugged the charger from the Roomba, and plugged in the home base, then put the Roomba back on the home base, and….. not much. Exercising patience…

So what was up with the home base? Plugging in my serial interface from above showed me that when on the home base the charge rate was in negative numbers. It was actually discharging while on it.

It turns out the home base also has a switching MOSFET transistor inside that turns on the power to the pads only when the Roomba has made contact – it was the exact same type that had failed in the bot, so I replaced it in the same way also, squeezing the bigger transistor to the bottom of the case with glue. This time, the home base worked (be careful on assembly and disassembly, there are screws in 8 places underneath pads and foam).

So it can charge now for the first time since 2010 or so.

But the vacuum fan and side brush are running all the time?

From previous adventures in assembly and disassembly, I’d mixed some of the connectors up and burned some regular bipolar junction transistors (BJT) out, meaning they were shorted (switched on) all the time. Obviously not ideal.

I had to locate the transistors in question: Q35, Q36, Q17

Then replace them with something similar: BC337

BUT: make sure you follow the data sheets – the legs of the new transistors were reversed compared to the originals (SS8050), so they needed to be flipped.

Soldered together and reassembled, with the dance complete, the vacuum and side brush have finally stopped running constantly, and they start when you start the robot up. Excellent.

Put it back together: pull it back apart

I assembled the casing following all of this testing, and…. the vacuum motor won’t run. Why? A multimeter doesn’t show any power coming from the prongs on the side.

Let’s take it apart again.

On closer inspection we find that all of this plugging and unplugging has made the poor little connectors quite loose, and tracing back the vacuum lead to the mainboard shows that when in test mode for the vacuum (that’s a whole other story to get there) wiggling the lead starts the vacuum.

This is the same kind of connector you’ll find on all of these kinds of things. I’ve found them in my other Neato XV-21 robots, when their wheels start to misbehave and drive erratically.

It’s a simple fix. There isn’t anything as drastic as corrosion, it’s simply the internal prongs in the connector have bent apart and no longer squeeze the pins when plugged in.

All we need to do here is gently use a sewing pin to lift the super micro-tiny plastic tab on the side of the connector for each pin, and gently slide the pin out of the connector by pulling the cable slowly. DO THIS ONE AT A TIME SO YOU DON’T MIX THEM UP. Even take photos so you don’t accidentally reverse the polarity. This takes some practice and skill so take your time. When you have the connector out, use tiny pliers to squeeze the tiny prongs back together, but don’t be rough. The last thing you want to do is have to make a new connector.

Slide them back together and plug it back in. You should notice straight away that it’s now quite tight.

Test the other connectors to see if they feel tight. If they feel loose it’s better to do this maintenance now than later.

So, does it work now?

Yes. Yes it does. OH MY GOD IT WORKS. And it works very well.

Of course normal maintenance now applies, and for this model that’s the usual Roomba brush deck clearing and cleaning, wheel and cliff sensor blowing out with air, and troubleshooting when you see odd things happen (like circle dances etc). They can be a little rougher than newer models, and sometimes docking with the home base can take a couple of goes, but they still clean very well and do it reliably.

The biggest thing though with this model of Roomba is that the front wheel is a non-swivel castor, which can be rough on the little wheel, so I thoroughly recommend cleaning the wheel, making sure it spins easily, and tightly pulling electrical tape around it. Winding it around a few times means there’s a protective layer you can replace from time to time so the wheel itself doesn’t grind off. If your robot cleans concrete like mine does in my workshop, definitely paint the floor with glossy concrete paint, because otherwise you’re just going to sand that wheel off.

Let’s keep these things working for as long as we can. It’s something that can reduce so much waste, but also can be another cleaning buddy to keep your lungs healthy indoors. If you don’t want yours, don’t throw it away, offer it for very cheap in online trading websites, or give it to someone who will put it to use again.

Reviving Chumby Classics to connect to Home Assistant

I absolutely love the ability to create weird and wonderful things for smart homes and find it frustrating that many efforts are just about recreating standard things to be smart. This is our chance to get weird people!

I’ve continued down the rabbit hole of my style of smart home and have joined some original Chumby Classics (the beanbag shaped devices from 2008 or so) up to my “Home Assistant” based smart home system.

Much of the Chumby excitement that was pretty great 12 years ago has faded away, but I’m still keen on the little fellas and have 3 of them around my house.

Of course some time ago the company was sold, and things got a little wonky, and though I’m thankful for the people keeping up the online service, I miss the days of things feeling more active and useful, and with the standard firmware there weren’t really any methods to link these devices with my Home Assistant.

I found on github that “phineasthecat” has ported the most recent (V34) Zurks offline firmware to be compatible with the Chumby Classic (the official Zurks firmware is only compatible with classic models up to v21). This is great because there were many things introduced after v21, and I personally mostly love the classics with their beanbag shape.

https://github.com/phineasthecat/zurks-offline-firmware-classic

UPDATE: I’ve forked this work into my own repo for now with the changes I’ve outlined below until a time I can hear back about fixing the bugs with the original author, otherwise I’ll just keep working on my fork instead:

https://github.com/JesseCake/zurks-offline-firmware-classic

There are some problems with this firmware though, and it doesn’t work in its current state. I spent yesterday tinkering with it and managed to fix a couple of bugs and make it run on my Chumby Classic. I’ve submitted an issue on github so hopefully that person is still active, otherwise I might fork it and keep developing from there on my own.

If you want to use it, make sure you use a nice fast USB drive as this kind of thing just doesn’t suit $2 sticks. I use Patriot XT usb thumbdrives (unsure if they’re still current, I have a few of them) for this job.

Here are the main 2 issues with the firmware that you can fix yourself to make it run on your chumby classic:

  1. The “tmp” folder is missing, so it won’t work properly. Simply add this folder to the root of your usb
  2. There is an error in the way that it uses a swap file in the startup scripts, so it’ll painfully slowly create the swap, but won’t go on to use it in subsequent reboots. Go to: https://github.com/phineasthecat/zurks-offline-firmware-classic/issues/4 to see how to fix this. You’ll just need to edit the “debugchumby” file with a text editor.

The first boot scripts actually create the swap file. It’s 500MB though, and I assume the Chumby Classic is USB v1 because it took so, so long to do this job – so long that I gave up and created the swap file myself on the thumbdrive using my (Linux) computer. I used the command from the script in a terminal (bash) window, whilst in the directory of the thumbdrive:

dd if=/dev/zero of=./.swap bs=1 count=0 seek=512M

Then when you put it into the chumby, it should speak to you, have no errors, and still take a while to start but will get there. Subsequent reboots will be faster.

Make sure you still follow their directions though, and follow their recommendation on updating the SSL of the chumby base firmware with the provided fix.

Something not explained anywhere is that this offline firmware does not wipe the onboard chumby firmware, and the USB has to remain in the Chumby to keep working. It boots and runs off the thumbdrive as an active filesystem.

So why would I be so keen on this? Well the amazing work of the original firmware hackers has meant that many of the built in functions of the chumby become accessible through a web interface (http://ip.of.your.chumby/index.html) as well as scripts you can directly access to automate it. I’m keen on home automation and use Home Assistant extensively around the house and my workshop. I love making reminders for myself so I don’t get too into projects and forget to feed my ducks or cat, and normal alarm clocks on phones are boring, so I have a megaphone and 1940s industrial bell wake me up.

Now using the html triggered scripts I can have the Chumbys join the fun and they can use text to speech, MP3s stored on the usb drive, as well as visual cues to show me things.

Here’s an example html script already built in to play any kind of remote stream (here playing my favourite internet radio station Shirley and Spinoza) – yes I know I have a weird IP range at home:
http://192.168.8.164/cgi-bin/zmote_play.sh?http://s2.radio.co:80/sec5fa6199/listen

Here’s another where I can make it use built in text to speech to say whatever I need it to:
http://192.168.8.164/cgi-bin/speak.pl?action=say&words=hello%20person

There are heaps of these functions built in, even to turn the screen on and off, change widgets etc etc.
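Before wiring these into Home Assistant, it’s easy to test any of them from another machine on the network with curl – using my Chumby’s IP from above, and quoting the URL because of the ? and & characters:

curl "http://192.168.8.164/cgi-bin/speak.pl?action=say&words=hello%20person"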

In Home Assistant, you just need to use the RESTful stuff to trigger it (it just needs to access the HTML links to trigger them on the Chumby). I may not be doing this in the most graceful way, but it was late and I was admittedly a few drinks in.. here’s some basics with the black chumby I got working (I also have an espresso coloured one and a grey one):

  1. Add to your configuration.yaml:
    rest_command:
      blackchumby:
        url: "http://192.168.8.164/cgi-bin/{{ urly }}"
  2. reload your core or whole HA (unsure which reloads configuration.yaml)
  3. Create a script named whatever you want (mine will be an alarm that speaks “good morning”, then starts playing my favourite internet radio quietly, slowly increasing in volume) – see the rough YAML sketch of the whole thing after this list
  4. for the first in the sequence we’ll speak “good morning”:
    call a service: this service will be (from above) “rest_command.blackchumby”
    put in the service data box: urly: "speak.pl?action=say&words=Good%20morning"
    the “%20” is a space, I haven’t created a neat way to filter spaces and make them %20 yet
    Here is the raw yaml:
    data:
      urly: "speak.pl?action=say&words=Good%20morning"
    service: rest_command.blackchumby
  5. give some delay of a few seconds at least between each command, so we’re not overloading the chumby
  6. do the same kind of call service but with a command of “custom/setvol.sh?25” to set the volume nice and quiet
  7. short delay
  8. do the same kind of call service but with a command of “zmote_play.sh?http://s2.radio.co:80/sec5fa6199/listen” – this will start playing the web radio station using the built in player that can still be controlled on screen
  9. delay of 30 seconds before it gets louder
  10. command of “custom/setvol.sh?50” to get louder
  11. delay of a few minutes before full volume
  12. command of “custom/setvol.sh?100” for full volume
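To tie steps 3 to 12 together, here’s roughly what the whole thing looks like as a raw YAML script – the script name and exact delay lengths are my own choices, the rest is as described above:

script:
  chumby_morning_alarm:
    sequence:
      - service: rest_command.blackchumby
        data:
          urly: "speak.pl?action=say&words=Good%20morning"
      - delay: "00:00:05"
      - service: rest_command.blackchumby
        data:
          urly: "custom/setvol.sh?25"
      - delay: "00:00:05"
      - service: rest_command.blackchumby
        data:
          urly: "zmote_play.sh?http://s2.radio.co:80/sec5fa6199/listen"
      - delay: "00:00:30"
      - service: rest_command.blackchumby
        data:
          urly: "custom/setvol.sh?50"
      - delay: "00:03:00"
      - service: rest_command.blackchumby
        data:
          urly: "custom/setvol.sh?100"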

At this point you could manually turn off the stream or if you want something else to stop the music, you could use “zmote_play.sh?stop” – which isn’t actually a stop command, but the file doesn’t exist so it’ll stop playing. I’m sure there’s a more elegant way.

If you want to change the screen brightness there are more scripts and even turning off the light settings, they’re all in the thumbdrive of this firmware under /lighty/cgi-bin/ as well as /lighty/cgi-bin/custom

I recommend checking it all out. Some is a little rough, and I’ve also added my own script to turn the screen on after it’s been off which is just a copy of the off.sh script with a dim level being echoed of 0 instead of 2.

When I go to bed now, I press a button, and along with all of my house lights, my chumbys around the house turn their screens off. I love a completely dark house!

My next steps? I’d love to keep working on this, as development appears to have dropped off a cliff in 2014, but I’m just not sure about the build environment for Chumby – does anyone have any idea how these packages were built? I would love to update the built in DLNA server to a later version of the software so it can act as an endpoint, letting my Home Assistant and devices stream their music and sounds to it as needed without needing to preload sounds onto the USB, and letting me use other voices – though I do love the TTS voice onboard this firmware. (The built in DLNA server has problems with its scripts, and even when started it can only choose music from a remote server, not be streamed to.)

Funny bit of trivia, it sounds like the voice is the same voice as the robot that serves Rick butter in Rick and Morty.

Hit me up if you need any help – I love these little guys, and think they’re still worth hacking on. I also think we can take them further along with us.

Transferring files between Windows 3.11 for Workgroups and Ubuntu Linux 20.04

I have an interest in vintage computers, and enjoy making them functional again to explore their operating systems and how they worked. I’m not a purist though, and the first thing I do is replace the noisy, failing hard drive with some form of SD card and adaptor, or a similar solid state solution, for easy backing up etc.

Part of the difficulty in starting fresh with an empty drive on these older machines is actually getting an operating system installed in 2020. Most floppies I have are failing now, and finding new working floppies is getting hard, as well as the wear and tear of constantly imaging whole floppies with installation media etc. I try to keep this part to a minimum for the base OS, and then use alternate means to transfer files.

My usual go to is to get the machine network connected in some way, usually using an ancient ethernet card or device. You would assume that from here it’s all smooth sailing, however this can sometimes multiply the problems, as communication protocols have since moved on, making it hard to interlink with these old machines.

I really should probably just bite the bullet and set up a small FTP server on the network using a Raspberry Pi or something similar, but I haven’t done that yet, plus sometimes FTP transfers start to bring in other problems in the way you transfer etc.

For a recent resurrection of a 486 DX2 66MHz machine I managed to work my way through installing MSDOS 6.22, followed by Windows for Workgroups 3.11 on top of that. I then made sure to install the network card drivers in Windows 3.11 as well as the TCP32b driver, and added the TCP/IP protocol to the network card in the network control panel (removing the IPX/SPX protocol while I was there). I made sure to enter manual IP settings for my network, or you could hope that DHCP will work (I never trust DHCP on old machines, things can get screwy when troubleshooting). Windows will want to reboot after that, and you should be all set to transfer files via Windows shares over TCP/IP.

I assumed it would be as easy as firing up file sharing, and accessing windows drives on the network to transfer the hundreds of MB of games and utilities I’m keen on putting on there, but since Windows XP, the windows file sharing protocols have been updated, and older insecure protocols like those used in Windows 3.11 no longer work.

This is where I usually would use Ubuntu Linux, which is my main operating system, to open up Samba sharing to do this job, but it, too, has moved on, and by default on Ubuntu 20.04 the version of Samba will no longer talk easily to Windows 3.11.

After much head scratching and walking between the laptop and the 486, I figured out that I had to allow Samba to speak the correct version of the SMB protocol. By default it will only speak much later versions, whereas we need to enable SMB v1 for poor old Windows 3.11 to connect and talk.

Whether you are hosting the share on the Linux machine, or accessing the shares to push the files onto the Windows 3.11 machine as a client, the Samba config will affect it all.

In your /etc/samba/smb.conf file in the [global] area, I’ve put:

netbios name = samsung
lanman auth = yes
client lanman auth = yes
ntlm auth = yes
client min protocol = CORE

This allowed my laptop running linux to use the GUI tools in Ubuntu 20.04 to access the old Windows share.

netbios name is optional, but I gave it a simple under 8 character name as my laptop’s hostname is longer and I thought that may affect things.

lanman auth looks like it’s no longer working, but I put it anyway, along with client lanman auth.

ntlm auth is probably not needed but is in my config for other things.

client min protocol = CORE is what does the magic, lowering the minimum version of SMB protocol to old fashioned basics for the windows machine.

Make sure you restart Samba and the NMBD daemon (or simply reboot your Linux machine) before following on.
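On Ubuntu 20.04 that’s normally:

sudo systemctl restart smbd nmbd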

On my 486, I then went to the file manager, created a folder named “shared” and used the menus to share it with no password. I found using a password was messy, as there didn’t seem to be a username associated with it, and I couldn’t connect no matter what I did. Windows may want you to “log on”, for which I’d set up a simple username and password during installation, which is part of enabling network sharing in Windows 3.11.

On my Ubuntu laptop I then opened the file browser, and pressed the “+ other locations” button on the lower left. In this window I went to the bottom of the window, where the “Connect to server” area is and entered “smb://486dos/shared” where 486dos is the name I gave my windows machine while setting it up, and shared is the name I gave to the shared folder I was sharing in Windows 3.11.

By magic, you should find that you can now transfer files to the old machine! *

* It’s not super straight forward however. There are a few quirks:

I found transferring lots of things at once would pop up some errors about overwriting files. This could be some kind of bug. I found it better to transfer single zip files and unzip them on the older machine rather than copy folders containing multiple files. Also remember you’re using a machine restricted to 8 character file names, so make sure this is the case. Try to avoid fancy characters also.

I recommend finding the last working version of WinZip for Windows 3.11 and installing it, so the process becomes: move the zip file to the Windows network share, then unzip it there into its final location.

So far so good, and I’m happy I can now transfer files to the massive 8GB SD card working as an IDE drive in the machine – which was another headache I’ll cover in another post.

Making Tasmota lights turn on urgently

I’m a huge user of Home Assistant and Tasmota open source firmware for ESP8266 based devices. It has allowed me to set up quite a nice smart home setup including light bulbs without using external services.

If you’re like me though, and sometimes just urgently need a light to turn on and for some reason the controller isn’t responding, or something has broken in your fiddling, then this rule is quite handy.

Using the powerful Tasmota Rules framework I’ve set up a rule to make certain lights turn on if I flick the original power switch off then on.

Simply go to your Tasmota console for the light you’d like to add this rule and put:

Rule1 ON Power1#Boot DO backlog delay 1; power on; ct 430; dimmer 100; ENDON

This will on boot up turn on the light to full brightness with a pretty warm colour temperature. Of course this is for my lights that have colour temperature, so you may need to adjust for your lights as needed.
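One thing to remember with Tasmota rules: defining a rule and enabling it are separate steps, so if it doesn’t seem to fire, make sure it’s also switched on from the console:

Rule1 1

Typing Rule1 on its own will show you what’s stored and whether it’s enabled.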

I can think of times where maybe there’s a fire upstairs where the HA Raspberry Pi is set up and the controller is offline. You need light, and right away. Or maybe your room controller has gone offline for some reason, or your wifi access point has died.

I’ll update this if I find a better method, as I’m worried it needs some more conditions (ie I don’t want it turning on with a system restart etc) but it’s good for now!

A better way to configure Cura to slice objects for your Makerbot Replicator 2 3D printer

**Update** This method has been proven up to Cura 4.10 on Ubuntu Linux. If you’re having problems, first check that it’s a Replicator 2 (I haven’t tested a 2X with the heated bed), then check that your PLA material info is set to printing at 230 degrees C, then double check that you’ve followed all of the instructions directly, skipping no steps (essential parts are the “r2” profile addition, and the GCode for start and stop). Also this method may require that you adjust your bed height on the fly while printing the first layer to get it just squashing onto the plate, but not blocking the nozzle.

I’ve posted previously about using Makerbot Replicator 2 3D printers with Cura, which involved hacking at the X3GWriter plugin, but was frankly a little hacky, and starts to cause problems when you update etc.

With more time on my hands now I’ve had a closer look and spoken to the author of the X3GWriter plugin. It turns out that the printer definition in Cura passes metadata to the plugins you use, and that his X3GWriter plugin was watching for the “machine_x3g_variant” value. When we modify the standard printer definition for Replicator 1 that comes with Cura, it still passes “r1” to the X3Gwriter plugin, which makes it take on values for the Replicator 1 which results of course in incorrect print scaling. For a replicator 2 we actually want “r2”. Makes sense.

So if you’ve been trying to use Cura on your Replicator 2, and getting things that are the wrong size, you’ll need to create or modify your profile for your printer.

Ideally, Cura would come with a Replicator 2 profile, which I’ll put time in to submit to the maintainers via github once I can understand how their provided profiles work, but for now here’s my little how to:

I’m using Cura 4.6 for my example, and this is specifically for the Replicator 2 – you may need to modify some things to make the 2X work

I also assume that you’ve installed the X3GWriter plugin already in Cura’s “marketplace”

1. Open Cura, and add a new printer. Click on non-networked printer, and select “Makerbot Replicator”

2. Once you’ve added this printer, rename the printer to something like “Makerbot Replicator 2” (doesn’t matter what, it won’t affect anything), and go to “machine settings” for this new printer.

3. Make the Gcode flavour “makerbot”, enable origin at center, disable heated bed, select build plate rectangular, and make the dimensions the following:

   x width = 225mm

   y depth = 145mm

   z height = 150mm

Here are my printer settings:

(Screenshot of the printer settings dialog)

4. We’ll also check settings for “extruder 1”. The standard nozzle size is 0.4mm, and the compatible material diameter is 1.75mm.
Here’s my extruder settings:

(Screenshot of the extruder settings dialog)

5. Add the custom GCode to the printer settings. This is necessary as for some reason by default heated bed info is sent, which makes the printer stop straight away. You can look up what this means and tweak it as needed (maybe you want the bed to drop lower at the end etc).

Contents of my start Gcode:

; -- start of START GCODE --
M73 P0 (enable build progress)
;M103 (disable RPM)
;G21 (set units to mm)
M92 X88.8 Y88.8 Z400 E101 ; sets steps per mm for replicator
G90 (set positioning to absolute)
(**** begin homing ****)
G162 X Y F4000 (home XY axes maximum)
G161 Z F3500 (home Z axis minimum)
G92 Z-5 (set Z to -5)
G1 Z0.0 (move Z to "0")
G161 Z F100 (home Z axis minimum)
M132 X Y Z A B (Recall stored home offsets for XYZAB axis)
(**** end homing ****)
G92 X147 Y66 Z5
G1 X105 Y-60 Z10 F4000.0 (move to waiting position)
G130 X0 Y0 A0 B0 (Set Stepper motor Vref to lower value while heating)
G130 X127 Y127 A127 B127 (Set Stepper motor Vref to defaults)
G0 X105 Y-60 (Position Nozzle)
G0 Z0.6     (Position Height)
; -- end of START GCODE --

Contents of my end GCode:

; -- start of END GCODE --
G92 Z0
G1 Z10 F400
M18
M104 S0 T0
M73 P100 (end  build progress)
G162 X Y F3000
M18
; -- end of END GCODE --

Here’s what it should now look like in your printer settings (the gcode settings of course are longer than the box, so they scroll, don’t copy directly from this image for them):

[Screenshot: printer settings with the start and end G-code filled in]

So we now have the printer defined, but it’s missing the important piece of the puzzle: the metadata to pass along to the X3GWriter plugin so that we get an X3G file suited to the Replicator 2.

6. Let’s manually edit the printer definition file. Close Cura before continuing. I’m using Ubuntu Linux, so my printer definition file is in:
/home/username/.local/share/cura/4.6/machine_instances/MakerbotReplicator2.global.cfg

I use nano, but any text editor (even GNOME’s gedit) will be fine for editing this file.

If you’re on Windows, try a system-wide search for the file (sorry, I don’t know where it lives on Windows).

We are looking for the heading “[metadata]”, and anywhere under this heading block we’re going to put “machine_x3g_variant = r2”. For example, here’s what mine looks like (some details will be different for yours):

[general]
version = 4
name = MakerBotReplicator2
id = MakerBotReplicator2

[metadata]
setting_version = 13
machine_x3g_variant = r2
type = machine
group_id = 993612c3-052e-42e2-bb6b-c5c6b2617912

[containers]
0 = MakerBotReplicator #2_user
1 = empty_quality_changes
2 = empty_intent
3 = normal
4 = empty_material
5 = empty_variant
6 = MakerBotReplicator #2_settings #2
7 = makerbotreplicator

Notice where “machine_x3g_variant = r2” is?

Save this file where it is, and reopen Cura.

That should be it. You’ll be able to choose your printer in the normal way, choose your settings and object, and export directly. If you find it doesn’t successfully create a file, there’s something up with your config, so double-check for syntax problems.

You can also check Cura’s error output in (again, I’m on Linux):

/home/username/.local/share/cura/stderr.log

So if you tweak the G-code and tinker with those bits and pieces, you can see whether X3GWriter is unhappy about any of it.

Quick note about filament: I should mention here that I use PLA filament, and I set the nozzle (under the material profile settings) to print at 230 degrees Celsius, because that’s what I find works well. I find that going much lower than 220 degrees (Cura seems to default to 200!) tends to jam the nozzle. It could be that my filament needs this, or that the head temperature is always slightly off on these printers, but that’s what works for me, and it could be the cause of problems I’m asked about where the head seems to not be extruding. Worth checking.


Happy printing!

Delete your Facebook

As one half of an artist duo that makes work that is quite critical of the way in which social media is distorting our lives, I felt like a hypocrite to continue to have an account at Facebook.

Somehow along the way, I was coerced into joining (somewhere around 2008) to see what all the fuss was about. Looking back at my posts, I can see where I was being cautious, and a definite point at which I became addicted.

Suddenly I felt it was ok to share even the most bland of things or have a good old rant about something, but for whom?

Why do we all feed our most personal of information into this private corporation’s database? We would never be ok about our government knowing this much about our personal lives!

On top of this, each one of us with an account in the western world is worth approximately $34 a year in advertising revenue to this multi-billion dollar corporation. Advertising and tracking that follows us around as we browse the web elsewhere. Doesn’t that feel invasive?

Today I stop feeding this machine. It’s time to take back what is ours, and our personal information is the most important thing we have. Consider whether you want to continue down this rabbit hole and join a growing number of people deleting their Facebook. Let’s stop hashtagging and start to get back into real contact with the people we care for.

Use Facebook to promote things to friends and followers? Consider a mailing list. Use it to keep in touch with friends and family? Use one of the many, many messaging apps around that don’t track you. Enjoy personal blogging? Start a WordPress blog. Everything can be done without this tool; it has just somehow become easier to use it.

If you need help for alternatives to use in future, get in touch, I’m happy to assist.

Advice to young creatives from a mid-career artist

As part of artist duo “Cake Industries”, we’ve been asked to give many talks over the years, and those talks generally focus on what we’ve done, what we’re currently working on, and possible future paths we might take.

What we’ve never really done, though, is try to give advice to young people trying to figure out their place in the world, and especially, as creatives, how to become the artists we want to be, because we felt like we were still trying to figure it out ourselves.

As I rapidly approach 40, it seems weird to think that I’m probably halfway through my career, though I still feel 25 inside and just starting out. At this point in my career, though, I’d like to offer a few pieces of advice to young people who feel unsure of where to go. These are based on some hard lessons I’ve learned as an artist surviving in the world.

Nobody is an authority
Despite what many in the arts may believe, nobody is a complete authority. This includes me and my advice here. There may be a director or a curator, or even another artist, who tells you how to get ahead in your career. Mostly they are wrong, and their advice is specific to an art form, a period in time, or a particular situation. If you feel strongly about something, then do it. You may forge a new path, or find others along the way who think the same way.

Don’t wait
There is lots of advice out there from well-meaning people that you just have to wait, or that your generation will have their time, or that you should just wait until X. Don’t wait. Time really can fly and before you know it another decade has dawned. If you feel the passion to do something, then do it no matter what. Ignore funding deadlines, ignore local council opportunities. If they don’t fit, or they require waiting for significant periods of time, then find another way to do what you want to.

Be all in, or don’t bother
This could be controversial, but the longer you delay really pushing forward with your passion, the further out of reach it will be. That full time job you landed “until things pick up” will possibly divert your life away from what you want to do. That permanent support role you have could make it seem like you don’t have your own ideas or interests. Of course we all have to find ways to make ends meet (I’ve worked many casual and part time positions over the years), but keep these engagements short and controlled so that you don’t forget what you’re doing them for.

There’s no such thing as “making it”
It’s almost as if we all believe that one day you’ll get out of bed and a sign will be there letting you know you’ve “made it”. Making it is a strange idea, as we never truly finish or reach a point where we can rest. We are all going to be continuously trying to make the next idea happen, get to the next place, try a little harder. While you’re wondering about how to survive next year, people younger than you are looking up to you and thinking that your life must be so easy now. Just keep going.

Life is both short and long
Sometimes it seems like forever, that the slog to create and survive goes on and on. But suddenly you realise something you made was 10 years ago. Every single one of us could also die suddenly at any time. Pretend you only have 5 years left to live and pour your energy and focus into what you love. The thing you plan to do when you reach your 60s may never happen, so just try now.

There is no single path
No matter who you are or what you make, there is no single path in your career or life. Even if you’re told as much, don’t believe the hype. With over 8 billion people on the planet, there are so many ways to find your scene, your place, your focus that nobody can definitively tell you what step to take next.

You may disagree with some or all of the advice above, and that’s fine, but my hope is that it may inspire some young creatives to push aside the rubbish in their lives and follow their passion.