Saturday, February 23, 2019

My Digital Lavalamp - or "The MkI Epilepsy Generator" - Part 3

Hola! You made it to Part 3, wherein I detail some of the discoveries, pitfalls and trials of implementing my standalone lava lamp software running in Raspbian.

Previously, on My Digital Lavalamp:

Part 1 - The inspiration and initial setup.
Part 2 - The physical build.

So, with the physical build complete [and having made my choices concerning buttons and lights], it was time to delve into how the software driving the lamp would work, ie: how it behaves when you turn it on and switch modes. I won't cover everything in absolute detail as that'd be boring, but I will link to the key places I found what I needed and discuss what I got working.

I discovered early in my Processing sketch explorations that OpenProcessing was not only a cool site offering a live sketch programming facility, but that a large number of the sketches there were compatible with Processing itself and would run simply by copying and pasting code into my own sketches. And thankfully the Terms of Service regarding this are very friendly to creative endeavours:

"By submitting Content to OpenProcessing for inclusion on your account, you grant anyone Creative Commons license to reproduce, modify, adapt and publish the Content as defined by the license. If you delete Content, Wiredpieces Inc. will use reasonable efforts to remove it from the Website, but you acknowledge that caching or references to the Content may not be made immediately unavailable."

This meant I could openly reuse and adapt existing sketches and techniques there and modify them to run on my lamp. This led to many evenings of fun discoveries like this and this, which became modes I could incorporate into my lamp, and so on. I already had a long list of things I wanted the lamp to do, but I was quickly adding new ideas I hadn't originally considered. WIN!

Some of the best modes remain modified versions of the test sketches that come with the FadeCandy codebase created by Micah Elizabeth Scott. I've yet to top those in terms of beauty, efficiency and speed.

Processing also ships with an excellent library browsing and installation tool that gives you instant access to a wide variety of tools to do everything from sound interaction to computational fluid dynamics:

Looking at some of these libraries, I was inspired to create lavalamp modes that respond to changes in the weather, simulate hot fluids like a *real lavalamp*, or respond to sounds in the environment. There are even standalone voice-recognition libraries I could use to get it to switch modes when I asked it to. So many possibilities.

I'll put all the sketches I have and some other bits and pieces up on GitHub for anyone who wants to build something like this or simply see what I did. Now, it's time to talk about Raspbian.

The Raspberry Pi doesn't come with an operating system. Well, it can, but you have to download one yourself and run it from an SD card. You can run many different flavours of linux on a Raspberry Pi, but the one I chose was Raspbian, because it's the "official" operating system for the Pi and widely used.

The first thing to do was download a copy of Raspbian and flash it onto my SD card. I used an OSX utility called Etcher for this. In no time I'd booted the Raspberry Pi with everything connected and was welcomed by the Raspbian desktop and config utility.

From here setup was a breeze, with one exception: my mouse pointer was lagging and jittering all over the place. A quick search on Reddit revealed that I needed to add a parameter to the /boot/cmdline.txt config file. This is one of the many places in linux where you can tweak settings applied as the computer starts up. Note that cmdline.txt keeps all of its parameters on a single line, so anything new gets appended to the end of that line. This issue is likely one that'll be fixed in later versions of Raspbian. I didn't encounter too many other issues apart from this, with the exception of some wifi glitches that were fixed in a subsequent Raspbian release, so I won't cover them here.
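For what it's worth, the parameter most widely suggested for this particular mouse lag is usbhid.mousepoll, appended to the end of the existing line in /boot/cmdline.txt:

```
usbhid.mousepoll=8
```

The value is the USB mouse polling interval in milliseconds - 8 is the commonly recommended setting.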

Processing and Performance

With that done, it was time to install Processing and test its performance by playing through some of my existing sketches created on the Mac. In recent versions of Raspbian, Processing comes pre-installed, which is great. But, in case it doesn't, here's a link to info about where to find the ARM-specific version and how to install it:

At this point, starting my Pi took about 10-15 seconds, after which it secured itself an IP address and presented itself on the network. I was able to connect to it in the OSX Finder, simply drag and drop my sketches into a folder there, and run them directly on the Pi by launching the pre-installed version of Processing. That's when I first became aware of one of the chief differences between the Pi and a Mac Mini, namely CPU performance.

My Mac Mini [late 2014] has a dual-core Intel Core i5 running at 2.8GHz. So it's not exactly bleeding edge, but it's got enough snot to chew through a lot of Processing draw calls, especially at the resolution I'd been prototyping my sketches at, ie: 400x800. Particle simulations run pretty smoothly, computational fluid dynamics solves run well enough to look like fluids, etc. I was able to use Processing's accelerated 2D and 3D renderers to speed up some effects too. I could afford to instance a transparent circular gradient .png file as fire particles at high enough sizes and densities to look pretty cool.

The Raspberry Pi 3B+, however, has significantly less hardware capability. Although it's got a 1.4GHz quad-core ARM Cortex-A53 CPU, it's a very different processor: it draws a lot less power, creates a lot less heat and, more importantly, simply doesn't crunch the same number of Processing draw calls I was used to. I'd been spoilt by prototyping my lavalamp modes on the Mac Mini and was forced to make some economies and trade-offs to achieve the performance I wanted in my lamp.

I'll briefly go through the choices I made to combat this, and then we'll move on to the FadeCandy installation:


Processing on ARM architectures doesn't currently support quite the same levels of 2D and 3D acceleration, so sketches that were using those features ran incredibly slowly [granted, I was only running one 3D-accelerated sketch, which I then abandoned]. But I was using 2D acceleration for drawing particles in some of the particle-intensive sketches. This meant that I needed to make some changes like:
  • Disable the 2D acceleration - on the Mac, depending on the sketch, it was adding maybe a 25-30% performance gain over NOT using it, but on the Pi it was hurting more than helping. 
  • Lower the overall sketch resolution, which was ok because, let's face it, technically speaking the sample resolution of the array was low to begin with. 8x15 LEDs is not exactly high-def, and 400x800px is a little rich. Sketches now became more like 100x200px which, as we know from the MegaPixel Wars in camera sensor resolutions, is a non-linear improvement - a quarter of the size in each dimension means a sixteenth of the pixels to draw. 
  • Lower the number of particles I was drawing. 
  • Change the particle itself from a transparent gradient .png file to a simple flat transparent circle() in the draw function and use other techniques to regain the softness I'd previously had. 
  • In other sketches where I needed to draw lines or other shapes, simply lower the number of line() calls, make them fatter, etc. 
  • If I needed to achieve an animated wipe or transition, I considered using a single large image and transforming it, rather than computing its appearance procedurally [in fact, this is how I achieve my startup array test look]. 
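To put some numbers on the resolution trade-off in the list above [the sizes are just the ones mentioned]:

```python
# Pixels drawn per frame at the prototype vs. the Pi-friendly resolution
mac_pixels = 400 * 800   # prototype resolution on the Mac Mini
pi_pixels = 100 * 200    # shrunk resolution for the Pi
leds = 8 * 15            # physical sample points on the lamp itself

print(mac_pixels // pi_pixels)  # 16x fewer pixels to draw per frame
print(pi_pixels // leds)        # still ~166 sketch pixels per LED
```

So even at a sixteenth of the draw work, every LED still has plenty of sketch pixels behind it.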
With most sketches getting one or more of the adjustments listed above, I regained about 50% of the performance I used to have on the Mac Mini, which was good enough. Some of the computational fluid dynamic sketches required deeper digging to optimise for the Pi as they had more complicated compute functions as well as requiring drawing many particles.

I also found it helpful to specify a target frameRate() and work within that constraint until I'd achieved the speed and look I needed. By default Processing sketches target 60fps, and if the draw can't keep up they'll just run as fast as they can - which may not be a good place to be! The look of your sketch may be affected by its compute, so it's important to know what knobs to tweak to control appearance.
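Another way to think about a target frame rate is as a per-frame time budget. A trivial helper makes the trade-off concrete:

```python
def frame_budget_ms(fps):
    """Time available for each draw() call at a given target frame rate."""
    return 1000.0 / fps

# frameRate(30) leaves roughly 33ms per frame to compute and draw;
# frameRate(60) cuts that budget in half.
```

If a frame's compute blows its budget, the sketch visibly slows or stutters, which is why locking a rate and optimising within it worked well for me.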

Even after shrinking and optimisation, my Simple Fire sketch can consume 83.2% CPU on the Mac Mini at 30fps.

Overall this phase was good. It forced me to economise and be efficient. It also made me work harder to achieve the look I was after and consider more difficult changes to some of the sketches I was working with.

FadeCandy has a server/client architecture such that you can run the server on a main computer housed somewhere central, and then multiple small FadeCandy units can connect to that server to control LED strips where you need them. The server is very lightweight. All you really need to do is run it at startup and forget about it.

Here's an excellent guide for getting FadeCandy set up [on OSX] running an 8x8 array of addressable LEDs, which also covers some wiring and power requirements:

But if you want to run standalone on a Pi, you need to download and build a version of FadeCandy that'll run on ARM - this legendary Adafruit guide has all those steps and more, including running the FadeCandy server at startup -

Note: in the guide I just mentioned, I stopped before creating the fcserver.json config file, as I'm running a single FadeCandy board and didn't need to configure addressing for the 3 boards used in that example.

The FadeCandy server page has some useful utilities for testing your LED strips etc and will also show you the serial number of the FadeCandy board you have once it's detected ie: plugged in via USB.

Processing talks FadeCandy

How does Processing interface with the FadeCandy server though? How do you even know what is being sent to your LED array from your sketch? Good question. This is where the OPC class comes in.

The FadeCandy installation ships with some example files created by Micah that contain her Open Pixel Control [OPC] java class for Processing. This is a suite of methods for telling the FadeCandy server where an LED [or multiple LEDs] is positioned relative to the sketch window. It has calls that allow you to mirror your physical LED array in Processing so you can accurately gauge how to achieve the visual effects you need. You then use calls like ledStrip() and ledGrid() to construct a sort of sample array whose points query the colour of the pixels under them and send that info, per frame, to the FadeCandy board over USB and on to your actual array.

Here's what the OPC commands look like for my array:
opc = new OPC(this, "", 7890);
opc.ledGrid(0, 15, 4, width*0.25, height/2, height/15, width/8, 4.712, true);
opc.ledGrid(64, 15, 4, width*0.75, height/2, height/15, width/8, 4.712, true);

The first ledGrid() command arranges and fills the first FadeCandy output channel of 64 LEDs, and the second ledGrid() call, starting at pixel index 64, fills the second channel with the remaining LEDs. The '4.712' is the rotation amount in radians [roughly 3π/2, or 270°] required to get the sample array mapped in a vertical orientation to match my LEDs wrapped around a cylinder.
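To make the mapping concrete, here's a small python re-implementation of the grid layout idea. This is not the actual OPC class - it ignores the zigzag flag and the USB side entirely, and just generates the rotated sample-point positions:

```python
import math

def led_grid(index, strip_length, num_strips, x, y,
             led_spacing, strip_spacing, angle):
    """Return (pixel_index, x, y) sample points for a grid of LED strips,
    rotated by `angle` radians about the grid centre at (x, y)."""
    points = []
    s, c = math.sin(angle), math.cos(angle)
    for strip in range(num_strips):
        for led in range(strip_length):
            # Position relative to the grid centre before rotation
            px = (led - (strip_length - 1) / 2.0) * led_spacing
            py = (strip - (num_strips - 1) / 2.0) * strip_spacing
            # Rotate, then translate onto the grid centre
            points.append((index + strip * strip_length + led,
                           x + px * c - py * s,
                           y + px * s + py * c))
    return points
```

Calling led_grid(0, 15, 4, 100, 100, 13, 12, 4.712) [spacings made up for illustration] yields the 60 sample points for one bank of outputs, rotated so the strips run vertically.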

Here's a couple of visual examples of the OPC dot sample array I'm talking about:

On the left, Micah's wavefronts example sketch at its original size, on the right, my optimised and shrunk particle fire effect.

You can see in these two sketches the result of the OPC library calls placing white dots in the sketch window that mimic the physical array of my lavalamp LED layout so I can see just where the colours of the sketch will be sampled and displayed externally. Of course, the white dots are not sampled [otherwise the LEDs would just display white!], just the colours underneath them.

So this means you actually don't really need to run a sketch in a large window, as there's a lot of wasted draw() work that never reaches an LED - there are big gaps between the dots. So, for an application like mine, I can make the sketch smaller and use a filtering function like filter(BLUR, 2) to handle smoothing the edges of moving shapes.
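Processing's filter(BLUR, 2) runs a Gaussian blur over the frame. As a toy illustration of why blurring a small buffer helps when you sample it sparsely, here's a naive box blur in python [not what Processing does internally, just the same idea]:

```python
def box_blur(grid):
    """Each cell becomes the average of itself and its in-bounds
    neighbours - a crude stand-in for Processing's filter(BLUR)."""
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [grid[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out
```

A hard-edged bright pixel gets spread across its neighbours, so a sparse grid of sample dots picks up a smooth gradient rather than an on/off edge as shapes move past.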

Startup, Shutdown and Sketch Changing
One of the realities of using a full linux computer [or most others for that matter] as a lighting fixture is that you can't simply unplug it when you want some peace. Computers have file systems that may be busy writing some important information when you cut the power, and this can corrupt the file system, potentially stopping your computer from even starting up. I had little choice but to make a nice way to shut the Pi down when I wanted to turn it off. And without a mouse, keyboard and monitor connected to it, how can this be done?

I needed a physical button. Well, two actually. One that could tell the Pi when to shutdown, and one to tell the Pi when to switch the current Processing sketch. Thankfully the Pi has GPIO header pins [general purpose input/output] that permit connecting many things up etc. This means with a little browsing at your local electronics store, a simple momentary switch can be used to send these signals.

GPIO pins on my Pi with two switches and a status LED connected.
After reading a bit I followed the instructions in this link to hook up my startup/shutdown button:

This link also includes the instructions for getting this small python script to run when the system starts up automatically. This is really handy and I thought I could simply reuse this script launching facility to start the python script that would listen for my second button pushes to make the lavalamp sketch change. However, this was not to be - more on that shortly.

The setup in the link above requires making a small python script that uses a python GPIO library designed to sense voltage changes on GPIO pins and turn those into the system command that shuts down the Pi. You also make a shell script that will run this listening python script upon booting up the Pi. They're both small and lightweight - nice and simple.
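I won't reproduce the guide's script here, but the shape of it is easy to sketch. The version below is a hardware-free illustration of the idea - the pin read is abstracted into a callable, and the RPi.GPIO wiring in the comment [including pin 21] is purely illustrative:

```python
import time

def watch_button(read_pin, on_press, poll_interval=0.1):
    """Poll read_pin() until it returns 0/False [button pressed, assuming a
    pull-up resistor and a switch wired to ground], then fire on_press()."""
    while read_pin():
        time.sleep(poll_interval)
    on_press()

# On the Pi itself the callables would be the real thing, e.g.:
#   import RPi.GPIO as GPIO, subprocess
#   GPIO.setmode(GPIO.BCM)
#   GPIO.setup(21, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # 21 is hypothetical
#   watch_button(lambda: GPIO.input(21),
#                lambda: subprocess.call(["sudo", "shutdown", "-h", "now"]))
```

The whole job is just "wait for a voltage change, run one command", which is why such a tiny script can live happily in the background from boot.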

When you do turn off the Raspberry Pi, it takes about 10 seconds to halt all processes and get into a low power state where it's safe to cut the main power. Before it does that it flashes the TxD LED that is soldered onto the main board along with some other lights. More info on those lights here. It'll flash that light 10 times quickly, after which it's safe to cut the power.

I chose to extend that status light to the outside of my Pi enclosure following another guide here:

Here you can see on the rear of my lamp base, the extended status LED and the startup/shutdown momentary switch.
I extended the other switch - another identical momentary switch - to the front of the housing as the sketch change switch. Like I said, I thought I could simply use the same script launching system as the startup/shutdown switch, but no luck. For some reason it wouldn't work, and it took some digging to discover why.

I wanted to have another python script running that would again just listen for voltage changes, this time on another set of GPIO pins. This script would kill any running sketch-related processes, iterate through the string array of available sketch paths, select the next logical one in the list and launch it. But every time I tried it, I couldn't even get it to launch the first sketch in the array. I tried about four different schemes for launching python scripts at startup, assuming the launch scheme itself was what was failing. It was a little infuriating.

What was actually failing was this: the script launch schemes I was using were not designed to launch interactive applications like Processing. They were designed to launch small utility scripts at linux run-levels beneath the user-space, interactive level where Processing could be initiated. That's why the shutdown script worked - it wasn't ever trying to start a complicated graphical program, it only ever invoked the terminal shutdown command.

Once I understood this I found the information I needed. Here's where I read about that problem:

Then I had success. It was time to flesh out the main python script for managing the sketch list and which sketch played automatically on startup.

Speed and pre-compiling

If you've used Processing on a fast computer, it might seem pretty interactive when you launch a sketch from within Processing itself. But, trying this on the Pi, I found a much longer delay when launching, like more than 5 seconds in some instances, compared to my Mac Mini's 1-2 seconds. This would be a pretty laggy experience if you pushed the button on the front of the lamp and had to wait 5 seconds before something changed. I also didn't have a way to launch a Processing sketch from within the program interface via Python, so I would have had to write some master framework within one main sketch that contained all the sketches. That would become onerous to compile and add new modes to.

Processing offers a pre-compilation option [Export Application] where you can pack a sketch down into a standalone unix executable file compatible with your platform - in this case the Pi. When I tried this I discovered not only does it start up much faster on the Pi, but that I can control the launching of the sketch from the command line via python. Success!

By using the good old 'top' command while a pre-compiled sketch is running, I could see at a glance that the main CPU hog was java, and also that there were no other java processes running. This meant that to stop my sketch immediately I could simply execute the 'killall java' command to nuke it. This is a bit like using a hammer to squash a fly, I realise, but as this Pi is devoted to being my lavalamp and shouldn't be running any other java processes, it's fine to use this method.

So, with the following pieces in place:
  1. A method to launch a python script after hardware boot,
  2. A means to detect hardware button input,
  3. A means of starting and stopping Processing sketches via the command line,
  4. A speedy and self-contained sketch format,
I had everything needed to create a master python script to control switching between sketches:
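The script itself is in the repo, but the core idea can be sketched in a few lines. The sketch paths below are made up for illustration, and the button handling from earlier would call switch_sketch() on each press:

```python
import subprocess

# Hypothetical paths - each is a pre-compiled [exported] Processing sketch
SKETCHES = [
    "/home/pi/sketches/fire/fire",
    "/home/pi/sketches/wavefronts/wavefronts",
    "/home/pi/sketches/rings/rings",
]

def next_index(current, count):
    """Step to the next sketch, wrapping back to the first at the end."""
    return (current + 1) % count

def switch_sketch(current):
    """Kill the running sketch and launch the next one in the list."""
    # Blunt, but fine here: the only java process on this Pi is the sketch
    subprocess.call(["killall", "java"])
    current = next_index(current, len(SKETCHES))
    subprocess.Popen([SKETCHES[current]])
    return current
```

Because each sketch is a self-contained executable, launching and killing them from python is all the "mode switching" the lamp needs.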

It's pretty simple, and I'm sure someone reading it will find flaws and ways I could do it better. With most coding things there's always a better way.

Should you want to use what I've made, you can find this script and all the sketches in the GitHub repo. You're welcome to take it and modify it to suit your requirements.

Stretch Goals
Phew. That's a lot of stuff. Now that it's done, I've got a few stretch goals in mind. I reached my initial goals of making the lamp completely standalone and having switchable sketches triggered by a hardware button. While getting there it occurred to me that having a full linux computer running the lamp sitting on the local network meant that I could do some other tricks with it:

  • Have it serve a webpage over the local network so guests could control the lamp from their phone via animated gif buttons showing the different modes.
  • Employ a standalone voice recognition package to control the modes.
  • Run HomeBridge on the Pi to interface with some HomeKit devices via voice control.
  • Make more sketches that interact or reflect the status of other things ie: IFTTT integration.
  • Use my new found skills to build something like this -

Anyway, that's about it! Thanks for reading.