Thursday, December 13, 2018

My Digital Lavalamp - or "The MkI Epilepsy Generator" - Part 1

A three-part series wherein I discuss the problems and solutions I encountered during the production of my own interactive LED lava lamp display:


PHOTOSENSITIVE WARNING: READ BEFORE WATCHING A very small percentage of individuals may experience epileptic seizures when exposed to certain light patterns or flashing lights. Exposure to certain patterns or backgrounds on a computer screen may induce an epileptic seizure in these individuals. Certain conditions may induce previously undetected epileptic symptoms even in persons who have no history of prior seizures or epilepsy.



However, to get there from here, we'll have to go back a few steps...

Ion
-------------------------------------------------------------
I saw this product on a friend's desk at work: Ion - A Music Detecting Mood Light - Kickstarter.com and I wanted one.


But the Kickstarter campaign was long over, and the site it spawned to sell them after the campaign had shut down too. I couldn't find any in stock on eBay or anywhere else. I've owned lava lamps over the last 25 years and really like them, and this seemed like a really smart advancement. I was also looking for a project that would force me to delve into addressable LEDs, so I decided to make my own. What's better than a lava lamp that can be every lava lamp?

I started a massive Evernote file filled with questions about how to build it, what hardware to choose, what functions it could/should have etc. I consulted with friends and pooled ideas about different modes and what they thought might be cool etc. The original Ion lamp was capable of some pretty sublime colours, and also some cool interactive modes too:



I felt like I could see the individual LEDs themselves, and that I could improve on the resolution and offer a wider variety of modes including sound reaction, API connections to favourite services, IFTTT integration, voice recognition and, and... and... well, I'm still adding some of those now it's built.

One thing I haven't tried to replicate is the Ion's bluetooth connection to your phone. That's possible of course, but I had a lot to figure out and had better start with just controlling some lights first huh.

Fadecandy
-------------------------------------------------------------
I had to make a decision about how to control the LEDs. There are many ways to do this, but which was going to be the easiest and friendliest to my current coding capabilities? Plus, were there any which improved on the somewhat basic RGB output offered by off-the-shelf Arduino kits?

The answer is indeed yes, thanks to Micah Elizabeth Scott and her Fadecandy hardware board. She has been crafting displays for annual trips to the Burning Man festival, amongst other art installations and interactive experiments. As she shows on her site, most normal LED controllers fall into a trough of sadness when it comes to blending hues together or displaying correct colouring at low light levels. She created the Fadecandy hardware to solve these issues.

She partnered with Adafruit to turn the Fadecandy board into a small and affordable form factor unit that can control a metric crap-tonne of LEDs for really, really big joy-filled displays.


Better yet, it can be controlled via USB from big computers and from small, embeddable computers like the Raspberry Pi. And it interfaces directly with Processing, which I've already experimented with. Processing is a great platform for programming generative art that can accept inputs including music, sound and sensors. It's used by all sorts of creative people for interactive art installations, live music, large-scale projections, small embedded pieces etc. Processing is also available for the Raspberry Pi, thus opening the door to my small-scale needs.
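To give a sense of how lightweight that combination is, here's roughly the smallest useful Fadecandy sketch I can think of. Treat it as a sketch of the idea only: it assumes the stock OPC.pde helper class from the Fadecandy examples is sitting in the sketch folder, and that fcserver is running locally on its default port 7890.

OPC opc;

void setup() {
  size(640, 360);
  // Connect to the local Fadecandy server [fcserver] on its default port
  opc = new OPC(this, "127.0.0.1", 7890);
  // Map a single 60-LED strip across the middle of the sketch window
  opc.ledStrip(0, 60, width/2, height/2, width / 70.0, 0, false);
}

void draw() {
  // Whatever gets drawn here is sampled at the mapped LED positions and
  // streamed out to the strip automatically, every frame
  background(0);
  noStroke();
  fill(255, 80, 0);
  float x = map(sin(millis() * 0.001), -1, 1, 0, width);
  ellipse(x, height/2, 150, 150);
}

Draw anything in the window and the LEDs mapped underneath it follow along. That's the whole trick.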

Get some LEDs
-------------------------------------------------------------
Where better to get addressable LEDs than Adafruit themselves? Actually... shh, Amazon had the very same lights with waaaay cheaper shipping, so after some rough calculations I ordered two 1-metre strips of 60 weatherproof NeoPixel RGB LEDs. I also ordered a 5-volt, 10-amp switching power supply [to handle NZ's 240v mains power] and one Fadecandy board.

I decided that 8 vertical columns of 15 lights, wrapped around a cylinder, should provide a suitable height and LED density to improve upon the Ion lamp's resolution. I also had to figure out how to reproduce their diffuser solution; their page talked about prototypes designed to make the individual LEDs blend together into a whole. More on that later.

The first thing to do was to get the Fadecandy board powered and connected to a computer in order to test my LED strips to make sure there were no dead LEDs. They're relatively hardy but sometimes die during shipping.
Success! No dead NeoPixels.
At this point I should express a note of gratitude to all the people on the internet who have written about their crazy projects and shared advice and tips on how to do stuff like this. This page in particular is excellent and offers a large amount of information for getting started.

Micah offers many example Processing sketches designed to run directly on Fadecandy once you have the Fadecandy server up and your board connected. Here's Jamie testing the mouse-driven interaction:


Another example sketch gives you the ability to push a bitmap through the sample points that are sent to the Fadecandy board and thus onto the LEDs. Here's what a simple bitmap of fire can look like via that method:


It's quite effective. The colours are excellent and the brightness can be overwhelming at times. This is a powerful way to manipulate the light array because it means you don't have to be an expert programmer: you can realise cool effects using nothing more than things you can make in Photoshop. The gist of the approach is sketched below the image.

You can see the weird-ass bitmap I made in this example scrolling through the array sample points. These were meant to look like mini nukes.
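If you want to try the bitmap approach yourself, it only takes a handful of lines. Again this is just a sketch of the idea: it assumes an image of your own in the sketch's data folder [the filename below is made up], the same OPC.pde helper class as before, and a single 60-LED strip mapped across the window.

OPC opc;
PImage im;

void setup() {
  size(640, 360);
  // Any bitmap will do: fire, mini nukes, whatever you can paint in Photoshop
  im = loadImage("myFireTexture.jpg");   // hypothetical filename
  opc = new OPC(this, "127.0.0.1", 7890);
  opc.ledStrip(0, 60, width/2, height/2, width / 70.0, 0, false);
}

void draw() {
  // Scroll the bitmap vertically past the LED sample points, drawing two
  // copies so it wraps around and the motion never stops
  float speed = 0.05;
  float y = (millis() * speed) % height;
  image(im, 0, y - height, width, height);
  image(im, 0, y, width, height);
}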
With these simple examples working, and the creative power clearly accessible, it was time to form the complete array that would end up wrapped around the central cylinder in my lamp.

As detailed in the Adafruit NeoPixel Curtain example, I spent some time mapping and designing the separation between the power requirements of the array and its data inputs. Each Fadecandy board offers 8 data outputs, each of which can drive up to 64 NeoPixel LEDs. I planned to drive 120 LEDs split into two strips, so it was easy enough to simply drive each strip from its own Fadecandy channel and not be too concerned that I wasn't using all the capacity of each channel. This did make for some funky OPC mapping that we'll get to soon.
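The basic channel-to-index relationship is simple enough: each Fadecandy output channel owns a fixed block of 64 OPC pixel indices, so channel 0 covers indices 0-63, channel 1 covers 64-127, and so on. Even though each of my strips only uses 60 of its channel's 64 slots, the second strip's mapping still has to start at index 64. As a rough illustration [using the simpler opc.ledStrip call and made-up screen positions, not my final cylinder mapping]:

// Each Fadecandy channel owns a block of 64 OPC indices:
//   channel 0 -> indices 0..63, channel 1 -> indices 64..127, and so on
opc.ledStrip(0,  60, width/2, height*0.33, width / 70.0, 0, false);  // strip one, on channel 0
opc.ledStrip(64, 60, width/2, height*0.66, width / 70.0, 0, false);  // strip two, on channel 1 [indices 60..63 sit unused]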

The next step involved soldering, which I can do but have never been great at. Nothing better than a reason to practise.

Here's the array layout with my hand drawn power and data connections. You can see the arrows indicating the direction of the serial data that tells each LED what colour to display. I had to wire and solder all the missing parts at the end to complete the flow:


Enter the Vase
-------------------------------------------------------------
I should mention at this point that I'd decided on the length of the strips according to the final form I intended to deploy the strips into. I wanted to mimic the Ion lamp styling and presumptuously assumed a local homeware shop [Briscoes] would simply have cylindrical glass vases that might suit the task. They did! Here's a pic of the vase, upside down on a temporary wooden plate with a central core of PVC tube from the hardware shop:


My plan was to complete the wiring with the array laid out flat, then transfer it to the PVC tube and resolve the rest of the wiring loom issues as I went. I also gambled on being able to solve the diffuser issue down the line, since I could simply remove the vase and line it with some acrylic once the array was working correctly. I still had no concept of the final installed form at this point, much less what small single-board computer to run it off. I thought: who cares if I only ever have it running connected to my computer while I'm dorking about?

After a busy few hours of soldering and insulating, I had the strips connected and ready to test. Although a few of my solder joints failed [embarrassedFace.jpg], thankfully I hadn't made any short circuits and my array successfully lit up.


Having got this far, I could not resist testing some more complicated array graphics and modes.

Yeah I know, my desk is a mess.
I quickly ran into some issues to do with how I'd chosen to lay out my array versus how the Fadecandy board and Processing expected things to be arranged.

Radians, and assumptions
-------------------------------------------------------------
As any Fadecandy enthusiast will know, a visit to the Fadecandy Google group will show that plenty of people want to lay out their LED arrays in different and sometimes challenging ways.

I'd made the assumption during my wiring stage that my horizontal layout could simply be rotated in the Processing sketch or OPC layer to be vertically oriented and wrapped around a cylinder. Here's an example of the indexing of the zigzag array I'd followed [this example matches an 8x8 NeoPixel grid but the result is similar for a 15x8 grid - just longer on one side]:


You can see that the data input for the array enters at the top left at index 0, continues along the top row to the right-hand end, where it zigs [or zags?] down onto the next row, this time running in reverse order back to the left-hand end, then zags down again onto the next row, this time in the original order, and so on until the end of the layout.
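If you prefer to see that serpentine path as code, the strip index of the LED at a given row and column works out to something like this [a generic helper of my own for illustration, not part of the OPC library]:

// Strip index of the LED at [row, col] in a zigzag layout that starts at the
// top left and snakes right, then left, then right again...
int zigzagIndex(int row, int col, int cols) {
  if (row % 2 == 0) {
    return row * cols + col;               // even rows run left to right
  } else {
    return row * cols + (cols - 1 - col);  // odd rows run right to left
  }
}

// e.g. in an 8x8 grid, zigzagIndex(1, 0, 8) returns 15: the left-hand end of
// the second row, exactly where the data line arrives after its first zag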

It's vitally important that the Fadecandy understands the intention of this layout so that it knows how to take the sample points in the Processing sketch and convert the resulting pixel colour information into the correct spatial information when it's sent serially down the data pin connection of the LED strip.

My problem was that I actually required a layout more like the following image, and I'd assumed that, being the VFX artist I am, I could simply specify a 90-degree rotation to achieve the correct result:


I needed this layout so that all my input power and data wiring would sit near the bottom of the array, and so that the longest dimension [not represented properly here] would be vertical, matching the physical orientation of my lava lamp once the array was wrapped around the PVC cylinder.

Although I had success with running sketches on the array as I'd wired it horizontally, if I simply rotated my LEDs 90 degrees it meant that a graphic element running in the Processing sketch from top left to top right would run from bottom left to top left on the array. Gadzooks. This throws a spanner in the works of some of my ideas for things based on physics, like sketches that use a gravity direction meant to mimic the real world.

I had no clear place in the Processing sketch to rotate the output, and it was not immediately obvious where else I could effect this change.

It's beautiful. But it's horizontal. This will not do.
After some forum diving and Fadecandy Google group spelunking, I hit on this thread in particular, about a person with a non-standard NeoPixel layout who needed to perform a similar rotation-based remapping operation. They used a rotation value specified in radians, in conjunction with some other value-swapping kung-fu, to get the correct orientation.

If you get on down to building a Fadecandy project yourself, you're going to run into the problem of which OPC library call to use to map your Processing sketch out to your array. I settled on using two opc.ledGrid calls, the syntax for which looks like the following:

opc.ledGrid( index, stripLength, numStrips, x, y, ledSpacing, stripSpacing, angle, zigzag, flip )
After some head scratching and monkeying around, I successfully remapped my sketch output to the array, via the two Fadecandy channels I was using, with the following calls:

opc.ledGrid( 0, 15, 4, width*0.25, height/2, height/15, width/8, 4.712, true )
opc.ledGrid( 64, 15, 4, width*0.75, height/2, height/15, width/8, 4.712, true )

The 4.712 value in the lines above is the radian specification for the rotation required, and it worked! It gave me sketch output in Processing that was oriented correctly for the cylinder arrangement.
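For anyone squinting at that magic number: 4.712 is simply 3π/2, i.e. 270 degrees expressed in radians. Processing's built-in radians() function makes the intent a bit clearer, so the first call could equally be written as:

opc.ledGrid( 0, 15, 4, width*0.25, height/2, height/15, width/8, radians(270), true )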

Success! Now I can move onto the physical installation, knowing that gravity points down. Duh.
With this problem solved I was pretty certain I could progress to the next stages of the design - the physical form factor - and also start considering some other issues: how could I add a button to the front of the lamp to change the sketch? What small computer could it run on?

While I considered those things, I also spent some time making sketches in Processing to run on the array, and experimented with designs from https://www.openprocessing.org where a great many people share their ideas with the world. The terms at OpenProcessing.org specify that any work uploaded or created on their site falls under a Creative Commons license unless specified otherwise. I've found many sketches there that run quite well in Processing locally. They require tuning and optimising to run on the lava lamp array, but that is quite fun.

So, that's it for Part 1. In Part 2 I'll discuss the computer platform choice and show the housing construction. In Part 3 I'll detail the steps and software I created to deploy what I'd built into a standalone unit with wifi where I can add new modes wirelessly. And a stretch goal...


Part 2 this way...

-j

Saturday, December 8, 2018

FujiFilm X100F update

So after shooting happily with the sublime FujiFilm X100S for the last 5 years or so, it's time for an update. Thankfully, Fuji have not sat idle in the interim. They have iterated steadily, producing the X100T and, in 2017, the X100F, which represents a significant jump in specs and capabilities.

I love the way the X100S produces images, from the combination of its controls and lens to the friendly portability and styling that impart a relaxed mood to capturing scenes. It's just so much fun to have around. Then in Lightroom, the grading possibilities and the way that X-Trans sensor files sharpen make me look forward to processing images from trips abroad.

Deer - Nara Park - Japan
The X100F delivers a huge improvement in speed, battery life, ISO and file resolution, marking a significant commitment from Fuji to keep delivering on what makes the X100 series so delightful. I couldn't resist dressing up my new X100F in all the latest hipster accoutrements:

Thanks in particular to Gordy's Camera Straps for the custom leather and binding.
The only fauxhemian element left to arrive is a leather Gariz half-case, to replace the Fuji brown leather travel case I had on the X100S. Also, my amazing wife is getting me the TCL-X100 tele-conversion lens for Christmas, which will afford me a 50mm equivalent focal length. This will be a great travel and portrait option where the 35mm equivalent fixed lens is a little wide.

Merry Christmas and happy shooting!

-julian

Thursday, October 27, 2016

Oculus Touch early access


Oculus have very generously sent me Touch kits for use in developing the hand interactions in the Untouched Forest. I'm really stoked to have their support in what is turning out to be a really interesting project. Oculus Touch won't be officially available and shipping until December this year; however, pre-orders are available here: https://www3.oculus.com/en-us/rift/

In short, Touch is fantastic. Its capacitive sensing and haptic feedback allow for the detection of hand gestures as well as feedback about objects the player interacts with. It also maps pretty much 1:1 with the controls on HTC's Vive hand controllers so, thanks to the generous support of Valve, almost any SteamVR title is natively supported. I find them very comfortable and intuitive to use and have begun Unity integration and experimentation.

Google's Tilt Brush is an amazing app to use. I'm stunned by the user interaction in the form of wrist-mounted controls, and I've begun sculpting crazy light pieces that I dreamed of creating when I was a child:


Massive thanks to Callum Underwood and Andres Hernandez [Cybereality] at Oculus for helping me out and giving me this fantastic set of tools! And thank you to all the engineers and developers there for pushing so hard to get this into the hands of devs all over.

-julian

Tuesday, September 13, 2016

Your foster parents are dead



Yeah ok so, bad title I know. But seriously, remember this moment above from Terminator 2: Judgment Day? Click here to watch: https://www.youtube.com/watch?v=MT_u9Rurrqg

Well, it looks like the speech synthesis component of that scene has arrived. WaveNet - A generative model for raw audio - looks like it has massively closed the gap between computer speech synthesis and human speech. I won't attempt to summarise the whole article but, in short, far more natural-sounding computer speech [and in fact almost any audio, including music] has arrived. The implications are... unnerving.


With the previous technology leader 'Concatenative' in the light pink on the far left in each graph, and human speech in green on the right, you can see where WaveNet now falls. Listen to the results yourself in the midst of the article.

This means that all the devices and smart assistants that speak to you and me today [Siri, Amazon Echo, Cortana, turn-by-turn GPS navigation etc] are not only going to sound ever more convincing, but the potential for mimicry of voice actors, politicians and people who are no longer around [provided we have enough samples of their speech] will go through the roof.

Mimicking long-dead artists' work is one facet of neural-net tech; this is another.

Incidentally, in that same article are some amazing [and frightening] piano music examples. I think the results are maybe physically impossible to play. They are interesting in a somewhat schizophrenic fashion.

-j


Saturday, August 20, 2016

Welcome to the Untouched Forest

I've begun a new VR project entitled Untouched Forest. It's a piece of native New Zealand forest where you can experience flora and fauna in an interactive way. I'll be exploring player/character interactions in a relaxing virtual environment. Click here to take a look: www.untouchedforest.com

from the site:

Spend some time in a NZ native forest environment as native bird life comes to visit you. Experience a night and day cycle with all the variation and appearance of creatures that has to offer. Use Oculus Touch to let birds come and land on your outstretched hands and enjoy their song. See glow worms and hold a weta in your hand. Sit and relax as day and night pass before you while you escape your normal surroundings.

More information on the site's blog here: www.untouchedforest.com/blog/

-julian

Tuesday, April 26, 2016

Google's Deep Dreaming, your favourite artists and YOU.

What if your favourite dead artists were still painting fresh works? Fresh works containing themes *you* specifically desired? Are you still sad that Francis Bacon perished? Are you gutted that H. R. Giger fell down some stairs and died? Isn't it sad that we don't have more of Gustav Klimt's stunning paintings from his gold phase? I think so.

But here are some paintings they never made:




What is this sorcery? We've entered a new age. To explain a little...

Google's Deep Dreaming neural net is completely melting my brain. First, there's what Google's brain-in-a-jar-with-eyeballs makes of images you feed it. Google researchers employed layers of artificial neurons, progressively working on different levels of an image's structure, letting the network amplify what it *thinks* it sees. The results seem to invariably involve dog-lizards, fish and bird life where previously there may have only been spaghetti:
Exhibit: A.
You can experiment with this marvellous craziness yourself here: deepdreamgenerator.com

This alone is worth toying with. For example this portrait of me holding a baking dish becomes something of a Dr Seuss trip, complete with fish-lizards, mutant turtle-birds and shirt monkeys. Click the images below for larger versions:



close up weirdness
This is obviously fantastic. Like, really? Are we at that point where a computation can spontaneously add human-meaningful elements to an image? I... I guess we are. For the longest time, computer vision and image synthesis have been perfunctory at best, suited only perhaps to picking objects off a conveyor belt robotically or extending tileable textures from photos. We've all witnessed and read about the arrival of face-tracking and matching technology, however, and now it's approaching an exciting tipping point. Computers are no longer limited to simply recognising faces; they're able to replace them believably in realtime. But I digress.

Extending Google's research, other parties have created more online tools where you can supply the guesses for what the deep dreaming algorithm sees, by giving it a source image to choose recognisable elements from. This is like saying 'Make me a new image from this photo in the style of this image'. For example:

Who doesn't like Van Gogh's Starry Night? 

Brian painted by Van Gogh?
I know what you're thinking. What if The Great Wave off Kanagawa was really about skulls instead of tsunamis? Well:

Me in the style of Duchamp's Nude Descending a Staircase? Yes.


Naum Gabo yeah yeah!
The main tool I'm currently using is an Android/iOS app called Pikazo: www.pikazoapp.com
This app allows you to upload your chosen combinations to the cloud, where the computation is performed. It is intensive and, as such, only a limited resolution is permitted [somewhere in the realm of 500px on the longest side], and each result takes roughly ten minutes to produce. You can currently upload up to 3 combos at a time, an obvious compute-load and bandwidth constraint.

I got a little carried away with this. There just seem to be so many cool new possibilities! To see my whole gallery of experiments, click here: www.julianbutler.com/Art/Deep-Dreaming/


I'm not sure what this means for art and originality. Obviously the combinations I've managed to produce could in no way be passed off as legitimate works by the original artists. But then, is the new work now 50% my contribution? According to copyright law and the internets, this may be the case. Everything is a remix, huh.

However, I think the strength of a successful image still lies equally in the concept behind the image and in its execution, and currently the computer isn't coming up with too many great artistic concepts on its own.

Yet.

-j

Stanley Donwood, I probably owe you a beer.