Thursday, October 27, 2016

Oculus Touch early access


Oculus have very generously sent me Touch kits for use in developing the hand interactions in the Untouched Forest. I'm really stoked to have their support in what is turning out to be a really interesting project. Oculus Touch doesn't officially ship until December this year, but pre-orders are available here: https://www3.oculus.com/en-us/rift/

In short, Touch is fantastic. Its capacitive sensing and haptic feedback allow for the detection of hand gestures as well as feedback about objects the player interacts with. It also maps pretty much 1:1 with the controls on HTC's Vive hand controllers, so, thanks to the generous support of Valve, almost any SteamVR title is natively supported. I find them very comfortable and intuitive to use and have begun Unity integration and experimentation.
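
For the curious, here's roughly what that early Unity experimentation looks like. This is only a minimal sketch using the OVRInput API from the Oculus Utilities package, not anything from the Untouched Forest itself; the gesture threshold and vibration values are placeholder guesses.

```csharp
// Rough sketch only: read Touch's capacitive sensors to guess a "pointing"
// gesture and confirm the haptics fire. Assumes the Oculus Utilities for
// Unity package (OVRInput) and an OVRManager in the scene driving input.
// Threshold and vibration values are placeholder guesses, not tuned.
using UnityEngine;

public class TouchHandSketch : MonoBehaviour
{
    void Update()
    {
        OVRInput.Controller hand = OVRInput.Controller.RTouch;

        // Capacitive sensing: treat the index finger as "extended" if its
        // tip is nowhere near the trigger.
        bool indexExtended = !OVRInput.Get(OVRInput.NearTouch.PrimaryIndexTipTouch, hand);

        // Treat the grip as closed once the hand trigger is squeezed past a threshold.
        bool gripClosed = OVRInput.Get(OVRInput.Axis1D.PrimaryHandTrigger, hand) > 0.5f;

        bool pointing = indexExtended && gripClosed;

        // Haptic feedback: a light buzz while pointing, silence otherwise.
        if (pointing)
            OVRInput.SetControllerVibration(0.3f, 0.2f, hand);
        else
            OVRInput.SetControllerVibration(0f, 0f, hand);
    }
}
```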

Google's Tilt Brush is an amazing app to use. I'm stunned by the user interaction in the form of wrist-mounted controls, and I've begun sculpting the crazy light pieces I dreamed of creating when I was a child:


Massive thanks to Callum Underwood and Andres Hernandez [Cybereality] at Oculus for helping me out and giving me this fantastic set of tools! And thank you to all the engineers and developers there for pushing so hard to get this into the hands of devs all over.

-julian

Tuesday, September 13, 2016

Your foster parents are dead



Yeah ok so, bad title I know. But seriously, remember this moment from Terminator 2: Judgment Day? Watch it here: https://www.youtube.com/watch?v=MT_u9Rurrqg

Well, it looks like the speech synthesis component of that scene has arrived. WaveNet, a generative model for raw audio, looks to have massively closed the gap between computer speech synthesis and human speech. I won't attempt to summarise the whole article but, in short, far more natural-sounding computer speech [and in fact almost any audio, including music] has arrived. The implications are unnerving.


With the previous leading technique, concatenative synthesis, in light pink on the far left of each graph, and human speech in green on the right, you can see where WaveNet now falls. Listen to the results yourself partway through the article.

This means that all the devices and smart assistants speaking to you and me today [Siri, Amazon Echo, Cortana, turn-by-turn GPS navigation etc.] are not only going to sound ever more convincing, but the potential for mimicry of voice actors, politicians and people who are no longer around but whose speech we have enough samples of will go through the roof.

Mimicking long-dead artists' work is one facet of neural-net tech; this is another.

Incidentally, in that same article are some amazing [and frightening] piano music examples. I think some of the results may be physically impossible to play. They are interesting in a somewhat schizophrenic fashion.

-j


Saturday, August 20, 2016

Welcome to the Untouched Forest

I've begun a new VR project entitled Untouched Forest. It's a piece of native New Zealand forest where you can experience flora and fauna in an interactive way. I'll be exploring player/character interactions in a relaxing virtual environment. Click here to take a look: www.untouchedforest.com

from the site:

Spend some time in a NZ native forest environment as native bird life comes to visit you. Experience a night and day cycle with all the variation and appearance of creatures that has to offer. Use Oculus Touch to let birds come and land on your outstretched hands and enjoy their song. See glow worms and hold a weta in your hand. Sit and relax as day and night pass before you while you escape your normal surroundings.

More information on the site's blog here: www.untouchedforest.com/blog/

-julian

Tuesday, April 26, 2016

Google's Deep Dreaming, your favourite artists and YOU.

What if your favourite dead artists were still painting fresh works? Fresh works containing themes *you* specifically desired? Are you still sad that Francis Bacon perished? Are you gutted that H. R. Giger fell down some stairs and died? Isn't it sad that we don't have more of Gustav Klimt's stunning paintings from his gold phase? I think so.

But here are some paintings they never made:




What is this sorcery? We've entered a new age. To explain a little...

Google's Deep Dreaming neural net is completely melting my brain. First, there's what Google's brain-in-a-jar-with-eyeballs makes of images you feed it. Google researchers took the layers of artificial neurons in an image-recognition network, each responding to a different level of an image's structure, and let the network modify the image to amplify whatever it *thinks* it sees. The results seem to invariably involve dog-lizards, fish and bird life where previously there may have only been spaghetti:
Exhibit: A.
You can experiment with this marvellous craziness yourself here: deepdreamgenerator.com

This alone is worth toying with. For example, this portrait of me holding a baking dish becomes something of a Dr Seuss trip, complete with fish-lizards, mutant turtle-birds and shirt monkeys. Click the images below for larger versions:



close up weirdness
This is obviously fantastic. Like, really? Are we at that point where a computation can spontaneously add human-meaningful elements to an image? I... I guess we are. For the longest time computer vision and image synthesis have been perfunctory at best, suited perhaps only to robotically picking objects off a conveyor belt or extending tileable textures from photos. We've all witnessed and read about the arrival of face-tracking and matching technology, however, and it's now approaching an exciting tipping point. Computers can not only recognise faces, they can replace them believably in realtime. But I digress.

Extending Google's research, other parties have created online tools where you supply a second source image that tells the deep dreaming algorithm what to see, so it draws the elements it recognises from that image instead. It's like saying 'make me a new image from this photo, in the style of this image'. For example:

Who doesn't like Van Gogh's Starry Night? 

Brian painted by Van Gogh?
I know what you're thinking. What if The Great Wave off Kanagawa was really about skulls instead of tsunamis? Well:

Me in the style of Duchamp's Nude Descending a Staircase? Yes.


Naum Gabo yeah yeah!
The main tool I'm currently using is an Android/iOS app called Pikazo: www.pikazoapp.com
This app allows you to upload your chosen combinations to the cloud, where the computation is performed. It's intensive and, as such, only a limited resolution is permitted - somewhere in the realm of 500px on the longest side - and each image takes roughly ten minutes to produce. You can currently only queue up to 3 combos at a time, an obvious compute-load and bandwidth constraint.

I got a little carried away with this. There just seem to be so many cool new possibilities! To see my whole gallery of experiments, click here: www.julianbutler.com/Art/Deep-Dreaming/


I'm not sure what this means for art and originality. Obviously the combinations I've produced could never be passed off as legitimate works by the original artists. But then, is the new work now 50% my contribution? According to copyright law and the internets, that may be the case. Everything is a remix, huh.

However, I think the strength of a successful image still lies as much in the concept behind the image as in its execution, and currently the computer isn't coming up with too many great artistic concepts on its own.

Yet.

-j

Stanley Donwood, I probably owe you a beer.


Wednesday, April 6, 2016

Vive VR launch video

With both launch campaigns from Oculus and Valve/HTC in full swing, this Vive launch video really grabbed my attention. It's commonly acknowledged in the fledgling VR community that it's tough to convey what VR is like in words and that experiencing it is the best way to explain it to newcomers. This low-key mix of everyday people experiencing VR for the first time sells what virtual reality is like in a way that needs no words. Check it out:


Congrats to Valve and HTC for setting the tone. Onwards!

-j

Friday, March 25, 2016

It's Virtual Reality Christmas Eve!

It's been a long wait for people interested in getting their hands on the consumer VR hardware Oculus' founder Palmer Luckey first dreamt up on the MTBS3D forums four years or so ago. It's been an even longer wait for people [myself included] who were expecting it to take off in the mid-90s when The Lawnmower Man had us all thinking the shape of work to come was suspended leather couches and skin-tight glowing bodysuits. Thankfully that is not to be the case.

But, it's now finally time for this:


And the below gif singlehandedly sums up the mood of myself and the VR community:


Looks like I'll be waiting till late April/mid May for the delivery of my CV1, darnit. I was a day late to the Kickstarter for the DK1. Had I been a little snappier I'd hopefully be receiving my free CV1 this Monday, as shipping to the original Kickstarter backers has commenced!

Bring it on.

Sunday, March 8, 2015

Unity 5 realtime global illumination in VR

Well, after a pretty great response to the Sunshine demo [big thanks to Oculus for sharing it on the front page], I'm ready to start looking at the next project.

I'm a big fan of American artist James Turrell, and I wanted to make an experience reminiscent of his work where you can just experience the effect of light. That's kind of simplistic, I know, but his colours, spaces and projections look terrific to me and I'd love to go and see a show of his work or, even better, see his Roden Crater when it's finished.

Not Unity, but James Turrell's 'Blue Planet Sky'.
So, the first thing to try in Unity 5 is its built-in realtime global illumination system from Geomerics, called Enlighten. I wanted to illuminate a scene completely with only one light. Here's a clip showing my current demo:


I recorded this clip in VR mode and looked around using the Oculus Rift, so my apologies to those looking for a standard single-eye view. Regardless, you can see the effect of the sunlight bouncing around during the daytime phase and then the same effect during the moonlight phase.



Effects like this used to be possible only in offline rendering, with the indirect bounce-light calculations sometimes taking hours. With a little bit of up-front pre-computation [in this case about 45" on my computer] this lighting can play back at realtime speeds.
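
If you want to try something similar, the cycle itself is simple to drive once the precompute is done. Here's a minimal sketch, not my actual demo script, that just rotates a single directional 'sun' light and lets Enlighten's realtime GI do the rest; the rotation speed, intensity range and update throttle are placeholder assumptions.

```csharp
// Minimal sketch [not the actual demo script]: rotate one directional light
// to get a day/night cycle and let Unity 5's realtime GI (Enlighten) handle
// the bounced light. Assumes the scene geometry is marked static for GI and
// the light is set to Realtime in the inspector. Values are placeholders.
using UnityEngine;

public class SunCycleSketch : MonoBehaviour
{
    public Light sun;                    // the single light in the scene
    public float degreesPerSecond = 2f;  // one full cycle every three minutes

    float envTimer;

    void Update()
    {
        // Spin the light about the world X axis so it rises and sets.
        sun.transform.Rotate(Vector3.right, degreesPerSecond * Time.deltaTime, Space.World);

        // Dim it as it dips below the horizon so the night phase reads darker.
        // forward . down is 1.0 at "noon" and negative once the sun has set.
        float height = Vector3.Dot(sun.transform.forward, Vector3.down);
        sun.intensity = Mathf.Lerp(0.05f, 1.0f, Mathf.Clamp01(height));

        // Ambient light from the skybox only refreshes when asked; doing it
        // every frame is too heavy, so nudge it a couple of times per second.
        envTimer += Time.deltaTime;
        if (envTimer > 0.5f)
        {
            DynamicGI.UpdateEnvironment();
            envTimer = 0f;
        }
    }
}
```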

This image, from a mental ray render in Maya in 2003, took about an hour to render, if memory serves.
So it's a really exciting time to be making interactive 3D content - realtime GI is a major stepping stone towards realistic lighting that can immerse the viewer.

-j