Saturday, December 8, 2018

FujiFilm X100F update

So after shooting happily with the sublime FujiFilm X100S for the last five years or so, it's time for an update. Thankfully, in the interim, Fuji have not sat idle. They have iterated steadily, producing the X100T and, in 2017, the X100F, which represents a significant jump in specs and capabilities.

I love the way the X100S produces images, from the combination of its controls and lens to the friendly portability and styling that impart a relaxed mood to capturing scenes. It's just so much fun to have around. Then in Lightroom, the grading possibilities and the way that X-Trans files sharpen make me look forward to processing images from trips abroad.

Deer - Nara Park - Japan
The X100F delivers a huge improvement in speed, battery life, high-ISO performance and file resolution, marking a significant commitment from Fuji to keep delivering on what makes the X100 series so delightful. I couldn't resist dressing up my new X100F in all the latest hipster accoutrements:

Thanks in particular to Gordy's Camera Straps for the custom leather and binding.
The only fauxhemian element left to arrive is a leather Gariz half-case, to replace the Fuji brown leather travel case I had on the X100S. Also, my amazing wife is getting me the TCL-X100 tele-conversion lens for Christmas, which will afford me a 50mm-equivalent focal length. This will be a great travel and portrait option where the 35mm-equivalent fixed lens is a little wide.

Merry Christmas and happy shooting!

-julian

Thursday, October 27, 2016

Oculus Touch early access


Oculus have very generously sent me Touch kits for use in developing the hand interactions in the Untouched Forest. I'm really stoked to have their support in what is turning out to be a really interesting project. Oculus Touch isn't officially available and shipping until December this year, but pre-orders are available here: https://www3.oculus.com/en-us/rift/

In short, Touch is fantastic. Its capacitive sensing and haptic feedback allow for the detection of hand gestures as well as feedback about objects the player interacts with. It also maps pretty much 1:1 onto the controls on HTC's Vive hand controllers so, thanks to the generous support of Valve, almost any SteamVR title works natively with it. I find the controllers very comfortable and intuitive to use, and I've begun Unity integration and experimentation.

Google's Tilt Brush is an amazing app to use. I'm stunned by the user interaction in the form of wrist-mounted controls, and I've begun sculpting the crazy light pieces I dreamed of creating when I was a child:


Massive thanks to Callum Underwood and Andres Hernandez [Cybereality] at Oculus for helping me out and giving me this fantastic set of tools! And thank you to all the engineers and developers there for pushing so hard to get this into the hands of devs all over.

-julian

Tuesday, September 13, 2016

Your foster parents are dead



Yeah, OK, so: bad title, I know. But seriously, remember this moment above from Terminator 2: Judgment Day? Click through to watch: https://www.youtube.com/watch?v=MT_u9Rurrqg

Well, it looks like the speech synthesis component of that scene has arrived. WaveNet, a generative model for raw audio, looks like it has massively closed the gap between computer speech synthesis and human speech. I won't attempt to summarise the whole article but, in short, far more natural-sounding computer speech [and in fact almost any audio, including music] has arrived. The implications are unnerving.
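To make "a generative model for raw audio" a little more concrete, here's a toy sketch of the core idea: an autoregressive loop that predicts the waveform one sample at a time and feeds each prediction back in as context. This is just an illustration of the sampling loop, not WaveNet itself; the stand-in "model" below is a made-up damped echo, whereas WaveNet's actual predictor is a deep stack of dilated causal convolutions.

```python
# Toy sketch only: what "generating raw audio one sample at a time" means.
# An autoregressive model predicts the next sample from the samples before it,
# then feeds that prediction back in as context for the following sample.
# The stand-in "model" here is just a decaying two-tap echo for illustration;
# WaveNet replaces it with a learned deep network.
import numpy as np

def generate(predict_next, seed, n_samples, context=256):
    audio = list(seed)
    for _ in range(n_samples):
        recent = np.array(audio[-context:])      # the samples generated so far
        audio.append(predict_next(recent))       # predict one sample, feed it back
    return np.array(audio)

def toy_model(recent):
    # a damped oscillator standing in for a learned network
    return 0.95 * recent[-1] - 0.5 * recent[-2]

wave = generate(toy_model, seed=[0.0, 1.0], n_samples=16000)  # ~1 second at 16 kHz
```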


With the previous technology leader, concatenative synthesis, in light pink on the far left of each graph, and human speech in green on the right, you can see where WaveNet now falls. Listen to the results for yourself partway through the article.

This means that all the devices and smart assistants speaking to you and me today [Siri, Amazon Echo, Cortana, turn-by-turn GPS navigation, etc.] are not only going to sound ever more convincing, but the potential for mimicry of voice actors, politicians and people no longer around, provided we have enough samples of their speech, will go through the roof.

Mimicking long-dead artists' work is one facet of neural-net tech; this is another.

Incidentally, in that same article are some amazing [and frightening] piano music examples. I think some of the results may be physically impossible to play. They are interesting in a somewhat schizophrenic fashion.

-j


Saturday, August 20, 2016

Welcome to the Untouched Forest

I've begun a new VR project entitled Untouched Forest. It's a piece of native New Zealand forest where you can experience flora and fauna in an interactive way. I'll be exploring player/character interactions in a relaxing virtual environment. Click here to take a look: www.untouchedforest.com

from the site:

Spend some time in a NZ native forest environment as native bird life comes to visit you. Experience a night and day cycle with all the variation and appearance of creatures that it has to offer. Use Oculus Touch to let birds come and land on your outstretched hands and enjoy their song. See glow worms and hold a weta in your hand. Sit and relax as day and night pass before you while you escape your normal surroundings.

More information on the site's blog here: www.untouchedforest.com/blog/

-julian

Tuesday, April 26, 2016

Google's Deep Dreaming, your favourite artists and YOU.

What if your favourite dead artists were still painting fresh works? Fresh works containing themes *you* specifically desired? Are you still sad that Francis Bacon perished? Are you gutted that H. R. Giger fell down some stairs and died? Isn't it sad that we don't have more of Gustav Klimt's stunning paintings from his golden phase? I think so.

But here are some paintings they never made:




What is this sorcery? We've entered a new age. To explain a little...

Google's Deep Dreaming neural net is completely melting my brain. First, there's what Google's brain-in-a-jar-with-eyeballs makes of images you feed it. Google researchers employed layers of artificial neurons, progressively working on different levels of an image's structure, and let the network amplify whatever it *thinks* it sees. The results seem to invariably involve dog-lizards, fish and bird life where previously there may have only been spaghetti:
Exhibit: A.
You can experiment with this marvellous craziness yourself here: deepdreamgenerator.com
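If you're curious how the sausage is made, the core trick is surprisingly small: instead of adjusting the network's weights, you adjust the *input image* by gradient ascent so that a chosen layer's activations get stronger. Here's a minimal PyTorch-flavoured sketch of that idea; it's an illustration only, not Google's actual implementation, and the network choice, layer index and step size are all assumptions for the demo.

```python
# Minimal Deep-Dream-style sketch (illustration only, not Google's code):
# gradient *ascent* on the input image so a chosen layer responds more strongly,
# i.e. the network amplifies whatever it already "thinks" it sees.
import torch
import torchvision.models as models

net = models.vgg16(pretrained=True).features.eval()
for p in net.parameters():
    p.requires_grad_(False)

LAYER = 20  # an arbitrary mid-level conv layer; purely an assumption for this demo

def deep_dream(image, steps=20, lr=0.05):
    image = image.clone().requires_grad_(True)        # [1, 3, H, W] tensor
    for _ in range(steps):
        x = image
        for i, layer in enumerate(net):
            x = layer(x)
            if i == LAYER:
                break
        loss = x.norm()                               # "how strongly does this layer fire?"
        loss.backward()
        with torch.no_grad():
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

# dreamed = deep_dream(preprocessed_photo)  # then un-normalise and save to view
```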

This alone is worth toying with. For example, this portrait of me holding a baking dish becomes something of a Dr Seuss trip, complete with fish-lizards, mutant turtle-birds and shirt monkeys. Click the images below for larger versions:



close up weirdness
This is obviously fantastic. Like, really? Are we at that point where a computation can spontaneously add human-meaningful elements to an image? I... I guess we are. For the longest time, computer vision and image synthesis have been perfunctory at best, suited only perhaps to picking objects off a conveyor belt robotically or extending tileable textures from photos, etc. We've all witnessed and read about the arrival of face-tracking and matching technology, however, and now it's approaching an exciting tipping point. Computers are no longer limited to simply recognising faces; they're able to replace them believably in real time. But I digress.

Extending Google's research, other parties have created more online tools where you supply the guesses for what the deep dreaming algorithm sees by giving it a second source image to pull recognisable elements from. This is like saying 'Make me a new image from this photo in the style of this image'. For example:

Who doesn't like Van Gogh's Starry Night? 

Brian painted by Van Gogh?
I know what you're thinking. What if The Great Wave off Kanagawa was really about skulls instead of tsunamis? Well:

Me in the style of Duchamp's Nude Descending a Staircase? Yes.


Naum Gabo yeah yeah!
The main tool I'm currently using is an Android/iOS app called Pikazo: www.pikazoapp.com
This app lets you upload your chosen combinations to the cloud, where the computation is performed. The computation is intensive, so only a limited resolution is permitted - somewhere in the realm of 500px on the longest side - and each result takes roughly ten minutes to produce. You can currently upload up to three combos at a time, an obvious compute-load and bandwidth constraint.
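Under the hood, tools like this are doing some flavour of what's usually called neural style transfer (after Gatys et al.): optimise a new image so its deep-layer activations match the content photo while its Gram-matrix feature statistics match the style image. Here's a similarly simplified PyTorch-flavoured sketch of that optimisation; the layer picks and weights are assumptions, and this is not Pikazo's actual pipeline.

```python
# Heavily simplified neural style transfer sketch (after Gatys et al.),
# not Pikazo's actual pipeline. We optimise a result image so that:
#   - its deep activations match the *content* photo, and
#   - its Gram matrices (feature correlations) match the *style* image.
import torch
import torchvision.models as models

vgg = models.vgg16(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYER = 21                      # layer picks are assumptions for this demo
STYLE_LAYERS = [0, 5, 10, 17, 24]

def features(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        feats[i] = x
    return feats

def gram(f):                            # feature correlations capture "style"
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def style_transfer(content, style, steps=200, style_weight=1e5):
    target = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    with torch.no_grad():               # reference features are fixed targets
        c_feats, s_feats = features(content), features(style)
    for _ in range(steps):
        t_feats = features(target)
        content_loss = (t_feats[CONTENT_LAYER] - c_feats[CONTENT_LAYER]).pow(2).mean()
        style_loss = sum((gram(t_feats[i]) - gram(s_feats[i])).pow(2).mean()
                         for i in STYLE_LAYERS)
        loss = content_loss + style_weight * style_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return target.detach()

# result = style_transfer(photo_tensor, painting_tensor)  # both [1, 3, H, W]
```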

I got a little carried away with this. There just seem to be so many cool new possibilities! To see my whole gallery of experiments, click here: www.julianbutler.com/Art/Deep-Dreaming/


I'm not sure what this means for art and originality. Obviously, the combinations I've managed to produce could in no way be passed off as legitimate works by the original artists. But then, is the new work now 50% my contribution? According to copyright law and the internets, this may be the case. Everything is a remix, huh.

However, I think the strength of a successful image still lies equally in the concept behind the image and in its execution, and currently the computer isn't coming up with too many great artistic concepts on its own.

Yet.

-j

Stanley Donwood, I probably owe you a beer.


Wednesday, April 6, 2016

Vive VR launch video

With both launch campaigns from Oculus and Valve/HTC in full swing, this Vive launch video really grabbed my attention. It's commonly acknowledged in the fledgling VR community that it's tough to convey what VR is like in words and that experiencing it is the best way to explain it to newcomers. This low-key mix of everyday people experiencing VR for the first time sells what virtual reality is like in a way that needs no words. Check it out:


Congrats to Valve and HTC for setting the tone. Onwards!

-j