In 2014 I went to see the Digital Revolution show at the Barbican and was struck by a large installation called "The Treachery of Sanctuary." Participants stood in front of a huge screen that showed their shadows decomposing into flocks of birds, until finally, by flapping their hands, they could fly up off the screen. It reminded me of some experiments I did in 2012, giving people feathers in a "magic mirror", so I decided to film and write up that old project.

Working with video to do gesture recognition and real-time processing used to be a serious undertaking, involving computer science and maths and formulae on whiteboards. Before starting production on his 2002 film Minority Report, Steven Spielberg consulted an impressive array of experts in what you could call future technologies, and the resulting gestural interface designed by John Underkoffler of the MIT Media Lab was both an accurate prediction of and an inspiration for near-future user interfaces. Fast forward to 2006 and the Israeli company PrimeSense was successfully developing a system that projects a pattern of infra-red light and scans it with an infra-red camera to calculate the depth at each point - hence a depth camera. The Nintendo Wii was taking off with its motion-sensing Wiimote, and by 2008 Microsoft's Project Natal was coming together, attempting to create a true Minority Report style interface by combining the PrimeSense system with a regular camera and microphones and applying machine learning techniques for voice recognition and body part recognition. The resulting Kinect was a huge success, and large-scale consumer adoption kept prices low enough for it to become a very sophisticated bit of hobbyist kit. By 2012 the open source world had hacked together enough pieces for the great how-to book Making Things See.

So I had my Kinect (go get one! as I write this in Sept 2014 these formerly almost-military-grade bits of kit are going for £22 at Computer Exchange), and I had a wealth of libraries. I did waste a day on an overly ambitious attempt to get depth sensing working with my trusty Raspberry Pi (it was unclear at that stage whether anyone had managed it, although it is perhaps a solved problem now). Meanwhile at State we were building a global opinion network, and every product example we worked through at the time invariably involved Natalie Portman's performance in the film Black Swan. The film includes a powerful sequence where, as she dances the black swan, we see her partially transformed, with feathered limbs in place of her arms. How close, I wondered, could a general coder get to this kind of high-end special effect in 2012?

The video will show you how far I got in a day or so on a really fun little project. Skeleton tracking and video processing were startlingly easy compared to my expectations, but drawing nice feathers could clearly have benefited from several days more work!
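To give a flavour of how little code skeleton tracking takes, here is a minimal sketch along the lines of the Making Things See examples - not my original project code, and it assumes Processing with the SimpleOpenNI library (version 1.96, which drops the calibration pose older versions needed). It just draws a marker over one tracked hand; swap that marker for a feather sprite repeated along each arm and you have the bones of a magic mirror.

```java
// Minimal Processing + SimpleOpenNI sketch: mark a tracked hand on screen.
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();   // start the depth stream from the Kinect
  context.enableUser();    // start user / skeleton tracking
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);   // raw depth image as a backdrop

  for (int userId : context.getUsers()) {
    if (!context.isTrackingSkeleton(userId)) continue;

    // Joint positions come back in real-world millimetres...
    PVector hand = new PVector();
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, hand);

    // ...so convert to screen (projective) coordinates before drawing.
    PVector screenPos = new PVector();
    context.convertRealWorldToProjective(hand, screenPos);

    fill(255, 0, 0);
    ellipse(screenPos.x, screenPos.y, 30, 30);   // stand-in for a feather sprite
  }
}

// SimpleOpenNI calls this when a new person walks into view.
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}
```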

If you're interested in setting up your own experiments, but with the systems around in 2014 rather than 2012, I recommend Glen McPherson's post on How to setup Microsoft Kinect on Mac OS X 10.8 (Mountain Lion) to get yourself up and running (especially as he points to archived versions of the libraries; since Apple's acquisition of PrimeSense the OpenNI.org site has been shut down), and then the code samples in Making Things See should work. For my next work in this area, however, I will be adding the wonderfully precise Leap Motion controller, and the mobile sensor from Structure is intriguing. A step closer to Minority Report perhaps...