I recently went to a computing science seminar at Newcastle University with the title ‘Reality Remixed: Augmented Reality without Gloves and Glasses.’ It was given by David Kim, a PhD student in the Digital Interaction Group at Culture Lab and Microsoft Research Cambridge.

Here’s the abstract…

The idea of augmented reality has been around for almost two decades but hasn’t really set foot on our reality yet. Many sensing and display technologies that capture our surroundings and enhance our perception of the world, such as head-mounted displays and data gloves, are still very restricting and cumbersome to use. This talk presents some of the recent advances in augmented reality sensing and display technologies that have been developed at Microsoft Research in Cambridge.

Given the tech press coverage AR gets, much of what was presented may be familiar to those who follow the field, but it was good to hear first-hand from someone involved in developing these technologies, and there was something new (to me) shown at the end.

The broad aim of the research he described is to make AR interactions more natural, so that people are immediately immersed in a mixed reality environment without the technology getting in the way. That is perhaps in contrast to the current generation of AR, which is typically mediated through displays (screens, or maybe Google Glass sometime soon) that can degrade the view of reality and feel unnatural and restricting. They are designing devices and gestural interfaces that let people directly manipulate digital objects in the physical environment (you know, like in those movies!), so that AR feels more natural.

Another stated limitation of current AR is that while devices may be aware of their position and orientation, they don’t know much about the geometry of the world around them. Building better environmental sensing into devices allows the surroundings to be scanned quickly, so that digital objects can “know” about the real environment and its properties.
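To make the idea of environmental sensing a bit more concrete: a depth camera like the Kinect returns a per-pixel depth image, and back-projecting it through a pinhole camera model gives a 3D point cloud the device can reason about. Here’s a minimal sketch of that step (the intrinsics are hypothetical Kinect-like values of my own choosing, not anything from the talk):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into camera-space 3D points
    using a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # lateral offset grows with depth
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)  # shape (h, w, 3)
    return points[depth > 0]  # keep only pixels with a valid reading

# Hypothetical Kinect-like intrinsics for a 640x480 depth frame
depth = np.random.uniform(0.5, 4.0, size=(480, 640))
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (N, 3): geometry the device can now reason about
```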

I’m not doing this justice with my waffle, so here are some videos where you can see some examples in action…

KinectFusion

“Real-time 3D surface reconstruction system that creates a volumetric model of the environment for interactive use.” This is quite a long research video, but stick with it (until 3min50s) and you’ll see an impressive demonstration of a room being mapped in seconds, allowing digital particles to interact with it according to real-world physics.
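For flavour, here’s roughly what “volumetric model” means under the hood: KinectFusion represents the scene as a grid of voxels, each holding a truncated signed distance to the nearest surface, and fuses every incoming depth frame into that grid with a running weighted average. The sketch below is my own much-simplified rendition of that fusion step (the grid size, truncation distance, and intrinsics are illustrative; the real system also tracks the camera with ICP and runs the whole thing on the GPU):

```python
import numpy as np

TRUNC = 0.05  # truncation distance in metres (illustrative value)

def fuse_frame(tsdf, weight, voxel_centers, depth, pose, fx, fy, cx, cy):
    """Integrate one depth frame into a truncated signed distance volume.

    voxel_centers: (N, 3) world-space voxel positions
    depth: (H, W) depth image in metres; pose: 4x4 camera-to-world matrix.
    """
    h, w = depth.shape
    # Transform voxel centres into camera space and project into the image
    world_to_cam = np.linalg.inv(pose)
    pts = voxel_centers @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts[:, 2]
    valid = z > 0
    u = np.zeros(len(z), dtype=int)
    v = np.zeros(len(z), dtype=int)
    u[valid] = np.round(pts[valid, 0] * fx / z[valid] + cx).astype(int)
    v[valid] = np.round(pts[valid, 1] * fy / z[valid] + cy).astype(int)
    valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Signed distance along the viewing ray, truncated to [-TRUNC, TRUNC]
    d = depth[v[valid], u[valid]]
    sdf = np.clip(d - z[valid], -TRUNC, TRUNC)
    keep = sdf > -TRUNC  # skip voxels far behind the observed surface
    idx = np.where(valid)[0][keep]
    # Running weighted average across frames smooths out sensor noise
    tsdf[idx] = (tsdf[idx] * weight[idx] + sdf[keep]) / (weight[idx] + 1)
    weight[idx] += 1

# Toy volume: 32^3 voxels spanning a 1 m cube in front of the camera
g = np.linspace(0.0, 1.0, 32)
xs, ys, zs = np.meshgrid(g - 0.5, g - 0.5, g + 0.5, indexing="ij")
voxels = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
tsdf = np.full(len(voxels), TRUNC)
weight = np.zeros(len(voxels))
frame = np.full((480, 640), 1.0)  # fake flat wall 1 m from the camera
fuse_frame(tsdf, weight, voxels, frame, np.eye(4), 525.0, 525.0, 319.5, 239.5)
```

Averaging many noisy frames into the same volume is what lets a quick sweep of the camera produce a surface clean enough for particles to bounce off.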

HoloDesk

“Situated augmented reality display that enables direct spatial interaction with virtual objects without instrumenting the user.” Rather than projecting AR anywhere, this uses a dedicated desk display unit. While this may seem more restrictive than the usual mobile AR approach, I can see use cases where you’d want a planned interaction with AR, rather than spontaneously finding a need to use it anywhere.

Digits

“Wrist-worn gloveless hand tracker that fully reconstructs the hand pose and allows dexterous 3D interactions on-the-go.” No videos available (AFAIK) – you had to be there! Probably the most impressive technology and something I hadn’t seen before.

UPDATE: Digits video is now available…

Coming soon…

In the Q&A at the end, he hinted that current work is focused on decreasing the latency between scanning and (re)presenting, making the technology more mobile, and enabling remote collaboration in a mixed reality space. While that may still be speculative, I think it would address some of the barriers to using AR in education. That, and making it easier to create AR, not just consume it. I’ll save my thoughts on AR and education for another post…
