Many status-quo interfaces for tablets with pen + touch input capabilities force users to reach for device-centric UI widgets at fixed locations, rather than sensing and adapting to the user-centric posture.
To address this problem, we propose sensing techniques that transition between various nuances of mobile and stationary use via postural awareness. These postural nuances include shifting hand grips, varying screen angle and orientation, planting the palm while writing or sketching, and detecting what direction the hands approach from.
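As a rough illustration of what such postural awareness might look like in code, here is a minimal sketch, assuming a hypothetical SensorFrame abstraction over tilt, bezel-grip, and palm-contact signals. The class names, thresholds, and anchoring rules below are invented for the example, not taken from the paper or the video.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Posture(Enum):
    FLAT_ON_TABLE = auto()   # stationary: tablet lying flat, e.g. for writing
    PROPPED_UP = auto()      # stationary: angled like a drafting board
    HANDHELD = auto()        # mobile: gripped in one hand


class Side(Enum):
    LEFT = auto()
    RIGHT = auto()
    BOTTOM = auto()


@dataclass
class SensorFrame:
    """One snapshot of (hypothetical) raw signals available to the UI layer."""
    tilt_deg: float      # screen angle from horizontal, 0 = flat on the table
    grip_left: bool      # grip detected on the left bezel
    grip_right: bool     # grip detected on the right bezel
    palm_contact: bool   # large touch blob consistent with a planted palm


def classify_posture(frame: SensorFrame) -> Posture:
    """Very coarse posture classification from a single sensor frame."""
    if frame.grip_left or frame.grip_right:
        return Posture.HANDHELD
    if frame.tilt_deg < 15:
        return Posture.FLAT_ON_TABLE
    return Posture.PROPPED_UP


def choose_widget_anchor(frame: SensorFrame, posture: Posture) -> Side:
    """Decide where pen/tool widgets should appear for this posture."""
    if posture is Posture.HANDHELD:
        # Put tools under the thumb of the gripping hand.
        return Side.LEFT if frame.grip_left else Side.RIGHT
    if posture is Posture.FLAT_ON_TABLE and frame.palm_contact:
        # Writing posture: keep tools near the planted palm, out of the ink area.
        return Side.BOTTOM
    return Side.BOTTOM


if __name__ == "__main__":
    frame = SensorFrame(tilt_deg=5.0, grip_left=True, grip_right=False,
                        palm_contact=False)
    posture = classify_posture(frame)
    print(posture, choose_widget_anchor(frame, posture))
```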
The video demonstrates some genuinely useful techniques, but as always, the devil is not just in the details but also in the implementation. Nothing shown in the video seems particularly complicated to implement with current technology, yet UI elements that move around based on how you are holding or interacting with the device can be either wonderfully intuitive or downright infuriating.
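The obvious way to keep adaptive widgets from feeling jumpy is hysteresis: only commit to a new layout once the sensed posture has been stable for some time, and never while the user is actively touching or inking. A small sketch of that debouncing logic, with made-up timing values (nothing here is prescribed by the paper):

```python
import time
from typing import Optional


class StableAdapter:
    """Relocate UI only after the sensed posture has held steady for a while.

    `hold_seconds` and the interaction lock are illustrative choices,
    not anything taken from the paper.
    """

    def __init__(self, hold_seconds: float = 1.5):
        self.hold_seconds = hold_seconds
        self.committed = None        # posture the UI currently reflects
        self.candidate = None        # posture we are considering switching to
        self.candidate_since = 0.0

    def update(self, sensed_posture, interacting: bool,
               now: Optional[float] = None):
        """Feed one classification; returns the posture the UI should show."""
        now = time.monotonic() if now is None else now

        if self.committed is None:
            self.committed = sensed_posture

        if sensed_posture == self.committed:
            self.candidate = None            # nothing to switch to
        elif sensed_posture != self.candidate:
            self.candidate = sensed_posture
            self.candidate_since = now       # restart the stability timer
        elif (not interacting
              and now - self.candidate_since >= self.hold_seconds):
            self.committed = sensed_posture  # stable long enough: commit
            self.candidate = None

        return self.committed
```

The one design choice worth insisting on is that nothing moves mid-interaction; the delay itself is a tunable trade-off between responsiveness and stability, and getting it wrong is exactly how you end up in infuriating territory.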
Is anyone else reminded of that personalized menus idea Microsoft Office tried and abandoned?
What incredible potential for building a confusing and infuriating UI!
Given that right now they can’t even keep a text suggestion from suddenly changing position, or disappearing outright, when I type one more letter, I have zero faith that they’ll get this right.
(Yes, I know the two “they”s are different. I don’t care.)