As smartphones have grown in size over the years, using one single-handedly has gotten harder and harder. But with a user interface that adapts accordingly, such as dynamically repositioning buttons to the left or right edge of the screen, or shrinking the keyboard and aligning it to one side, using a smartphone with just one hand can be a lot easier. The challenge is getting a smartphone to automatically detect how it's being held and used, and that's what this team of researchers has figured out without requiring any additional hardware.
With a sufficient level of screen brightness and resolution, a smartphone's selfie camera can monitor a user's face staring at the display and use a CSI-style super zoom to focus on the screen's reflection in their pupils. It's a technique that's been used in visual effects to calculate and recreate the lighting around actors in a filmed shot that's being digitally augmented. But in this case, the pupil reflection (as grainy as it is) can be used to figure out how a device is being held, by analyzing the reflection's shape and looking for the shadows and dark spots created where a user's thumbs cover parts of the screen.
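To make that a bit more concrete, here is a minimal sketch of what such an analysis could look like in Python with OpenCV. To be clear, this is not the researchers' actual pipeline: the Haar-cascade eye detector, the hard-coded thresholds, the posture labels, and the assumption that a covering thumb shows up as a dark patch on one side of the bright screen reflection are all simplifications made purely for illustration.

```python
# Illustrative sketch only, not the researchers' method. It assumes the
# screen's reflection is the brightest region inside the detected eye, and
# that a thumb over the screen appears as a dark notch on one side of that
# reflection. All cutoffs below are made up.
import cv2
import numpy as np

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def guess_grip(frame_bgr):
    """Return a rough guess of grip posture from one selfie frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) == 0:
        return "unknown"

    x, y, w, h = eyes[0]                  # take the first detected eye
    eye = gray[y:y + h, x:x + w]
    eye = cv2.resize(eye, (200, 200))     # upsample the tiny crop ("super zoom")

    # Isolate the bright screen reflection against the darker pupil/iris.
    _, reflection = cv2.threshold(eye, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(reflection)
    if xs.size == 0:
        return "unknown"

    # Dark pixels inside the reflection's bounding box are treated as
    # occlusion from a thumb covering part of the screen.
    box = eye[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    occluded = box < 0.4 * box.max()      # arbitrary darkness cutoff

    left = occluded[:, : box.shape[1] // 2].mean()
    right = occluded[:, box.shape[1] // 2:].mean()
    if max(left, right) < 0.05:           # screen looks unobstructed
        return "two_hands_or_index_finger"
    # Left/right refer to the image; a real system would account for mirroring.
    return "right_thumb" if right > left else "left_thumb"
```

A real system would presumably learn from the per-user calibration photos described in the next paragraph rather than relying on hand-tuned cutoffs like these.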
There is some training needed for each end user, which mostly involves snapping 12 photos of them performing each grasping posture so the software has a sizeable sample to work from. With that done, the researchers found they were able to accurately figure out how a device is being held about 84% of the time. That accuracy will potentially improve further as the resolution and capabilities of front-facing cameras on mobile devices do, but it also raises some red flags about just how much information can be captured off a user's pupils. Could nefarious apps use the selfie camera to capture data like a user entering a password on an on-screen keyboard, or monitor their browsing habits? Maybe it's time we all switched back to smaller, single-hand-friendly phones and started blocking selfie cameras with sticky notes too.