I spend a lot of time imagining what it will be like when smartphones are replaced by wearables, and mixed reality becomes the norm. I often look at everyday interactions through the prism of this question.
For one thing, once wearables replace the smartphone, I suspect I will end up spending less time using my notebook computer. After all, rather than the tiny cramped screen of my phone, I will have a vast display space wherever I go, far larger than I could ever hope for on my MacBook.
But that still leaves the question of text input. Sure, I will be able to use speech-to-text, but there are many situations where that's not appropriate, such as during a conversation or a lecture.
So I’ve been developing a gestural input system, on the assumption that future wearables will be able to track my hand and finger positions. I’m aiming for something really fast (similar in speed to touch typing), easy to master, socially unobtrusive, and not tiring to use.
Also, it should be a “heads up” writing system. That is, I shouldn’t need to look at my hands while I am using it. Rather, I want to be free to look at the world around me, or at the person I am speaking to, while my hands are generating text.
I've gotten quite far along, and I'm pretty happy with the results. Hopefully I will be able to show a demo video here in this blog in the near future.
There is also something poetic, I think, about the idea of writing in the air. One day I hope to gaze up to the clouds above, and use my hands to gesture a poem to the Universe.