Gazing at SIGCHI

[Image: picasso_stein_eyes]

There are so many paper presentations to attend here at SIGCHI 2019 that the experience is quite overwhelming. At any given moment there are about 20 different things you can choose to attend, between all the courses, paper sessions and other gatherings.

There is a joke that going to SIGCHI is almost like not going to SIGCHI. After all, if you go to SIGCHI, you miss 95% of everything here. If you don’t, you only miss 100%.

Of course the real reason to attend SIGCHI is for those insights that you only get when you actually experience the people and presentations for yourself. So far I have been inspired by two paper presentations in particular.

Both were about using eye gaze for input. Eye gaze is incredibly fast, which makes it potentially very useful as an input modality. The difficulty is twofold. The first is that gaze isn't always accurate, since our eyes dart around quite a bit.

The second difficulty, sometimes called the “Midas touch” problem, is that you don’t want things to happen the moment a user is looking at something. For example, imagine being in delete mode. The user looks around the screen and thinks “don’t delete that”, then “oh no, don’t delete that either!” Except that merely by looking at something they have already deleted it.

Both papers solved this by coupling eye gaze to some sort of trigger. In one paper, items on the screen that the user might want to select were highlighted in different colors. The trigger consisted of large squares with matching colors off to the side of the screen. Looking at the square of corresponding color completed the selection. Users reported this approach to be very fast and natural, and preferable to selecting with a mouse.
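To make the idea concrete, here is a minimal sketch in Python of how such a color-matched trigger might work. The names (Selectable, TriggerSquare, resolve_selection) and the geometry are entirely my own assumptions, not the paper's actual implementation; the point is only that looking at content never selects it, while looking at a matching trigger square does.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

@dataclass
class Selectable:
    name: str
    color: str    # highlight color assigned to this on-screen item
    bounds: Rect

@dataclass
class TriggerSquare:
    color: str    # matches the highlight color of exactly one selectable item
    bounds: Rect  # large square off to the side of the screen

def resolve_selection(gaze_x, gaze_y, items, triggers):
    """If the gaze sample lands on a trigger square, return the item whose
    highlight color matches that square; otherwise return None.
    Looking at the content itself never triggers anything, which sidesteps
    the Midas touch problem."""
    for trigger in triggers:
        if trigger.bounds.contains(gaze_x, gaze_y):
            for item in items:
                if item.color == trigger.color:
                    return item
    return None

# Hypothetical usage: two highlighted items, two matching squares on the right edge.
items = [
    Selectable("paragraph 1", "red", Rect(100, 100, 400, 40)),
    Selectable("paragraph 2", "blue", Rect(100, 160, 400, 40)),
]
triggers = [
    TriggerSquare("red", Rect(900, 100, 80, 80)),
    TriggerSquare("blue", Rect(900, 200, 80, 80)),
]
selected = resolve_selection(930, 130, items, triggers)
print(selected.name if selected else "no selection")  # -> "paragraph 1"
```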

The other paper solved the problem by coupling eye gaze to keyboard typing. As the user types, the program highlights places in the document where the user might be looking. As the correspondence between the typed text and a particular gaze target becomes stronger, that text is highlighted more strongly. The user can then modify, delete, cut, paste, and so on. Essentially, gaze takes the place of the mouse. Users reported that the experience felt natural and easy to use, and preferable to using a mouse.
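Again as a rough sketch rather than the paper's actual method: the snippet below scores each gaze-candidate text span by how well the typed characters match it, so the highlight can strengthen as the correspondence grows. The function name, the crude prefix-matching rule, and the example spans are all assumptions of mine.

```python
def score_gaze_candidates(typed, candidates):
    """Score each gaze-candidate text span by what fraction of it the typed
    characters match, prefix-wise. A deliberately crude stand-in for whatever
    matching model the paper actually uses."""
    scores = {}
    for region_id, text in candidates.items():
        n = 0
        while n < min(len(typed), len(text)) and typed[n].lower() == text[n].lower():
            n += 1
        scores[region_id] = n / max(len(text), 1)
    return scores

# Hypothetical usage: two spans of text the user has recently fixated on.
candidates = {
    "span_a": "delete this sentence",
    "span_b": "keep this one",
}
for typed in ["d", "de", "del"]:
    scores = score_gaze_candidates(typed, candidates)
    best = max(scores, key=scores.get)
    print(f"typed {typed!r}: strongest highlight on {best} {scores}")
```

In a real system the score would presumably drive how boldly each candidate is highlighted, with the strongest match becoming the target for the subsequent edit.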

I love the way a general principle emerges from these two papers. Rather than relying on an artificial device such as a mouse, with a little thought we can instead use our own bodily senses.
