Alterface!

Updated on 01-Sep-2007

You’re either typing at a keyboard and clicking a mouse, or reading what your computer displays on-screen. Don’t you just wish user interfaces could get a full makeover?

We’ve always used the boring keyboard and boring mouse to interact with computers. Why? Because that’s what was in use when the PC gained popularity. As if the case with PCs wasn’t bad enough, some mobile devices have incorporated the QWERTY keyboard, giving you tiny little keys that are generally pressed with the thumbs, which the layout was never designed for in the first place. Should we settle for mediocre (or worse) just because it’s popular? When it comes to mass-produced devices, sadly, the answer is: we must.

What Choice Do We Have?

For now, none! But more and more innovative research is being done in Human-Computer Interaction (HCI), in areas as diverse as touchscreens, natural language processing, eye tracking, and more.

Before you question the need for new interfaces, think of the popularity of products such as tablet PCs, PDAs, the Wii, and the iPhone. These are popular largely because they give you an alternative to pressing buttons or tapping away at a keyboard; it’s mainly their interfaces that set them apart from the crowd.

The GUI

The WIMP interface, short for Windows, Icons, Menus and Pointer, was revolutionary for its time. In 1984, the Macintosh became the first commercially successful computer to feature the WIMP interface. Twenty-three years later, things haven’t changed much, have they?

Now, we’re definitely not suggesting that interfaces change for the sake of change. The fact is, as long as we’re limited to a two-dimensional screen as the PC’s output, we’re probably not going to see anything but the WIMP interface. But with so much research happening to bring 3D realism to our screens, what happens when it does get here? Do you really want to be sitting there trying to use a mouse to drag a 3D window?

Another aspect to consider is the type of task we’re trying to accomplish: why bother taking word processing to a 3D level, for example? It’s not as if you’ll be able to type along all three axes! Clearly, it’s the application that needs to be looked at, and one common GUI for everything might not be the way ahead.

Take gaming, for example. There are games built for consoles and games built for PCs. A racing game might be a lot easier to play on a console, while an FPS might be far more playable on a PC. Platforms be damned, though: both the racing game and the FPS would be so much better if they were more immersive and in 3D. The racing game is also a lot more fun with a steering wheel controller, while the FPS might be more fun with a controller shaped like a gun, where you have to physically turn around to see what’s behind you.
 
Touchscreens

From PDAs to Tablet PCs and now smartphones, everything is going the touch-input way. If we’re going to have to use point-and-click interfaces, nothing beats just tapping the screen—especially with handhelds or laptops.

The problem, however, is that our current WIMP system was built for mice and keyboards, so it’s not ideal for touchscreens.

In our June 2007 issue, we spoke about Jeff Han’s multi-touch interface, which lets you use multiple points of contact, or even have multiple people using the same interface at the same time. Han’s interface is revolutionary: it lets users actually get a feel for applications that need graphics, zooming, and interaction. Such an interface adds little to, say, gaming, where touch isn’t really the input you want, and ditto for word processing. However, say you’re managing or editing photographs, browsing the Net, or using Google Earth: that’s where Han’s vision comes in. Perhaps, the way gaming broke away into the purpose-built machines we know as consoles today, the future might see different hardware for different purposes.
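If you’re wondering what the computer actually does with those multiple points of contact, here’s a minimal Python sketch of the arithmetic behind one multi-touch staple, the pinch-to-zoom gesture. The touch coordinates are invented for illustration, and this is in no way Han’s actual code; it’s just the core idea of tracking how far apart two fingers are.

```python
import math

# Hypothetical touch data: each active finger gets a contact ID from the
# touch hardware, mapped to its (x, y) position in screen pixels.
touches = {
    1: (120.0, 300.0),   # finger 1
    2: (420.0, 340.0),   # finger 2
}

def finger_spread(points):
    """Distance between two contact points; its change over time drives the zoom."""
    (x1, y1), (x2, y2) = points
    return math.hypot(x2 - x1, y2 - y1)

# A pinch gesture: compare the spread before and after the fingers move.
before = finger_spread(list(touches.values()))
touches[2] = (520.0, 400.0)              # finger 2 slides outward
after = finger_spread(list(touches.values()))

zoom = after / before                     # > 1 zooms in, < 1 zooms out
print(f"zoom factor: {zoom:.2f}")
```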


“We basically have to un-teach people what they have learned so far about computing, and convince them that they can use several fingers, that several people can work on the screen at once”
Jeff Han, Researcher, New York University


Another example here is Microsoft’s Surface, a 30-inch multi-touch screen that lies horizontally rather than standing vertically, embedded in a table. It runs Windows Vista on a PC built into the table, and uses five cameras to sense motion, letting multiple users work on it at once. It can sense 52 simultaneous touches, translating them into on-screen input.

Check out Jeff Han’s work at http://cs.nyu.edu/~jhan/, and the Microsoft Surface at http://www.microsoft.com/surface/.

Direct Interaction

You’ve come across Natural Language Processing (NLP) and Brain-Computer Interfaces (BCI) in this space before. We’ve discussed them at length earlier, so we won’t get into the details, but here’s a quick look at two major technologies that promise to get us interfacing better with computers.

[Image caption: The much-hyped Surface takes boring computing to mind-bogglingly amazing new levels]

NLP: NLP is the ability of a computer to understand and process a human’s natural speech. The perfect NLP system would let you talk to your computer the way you would talk to another person. We’re far from achieving this—subtleties of voice patterns, background noise, accents, and more play spoilsport (refer to “Tech Transcends Tongues”, Digit, April 2006). Of course, that doesn’t stop us from trying, and there are a lot of NLP projects being worked on: check out http://opennlp.sourceforge.net/projects.html, a list of open source projects related to NLP.
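To see why “understanding” speech is harder than it sounds, here’s a toy Python sketch (the command table is our own invention, not from any real NLP system). A rigid keyword matcher copes with exact phrases but falls flat the moment someone speaks naturally, which is precisely the gap NLP research is trying to close.

```python
# A made-up command vocabulary for a toy voice interface.
COMMANDS = {
    "open file": "FILE_OPEN",
    "close file": "FILE_CLOSE",
    "print document": "PRINT",
}

def rigid_match(utterance):
    """Keyword matching: only exact phrases are recognised."""
    return COMMANDS.get(utterance.lower().strip(), "UNRECOGNISED")

print(rigid_match("Open file"))                          # FILE_OPEN
print(rigid_match("Could you open that file for me?"))   # UNRECOGNISED
```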

BCI: Imagine a computer that could read your mind—yes, your mind—with no talking required. Imagine sitting in a restaurant talking to your laptop… now you know why NLP isn’t really ideal for the way we currently use computers. A computer that can read your thoughts, however… now that’s something we all want. Imagine thinking out that letter, or moving your character in a game just by thinking about it… even better, imagine having a random thought, “I wonder who does the voice of Bart Simpson?”, while interfaced with your PC, and having the answer pop up before you’ve even finished wondering.

Currently, BCI is being researched by quite a few groups all over the world. Most notable is Fraunhofer’s Berlin Brain-Computer Interface (BBCI; more at http://ida.first.fraunhofer.de/bbci/index_en.html), which has already demonstrated people playing Pong against each other or against the computer, using their thoughts as controls. Fraunhofer also holds BCI-related contests to encourage the development of the technology. Accuracy for such a system under ideal conditions is currently between 91 and 99 per cent, which is extremely good—and leaves us optimistic.
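To make “thoughts as controls” a little more concrete, here’s a heavily simplified Python sketch of the kind of classification a BCI performs. The “brain signals” below are simulated random numbers, and the nearest-mean classifier is a stand-in for the far more sophisticated signal processing groups like BBCI actually use; the point is only that, at heart, a BCI turns signal features into commands and is judged by its accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated feature vectors for two imagined actions (say, "left" and "right").
# A real BCI would extract these from EEG recordings; these are pure simulation.
left  = rng.normal(loc=-1.0, scale=1.0, size=(200, 4))
right = rng.normal(loc=+1.0, scale=1.0, size=(200, 4))

# "Training": learn the average feature vector of each class.
mean_left = left[:100].mean(axis=0)
mean_right = right[:100].mean(axis=0)

def classify(sample):
    """Nearest-mean classifier: whichever class mean the sample is closer to wins."""
    if np.linalg.norm(sample - mean_left) < np.linalg.norm(sample - mean_right):
        return "left"
    return "right"

# "Testing" on held-out trials gives the kind of accuracy figure quoted above.
trials = [(s, "left") for s in left[100:]] + [(s, "right") for s in right[100:]]
correct = sum(classify(s) == label for s, label in trials)
print(f"accuracy: {100 * correct / len(trials):.1f} per cent")
```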

The Future

It will take years before we have decent prototypes (let alone products). We are also faced with the challenge of making a billion people forget all they know about computers and re-learn to use them from scratch. Technologies such as the eye-controlled mouse, which uses a camera to monitor your eye and translate its movement into data your PC can understand, are all well and good, but what use are they if we still have to type out our letters?
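For the curious, here’s roughly how that camera-to-cursor translation might work in the simplest possible case: a linear mapping from the pupil’s position in the camera frame to a screen coordinate, set up from two calibration points. All the numbers are invented, and real eye trackers need far more elaborate calibration and head-movement compensation; this is a sketch of the principle only.

```python
# Calibration (invented numbers): the user looks at two known screen corners
# while the camera records where the pupil sits in the camera frame.
pupil_tl, screen_tl = (210.0, 140.0), (0.0, 0.0)          # top-left
pupil_br, screen_br = (430.0, 320.0), (1280.0, 1024.0)    # bottom-right

def gaze_to_cursor(px, py):
    """Linearly map a pupil position in the camera frame to screen pixels."""
    fx = (px - pupil_tl[0]) / (pupil_br[0] - pupil_tl[0])
    fy = (py - pupil_tl[1]) / (pupil_br[1] - pupil_tl[1])
    return (fx * screen_br[0], fy * screen_br[1])

# Pupil roughly mid-frame -> cursor roughly mid-screen.
print(gaze_to_cursor(320.0, 230.0))   # (640.0, 512.0)
```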

Brain-Computer Interfaces have their own limitations: how do we stop the PC from trying to interpret thoughts we don’t want it to? Imagine working in an office, looking over at that hot co-worker and thinking, “I wish I could send him/her an e-mail asking him/her out,” then turning back to see that your PC has just shot off a mail asking the boss out!

New GUIs for operating systems and software, and new hardware, bring us to a chicken-and-egg scenario—software waiting for new input devices to code for, and devices waiting to see what software or AI can make possible.

We loved the concept of Microsoft’s WinFS, and were looking forward to a file system built like a database. Unfortunately, it looks like we won’t be seeing anything like it on PCs anytime soon. Still, we’ll keep our fingers crossed. WinFS promised a break from the familiar file-and-folder world of the WIMP interface, and perhaps it’s just the excitement of seeing something new that has us hooked.
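WinFS never shipped, but the flavour of a “file system built like a database” is easy to demonstrate. Here’s a small Python sketch (our own toy, not WinFS) that indexes file metadata into SQLite, so that finding files becomes a query over their properties instead of a trawl through folders.

```python
import os
import sqlite3
import time

# Build a throwaway metadata index: every file under the current directory
# becomes a row in an in-memory SQLite database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (path TEXT, name TEXT, size INTEGER, modified REAL)")

for root, _dirs, names in os.walk("."):
    for name in names:
        path = os.path.join(root, name)
        try:
            info = os.stat(path)
        except OSError:
            continue   # skip unreadable files and broken links
        db.execute("INSERT INTO files VALUES (?, ?, ?, ?)",
                   (path, name, info.st_size, info.st_mtime))

# "Find text files changed in the last week, biggest first" becomes a query:
week_ago = time.time() - 7 * 24 * 3600
query = ("SELECT path, size FROM files "
         "WHERE name LIKE '%.txt' AND modified > ? ORDER BY size DESC")
for path, size in db.execute(query, (week_ago,)):
    print(path, size)
```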

Another trend we have to consider is divergence. When PDAs were being killed off by smartphones, Digit told you all about Convergence—the coming together of gadgets, hardware, software, and services into one do-it-all device. Now, things are slowly beginning to shift in the opposite direction. Sure, we all want more powerful cell phones, and who wouldn’t want a device powerful enough to run Vista yet small enough to fit in a pocket, and which you can use to make and receive phone calls? Still, how do we explain gaming consoles and their runaway popularity? Then there are the Internet tablets that keep popping up, offering limited functionality compared to a PC while still being ideal for browsing!

Perhaps the future of BCI is planting a chip in the brain—who knows? Apologies for the corniness when we tell you that all we can do is wait and watch. 

Robert Sovereign Smith

Robert (aka Raaabo) thinks his articles will do a better job of telling you who he is than this line ever will.
