What Is The Next Data Input Device?
Harken back to the days of yore when the only input device connected to a computer was a keyboard. Everything was text-based and very, very boring. So, when the mouse came along it seemed (and rightly so) like a truly remarkable, groundbreaking achievement. No longer were you bound to the command line! Instead you had *gasp* a pointer that floated about your screen, selecting and clicking things. The computing world went nuts over this little device, hooking it up to every computer in sight and never looking back.
Of course, several decades later it is apparent that the industry stopped looking forward as well. Every aspect of the modern-day computer has changed drastically over the last decade or so except for the way in which we input data. Think for a moment of some of the advances that have happened: dual-core processors, huge RAM chips, faster, more powerful buses, flat-panel displays, flash drives, Bluetooth, wireless networking. Need I go on?
And yet, we are still using a keyboard and a mouse to input data. While it is true that they have gained more buttons, they really haven’t advanced very much. The trackball has been replaced with a glowing red light, a scroll wheel was added, and they have both gone wireless, yet at their core they really aren’t all that different from the devices used 20 years ago. Which brings me to my point: why? Why hasn’t that aspect of the computer caught up to the rest? When will things change? What will they change into? I have a few ideas.
How many of us watch Star Trek? Remember how Captain Picard would walk onto the bridge and ask the computer a question, and the computer would always give him the right answer? On the first try, no less. Well, that will never happen in the real world, I am afraid. Save for very basic commands, using your voice as a method for controlling a computer is very inefficient. Why, you ask? Because sometimes you know what you want to do but haven’t taken the time to articulate it. With a non-verbal method of input that really doesn’t matter. But if you are relying solely on your voice, then you must state very clearly what you want the computer to do (because it can’t read your mind) or else you risk getting incorrect results. So, I don’t see voice-activated commands replacing the keyboard and mouse anytime soon.
So now that I have established that there are some Star Trek fans in the room, let’s see a show of hands for all of the Matrix fans. Ah yes, there are quite a few. On the extreme end of the spectrum we have the most efficient and most far-fetched (for now at least) method of controlling a computer: directly linking your mind to a machine so that there is no delay between thought and execution. This would of course be the most efficient method of using a computer; the only downside is that it is currently impossible. Very, very impossible. But even if it were doable, you would still need a giant hole in your head so that someone could shove a spike in it, and, let’s face it, no matter how cool we might think that would be, giant spikes in the brain might not be for everyone.
Last question: how many of us have read Tad Williams’ Otherland saga? Hmm, this might be a tough question to get affirmative responses to, seeing as how each of the four books is at least 800 pages long, and that might be daunting to some. No matter, I will summarize it in part for those not willing to spend the next two months reading. At some point the author speculates that using a combination of gloves, headsets and suspension chairs, one could simulate virtual reality well enough to interact effectively with a computer and other online users.
This last case seems to be the most likely to happen in our lifetimes. There is nothing to prevent it from being implemented right now, other than the fact that there is currently no operating system designed to accommodate such hardware. Yes, I am afraid that before we put away the trusty keyboard and mouse our operating systems will have to change to reflect a 3D virtual world instead of our current 2D flat screen. Hopefully the end of the mouse and keyboard is not far off and soon we can cast our Logitech mouses into the trash and don gloves and headsets which will be better suited to exploring a new, 3D version of the net.
As a brief yet amusing aside, it should be noted that the author also predicts an EverQuest-like 3D game to which a whole new generation of kids is addicted. It is both funny and sad, though incorrect, because everyone knows that kids will be too busy killing cops and beating hookers in Grand Theft Auto 42: Death City to bother with trolls and elves and monsters and such.
Comments
Voice is the next data input method. It’s the holy grail of computer input.
My gf uses Naturally Speaking to save wear and tear on her wrists, and it’s pretty damn fast after it trains to your voice. I’ve seen it with my own eyes. She can dictate with the same accuracy and speed with which I type.
I know that Bill Gates said that it is infeasible, but he’s dead wrong. The probability of a person saying “mice and elf” adjacent to each other is very low.
Once PDAs/smartphones are fast enough to take dictation over a Bluetooth or Wireless USB headset, the game totally changes. Laptops then become passé and the PDA revolution truly begins.
Instead of different input devices, I think that the real need is to simplify. A comparison between Pages and Word is a good example. Shortcuts are also critical: it’s AutoCorrect in Word, and I use TypeIt4Me on the Mac. For example, I use “qqq” to type “Australia”.
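For the curious, the abbreviation expansion that tools like AutoCorrect and TypeIt4Me perform can be sketched in a few lines of Python. The shortcut table here is made up for illustration (only “qqq” comes from the comment above), and real expanders of course hook into keystrokes rather than whole strings:

```python
# Toy text-expansion sketch in the spirit of AutoCorrect/TypeIt4Me:
# whole-word abbreviations are swapped for their expansions.
import re

# Hypothetical shortcut table; "qqq" -> "Australia" is the example above.
ABBREVIATIONS = {
    "qqq": "Australia",
    "brb": "be right back",
}

# \b ensures only whole words match, so "qqqq" is left alone.
_PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, ABBREVIATIONS)) + r")\b")

def expand(text: str) -> str:
    return _PATTERN.sub(lambda m: ABBREVIATIONS[m.group(1)], text)

print(expand("I live in qqq, brb"))  # prints: I live in Australia, be right back
```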
The keyboard & mouse setup is a great tool for writing when you want to make changes as you go - or move parts to a different place.
I’m too old and slow to dictate via speech, but I would like speech commands, like “bold” for a highlighted area, or to call up an app. I pity the poor programmer, however, who has to produce something that would understand my accent.
The example from the Other Land saga kind of reminds me of what TC was using in Minority Report. That was one awesome input device!
Why couldn’t you use the built-in iSight of the iMac to interpret ASL (American Sign Language) as input? It’s proven to be very effective at communicating with machines (they can pick up the changes rather well), and most importantly is a universal language.
Also, using your hands and fingers to select, drag, drop, etc… would be pretty basic, the Desktop part of the OS will just have to change a bit in order to reduce ambiguity.
what about retinal scanners? if a camera (a souped up isight) was trained on your eye, wouldn’t it be possible to move the cursor with your eye, then use a button on the keyboard to click, drag, etc?
“...soon we can cast our Logitech mouses into the trash and don gloves and headsets which will be better suited to exploring…”
But that would require users to completely disconnect from the outer world and also to get all dressed up before using the PC. I think input devices will tend to become more ‘invisible’ to the user. The problem with voice recognition is not typing but other commands, which could be complicated without a mouse. But a combination of VR and touchscreen or mouse could be the most useful. Of course, this is available today! Maybe a good idea could be a sort of pointer or something attached to the tip of your finger, with the same functionality as a glove but without having to wear an actual glove.
martunibo, you stole the words from my lips. Anything that involves strapping yourself up in order to use it is impractical in “real work” environments. Keyboard and mouse are quite efficient for given tasks, especially shortcuts. Implant nano-transmitters in all my fingertips and I will go all Minority Report on the Mac, but I would still have to give some physical instruction in order for the computer to know that I am now talking to it instead of it trying to interpret me washing my hands.
Beaver: don’t worry, the nano-transmitters come with an I/O switch; just hit any finger against the desk twice and a green LED lights up under your thumbnail. Also, keep the little finger pressed for about 3 seconds and play/pause/previous/next buttons appear on your other nails. Wanna supersize that with a 2 GB flash chip to store your tunes? (This one will be installed under your palm.)
I want predictive text like my Nokia mobile phone has.
The other problem with voice recognition, à la Star Trek, is that the computer also has to learn to identify voices and then distinguish between two people speaking at the same time.
There is nothing wrong with the mouse.
We move (and touch things), we speak and we think. That’s how we interact with the world. Therefore, tactile sensors of various kinds, body language and speech recognition, and a direct brain link are the ultimate input devices of the future.
To avoid dealing with noise, we’ll digitize speech before it becomes audible. Digitizing thoughts will be the next logical step.
I think you’ll see input divided into two different camps entirely.
1. Voice will become the norm for home users, perhaps even a combination of voice and retina scanning. For casual users this is already possible, just not as well implemented as it could be. Casual users don’t need fancy 8-button commands or much else. I know I still get a kick out of telling my Mac to “Open My Browser” from time to time. The capability is there, it just needs refining. Combined with a retina scanner this would be fantastic: your voice giving the application commands, your eyes telling the browser which article to focus on. Even more fun to see will be how the web porn industry uses this, lol.
2. Pro users require far more speed and control, and therefore they are far more likely to don the cool gloves, etc., for use with or without the voice/eye commands. This opens the possibility of two-handed control as well, giving anyone with the dexterity the ability to multitask.
Also, I’m positively salivating at the notion that I could be playing Grand Theft Auto 42 here soon…
“what about retinal scanners? if a camera (a souped up isight) was trained on your eye, wouldn’t it be possible to move the cursor with your eye, then use a button on the keyboard to click, drag, etc?”
One word - WOW. Considering it’s rumored to be planned that all Macs will have iSights built in soon, just imagine if Apple unveiled this ability at a 2007 Keynote!
But there is one huge problem you’ve all overlooked regarding voice input. What is it? Well, it’s quite obvious, but you wouldn’t realize it. I was reading this article and decided to go and turn on my speech input. Then, d’oh! I remembered why I don’t use it: I play music all the time on my computer. Unless I use headphones (which I don’t wanna), voice input will remain useless for me.
I would think the voice-commanded Mac of the future would have the ability to recognize the difference between the iTunes music being played and the voice of its “master”. The software should be sophisticated enough to cancel out the sounds that its own speakers are producing, allowing it to respond to voice commands: a sort of echo cancellation.
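That idea is real; telephony folks call it acoustic echo cancellation, and it works precisely because the computer knows exactly what it is playing. As a rough illustration (not anything Apple actually ships), here is a toy LMS adaptive filter in Python that subtracts an estimate of the known music signal from the microphone signal, leaving the voice behind. All signals are synthetic, and the filter length and step size are arbitrary choices:

```python
# Toy acoustic echo cancellation: the computer knows the "reference"
# (what it is playing), so an LMS adaptive filter can learn how that
# signal shows up at the microphone and subtract it, leaving the voice.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
music = rng.standard_normal(n)               # what the speakers play
voice = 0.1 * np.sin(0.05 * np.arange(n))    # the "master's" command
mic = 0.8 * music + voice                    # the microphone hears both

def lms_cancel(reference, mic, taps=4, mu=0.01):
    w = np.zeros(taps)                       # adaptive filter weights
    out = np.zeros(len(mic))
    for i in range(taps - 1, len(mic)):
        x = reference[i - taps + 1:i + 1][::-1]  # most recent samples first
        echo_estimate = w @ x                # filter's guess of the echo
        err = mic[i] - echo_estimate         # residual: voice + mistakes
        w += mu * err * x                    # LMS weight update
        out[i] = err
    return out

cleaned = lms_cancel(music, mic)
# Once the filter converges, `cleaned` tracks `voice` far more closely
# than the raw `mic` signal does.
```

A real canceller also has to model room reverberation and the speaker-to-microphone delay, which is why its filters run to hundreds of taps rather than four.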
Speech will never take off because of what James R. said in his article.
No gloves, goggles or electronic jumpsuits because of what martunibo said.
I agree sudarkoff, people want to touch and feel and interact with their computer, which is another reason voice won’t take off. Pshhh.. voice has been around for a while (since OS 9) and it hasn’t taken off. Some people like it, but it’s such a limited market. Also .. if you want to write a private email, you won’t want to say it out loud.
Gosh, you people have to think about the average user. What do they want? They don’t want an obstacle or something they have to learn. They want the input device to feel like it should, to just work and go away when they don’t need it. It needs to fit with a human, and humans use their hands. The two things humans do most are looking and touching.
I think that the next truly great input device will be the elimination of the mouse: holographic screens that let you touch and feel. Think TRUE drag and drop. Something like Minority Report without the gloves. No implants either… just hands. Computers will use heat sensors to detect human touch. Actually, scrap that, I have no idea how they’ll do it. But all this “I’m going to put on goggles, gloves or sit on my couch and tell my computer what to do” is crap.
When people can touch your files and actually move them around, that will be the next step.
As for the eye-mouse control... wtf? I don’t want the cursor following my eye! What if I want to look at something detailed, and then all of a sudden the cursor is covering it? Then you’d need some way to click. With blinking? Or maybe a dedicated button on your desk that only clicks? Why not just a mouse? The only application I can see for this is an FPS game. That might be cool. But anything else? Naw.