I've been talking with some friends about "what is the future of the desktop paradigm?" (my response has basically been "well, that's an awfully vague question"), and I figured I'd move the convo out here, because there are people out there with much, much more experience in this stuff (and who are way smarter about it than I am).
But how long is the mouse-keyboard-screen interaction going to stick around? We're used to it, sure, but it's not very good for our hands/arms/shoulders/backs. -Nikki
I see this as a much more concrete, actionable, and AWESOMETASTIC subset of the "what does the future of the desktop look like?" question.
It's clear what our current primary interfaces with computers are.
- we look at screens (output)
- we type on keyboards (input, character)
- we use pointing devices, usually mice but sometimes tablets (input, single pair of x-y coords)
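The classic model above could be sketched as two event types, one per input channel. This is purely illustrative (the names are mine, not any real toolkit's API):

```python
from dataclasses import dataclass

# A sketch of the classic desktop input model: one character stream
# (keyboard) plus one pointer reporting a single x-y pair (mouse/tablet).
# All names here are hypothetical, for illustration only.

@dataclass
class KeyEvent:
    """Keyboard: input arrives as characters."""
    char: str

@dataclass
class PointerEvent:
    """Mouse or tablet: input arrives as a single pair of x-y coords."""
    x: float
    y: float

# The whole input stream is a sequence of these two event kinds.
events = [KeyEvent("h"), KeyEvent("i"), PointerEvent(320.0, 240.0)]
```

The point of the sketch is how narrow the model is: everything a program can know about the user funnels through one character stream and one moving point.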
It's also clear how some of these may change in the very, very near future:
- multitouch/multiuser (input, multiple pairs of x-y coords)
- 3D (input, x-y-z coords)
- alternate text entry methods like voice rec, OCR, handwriting rec, foot pedals, treadmills, punching bags (input, character)
- ambient sensors and other forms of input with no separate "put this into computer" human action required (input, coordinates/text)
- combos of the above. For instance, 3D multitouch/multiuser gives us input as multiple simultaneous coordinate tuples of arbitrary dimension.
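The generalization at the end of that list can be sketched the same way: instead of one pointer with one x-y pair, an input "frame" carries many contacts at once, each a coordinate tuple of whatever dimension the sensor provides. Again, the names are hypothetical, not a real API:

```python
from dataclasses import dataclass
from typing import Tuple, List

# Illustrative sketch of generalized spatial input: multitouch/multiuser
# means multiple simultaneous contacts; 3D means each contact's
# coordinates can be a tuple of arbitrary dimension.

@dataclass
class Contact:
    pointer_id: int            # which finger / stylus / user
    coords: Tuple[float, ...]  # (x, y) for 2D touch, (x, y, z) for 3D, etc.

@dataclass
class InputFrame:
    """One snapshot in time: all contacts currently on the surface."""
    timestamp: float
    contacts: List[Contact]

# 3D multitouch example: two fingers, each reporting an (x, y, z) tuple.
frame = InputFrame(
    timestamp=0.016,
    contacts=[
        Contact(pointer_id=0, coords=(0.25, 0.40, 0.10)),
        Contact(pointer_id=1, coords=(0.70, 0.55, 0.02)),
    ],
)
```

Notice that the classic mouse is just the degenerate case: one contact, two dimensions.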
And you can quickly see my blind spot: what other forms of input would we want to give computers that don't come in the form of characters/text or spatial coordinates over time?