Human-Computer relation over time (from http://tuprints.ulb.tu-darmstadt.de/5184/)
One of the drawbacks of this vision-centric approach to smart spaces is that it is by no means hands- and eyes-free. Google and Siri are continuously improving the voice capabilities of their personal agents, but they still rely on manual interaction and force users to look at screens. It appears as if we have forgotten the "invisible" attribute of Weiser's vision — invisible meaning that we would not perceive it as a device at all. Today, the smartphone is still an explicit device.
One day, in the very near future, we will have to decide whether this is the way to go. Do we want to use this one device everywhere — while walking the streets, in our cars, and beyond?
Maybe this also helps motivate Android Auto and Apple CarPlay, which turn the car into yet another cradle for your smartphone.
Scenarios like those described in http://www.fastcodesign.com/3054733/the-new-story-of-computing-invisible-and-smarter-than-you are still far away. A video demonstrates their "Room E" concept. Prototypes like this already exist in the labs — and maybe it is time for them to leave the labs.
Amazon Echo is perhaps a first step in this direction. Accordingly, it became Amazon's best-selling item above $100: http://www.theverge.com/2015/12/1/9826168/amazon-echo-fire-black-friday-sales
In contrast to the scenario in the video above, users still need to speak up: the Echo is used for voice querying and for controlling devices (http://www.cnet.com/news/amazon-echo-and-alexa-the-most-impressive-new-technology-of-2015/). So let's see how this one evolves with regard to Weiser's vision. Maybe we will see comparable approaches soon.