Sunday, 17 July 2016

NLU is not a User Interface

Some time ago I already spoke about NLU vs. dialog management. My hope was that people working in NLU and voice user interface design would start talking to each other. I elaborated on these ideas in a paper submitted to the IUI Workshop on Interacting with Smart Objects: NLU vs. Dialog Management: To Whom am I Speaking? In essence, "Dialog-management-centered systems are principally constrained because they anticipate the user's input as plans to help them achieve their goal. Depending on the implemented dialog strategy, they allow for different degrees of flexibility. NLU-centered systems see the central point in the semantics of the utterance, which should also be grounded with previous utterances or external content. Thus, whether speech or not, NLU regards this as a stream of some input to produce some output. Since no dialog model is employed, resulting user interfaces currently do not handle much more than single queries."
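The contrast can be sketched in a few lines of code. This is a toy weather example of my own, not from the paper; the slot names and replies are invented for illustration:

```python
def nlu_centered(utterance: str) -> str:
    """Stateless: one utterance in, one answer out; no dialog model."""
    return "It is sunny." if "weather" in utterance else "I did not understand."


class DialogManager:
    """Anticipates the user's goal as a plan with required slots."""

    def __init__(self):
        self.slots = {"city": None}

    def step(self, utterance: str) -> str:
        if self.slots["city"] is None:
            if utterance.startswith("in "):
                self.slots["city"] = utterance[3:]
            else:
                # Dialog strategy: ask a follow-up to fill the missing slot.
                return "For which city?"
        return f"The weather in {self.slots['city']} is sunny."


dm = DialogManager()
print(dm.step("what is the weather"))  # follow-up question
print(dm.step("in Berlin"))            # plan completed
```

The NLU-centered function answers each query in isolation, while the dialog manager keeps state across turns and drives the user toward completing an anticipated plan.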

Real dialog systems must go beyond this and combine knowledge from both research domains to provide convincing user interfaces.

Now, I stumbled across a blog entry by Matthew Honnibal, who bemoans the current hype around artificial intelligence and the ubiquitous promise of more natural user interfaces. He is right that voice is simply another user interface. He states: "My point here is that a linguistic user interface (LUI) is just an interface. Your application still needs a conceptual model, and you definitely still need to communicate that conceptual model to your users. So, ask yourself: if this application had a GUI, what would that GUI look like?"

He continues by mapping the spoken input to method calls along with their parameters. Then he concludes: "The linguistic interface might be better, or it might be worse. It comes down to design, and your success will be intimately connected to the application you’re trying to build."
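This mapping can be sketched as dispatching an NLU result (an intent plus its entities) to a handler function. The intent names, handlers, and result format below are my own illustrative assumptions, not Honnibal's code:

```python
# Hypothetical handlers an application might expose.
def set_alarm(time: str) -> str:
    return f"Alarm set for {time}"


def play_music(artist: str) -> str:
    return f"Playing music by {artist}"


# Intent name -> method to call.
HANDLERS = {
    "set_alarm": set_alarm,
    "play_music": play_music,
}


def dispatch(nlu_result: dict) -> str:
    """Call the method named by the intent, passing entities as parameters."""
    handler = HANDLERS[nlu_result["intent"]]
    return handler(**nlu_result["entities"])


print(dispatch({"intent": "set_alarm", "entities": {"time": "7:00"}}))
```

The point stands out clearly here: the NLU output is just a function call in disguise, and everything about how the conversation should feel remains a design decision.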

This is exactly the point where voice user interface design comes into play. Each modality requires special design knowledge for effective interfaces. Matthew Honnibal seems to be aware of neither the term VUI nor the underlying approaches and concepts. Maybe it is time to rediscover them to build better voice-based interfaces employing state-of-the-art NLU technology.

Tuesday, 5 July 2016

AI and the Need for Human Values

Stuart Russell, professor at Berkeley and co-author, with Peter Norvig, of the standard textbook on artificial intelligence, speculates about the future of AI.

He has no doubts that AI will change the world. "In future, AI will increasingly help us live our lives", he said, "driving our cars and acting as smart virtual assistants that know our likes and dislikes and that will manage our day." The technology is already here: it is more accurate than we are at analyzing and monitoring a plethora of documents to forecast events or to provide hints that make our lives easier. "Looking further ahead, it seems there are no serious obstacles to AI making progress until it reaches a point where it is better than human beings across a wide range of tasks."

In the best of all cases, "we could reach a point, perhaps this century, where we're no longer constrained by our difficulties in feeding ourselves and stopping each other from killing people, and instead decide how we want the human race to be."

On the other hand, he also sees a great danger: autonomous weapons may turn out to be a serious threat. "Five guys with enough money can launch 10 million weapons against a city," he said.

He demands serious plans for how to cope with that. "A system that's superintelligent is going to find ways to achieve objectives that you didn't think of. So it's very hard to anticipate the potential problems that can arise." Therefore, "there will be a need to equip AI with a common sense understanding of human values."

He suggests that the only absolute objective of autonomous robots should be maximising the values of humans as a species.

This is all well said. But has the human species already reached a point where it can agree upon a common set of values? Who will decide how we want the human race to be? How would we teach those values to an AI? I fear that this remains a nice vision, and that we will reach the point where such an integration of values is needed before we are able to provide it.