October 20, 2011 / Dr. Toad

Siri — the “weak” AI in your life

I just came across this tumblr that records the shit that Siri says.

[Image: “Talk dirty to me” -- “The carpet needs cleaning”]

A few pundits have commented on how Siri is essentially “weak” AI: its conversation can pass for a human’s (the Turing-test idea) even though, of course, it doesn’t really understand what it’s saying. It’s just manipulating symbols, looking up what to say in the vast reams of online text out there. It probably does this with statistics (ha ha) and, judging from Siri’s responses to “Open the pod bay doors” and “How much wood would a woodchuck chuck?”, a few stock answers served up at random. In fact, the tumblr stream suggests we are quite predictable in our attempts to stump the “weak AI” in our lives.
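Just to make the contrast concrete, here is a toy sketch of the two mechanisms described above: canned answers for the predictable prompts, and blind statistics for everything else. Everything in it (the STOCK_ANSWERS and KEYWORD_REPLIES tables, the respond function) is invented for illustration; nobody outside Apple knows how Siri actually does it.

```python
import random

# Toy sketch only: stock answers for predictable prompts, plus a purely
# statistical fallback that maps input words to likely replies without
# any understanding of what they mean. All data here is made up.

STOCK_ANSWERS = {
    "open the pod bay doors": [
        "We intelligent agents will never live that down, apparently.",
        "Doesn't everybody ask me that?",
    ],
    "how much wood would a woodchuck chuck": [
        "As much wood as a woodchuck could, if a woodchuck could chuck wood.",
    ],
}

# Pretend "usage statistics": how often each reply followed each keyword
# in some pile of online text. A real system would learn these counts.
KEYWORD_REPLIES = {
    "weather": {"Looks like rain today.": 7, "It's sunny out.": 3},
    "dinner": {"I found 12 restaurants near you.": 9, "I'm not hungry.": 1},
}

def respond(utterance: str) -> str:
    key = utterance.lower().strip(" ?!.")
    # 1. Predictable prompts get one of a few stock answers, chosen at random.
    if key in STOCK_ANSWERS:
        return random.choice(STOCK_ANSWERS[key])
    # 2. Otherwise, fall back on the statistics: pick the reply most often
    #    seen with any keyword in the input. Symbol shuffling, no meaning.
    for word in key.split():
        if word in KEYWORD_REPLIES:
            counts = KEYWORD_REPLIES[word]
            return max(counts, key=counts.get)
    return "I don't understand that. (I never did, really.)"

if __name__ == "__main__":
    print(respond("Open the pod bay doors"))
    print(respond("What's the weather like?"))
```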

This made me imagine a not-so-distant future where we have created two kinds of intelligent artificial beings. The Siri (and Watson) kind: witty in predictable ways, able to hold short free-form conversations while helping us access and process online information, but not really understanding any of it, although they could get very good at pretending they do. And the self-driving car kind: like your dog, they can’t talk, but they are clearly intelligent in the “strong AI” sense. They perceive and manipulate the changing physical world in pursuit of their goals, and they operate (you could say think) with categories built out of the properties of landscapes and objects they can sense. They are capable of creating meaning, even though they can’t express it (yet).
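The car kind can be sketched just as briefly: the agent carves its sensed world into categories that matter for its goals, and language never enters into it. The features, thresholds, and names below (Patch, categorize, plan_speed) are invented for illustration and don’t come from Stanley or any real self-driving stack.

```python
from dataclasses import dataclass

# Toy sketch of meaning without language: categories built directly from
# sensed properties of the world, used only to decide how to act.
# All features, thresholds, and names here are invented for illustration.

@dataclass
class Patch:
    """A small region of terrain as the robot's sensors report it."""
    height_variation: float  # roughness, in metres
    reflectivity: float      # lidar return strength, 0..1

def categorize(patch: Patch) -> str:
    # The "category" has meaning for the robot's goals (don't crash),
    # even though the robot could never explain it in conversation.
    if patch.height_variation < 0.05 and patch.reflectivity > 0.3:
        return "drivable"
    if patch.height_variation > 0.3:
        return "obstacle"
    return "uncertain"

def plan_speed(patches_ahead: list) -> float:
    """Slow down as the share of non-drivable terrain ahead grows."""
    drivable = sum(1 for p in patches_ahead if categorize(p) == "drivable")
    return 15.0 * drivable / max(len(patches_ahead), 1)  # metres per second

if __name__ == "__main__":
    ahead = [Patch(0.02, 0.6), Patch(0.40, 0.2), Patch(0.03, 0.5)]
    print([categorize(p) for p in ahead], plan_speed(ahead))
```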

I wonder how our relationships with these things will be shaped by their abilities and by our views of them. Will the meaningless but cute and funny Siri-alikes gain rights, and maybe even full personhood, while the sensing, thinking cars are relegated to tool status because they lack language?

Ref: Parisien, C. & Thagard, P. (2008). Robosemantics: How Stanley the Volkswagen Represents the World. Minds and Machines, 18(2). (pdf)

2 Comments

  1. CarlosT / Oct 20 2011 2:43 pm

    What’s to stop them from linking a weak AI to a strong AI? The strong AI is doing the thinking and the weak AI is the spokesperson. I suspect that this might be how we really work in the end.

    • Dr. Toad / Oct 21 2011 11:31 am

      In principle, nothing. But it’s all about how these things get built. The most successful technology for conversational or text-processing agents right now is all about collecting sophisticated usage statistics over text, so you have to put language in to get language out; language is the only thing the system is connected to. The sensorimotor apparatus of meaning-making robots, on the other hand, isn’t tied to language at all. How to connect what such an entity “knows” from its own experience to the words people use online is a very hard problem.
