When discussing "Westworld," it's important to first clarify whether you mean the show -- about a theme park, its employees, robots and guests -- or the park itself, also named "Westworld." The show takes place at some point in the vague future; our only hints to its real-world (universe?) location and date lie in the clothes worn by the employees (fairly standard 2016 business casual) and in the shiny white "Hunger Games" corridors of the Delos facility, where employees seemingly both work and live.

Do these scenes take place on a "WALL-E"-style starliner? Inside a dome on a colonized planet? Far underground? And -- thus far not explicitly addressed on the series -- what is the rest of the world up to as Ford (Anthony Hopkins) reigns over all this madness? It's a world where escaping into a Wild West fantasy primarily designed to unleash rage -- with its much-discussed, vague mission statement to "show you who you are" -- has stayed appealing enough to keep the wheels turning for at least thirty years; a world where the exorbitant cost of playing dress-up is affordable to enough people to keep the park consistently populated with guests.


These practical considerations may be answered by the end of the freshman season, but they are also, of course, beside the point: "Westworld" the show presents a morality play that is itself dressed up as escapist fun for us. Each Sunday night we tune in not as surrogates of Ford, or of the eminently sympathetic AIs Dolores (Evan Rachel Wood) and Maeve (Thandie Newton), but as visitors -- with or without the nihilism of Logan (Ben Barnes), voyeurs all the same...

Unless, that is, you happen to be someone whose 9-to-5 centers on programming robots for people to interact with, and AIs that respond to tone of voice and body language.


Meet Dr. Mary Ellen Foster, a Lecturer at the University of Glasgow's School of Computing Science (and also, full disclosure, this writer's sister). Her work as a social roboticist focuses on building artificial characters -- mainly robots -- that can interact with people through natural, face-to-face conversation. Mary Ellen was kind enough to take time out of her work to chat about how realistic "Westworld" is -- both show and park -- and to provide background and commentary on its portrayals of both artificial intelligences and the people who work with them.


Screener: Considering your research, is watching "Westworld" for you the same as when doctors watch "Grey's Anatomy" or forensic investigators watch "CSI" -- are you like, "No, that's not how that works!"

Dr. Foster: For the most part, it’s so far removed from actual day-to-day computer science that I don’t really notice stuff in that way -- the way that they "program" the robots through voice commands is cool, and seems to be well thought through, for example.

In fact, I really like the scenes when the Delos people are programming and debugging the systems -- it feels quite real, how they go through a process of making a change, testing it out, figuring out why a certain behavior happened, changing the commands a bit, etc. I certainly don’t sit there going "argh!" the whole time, the way I do when (for example) someone needs to hack into a computer system on "24" or something like that.

Of the staff at the Delos facility, which character best represents your current work and interests? Like, if you were hired for a job at Delos, which position would you be most qualified for?

The character I identify most closely with is Elsie [Shannon Woodward] -- she seems to be focused mainly just on getting the hosts working properly, and debugging them when they do strange things. It’s all in this weird context, with people trying to design crazy things, and meditating about the nature of consciousness (or else molesting the sleeping hosts in storage rooms), but she’s just there, getting on with the job of making them behave the way that they should behave. Obviously her ethics get a bit questionable later on though. ☺

I'm really curious to know how you feel watching the hosts on this show. Do you relate to Dolores and Maeve, for instance, with the same empathy as the human characters -- or do you consider them as AIs, not people?

How do you feel when seeing sequences like the one from, I think, the first episode -- where humans walk through the cold storage of discarded AIs -- is it just like, "That's a sensible storage solution" or "Ooh this is creepy"?

Hmm, interesting question. I guess I mostly see them as AIs, but I still cringe when they get shot and raped. And that cold storage scene was definitely creepy! But then, a room full of switched-off robots can be creepy even if they’re not such humanoid types.

"Westworld," along with basically every property dealing with robots and AI -- from "2001: A Space Odyssey" and "Short Circuit" to "Her" and "Ex Machina" -- always play around with the idea of robots/AIs developing their own consciousness and, inevitably, taking over all of civilization. Is this a real concern for people in your field?

Robots "developing their own consciousness" -- a.k.a. "killer robots taking over the world" and similar tropes -- is something that most researchers in AI mostly laugh at. In fact, if you look at the papers presented at mainstream AI conferences, you will pretty much never see anyone mention consciousness as a topic -- people use language like belief, desire, intentions, maybe even intelligence, but never consciousness.

There’s an interesting conversation in… I think it’s the third episode, where Ford tells Bernard about Arnold’s original plans for the park. He mentions that they were originally aiming to pass the Turing Test, which amounts to (broadly) creating an artificial character that is indistinguishable from a human. But then supposedly, Arnold wanted to try to create something more, something with true consciousness…

This is a long-standing philosophical debate in the world of cognitive science: whether it’s enough to create something that "just" behaves like a human, or whether there is something more that’s required. For the record, I tend to fall on the "create something that behaves like a human" end of the spectrum: Maybe I don’t have the right sort of imagination/spirituality/whatever, but in my opinion, if you’ve created a system that behaves in such a way that people respond to it like a human partner -- success! That’s the goal! It doesn’t really matter so much (to me) whether the underlying models are human-like (whatever that is) or whether they have consciousness (whatever that is).

Do the robots you work with ever surprise you, doing something you hadn't specifically programmed them to do?

The robots I work with do surprise me sometimes, but because of the technical limitations, the surprises are much more mundane. For example, it turned out that with a robot bartender I worked on, if we gave it some rules that would let it ask a customer for their drink order and then serve the drink, those same rules could deal with a customer who walked up and asked for a drink without being asked first. Pretty boring, unfortunately...
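
To make that concrete, here is a minimal sketch in Python -- purely hypothetical, and nothing like Dr. Foster's actual system -- of how one small set of rules can cover both the prompted and the unprompted customer:

```python
import re

def extract_order(utterance):
    """Naive order detection: any menu item mentioned counts as an order."""
    menu = {"beer", "wine", "water"}
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    found = menu & words
    return next(iter(found)) if found else None

def bartender_step(state, utterance=""):
    """One dialogue turn: update state from the utterance, then pick an action."""
    order = extract_order(utterance)
    if order:
        state["order"] = order
    if "order" in state:
        return f"serve({state['order']})"
    return "say('What would you like to drink?')"

# Prompted customer: the robot asks first, then serves.
state = {}
print(bartender_step(state))                       # say('What would you like to drink?')
print(bartender_step(state, "A beer, please."))    # serve(beer)

# Unprompted customer: the very same rules handle a walk-up order.
print(bartender_step({}, "Can I get some wine?"))  # serve(wine)
```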

One technique that’s getting a lot of press and attention (and money, from big players like Google) is called "deep learning"... Instead of giving an artificial system a set of rules for how to behave, you will just give it a LOT (like seriously a lot) of data (sample interactions, for example), and let it figure out from that data how it should behave.
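
As a toy illustration of that shift -- hand-written rules replaced by a model fit to labelled examples -- here is a hypothetical sketch using a tiny scikit-learn classifier (real deep learning uses neural networks and vastly more data, but the change in methodology is the same):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Labelled sample interactions: the system is never told a rule like
# "the word 'wine' means serve wine" -- it infers the mapping from data.
examples = [
    ("a beer please", "serve_beer"),
    ("can i get a beer", "serve_beer"),
    ("some wine for me", "serve_wine"),
    ("i would like wine", "serve_wine"),
    ("hello there", "greet"),
    ("hi how are you", "greet"),
]
texts, actions = zip(*examples)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)           # bag-of-words features
model = LogisticRegression().fit(X, actions)  # learn utterance -> action

# The learned model generalizes to an utterance it has never seen.
print(model.predict(vectorizer.transform(["one beer thanks"])))  # likely ['serve_beer']
```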

This is a context where an artificial agent can surprise you, and not always in a good way -- for example, recall Microsoft’s Twitter bot that very quickly learned to be a racist. And it has implications for things like self-driving cars: If the behavior of a car is controlled by this sort of deep-learning model, it becomes very hard to figure out why it made any particular decision other than "the model told me to" -- so for example, if there’s an accident where a human gets injured, it can be very difficult to track down exactly what went wrong, or why the car decided to make a sudden left turn or whatever.

And the Westworld AIs are very different in this way: In fact, a lot of the behind-the-scenes programming [has] to do with figuring out why a particular host did things in a certain way, or asking them to enable/disable particular rules or behavior types. And usually that works pretty well, so obviously Delos is doing something more rule-based and less purely data-driven.
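
A hypothetical sketch of what such a rule-based host might look like, and why it is so much easier to debug: every behavior rule can be toggled on or off, and every decision records which rule fired, so "why did you do that?" has a concrete answer (unlike a deep-learning model, where the answer is just "the weights said so"):

```python
class RuleAgent:
    """A toy rule-based agent with toggleable rules and a decision trace."""

    def __init__(self):
        self.rules = []

    def add_rule(self, name, condition, action):
        self.rules.append({"name": name, "when": condition,
                           "do": action, "enabled": True})

    def set_enabled(self, name, enabled):
        for rule in self.rules:
            if rule["name"] == name:
                rule["enabled"] = enabled

    def decide(self, percept):
        for rule in self.rules:
            if rule["enabled"] and rule["when"](percept):
                # The trace answers "why did you do that?" directly.
                return rule["do"], f"rule '{rule['name']}' fired"
        return "idle", "no rule matched"

host = RuleAgent()
host.add_rule("swat_flies", lambda p: p == "fly_lands_on_face", "swat")

print(host.decide("fly_lands_on_face"))  # ('swat', "rule 'swat_flies' fired")

host.set_enabled("swat_flies", False)    # disable a behavior, as the techs do
print(host.decide("fly_lands_on_face"))  # ('idle', 'no rule matched')
```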


How close are we to the development of an AI like Dolores -- one who picks up on social cues, navigates conversations with numerous people, and can converse on a variety of subjects...

There are two big aspects of building such sophisticated robots: Building the hardware, and developing the behavior models.

On the hardware side, people are already building fairly sophisticated human-like robots -- see, for example, the work of Hiroshi Ishiguro and colleagues in Japan. There have even been projects that aim to 3D-print robot limbs, with tendons and muscles like human ones. There are still a lot of challenges in building a really human-like robot platform (e.g., walking is still quite hard), but as far as I understand they are mainly in the area of practical engineering.

The second aspect -- the behavioral models -- is the area I mainly work in, and I think this side has further to go and faces bigger, more fundamental challenges. I’ll quote from an article I wrote last year, where I discussed this issue in the context of R2-D2 and C-3PO:

"The real challenge is putting […] components together to enable robots to interact in a socially intelligent way. To understand why this is so hard, think about what happens when people talk face to face. We use our voices, faces and bodies together in a rich, continuous way. A surprising amount of information is conveyed by non-verbal signals. The meaning of a simple word like "maybe" can be dramatically affected by all the other things a speaker is doing."

Real-world communication doesn’t take place in a context-free vacuum either. Other people may be entering and leaving the scene, while the history of the interaction -- and indeed all previous interactions -- can also have a large effect. And not only must the robot fully understand all the nuances of human communicative signals, it must also produce understandable and appropriate signals in response. We are talking about an immense challenge.

For this reason, even our most advanced robots generally operate in constrained environments such as a lab. They are capable of a limited amount of communication, and generally can only interact in very specific situations. All these limitations reduce the number of signals that the robot must understand and produce, but at the cost of natural social interaction.
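
As a crude, hypothetical sketch of the fusion problem Dr. Foster describes: even with just two clean non-verbal channels -- where a real robot faces many continuous, noisy ones -- the same word can carry opposite meanings:

```python
def interpret(word, gaze, head_motion):
    """Combine one word with two non-verbal channels into a likely meaning."""
    if word != "maybe":
        return word
    if head_motion == "nod" and gaze == "at_speaker":
        return "tentative yes"
    if head_motion == "shake" or gaze == "averted":
        return "polite no"
    return "genuinely undecided"

print(interpret("maybe", gaze="at_speaker", head_motion="nod"))   # tentative yes
print(interpret("maybe", gaze="averted", head_motion="none"))     # polite no
print(interpret("maybe", gaze="at_speaker", head_motion="none"))  # genuinely undecided
```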

And our technology, overall?

My guess: It won't be for another 100 years at least.

Finally, in your opinion as a TV viewer: What do you think is really going on with this show? Have you read about the multiple timeline theory, that William is the Man in Black but back in time, or that there's more than one Dolores, or that one or more of the "real" workers at Delos may really be AIs?

I have no idea... I wouldn’t be surprised if one or more of the Delos workers ends up being a robot, but only because that’s the sort of thing that always happens in shows of this type. Beyond that, I’m trying not to overthink it, and just going along for the ride.


This interview has been edited and condensed.

"Westworld" airs at 9 p.m. ET/PT Sundays on HBO.

Posted by: Ann Foster

Ann Foster is a blogger and librarian living on the Canadian prairies.