robosapiens

Originally published in the May 2003 issue of T3 Magazine.

Robots – sleek, intelligent, thinking machines – they belong to the future, right? Those funky humanoid robots from The Terminator and Star Wars? We’ll never be smart enough to create them, and even if we were, a robot could never be truly alive. Or could it?

Well, a future populated by thinking and even feeling humanoid robot helpers is actually right around the corner, and one place where they’re busy building this dramatic vision is the Artificial Intelligence Labs at the Massachusetts Institute of Technology (MIT), just across the Charles River from an unsuspecting downtown Boston in the north-eastern corner of the US. Founded by Professor Rodney Brooks – perhaps the world’s leading robotics expert – the Humanoid Robotics Group (HRG) at MIT is doing some amazing work creating a variety of robots that mimic both human behaviour and appearance.

Brooks himself has quite a pedigree. Apart from his post as Professor of Computer Science and Director of the A.I. Labs, he is chairman of iRobot, a company making all manner of domestic, entertainment and military robots – including the PackBot, a small, rugged tracked device currently deployed in Afghanistan, which was also used to search for survivors at Ground Zero after 9/11. Under Brooks’ guidance, the HRG has made huge strides over the last ten years in developing what it likes to call ‘sociable robots’.

These unusual machines are a far cry from the initial steps taken at Japan’s Waseda University, which built the first walking humanoid robot back in the early 1970s, and they’re a big improvement on previous US efforts like The Greenman, a robotic exoskeleton project, and Manny, a mechanically impressive but none-too-smart robotic mannequin developed for the US Army in the 1980s.

Robot image by Ben Campbell

Brooks’ initial work in robotics was with insect bots in the ’80s, building devices such as Genghis, a six-legged creature that turned conventions in Artificial Intelligence upside down by allowing robots to learn from experience rather than having their every decision tortuously pre-programmed.

The first independent robots had just been created, and the work earned Brooks a reputation as something of a non-conformist in the A.I. community. Back then, he had anticipated working his way up a kind of robotic evolutionary chain – first developing insects, then mammals and finally a humanoid. He soon realised it could take years to achieve his ultimate goal of creating an android, so he took a quantum leap in both science and imagination and started work on Cog, a robot resembling the upper torso of a human being, with two arms, a head, cameras for eyes and, nowadays, a speech processor and hearing capabilities too.

This work took the HRG in a whole new direction, never before explored in the field of robotics. While Japanese companies such as Honda and Sony were developing increasingly sophisticated devices such as the P3, ASIMO and entertainment robots like the AIBO and last year’s prototype ’droid, the SDR-4X, Brooks and company got busy creating robots specifically designed to interact with humans in ways people were already familiar with – using both speech and body language – and to respond emotively to different behaviours from their human operators.

The thinking behind these new ‘sociable robots’ was that people would be more comfortable working and living alongside machines that act in ways familiar to us, and that teaching such a machine a new skill by simply talking to it and showing it something is a far more attractive and natural route to learning than having to programme each and every element of a particular procedure. As HRG grad student Paul Fitzpatrick explains, ‘if I want to build a robot porter, I need to be able to show it something once and then be able to trust it to just get on with the job’.

Two of the HRG’s most successful and high profile projects have been the aforementioned Cog and what they describe as ‘an emotionally expressive robotic head’ called Kismet, originally the brainchild of Dr Cynthia Breazeal, who has since moved on from her post at the Lab.

Both of these units resemble human beings in one form or another. Brooks reiterates that this is partly about making machines that are a whole lot more intuitive to use, but also, he admits, ‘because we have a fascination with making something that looks like us’. Kismet, the robo-head, has large eyes, ears, lips and a face that can together express a variety of emotions such as anger, happiness, disgust and surprise.

Fitzpatrick explains, ‘it is programmed to be drawn to things that are bright and mobile, and it has two operational modes – playful and sociable – and it is designed to keep these two modes in homeostasis, so if nothing’s going on, Kismet will appear bored and will begin to look around the room until it finds something bright that it wants to play with or until it notices a person it wants to socialise with. Kismet is also able to detect eye movement and skin tone and will call a person to it when they enter the room.’
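To give a rough sense of how that balancing act between the two drives might look in software – this is purely an illustrative sketch, with invented drive names, numbers and behaviours rather than Kismet’s actual code – consider the following:

```python
# Illustrative sketch of a homeostatic drive loop, loosely inspired by the
# description of Kismet above. All names and numbers are hypothetical.

def update_drives(drives, saw_bright_object, saw_person, dt=0.1):
    """Each drive drifts away from its set-point unless satisfied by the
    right kind of stimulus; the robot acts on whichever drive is most
    out of balance."""
    # Drives drift upwards over time (a growing 'need')...
    drives["play"] += 0.05 * dt
    drives["social"] += 0.05 * dt
    # ...and are pushed back towards zero when satisfied.
    if saw_bright_object:
        drives["play"] = max(0.0, drives["play"] - 0.5)
    if saw_person:
        drives["social"] = max(0.0, drives["social"] - 0.5)
    return drives

def choose_behaviour(drives, bored_threshold=1.0):
    # If both drives are near their set-points, the robot looks 'bored'
    # and scans the room; otherwise it pursues the most pressing drive.
    if max(drives.values()) < bored_threshold:
        return "look around"
    return "seek toy" if drives["play"] >= drives["social"] else "seek person"

drives = {"play": 0.0, "social": 0.0}
for _ in range(300):                       # nothing happening: drives build up
    drives = update_drives(drives, saw_bright_object=False, saw_person=False)
print(choose_behaviour(drives))            # e.g. 'seek toy' once a drive is high
```

The point is simply that the robot’s apparent ‘mood’ can emerge from keeping a couple of internal variables close to their set-points.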

Breazeal’s work has been continued by Fitzpatrick and Lijin Aryananda and, though Kismet has now been ‘retired’ – in the Blade Runner-esque words of Rod Brooks – his grad students have amassed a great deal of knowledge on robot perception, vision capabilities and robot / human communication through their work with the device.

Aside from being a useful testbed for new developments, Kismet also had a strong effect on those who interacted with it. Aryananda found some strange responses to an experiment she set up to assess Kismet’s affective speech recognition – its ability to mimic infant responses when given different vocal stimuli in the form of praising, reprimanding, soothing, attentive and neutral tones of voice.

Aryananda used a group of women for the test – men weren’t expressive enough, apparently – and whenever Kismet was scolded, it would droop its head and flatten its ears, mimicking shame. “The women actually felt guilty for scolding Kismet,” says Aryananda – a clear sign of people attributing human-like qualities to the robot. Indeed, when relating this account, she even referred to Kismet as ‘the baby’ – before quickly correcting herself. She also recounted occasions when, if she were alone with the robot, she would talk to it and show it bright, colourful objects to entertain it, often telling the device ‘I’m still here’ so that it didn’t get too lonely.

Fitzpatrick, too, explained that Kismet gave the appearance of being alive and was quite a powerful presence to be around, although he insisted that the robot only mimicked emotion and life. My own experience of Kismet was limited: the machine was turned off during my visit in readiness for its imminent transfer to the MIT Museum, where it will become a permanent exhibit. Even so, seeing Kismet, complete with a rose clasped in its mouth, one did have the sense of being in the presence of an intriguing machine that had clearly had a profound effect upon those who worked on it.

An altogether different proposition is Cog, the nearly ten-year-old robotic humanoid torso that occupies a good portion of one of the larger rooms at the A.I. Labs. Standing almost six feet tall and attached to a Matrix-like array of over twenty computers that manage its vision, speech and behaviour systems, Cog is quite an impressive presence.

When activated, it moves slowly and silently and yet, despite its imposing metallic frame, it is entirely safe to be around thanks to what Brooks describes as ‘series elastic actuators’ – a sophisticated, intelligent control system in its arms that makes this robot very user-friendly.

Factory robots need human-free zones around them because they apply the same amount of force every time they perform a specific function, so whatever – or whoever – gets in the way is likely to suffer serious injury or death. Cog’s actuators (motors), by contrast, mean that it will always stop a movement if it encounters resistance or senses something in its path, such as one of its human operators.
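A hypothetical way to picture that stop-on-resistance behaviour in code – the force threshold, fake sensors and motor interface below are all invented for illustration and bear no relation to Cog’s real control software:

```python
# Hypothetical sketch of a compliant motion loop: the joint keeps moving
# towards its target only while the measured force stays below a safety
# threshold. The numbers and the toy 'joint' below are made up.

FORCE_LIMIT = 5.0  # imaginary resistance threshold, arbitrary units

def move_joint(target_angle, read_angle, read_force, set_velocity):
    """Drive a joint towards target_angle, stopping if resistance is felt."""
    while True:
        angle = read_angle()
        if abs(angle - target_angle) <= 0.01:
            set_velocity(0.0)
            return "reached target"
        if read_force() > FORCE_LIMIT:
            set_velocity(0.0)              # something (or someone) is in the way
            return "stopped on resistance"
        set_velocity(0.2 * (target_angle - angle))   # gentle proportional command

# A toy simulation standing in for real sensors and motors.
state = {"angle": 0.0, "velocity": 0.0}

def read_angle():
    state["angle"] += state["velocity"]    # integrate the last velocity command
    return state["angle"]

def read_force():
    return 0.0                             # pretend nothing is in the way

def set_velocity(v):
    state["velocity"] = v

print(move_joint(1.0, read_angle, read_force, set_velocity))
```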

This is another crucial step in the design of robots that are to live and work among us in everyday society. Cog has also given Fitzpatrick the opportunity to work on better perception systems for robots by developing what he calls ‘segmentation’. This entails teaching Cog to recognise the same object – and its related functions – in widely different settings. For example, if Cog learns to recognise what a computer mouse looks like and what it does, Fitzpatrick’s aim is that the next time Cog sees a mouse, it will know what to do with it and won’t need telling again.
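In spirit, that amounts to pairing what an object looks like with what has been learned about it, so that recognising it later is enough to recall its use. A minimal sketch, with wholly invented names and none of the actual vision machinery:

```python
# Illustrative sketch: once an object has been recognised and labelled,
# its learned 'function' is stored so that seeing it again is enough to
# recall what to do with it. Feature extraction is skipped for brevity.

object_memory = {}  # maps an object label to its learned use

def learn_object(label, use):
    """Store what an object is for, the first time it is shown."""
    object_memory[label] = use

def on_seeing(label):
    """If the object is already known, act without being re-taught."""
    if label in object_memory:
        return f"I know this: a {label} is for {object_memory[label]}"
    return f"Unknown object '{label}' - please show me what it does"

learn_object("computer mouse", "moving the cursor")
print(on_seeing("computer mouse"))   # recognised, no re-telling needed
print(on_seeing("stapler"))          # not yet learned
```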

The same principle holds for speech and language recognition – while Cog may not feel what a word means in the way that we do, it will understand its meaning and what it is most likely being told to do. All of these elements are vital if we are to look forward to a future of reliable, useful robot helpers – as Brooks points out, ‘it’s no good if you have to keep retelling your robot what something is and what you want it to do; we need them to understand what a dishwasher is and that when the wash cycle is finished, the machine needs emptying and all the crockery has to be put away in the right place’.

All of these problems suggest the shape of the next phase of development for robots: greater ease of use and better short- and long-term memory, so that information stays in their minds. Brooks expressed frustration that it takes several minutes to load the specific behaviour software Cog needs in order to function; indeed, when I was there, it took some minutes for Cog’s arm to warm up before it was ready to move. But these are just two of the small but significant hurdles that future robots must overcome and, if Brooks has his way, the next generation of humanoids will be whole orders of magnitude beyond the essential but primitive steps taken with Cog and Kismet.

Stepping into Rod Brooks’ well-lit office is a little like crossing a threshold into some alternate reality, one rather more technologically advanced than the one we are used to, even in the ‘developed’ world. An artificially intelligent doll, My Real Baby, one of iRobot’s own inventions, sits passively on the sofa; Roomba, a robotic vacuum cleaner, is at rest on the floor.

Brooks voice-activates his computer into life and, automatically, the blinds close and a giant wall screen appears from nowhere when he wants to show a presentation demonstrating the work of his company’s PackBots. ‘Computer, go to sleep,’ he says when the show is over. ‘I am happy to rest,’ responds the computer’s Stephen Hawking-like voice.

And all this is before the discussion of the future of robotics as Brooks and the HRG see it over the next few years. In early 2004, the A.I. Labs will relocate from their present home at 200 Technology Square at MIT to a stunning Frank Gehry-designed building right across the street. This move will also coincide with the ‘retirement’ of Cog and Kismet and the birth of the next generation of HRG robots.

Brooks, Fitzpatrick and Aryananda all agree that the work they are doing on sociable robotics will soon converge with the more functional robots being created by the likes of Honda and Sony. Indeed, their own plan for Cog and Kismet’s successor is a remarkable robot, based on what Brooks calls ‘intelligent design rather than evolution’ – a humanoid torso with a Kismet-style expressive face and three arms, all built onto a Segway transportation unit to give it mobility.

This device, which currently doesn’t have a name and is awaiting funding from DARPA (the Defense Advanced Research Projects Agency), will be an autonomous, self-contained unit designed to act as a sociable robotic helpmate that won’t require minutes of preparation before it works; it will just switch on and go. And this is just the start – as he holds and pats the gurgling My Real Baby like a proud father, Brooks explains: “we’re just at the beginning of the robotics revolution. Within the next 5-10 years, affordable ‘dumb’ robots – robot vacuum cleaners and suchlike – will become commonplace in people’s homes, just as PCs are now.”

“From there,” continues Brooks, “the convergence of sociable robots with functional robots will take place and I can’t see any way that humanoid robots won’t be commonplace in our world.”

Crucial to these developments are factors like economics and cultural trends – countries such as Japan and Italy will soon have affluent but ageing populations who have, in their younger years, been used to living an independent lifestyle. In Brooks’ view, this makes them prime contenders for adopting the new generation of domestic robot helpers, as such devices will crucially allow their owners to retain their independence as they grow old – an important quality of life for today’s babyboomers.

Beyond that, it gets quite esoteric. There is a quote from Brooks on the MIT website stating that what drives him is the desire to discover what it is that lets matter transcend itself to become living. He hasn’t yet got an answer to this profound question, but he says, enigmatically, “I do understand the question better.”

His colleagues at MIT are now learning how to digitally control cellular matter, allowing the creation of tiny organic robots that could, for one thing, be integrated into specially grown prosthetic limbs for amputees. He reiterates his belief that during the next fifty years robots will ‘routinely incorporate both biological and electronic components’, and I ask whether, in that case, we will then have robots that are truly alive. His eyes shine at the thought and he reveals his ultimate goal – to create a living machine. Having spent some time with the people and robots at MIT’s A.I. Labs, I wouldn’t bet against them doing it.

Robot image © Benedict Campbell.