Researchers at Tufts University are teaching robots self-preservation – all hail our future robot overlords!
Gordon Briggs and Matthias Scheutz at the HRI laboratory at Tufts
are working on mechanisms that would allow robots to say no to orders from humans in certain cases. Their research is based on felicity conditions (i.e., the conditions that must hold for a robot to follow an order), which determine whether a command is appropriate for the robot to carry out. Briggs and Scheutz proposed checking the following conditions:
- Knowledge: Do I know how to do X?
- Capacity: Am I physically able to do X now? Am I normally physically able to do X?
- Goal priority and timing: Am I able to do X right now?
- Social role and obligation: Am I obligated based on my social role to do X?
- Normative permissibility: Does it violate any normative principle to do X?
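The five conditions above amount to a sequential decision procedure: check each one in order and refuse, with a reason, at the first failure. Here is a minimal sketch of that idea in Python – this is not the authors' actual architecture, and the `Command` type and every predicate name are hypothetical stand-ins for real perception and reasoning:

```python
# Hypothetical sketch of the felicity-condition check, not Briggs and
# Scheutz's real implementation. Each boolean field stands in for a
# reasoning subsystem the robot would actually consult.

from dataclasses import dataclass


@dataclass
class Command:
    action: str                        # e.g. "walk forward"
    knows_how: bool = True             # Knowledge
    physically_able: bool = True       # Capacity
    able_right_now: bool = True        # Goal priority and timing
    issuer_has_authority: bool = True  # Social role and obligation
    violates_norms: bool = False       # Normative permissibility


def evaluate(cmd: Command):
    """Walk the felicity conditions in order; return (accept, reason)."""
    if not cmd.knows_how:
        return False, "I don't know how to %s." % cmd.action
    if not cmd.physically_able:
        return False, "I am not physically able to %s." % cmd.action
    if not cmd.able_right_now:
        return False, "I cannot %s right now." % cmd.action
    if not cmd.issuer_has_authority:
        return False, "My social role does not obligate me to %s." % cmd.action
    if cmd.violates_norms:
        return False, "Doing %s would violate a normative principle." % cmd.action
    return True, "OK, I will %s." % cmd.action
```

For example, a command like "walk off the edge of the table" would pass the first four checks but fail normative permissibility (the robot would be harmed), so `evaluate(Command("walk off the edge", violates_norms=True))` returns a refusal with its reason – much like the Nao demo described below.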
These conditions help the robot decide whether to follow an order – whether the commander has the authority to issue it and whether carrying it out would harm the robot. The researchers used Aldebaran Nao robots
to demonstrate how all of this looks when applied:
As you can see, a sort of trust is built between human and robot via its programming. Now, this doesn't really give the bots free will as humans know it, and it's nowhere near sentience – HRI research
has many years to go before something like that is developed. It's awesome to get a look at the sub-basement of a system like that, though.
If you want to get into the nitty-gritty of the project, you can read Gordon Briggs and Matthias Scheutz's paper here. Also, if you're interested in building your own robots, StackSocial has an awesome Arduino beginners bundle available right now.
What do you think about this line of research? Are you ready for robots that are built to follow Asimov’s rules?