GEEKERY: The First Step to Building HAL?


Researchers at Tufts University are teaching robots self-preservation – all hail our future robot overlords!

Gordon Briggs and Matthias Scheutz at the HRI laboratory at Tufts are working on mechanisms that would allow robots to say no to orders from humans in certain cases. Their research is based on the use of felicity conditions (i.e., the conditions that determine whether a robot should follow an order) to decide which commands are appropriate for the robot to carry out. Briggs and Scheutz proposed that the following conditions be present:
  1. Knowledge: Do I know how to do X?
  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
  3. Goal priority and timing: Am I able to do X right now?
  4. Social role and obligation: Am I obligated based on my social role to do X?
  5. Normative permissibility: Does it violate any normative principle to do X?
These conditions help the robot assess whether it should follow an order – whether the commander has the authority to give it, and whether carrying out the order would cause harm. The researchers used Aldebaran Nao robots to demonstrate how all of this looks when applied:
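To make the idea concrete, here's a rough sketch of how the five felicity checks could be run as a simple gate before a command is accepted. This is purely my own illustration – the class and field names (`FelicityChecker`, `feasible_now`, and so on) are invented for this example and have nothing to do with the researchers' actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Command:
    action: str
    issuer: str

class FelicityChecker:
    """Runs a command through the five felicity conditions in order;
    the robot accepts the command only if every check passes."""

    def __init__(self, known_actions, feasible_now, trusted_issuers, forbidden_actions):
        self.known_actions = known_actions          # 1. knowledge
        self.feasible_now = feasible_now            # 2 & 3. capacity, priority, timing
        self.trusted_issuers = trusted_issuers      # 4. social role and obligation
        self.forbidden_actions = forbidden_actions  # 5. normative permissibility

    def evaluate(self, cmd):
        checks = [
            ("knowledge", cmd.action in self.known_actions),
            ("capacity/timing", self.feasible_now.get(cmd.action, False)),
            ("obligation", cmd.issuer in self.trusted_issuers),
            ("permissibility", cmd.action not in self.forbidden_actions),
        ]
        for name, passed in checks:
            if not passed:
                return (False, f"rejected: failed {name} check")
        return (True, "accepted")

# The table-edge demo: walking forward would mean falling off,
# so the permissibility check rejects the order.
checker = FelicityChecker(
    known_actions={"walk_forward", "sit"},
    feasible_now={"walk_forward": True, "sit": True},
    trusted_issuers={"experimenter"},
    forbidden_actions={"walk_forward"},
)
print(checker.evaluate(Command("walk_forward", "experimenter")))
# → (False, 'rejected: failed permissibility check')
```

In the actual demo, a condition like "the commander promises to catch me" would flip the permissibility check back to passing – which is the trust-building moment the video shows.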

As you can see, a sort of trust is built between human and robot via its programming. Now, this doesn’t really give the bots free will as humans know it, and it’s nowhere near sentience – HRI research has many years to go before something like that will be developed. It’s awesome to get a look at the sub-basement of a system like that, though.

If you want to get into the nitty-gritty of the project, you can read Gordon Briggs and Matthias Scheutz’s paper here. Also, if you’re interested in building your own robots, Stack Social has an awesome Arduino beginners bundle available right now.


What do you think about this line of research? Are you ready for robots that are built to follow Asimov’s rules?

  • euansmith

    Nothing can possibly go wrong…

  • Zack Seiders

    Well, I picture robot overlords somewhere within a couple hundred years or so.

  • greenskin

    I have mixed feelings about BOLS posting articles this tangentially related to gaming or hobby. On one hand, it does add some entertainment value; on the other hand, it further waters down the in-depth wargame content with articles that I’ve already seen on Reddit and FB, and elsewhere.

    • CMAngelos

      Bell of Slow News Day.

  • JP

    In a completely unrelated event, several researchers at Tufts University have been found dead. Though they appear to be suicides, the possibility of foul play has not been ruled out.

    • jeff white

      my heart glows red for that comment.

  • Rafael Fernandez

    So what happens in an alternate case when the researcher lies about catching the robot after ordering it to walk off the table? Will the researcher be able to sleep peacefully without locking up the robot?