WATCH: Robot Swarms of the Future (Because Sometimes It Takes a Village)

What happens when you put 40 tiny robots in a room and let them go nuts?


When you think about robots, you probably imagine vaguely humanoid machines, say Rosie from The Jetsons, C-3PO from Star Wars or maybe the T-800 from The Terminator. But what about robots the size of tea cups that scoot around on tiny wheels, snapping pictures with miniature cameras and keeping track of where they are in relation to dozens of others?

If you’ve seen Farscape, imagine Moya’s DRDs (diagnostic repair drones) without the eye stalks, plasma welders and lasers. If you haven’t seen Farscape, imagine a bunch of beetle-like droids scooting around, Roomba-style, working in concert to carry out various tasks, a little like super-sized body cells or immune system antibodies.

It’s called “swarm robotics,” and we’ve seen real-world examples before, like these autonomous nano quadrotors devised by researchers at the University of Pennsylvania that can fly in formation — think “Live Action Space Invaders” — and actually build stuff. (No, it’s not an April Fools’ joke — those things really exist.)

The idea behind swarm systems sounds simple enough: instead of building largish, single-body robo-mechanisms, you organize platoons of smaller ones that coordinate their behavior, allowing for rapid spatial reconfiguration and task optimization, say pushing an object along the ground while simultaneously steering it by shifting where and how the group applies force. Swarm-like analogies in nature include colonies of ants working together to drag objects around, biological cell-based systems and the neural structure of the human brain itself.
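A back-of-the-envelope way to see why "where you push" matters (this is my own illustration, not anything from the research described here): treat the object as a rigid body and add up the force and torque each pushing robot contributes. In the Python sketch below, the same four unit pushes slide a box straight ahead when they're spread symmetrically along one edge, but make it rotate as well as translate once they're bunched toward a corner, which is all "altering the trajectory by shifting how you apply force" really means.

```python
# Toy illustration (mine, not the Sheffield lab's): net force and torque on a
# rigid box being pushed by several small robots. Shifting where the pushes
# land changes whether the box slides straight or also rotates.
import numpy as np

def net_push(contact_points, force_dirs):
    """Return (net_force, net_torque_about_center) for pushes on a box centered at the origin."""
    points = np.asarray(contact_points, dtype=float)
    forces = np.asarray(force_dirs, dtype=float)
    net_force = forces.sum(axis=0)
    # 2D torque is the z-component of r x F: rx*Fy - ry*Fx, summed over robots
    net_torque = float(np.sum(points[:, 0] * forces[:, 1] - points[:, 1] * forces[:, 0]))
    return net_force, net_torque

# Four robots pushing the left edge of a unit box, spread symmetrically:
spread = [(-0.5, 0.3), (-0.5, 0.1), (-0.5, -0.1), (-0.5, -0.3)]
print(net_push(spread, [(1.0, 0.0)] * 4))   # force ~(4, 0), torque ~0: straight ahead

# The same four pushes bunched toward the top corner: now the box turns as it moves
bunched = [(-0.5, 0.4), (-0.5, 0.3), (-0.5, 0.2), (-0.5, 0.1)]
print(net_push(bunched, [(1.0, 0.0)] * 4))  # force ~(4, 0), torque ~-1.0: it rotates
```

The swarm's trick is that it can make that shift on the fly, without any central planner deciding who pushes where.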

That’s how the U.K.-based Sheffield Centre for Robotics describes it, anyway; its researchers have been working to get some 40 smallish ground-based robots to play well together. As robotics lab honcho Dr. Roderich Gross puts it, “There’s no central entity that controls everything … all the little parts interact with each other, and complexity arises from these interactions.”

Each robot taken alone isn’t much to write home about: a “tiny little brain” housed in a hockey puck-style cage that includes two wheels to move around, a camera, a microphone, a speaker, an accelerometer and proximity sensors to detect nearby objects. But put 40 of these machines in a room together and they start to organize and interact in intriguing ways.

Like aggregation, or as Gross puts it, say “you have lots of friends around the city and you want to meet in one place.” What makes the aggregation demo — illustrated in the video above — so interesting, according to Gross, is that the team placed pretty severe restrictions on the robots to see if they could solve the problem of finding and clustering without using arithmetic or memory systems.

Why inhibit these things? Aren’t they supposed to tap computational abilities? Don’t we expect robots to exploit all of their functions, especially things they can do more quickly than humans, like memory storage/recall and math?

It depends. In this case, the researchers wanted to test whether future, much smaller versions of these robots (microscopic ones, say, whose size would sharply limit what they could sense and compute) could still perform tasks like clustering: gradually moving from discrete units into groups by “finding” each other and eventually forming a single mass (you can watch the process in the time-lapse video above).
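To make “no arithmetic, no memory” concrete, here’s a deliberately stripped-down sketch (my own simplification in Python, not the Sheffield team’s controller, whose actual sensor model and parameters I’m not reproducing): each simulated robot reads a single yes/no sensor, “is another robot in the cone directly ahead of me?”, and that one bit selects between two fixed pairs of wheel speeds. Nothing is stored and nothing is calculated on board; whatever clustering emerges comes purely from repeating that lookup.

```python
# Toy swarm-aggregation skeleton (my simplification, not the Sheffield controller):
# one binary sensor per robot, two hard-wired wheel-speed pairs, no memory.
import math, random

N, DT, STEPS = 40, 0.1, 4000
ARENA = 10.0                       # robots roam a 10 x 10 box
CONE = math.radians(15)            # half-angle of the forward "line of sight" cone
# wheel speeds (left, right) selected by the single sensor bit; hand-picked, not optimized
PARAMS = {0: (1.0, 0.6),           # see nothing: arc to the right, i.e. keep searching
          1: (1.0, 1.0)}           # see a robot ahead: drive straight toward it

robots = [[random.uniform(0, ARENA), random.uniform(0, ARENA),
           random.uniform(0, 2 * math.pi)] for _ in range(N)]

def sensor(i):
    """Binary sensor: 1 if any other robot sits inside robot i's forward cone."""
    x, y, heading = robots[i]
    for j, (xj, yj, _) in enumerate(robots):
        if j == i:
            continue
        bearing = math.atan2(yj - y, xj - x)
        diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) < CONE:
            return 1
    return 0

for _ in range(STEPS):
    for i in range(N):
        left, right = PARAMS[sensor(i)]
        x, y, heading = robots[i]
        v, w = (left + right) / 2, (right - left)        # differential-drive kinematics
        x = min(ARENA, max(0.0, x + v * math.cos(heading) * DT))
        y = min(ARENA, max(0.0, y + v * math.sin(heading) * DT))
        robots[i] = [x, y, heading + w * DT]

# crude clustering measure: mean pairwise distance (it drops if the swarm aggregates)
pairs = [(a, b) for a in range(N) for b in range(a + 1, N)]
mean_dist = sum(math.dist(robots[a][:2], robots[b][:2]) for a, b in pairs) / len(pairs)
print(f"mean pairwise distance after {STEPS} steps: {mean_dist:.2f}")
```

The hard part, and presumably the research question here, is finding sensor and wheel-speed choices that reliably yield one cluster rather than several, which connects to Gross’s point below about the minimum information a robot needs.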

Even more interesting: given different conceptual size assignments (based on color) and a light source standing in for the pull of gravity, the robots are able to sort themselves accordingly; in Gross’s analogy, it’s like different-sized flakes settling into their strata (large ones on top, small ones at the bottom) in a shaken box of cereal.
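Nothing below comes from the Sheffield experiments either; it’s just a cartoon, in Python, of how such a sorting could work if each robot’s color assigned it a virtual size, i.e. a preferred distance from the light that plays the role of gravity. Robots that do nothing but creep toward their own preferred radius end up in concentric bands, the cereal-box strata of Gross’s analogy.

```python
# Cartoon of light-as-gravity sorting (my sketch, not the lab's controller):
# each robot's color gives it a virtual size, i.e. a preferred orbit radius
# around the light; holding that radius is enough to sort the swarm into bands.
import math, random

LIGHT = (0.0, 0.0)                                     # the light source, standing in for gravity
RADII = {"small": 1.0, "medium": 2.0, "large": 3.0}    # virtual size -> preferred radius

def step(robot, dt=0.05, jitter=0.05):
    """Nudge one robot toward its preferred distance from the light, with a little noise."""
    x, y, size = robot
    dx, dy = x - LIGHT[0], y - LIGHT[1]
    dist = math.hypot(dx, dy) or 1e-9
    error = RADII[size] - dist          # positive means too close: move outward
    ux, uy = dx / dist, dy / dist       # unit vector pointing away from the light
    x += (error * ux + random.uniform(-jitter, jitter)) * dt
    y += (error * uy + random.uniform(-jitter, jitter)) * dt
    return (x, y, size)

# 40 robots, sizes assigned round-robin, scattered around the arena
sizes = list(RADII)
robots = [(random.uniform(-4, 4), random.uniform(-4, 4), sizes[i % 3]) for i in range(40)]

for _ in range(2000):
    robots = [step(r) for r in robots]

for size in sizes:                      # mean distance per class should settle near its radius
    ds = [math.hypot(x, y) for x, y, s in robots if s == size]
    print(size, round(sum(ds) / len(ds), 2))
```

The real robots presumably have to juggle collision avoidance and much noisier sensing on top of this, but the sorting rule itself can stay that simple.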

“We are developing artificial intelligence to control robots in a variety of ways,” says Gross. “The key is to work out what is the minimum amount of information needed by the robot to accomplish its task. That’s important because it means the robot may not need any memory, and possibly not even a processing unit, so this technology could work for nanoscale robots, for example in medical applications.”

And that’s where the rubber meets the road: nanoscale robots entering biological systems to perform any number of health-related tasks, say cell-sized robots that could be used for targeted treatment scenarios, singling out trouble cells in the body without harming “good” ones.

How long before we’re injecting ourselves with (or drinking, or simply inhaling) clouds of self-organizing, semi-autonomous nanobots? Guess away. I’d say futurist Ray Kurzweil has the timescale wrong (he’s much too optimistic), but he’s predicted nanoscale devices will be used in medical scenarios by the 2020s — less than a decade away — and that they’ll allow us to perform brain scans so accurate we’ll finally fully understand how our gray matter works (good luck with that). Also: that nanobots capable of nourishing and “cleaning” our cells will be in use, rendering traditional food consumption archaic.

In any case, it’s pretty cool to see this idea of “more from less” refining how we think about nanotechnology, and giving us a glimpse, in “swarm robotics,” of how we might distill new forms of complexity from organized simplicity.
