Researchers Visualize Image Recognition Output to See Things From a Computer’s Perspective
Even the best object-recognition systems succeed only around 30 to 40 percent of the time, and their failures can be totally mystifying. Researchers are divided in their explanations: are the learning algorithms themselves to blame? Are they being applied to the wrong types of features? Or — the “big-data” explanation — do the systems just need more training data?
To attempt to answer these and related questions, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have created a system that, in effect, allows humans to see the world the way an object-recognition system does. The system takes an ordinary image, translates it into the mathematical representation used by an object-recognition system and then, using inventive new algorithms, translates it back into a conventional image.
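The "mathematical representation" in question is typically a feature descriptor such as a histogram of oriented gradients (HOG), which summarizes local edge directions rather than raw pixels. As a rough illustration of what such a representation looks like, here is a minimal, simplified HOG-style sketch in Python with NumPy; the function name, cell size, and bin count are illustrative choices, not the MIT system's actual pipeline, and real descriptors add block normalization and other refinements:

```python
import numpy as np

def hog_like_features(img, cell=8, bins=9):
    """Simplified HOG-style descriptor: per-cell histograms of
    gradient orientations, weighted by gradient magnitude.
    This is an illustrative sketch, not the actual MIT system."""
    gy, gx = np.gradient(img.astype(float))      # image gradients
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation in [0, pi)
    h, w = img.shape
    ch, cw = h // cell, w // cell
    feats = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats[i, j] = hist / (hist.sum() + 1e-6)  # per-cell normalization
    return feats.ravel()

# Example: a 32x32 image with a vertical edge concentrates its
# gradient energy in a single orientation bin.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
f = hog_like_features(img)
```

Once an image is collapsed into a vector like this, most pixel-level detail is gone, which is exactly why inverting the representation back into a viewable image, as the MIT work does, requires new algorithms rather than a simple reversal.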
It’s not quite Terminator, but it’s not that far off either: Check out Atlas, a new, 6-foot, 2-inch-tall humanoid robot designed for a contest being held by the US Defense Department. The 290-pound machine is being called “one of the most advanced humanoid robots ever built,” in no small part due to its 28 hydraulic joints and freakishly good balance.
As robots grow more autonomous, society needs to develop rules to manage them. A recent piece in The Economist explored the issue of robot ethics and some of the dilemmas that autonomous machines present.