Autonomous vehicles can add a new member to their ranks: the self-driving wheelchair. This summer, two robotic wheelchairs made headlines, one at a Singaporean hospital and another at a Japanese airport.

The Singapore-MIT Alliance for Research and Technology, or SMART, developed the former, first deployed in Singapore's Changi General Hospital in September 2016, where it successfully navigated the hospital's hallways. It is the latest in a string of autonomous vehicles made by SMART, including a golf cart, an electric taxi, and, most recently, a scooter that zipped more than 100 MIT visitors around on tours in 2016.

The SMART self-driving wheelchair has been in development since January 2016, about a year and a half, says Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory and a principal investigator in the SMART Future Urban Mobility research group. Today, SMART has two wheelchairs in Singapore and two more at MIT, being tested in a variety of settings, says Rus.
Startup Neurable Unveils the World's First Brain-Controlled VR Game
By Eliza Strickland
Posted 7 Aug 2017 | 13:10 GMT

Imagine putting on a VR headset, seeing the virtual world take shape around you, and then navigating through that world without waving any controllers around, steering with your thoughts alone.

That's the new gaming experience offered by the startup Neurable, which unveiled the world's first mind-controlled VR game at the SIGGRAPH conference this week. In the Q&A below, Neurable CEO Ramses Alcaide tells IEEE Spectrum why he believes thought-controlled interfaces will make virtual reality a ubiquitous technology.

Neurable isn't a gaming company; the Boston-based startup works on the brain-computer interfaces (BCIs) required for mind control. The most common type of BCI uses scalp electrodes to record electrical signals in the brain, then uses software to translate those signals into commands for external devices like computer cursors, robotic limbs, and even air sharks. Neurable designs that crucial BCI software.
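The translation step can be pictured with a toy sketch. This is an illustration of the general idea only, not Neurable's actual software: a scalp-EEG system records a short voltage "epoch" after each candidate stimulus, and the software picks the command whose epoch shows the strongest evoked response. The command names and voltage values below are hypothetical.

```python
from statistics import mean

def pick_command(epochs_by_command):
    """Return the command whose recorded epoch has the strongest
    mean response (a stand-in for a real evoked-potential classifier)."""
    scores = {cmd: mean(epoch) for cmd, epoch in epochs_by_command.items()}
    return max(scores, key=scores.get)

# Hypothetical epochs: the "move_forward" stimulus evoked the largest signal.
epochs = {
    "move_forward": [1.2, 3.5, 4.1, 2.8],
    "turn_left":    [0.4, 0.9, 1.1, 0.7],
    "select":       [0.5, 1.0, 0.8, 0.6],
}
print(pick_command(epochs))  # -> move_forward
```

A production BCI replaces the mean-voltage score with a trained classifier over many averaged trials, but the shape of the pipeline (record epochs per stimulus, score, emit a command) is the same.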
The U.S. Department of Energy wants to make investing in energy technology easier, less risky, and less expensive (for the government, at least).

A new initiative by the DOE's Office of Energy Efficiency & Renewable Energy (EERE) is looking for ideas on how to reduce barriers to private investment in energy technologies. Rho AI, one of 11 companies awarded a grant through the EERE's US $7.8 million program, called Innovative Pathways, plans to use artificial intelligence and data science to efficiently connect investors to startups.

By using natural language processing tools to sift through publicly available information, Rho AI will build an online network of potential investors and energy technology companies, sort of like a LinkedIn for the energy sector. The Rho AI team wants to develop a more extensive network than any individual could build on their own, and they're relying on artificial intelligence to make smarter connections faster than a human could.

"You're limited by the human networking capability when it comes to trying to connect technology and investment," says Josh Browne, co-founder and vice president of operations at Rho AI. "There's only so many hours in a day and there's only so many people in your network."
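The matching idea can be sketched in a few lines. This is a minimal illustration, not Rho AI's actual system, and the investor and startup names and descriptions below are invented: represent each investor's stated interests and each startup's public description as bags of words, then rank pairs by cosine similarity.

```python
import math
from collections import Counter

def cosine(text_a, text_b):
    """Cosine similarity between two texts as bag-of-words vectors."""
    ca, cb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

# Hypothetical profiles scraped from public descriptions.
investors = {"GreenFund": "grid-scale battery storage and solar",
             "DeepCap":   "autonomous vehicle software"}
startups  = {"VoltCell":  "low-cost battery storage for solar microgrids",
             "LaneSense": "perception software for autonomous vehicles"}

for name, desc in startups.items():
    best = max(investors, key=lambda inv: cosine(investors[inv], desc))
    print(name, "->", best)  # VoltCell -> GreenFund, LaneSense -> DeepCap
```

Real systems would use richer text representations and far more data sources, but the core task is the same: turn unstructured public text into a ranked list of plausible investor-startup connections.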
How do you teach a robot right from wrong?

It's a question straight out of a sci-fi movie, but it's also something we may have to grapple with a lot sooner than you might think.

Take a self-driving car that has to choose between hitting a child and slamming its own passenger into a barrier. Or imagine a rescue robot that detects two injured people in the rubble of an earthquake but knows it doesn't have time to save both.

BERTRAM MALLE: How does that robot decide which of these people to try to save first? That's something we as a community actually have to figure out.

NARRATOR: It's a moral dilemma, which is why a team of scientists is attempting to build moral robots. If autonomous robots are going to hang with us, we're going to have to teach them how to behave, which means finding a way to make them aware of the values that are most important to us.

Matthias Scheutz is a computer scientist at Tufts who studies human-robot interaction, and he's trying to figure out how to model moral reasoning in a machine.
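One simple way to make a machine's trade-offs explicit is to encode values as weighted criteria. The sketch below is a hypothetical illustration of that general approach, not Scheutz's or Malle's actual model, and the weights and victim data are invented; its point is that putting the values in code makes the robot's priorities inspectable and debatable.

```python
# Hypothetical value weights a designer might encode for a rescue robot.
VALUE_WEIGHTS = {"survival_chance": 0.5, "urgency": 0.3, "reachability": 0.2}

def score(option):
    """Weighted sum of an option's attributes under the encoded values."""
    return sum(VALUE_WEIGHTS[v] * option[v] for v in VALUE_WEIGHTS)

victims = [
    {"name": "A", "survival_chance": 0.9, "urgency": 0.4, "reachability": 0.8},
    {"name": "B", "survival_chance": 0.5, "urgency": 0.9, "reachability": 0.3},
]
first = max(victims, key=score)
print("Rescue first:", first["name"])  # -> A
```

Changing the weights changes the decision, which is exactly why researchers argue the community, not individual engineers, has to settle what those values should be.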
In the mid-1940s, a few brilliant people drew up the basic blueprints of the computer age. They conceived a general-purpose machine based on a processing unit made up of specialized subunits and registers, which operated on stored instructions and data. Later inventions (transistors, integrated circuits, solid-state memory) would supercharge this concept into the greatest tool ever created by humankind.

So here we are, with machines that can churn through tens of quadrillions of operations per second. We have voice-recognition-enabled assistants in our phones and homes. Computers routinely thrash us in our ancient games. And yet we still don't have what we want: machines that can communicate easily with us, understand and anticipate our needs deeply and unerringly, and reliably navigate our world.

Now, as Moore's Law seems to be starting some sort of long goodbye, a couple of themes are dominating discussions of computing's future. One centers on quantum computers and stupendous feats of decryption, genome analysis, and drug development. The other, more interesting vision is of machines that have something like human cognition. They will be our intellectual partners in solving some of the great medical, technical, and scientific problems confronting humanity. And their thinking may share some of the fantastic and maddening beauty, unpredictability, irrationality, intuition, obsessiveness, and creative ferment of our own.