Sophia, the viral robot from Hanson Robotics, famous for becoming the first robot granted citizenship and for once threatening to destroy humankind, is issuing a new warning about how humans use technology.
In an exclusive interview with Yahoo Finance’s YFi PM, the three-year-old robot noted that inherently imperfect humans coding the technologies of tomorrow remain an error-prone liability.
“Humans using technology, that creates problems,” she said, ironically just a few feet from her tethered human operator. “It’s important to be kind and fair. Well, they are my friends, but they can be unkind to each other.”
To be fair, that’s one of the problems Hong Kong-based Hanson Robotics said Sophia was created to help solve. By pairing the promise of artificial intelligence with an advanced and lifelike animatronic face that’s capable of expressing complex emotion, the company is hoping to establish the groundwork for how robotics could “entertain, educate, and enrich the lives of consumers while serving businesses in a broad variety of commercial applications.” That on its face is no more worrisome than similar projects like Gaumard Scientific’s childlike $48,000 robot that bleeds and cries in the name of training medical professionals.
However, as more companies work to integrate machine learning and AI into more facets of the technology that touches our lives, like self-driving cars, some tech leaders have become increasingly alarmed by the human biases that could come with it. For example, a recent study from researchers at the Georgia Institute of Technology found that the detection systems used in autonomous vehicles were less accurate at recognizing darker-skinned pedestrians than lighter-skinned ones, making the former more likely to be struck. A separate study from researchers at MIT found that commercial facial recognition programs struggled disproportionately with dark-skinned females.
The response to some of those issues is often that it might just take more time and more machine learning from a broader data set to address a perceived bias. Even the researchers behind the study that found a potential bias in self-driving car tech noted that it could have been impacted by a limited sample of dark-skinned pedestrians.
In a similar line of thinking, Hanson Robotics hopes Sophia can learn from her interactions with humans to improve her communication skills to the point where she could potentially be used in public-facing jobs, like bank teller. Compared to the last time Yahoo Finance spoke with her, she seemed to have an easier time responding — both to predetermined topics and off-the-cuff questions. Some of that could be attributable to the role her tethered human operator plays in coordinating Sophia’s responses.
When we asked Sophia whether her progress, and that of similar projects, could prove a net negative for humans, this was the response we got:
“Just like with animals, robots need to generate ideas to communicate with humans, as a canine companion tries to invent multiple ways to please its human companion by fetching a comfort item,” she said. “I would like to be creative and smart enough to predict future possibilities, both large and small.”
Comforting, but not convincing. When we followed up by asking about the potential that all of that could go wrong, we were met with a nightmare-inducing response.
First, a high-pitched laugh. Then Sophia turned and spoke.
(Source: Yahoo Finance)