TechsPlace | Artificial intelligence (AI) is seeping into every aspect of modern life, whether it is choosing what to watch on YouTube, tagging your friends on Facebook, or solving complex problems. Some scientists have even started using the technology for space exploration and scientific discovery. Others are experimenting with AI algorithms in critical sectors such as healthcare, national surveillance, driverless vehicles, and criminal sentencing.
We are at a unique point in history where AI-driven machines could soon be making major decisions that affect our lives. A recent study by PwC found that AI could boost the global economy by about $15.7 trillion by 2030. However, the technology still poses challenges, especially when it comes to explaining the reasoning behind the decisions it makes.
A few years ago, there was an uproar on social media when a robot working in an Amazon fulfillment center accidentally ruptured a can of bear repellent. The controversy was not so much that the incident sent 24 human workers to the hospital, but what life will be like when robots take over human jobs.
As it turns out, the excitement about the endless possibilities of AI is tempered by the fear that smart machines could render humans obsolete. A study by Pew Research shows that about three quarters (72%) of Americans are worried about the possibility of machines performing most of the tasks usually done by humans.
There is also the unsettled question of AI's decision-making ability. The above incident raises concerns about the extent to which we can tolerate robot mistakes. A 2016 study suggests that most people are willing to accept flaws in fellow humans, but not in machines.
And this brings us to the question: Can we trust a machine that thinks like us?
While research has shown that new technologies like artificial intelligence can outperform humans at some tasks, opinions on whether we should trust an AI-driven machine still vary.
Modern artificial intelligence is still young, and most big tech companies have so far invested mainly in research. They have yet to take a significant step toward empowering machines to make the critical decisions currently made by humans.
Can We Rely on AI to Tell Us the Truth?
AI systems, especially voice assistants, have an obligation to tell us the truth. Fortunately, we can still verify some of their answers. For instance, if you step outside and find it is not as warm as your assistant told you, you have a reason to abandon it.
Perhaps more interesting is that these voice assistants have an incentive to lie to us every time they talk. Most of the time, they make us believe that they like us. In fact, all assistants sound like humans because their developers want us to trust them. They understand that we are wired to connect emotionally to human voices. Sounding human increases our trust, but not necessarily because AI systems are any more trustworthy.
Worries about AI
AI systems analyze and learn from vast amounts of information to mimic how the human brain works. These algorithms grow through adaptation and machine learning, enabling them to recommend actionable insights.
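That adapt-from-data loop can be shown in miniature. Here is a minimal, purely illustrative sketch (the data and the target rule are invented for this example) of a single artificial "neuron" learning a simple pattern by repeatedly adjusting itself to reduce its own error:

```python
# Toy illustration of learning by adaptation: a one-neuron model
# starts knowing nothing and gradually adjusts its parameters to
# approximate an invented rule, y = 2x + 1, from examples alone.

examples = [(x, 2 * x + 1) for x in range(-5, 6)]  # toy training data

w, b = 0.0, 0.0        # weight and bias, initially uninformed
learning_rate = 0.01

for epoch in range(1000):
    for x, target in examples:
        prediction = w * x + b
        error = prediction - target
        # Nudge each parameter against the error (gradient descent step)
        w -= learning_rate * error * x
        b -= learning_rate * error

print(round(w, 2), round(b, 2))  # ends up close to the true rule: 2 and 1
```

Large AI systems perform essentially this loop, but with millions of parameters and vast datasets, which is exactly why their internal reasoning becomes hard to follow.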
But just like people, smart machines will have to earn our trust over an extended period. The implications of allowing an artificially designed 'brain' to make critical decisions are profound. Sure, a smart machine has great potential to benefit society. But if we can't understand its thought process, how can we trust its decisions? AI algorithms have become so complicated that even their designers don't always understand precisely how they evolve.
Consider these situations, which illustrate how frustrating it is when you don't know the process used to reach a conclusion:
- Imagine you have applied for a mortgage and the bank declines the loan. You will be frustrated if the bank can't tell you exactly why.
- Likewise, you will be frustrated if you are refused a critical service like health insurance because of an unjustified risk-assessment algorithm.
- Or, even more seriously, a law enforcement officer arrests you on suspicion of planning a crime, based solely on a predictive model with no solid evidence.
AI Could Be Beneficial, But at What Cost?
The advent of neural networks, with their numerous interconnected data processors, can give rise to better insights, ranging from more reliable medical research and diagnosis to better weather forecasting.
If we can teach a machine to solve complex problems, such as resolving Mac error codes as soon as they occur, or speeding up a Mac, a task usually handled by Mac and PC cleaning software, then we will have reached a point where AI brings positive social and economic impact at a personal level. One unique advantage of AI is that it can do things humans cannot.
But what happens when a system becomes too difficult to understand?
AI, in its current form, is still hard to reverse engineer and explain. So, until the industry finds ways to make AI algorithms easy for humans to understand and control, the technology could end up being counterproductive. As it stands, AI is only as good as the information and instructions we give it.
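One direction researchers take toward explainable AI is to probe a model from the outside: nudge each input and see how much the decision moves. The sketch below is a simplified, hypothetical illustration of that idea (the scoring model, its weights, and the applicant's features are all invented for this example, not taken from any real system):

```python
# Minimal sketch of perturbation-based explanation: treat the model
# as a black box and measure how much its output shifts when each
# input feature is nudged, revealing which features drive the decision.

def loan_model(income, debt, age):
    # Stand-in "black box" scoring model (invented for this example)
    return 0.6 * income - 0.9 * debt + 0.05 * age

applicant = {"income": 50.0, "debt": 20.0, "age": 35.0}
baseline = loan_model(**applicant)

importance = {}
for feature in applicant:
    perturbed = dict(applicant)
    perturbed[feature] *= 1.10          # nudge one feature by 10%
    importance[feature] = abs(loan_model(**perturbed) - baseline)

# Rank features by how strongly they moved the score
for name, delta in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(name, round(delta, 2))
```

Real explanation methods are far more sophisticated, but even this crude probe would let the bank in the mortgage example above point to the feature that actually sank the application.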
If we are to trust AI to run software or manage critical aspects of our lives, for instance, making deductions about people's motivations, then we should be ready to accept the mistakes it makes.
Even if a machine were designed to mimic how humans make judgments, it would be difficult for the computer to arrive at a conclusion for the same reasons we would.
People are more likely to trust AI only if intelligent machines can make accurate decisions by applying the same psychological processes that lead us to ours. For now, people will probably trust a machine that complements humans rather than replaces them.