Artificial intelligence has become perhaps the most discussed technology of the past few years.
The buzz around machine learning, data analytics, autonomous cars and other AI developments never ends. Most of these applications rest on machine learning, which gives a system the ability to study a situation, weigh the possibilities and arrive at a decision on its own.
The oft-quoted example involves banks switching to AI for processing loan applications. In the conventional system, a loan officer would study the application, check the applicant's credentials and financial history, and then decide whether to approve or reject the loan. Now the machine makes that decision, and in the worst case, a rejected applicant could call the bank to ask why the loan was denied, and the person at the bank would not even know.
Cases like these have brought to the center-stage the question: What are the ethical implications of artificial intelligence and its applications?
Bias a Major Concern
Another key issue cited in the ethics debate is machine learning bias.
Studies have repeatedly demonstrated bias in both the data used to train AI systems and in the algorithms themselves.
These issues are best explained through real-life situations experienced by the people affected. When an AI-based system decides, for instance, that a particular individual cannot be issued a driving license, the basis for that decision is open to question. What inputs were made available to the machine? Who was involved in writing the algorithms that taught it to recognize a pattern, and how was that pattern applied to the individual's actual performance during the test?
Many more such questions arise in other fields as well, and they all need to be addressed.
There is also the concern that governments may use AI to gather data on, or snoop on, the public and businesses without their knowledge.
AI Technology and War
Among the ethical doubts surrounding artificial intelligence is the prospect of weapons equipped with the technology, so that wars are increasingly fought by machines and humans need not be exposed to the firing line.
The battlefield could descend into free-for-all mayhem, somewhat along the lines of popular sci-fi plots. Is such mindless destruction justified, when the very idea of humans killing each other is already questioned on moral grounds?
Realization and Solutions
The silver lining in this otherwise depressing situation is a general awakening to the debate over ethical values in the use of artificial intelligence.
In 2017, Microsoft suggested that, just as the Geneva Conventions govern military engagement, a Digital Geneva Convention could be developed with a universal protocol acceptable to countries around the world.
There appears to be some consensus among corporate entities, particularly those based in Silicon Valley, that whatever technology they invent and develop using AI and related tools will remain safe and not harm the interests of the public at large. In essence, they pledge to follow ethical practices when handling AI.
This could be a very good beginning, provided the public can also be made confident in how AI is used.
Some researchers are fully committed to getting technology developers to stay on the right side of ethics; one suggestion is an AI code of ethics similar to the Hippocratic oath taken by medical professionals.
Some Bizarre Situations with AI
On a slightly lighter note, some point out that once autonomous cars become the norm, the first person to be sued after an accident may be the vehicle's owner, even though the owner played no part in how the accident occurred.
Other such odd situations are thrown up by the indiscriminate use of AI.
All things considered, there has to be a balanced approach to any technology. Humans should retain ultimate control over its use, misuse and abuse, so that accountability for any excesses can be clearly fixed.
About the Author
Sophie Ross is a marketing specialist at Security Gladiators. A writer by day and a reader by night, she is specialized in tech and cybersecurity. When she is not behind the screen, Sophie can be found playing with her dog.