Anyone who uses everyday AI assistants like Alexa or Siri has probably heard of the privacy controversies around these devices eavesdropping on people’s conversations. This is just the tip of the iceberg of unethical human behaviour carried out with the help of Artificial Intelligence. To counter such activities, Ethical AI has become quite a buzzword.

With the Internet of Things, Artificial Intelligence and quantum computing, humans feed the machines, and if that feed is embedded with bias, the machines follow suit. From algorithmic bias to data breaches, AI has been at the centre of controversies over moral and ethical practice. A large number of IoT and AI devices are released to market before they are proven secure. Sensitive data such as medical records, everyday personal conversations, consumers’ routine travel data and information about homes and workplaces have leaked in the past, and cybercriminals have made use of them. Users are generally poorly informed, or not informed at all, about how their data is used. The World Economic Forum has described AI as both a hugely impactful invention and a potential existential threat. Where do we start to correct this? With AI still seemingly new yet growing at an unimaginable rate, is it too late to nip the problem in the bud?

The social implications of technology are vast and far-reaching. However, negative outcomes like hacking have shaken consumers and planted doubts in their minds. The following are some of the reasons why ethical AI is at the centre of discussion today.

1. Ethics Gap

Researchers long depended on collecting data and feeding it into systems, and only recently have they recognized the impact of algorithmic bias, introduced both intentionally and unintentionally. Conferences and papers presenting advances to existing systems have not made ethics part of their review processes; NeurIPS, one of the field’s major annual conferences, is a prime example. Responsibility is thus placed solely on the moral judgement of the individuals involved in a project, with no system in place to check for biases. They are detected much later, and this ethics gap has allowed biases around gender, age and ethnicity to infiltrate systems such as facial recognition. A basic structural change is required to prevent this from the beginning.
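To make the idea of a systematic bias check concrete, here is a minimal sketch in Python. It computes the demographic parity gap, the difference in positive-prediction rates between demographic groups, for a hypothetical binary classifier; the function name and data are illustrative, not taken from any particular library.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between groups.
    A gap near 0 suggests the model treats groups similarly on this metric."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical yes/no decisions from a screening model for two groups.
preds  = [1, 0, 1, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A review process could require such a metric to stay below an agreed threshold before a system ships. Demographic parity is only one of several fairness criteria, though, and no single metric will catch every kind of bias.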

2. Machines over Humans

Unlike in Hollywood movies, where machines take over the world and threaten the existence of the human race, the practical problems AI can pose are more mundane. Systems fed with enough data and intelligence might anticipate every move of the humans handling them and react against them using that “intelligence”. For instance, if unplugging a machine is the final resort to stop it from acting against an individual, it may anticipate that move and prepare to defend itself accordingly. Humans are not the strongest creatures on Earth, but we are the most intelligent. What if machines learn evil from us?

3. Accountability

When an AI system fails, who is to be held responsible? The programmers? The end users? Building an AI system takes so much effort from so many people that responsibility ends up widely distributed; no single individual can be blamed for a malfunction, and accountability becomes almost impossible. The criteria we apply to humans performing social functions, such as transparency, predictability, responsibility and auditability, must therefore be built into the machines properly at every stage and by every individual involved, because the parts are so interconnected that altering one later means redoing the entire system. The computerizing society should settle on a (non-exhaustive) list of ethical criteria before feeding data into AI systems.
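One concrete way to preserve auditability is to record every prediction together with the inputs and the model version that produced it. The sketch below is a minimal illustration, not a production design; the function, file name and stand-in model are all hypothetical.

```python
import json
import time
import uuid

def audited_predict(model, model_version, features, log_path="audit_log.jsonl"):
    """Run a prediction and append an audit record, so the decision can
    later be traced back to a specific model version and input."""
    prediction = model(features)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return prediction

# Stand-in "model": a trivial loan-approval rule, for illustration only.
approve_loan = lambda feats: feats["income"] > 50000
print(audited_predict(approve_loan, "v1.2", {"income": 62000}))  # True
```

With such a log, a failure can at least be traced to a specific version and input, which is a precondition for assigning responsibility at all.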

Potential Solutions

Inclusivity in employment

A diverse set of employees helps reduce bias. For instance, a woman of colour on the team is more likely to notice when data that discriminates against people like her is being fed into the system. Diversity in age, gender, ethnicity, sexual orientation and more is the first arena where bias can be stopped, because people who have faced bias in real life are less likely to pass it on to other humans or to machines. From a more general perspective, diversity in the workplace makes society a better place for everyone to live in, and breaking the privilege of the majority would resolve many of today’s social issues.

Getting businesses to take a stand against unethical practices

Google announced in 2018 that the company would not be involved in anything that compromised the ethical use of AI. It released its own ethical code of practice after criticism of its involvement with a US government weapons programme, and since then it has refused to participate in government programmes that conflict with that code. This has to be widely enforced across all companies, big and small. Constantly reviewing ethical codes, and updating them as society improves, will result in the proper practice of AI. Big firms should come forward and share their measures publicly for others to follow.

Data minimization

Firms should collect only the data they require, respect the obligations involved in using it, and never use it beyond the intended purposes. Objectives should be clearly laid out, and only the information required for those objectives should be collected. At every stage, firms must be able to demonstrate compliance with these obligations, building trust among consumers and their own employees. People should be shown transparently where their information is used and why; this displays the ultimate respect for someone willing to share their information.
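A simple way to enforce this in code is to whitelist, per stated purpose, the fields that may be stored and drop everything else. The sketch below is a minimal illustration of the idea; the purposes, field names and records are invented for the example.

```python
# Fields a given purpose is declared to need; anything else is dropped.
PURPOSE_FIELDS = {
    "shipping":   {"name", "address", "postcode"},
    "newsletter": {"email"},
}

def minimize(record, purpose):
    """Keep only the fields declared necessary for the given purpose."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

user = {
    "name": "Ada", "address": "1 Main St", "postcode": "AB1 2CD",
    "email": "ada@example.com", "medical_history": "...",
}
print(minimize(user, "shipping"))
# {'name': 'Ada', 'address': '1 Main St', 'postcode': 'AB1 2CD'}
```

Because the purpose-to-fields mapping is explicit, it doubles as documentation a firm can point to when demonstrating compliance.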

The ultimate understanding is that we should build a better society around us so that machines are taught the same. A machine “learns” from humans, and it is our responsibility to ensure it is fed the right lessons. Negative reactions to ideas like treating robots as humans, and opinions that machines can never be built bias-free, surface in every discussion around ethical AI. Respecting individuals and the data surrounding them is the least we can do to prove that we possess a sense of right and wrong. It is essential that we do not forget the basics of humanity before passing our intelligence on to machines.