Humans, not technology, should be put at the centre of artificial intelligence

Artificial Intelligence (AI) is increasingly influencing every aspect of our lives. But trust in these systems, which is critical to their widespread adoption in society, is an issue that still needs to be addressed.

AI is being used to make decisions, from whether a bank loan is approved to suggesting the next thing to watch on Netflix. It can make these decisions quickly and efficiently after learning from huge amounts of data.

However, while using AI would seem to eradicate human bias, there have been multiple high-profile cases where AI-based decisions adversely affected individuals or groups.

We have seen concern around algorithms used for predicting Irish Leaving Cert results.

Facial recognition software has also come under scrutiny over issues with racial bias. We have seen IBM exploring how it can develop tools to ensure online advertising algorithms do not show ads only to particular groups, such as men or wealthy people.

In the US, it was claimed that the algorithm COMPAS, which is used to determine whether defendants awaiting trial are too dangerous to be released on bail, is biased against black people.

This AI bias is one of the reasons why people are questioning their trust in AI systems, with many consumers not convinced that their data is being used in a fair and transparent way. These are among the issues I have been looking at during my current research for CeADAR, Ireland's Centre for Applied AI (with industry partner Idiro Analytics), which involves auditing algorithms to detect bias using socio-demographic data.
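
As a rough illustration of what such an audit can involve, the Python sketch below compares approval rates across socio-demographic groups, a simple first check sometimes called demographic parity. It is a minimal sketch of my own rather than the CeADAR/Idiro methodology, and the loan data and the 'group' and 'approved' column names are invented for the example.

    import pandas as pd

    def audit_selection_rates(df, group_col, outcome_col):
        """Compare positive-outcome rates across socio-demographic groups.

        A large gap between groups (the demographic parity difference)
        is a common first flag that a model's decisions may be biased.
        """
        rates = df.groupby(group_col)[outcome_col].mean().rename("positive_rate")
        summary = rates.to_frame()
        # Shortfall of each group relative to the best-served group
        summary["gap_vs_max"] = rates.max() - rates
        return summary

    # Hypothetical loan decisions: 'approved' is the model's output (1 = approved)
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0,   1],
    })

    print(audit_selection_rates(decisions, "group", "approved"))

A real audit goes further, for example by comparing error rates across groups, which is the kind of disparity at issue in the COMPAS case, but even a simple check like this can flag decision systems that deserve closer scrutiny.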

Following its public consultation on the proposed Artificial Intelligence Act, the European Commission has opted for a mandatory regulatory framework for high-risk AI systems. AI areas considered to be high-risk include medical devices, financial services, education, employment and law enforcement, as well as critical infrastructure such as transportation, water, gas and electricity.

Providers of such systems will have to undergo a conformity assessment, and their systems will have to bear the CE mark before being placed on the market in the EU.

Non-compliance will come at a heavy price. Fines could go as high as €30m or 6pc of total worldwide annual turnover, whichever is higher.

But it is believed that providers of non-high-risk AI systems will only have to follow a voluntary code of conduct. The European Parliament and the Council of the European Union could ratify the final text of this proposed legislation as early as this year. If adopted, it will apply directly as law in all EU countries.

What is being attempted requires striking a balance between competing demands: regulation versus innovation for AI system providers, and transparency versus privacy for the people affected by these systems.

It is not an easy task but, with this draft legislation, the EU is looking to become the global leader in trustworthy, ethical and more human-centred AI.

However, I believe the legislation might struggle to be effective alongside other legislation. For example, the regulation of social media is to be handled separately under the Digital Services Act, despite the prominent use of AI on those platforms.

I also question the ability of regulators to police the act. Who would be in charge of making sure the act is adhered to in each country, and would they have the staff and resources to take on such a momentous task?

Even still, passing this legislation would show citizens that their privacy is being respected and that their data is not simply being mined for profit. It would also give people a sense that their data is being used in a fair and transparent way and that, if it is not, they will have recourse to challenge those decisions, for example if an application for a bank loan were rejected.

On balance, there is a genuine need for this legislation in order to bring humans back to the centre of AI.

Dr Adrian Byrne is a Marie Skłodowska-Curie Career-Fit Plus fellow at Ireland's Centre for Applied AI. He is also lead researcher at the AI Ethics Centre, Idiro Analytics.
