AI: Is it flawed? Is it too good?

The conversation about deploying artificial intelligence often centers on the trustworthiness of AI applications: Are they producing reliable, unbiased outcomes and maintaining data privacy?

AI may not be as flawless as we think it is; it has weaknesses of its own. A research team at the Department of Computer Science at the University of Copenhagen is on track to reveal them.

Take an automated vehicle reading a road sign as an example. If someone has placed a sticker on the sign, this will not distract a human driver. But a machine may easily be put off because the sign is now different from the ones it was trained on.

“We would like algorithms to be stable in the sense that if the input is changed slightly, the output will remain almost the same,” says Professor Amir Yehudayoff, who heads the group of researchers. “If the algorithm only errs under a few very rare circumstances, this may well be acceptable. But if it does so under a large collection of circumstances, it is bad news.”
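To make the notion of stability concrete, here is a minimal sketch in Python (purely illustrative, not the researchers' definition or code): a toy linear classifier and a hypothetical looks_stable check that nudges an input by a tiny amount many times and reports whether the predicted label ever flips. Every name and number in it is invented for the example.

```python
# A toy, purely illustrative sketch of the stability idea described above --
# NOT the researchers' definition or method. The classifier, dimensions,
# perturbation size, and trial count are all invented for the example.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear classifier: the sign of w . x decides the label.
w = rng.normal(size=20)

def predict(x):
    return int(np.dot(w, x) > 0)

def looks_stable(x, epsilon=0.01, trials=200):
    """Return True if random perturbations of size epsilon never flip
    the predicted label for this particular input x."""
    baseline = predict(x)
    for _ in range(trials):
        noise = rng.normal(size=x.shape)
        noise *= epsilon / np.linalg.norm(noise)  # scale noise to length epsilon
        if predict(x + noise) != baseline:
            return False
    return True

# An input far from the decision boundary is hard to flip; an input sitting
# right next to the boundary can change its label after a tiny nudge -- much
# like a road sign whose appearance is altered by a sticker.
x_far = w.copy()                                  # clearly on one side
x_near = rng.normal(size=20)
x_near -= w * np.dot(w, x_near) / np.dot(w, w)    # project onto the boundary
x_near += w * 1e-4                                # then add a tiny margin

print("far from boundary :", looks_stable(x_far))   # expected: True
print("near the boundary :", looks_stable(x_near))  # expected: usually False
```

In this sketch, an input far from the classifier's decision boundary keeps its label under every perturbation, while an input sitting right next to the boundary can be flipped by a nudge as small as the proverbial sticker on a road sign.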

In addition, the researchers from the University of Copenhagen have made a groundbreaking discovery: they have mathematically demonstrated that, beyond basic problems, it is impossible to develop AI algorithms that are always stable.

The scientific article describing this result has been accepted for presentation at the Foundations of Computer Science conference (FOCS), one of the most reputable venues in theoretical computer science.

This finding, however, is not necessarily bad news. It could lead to new methods for testing algorithms that are more reliable than current ones, contributing to the development of more stable AI in the future. For instance, “some companies might claim to have developed an absolutely secure solution for privacy protection. The methodology can help to pinpoint the weaknesses and help improve the model,” says Amir Yehudayoff.

There is also the concern that AI may be too good for us to use, potentially exposing a company to legal liability.

Let’s say a restaurant chain uses AI to detect pathogens and other potential problems in its food. Knowing this information would be good for the public, but it would also create liabilities for the company.

AI can also provide excessive amounts of data, which may not benefit human decision-making. The Goldilocks principle, illustrated by a psychology experiment known as the island defense task, shows how too much information can cloud our thinking, and how we need just the right amount of data to make the best decisions.

Back to the example: if the AI system detects potentially dangerous pathogens in the food and something terrible then happens, it implies that the company was aware of the problem but did not take appropriate action to address it. In such a case, someone who gets sick or is harmed by the contaminated food may argue that the company should have known about the risk and taken steps to prevent it.

Recently, Alexander, along with co-authors Professor Aaron Smith and Professor Renata Ivanek, published a paper in Frontiers in Artificial Intelligence that demonstrates the importance of adopting AI despite these risks. “We need ways for businesses to opt in and try out AI technology,” Alexander said.

Adopting AI would open up the possibility for companies to develop the benefits further and mitigate the risks involved. This would give courts, legislators and government agencies more context when considering how best to use the information generated by AI systems in legal, political, and regulatory decisions.

Author: Daniel C

I am a dedicated student pursuing K–12 education who has developed an insatiable thirst for knowledge, with particular interest in subjects such as psychology and economics. Throughout my academic journey, I consistently sought out opportunities to expand my knowledge and engage in intellectual discussions. I eagerly anticipate future opportunities to delve deeper into the realms of economics and psychology, with the goal of making a positive impact on both my own life and the lives of others.
