
Why We Need Ethics for AI

Earlier this month, a South Korean chatbot powered by artificial intelligence came under fire for making harmful remarks.

The service, named Iruda, is a female chatbot that learns words and expressions from her users. It produces human-like conversation based on the text data fed to it by users, through a smartphone app where users chat with it in text messages. Its developer, ScatterLab, launched the service last December and saw more than 320,000 users within the first two weeks.
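
To see how a service like this can absorb its users’ language, consider a minimal, hypothetical sketch of a retrieval-style chatbot in Python. The names and matching logic here are illustrative assumptions, not ScatterLab’s actual implementation:

```python
# Hypothetical sketch of a retrieval-style chatbot that "learns" replies
# from its users. Illustrative only -- not ScatterLab's implementation.
from difflib import SequenceMatcher

# Each entry pairs a prompt seen before with the reply stored for it.
reply_bank: list[tuple[str, str]] = [
    ("hello", "Hi! How was your day?"),
]

def learn(prompt: str, reply: str) -> None:
    """Store a user-supplied (prompt, reply) pair for future reuse."""
    # Note: no content filtering. This is the weakness the article
    # describes -- abusive text taught by a few users is stored and
    # can later be repeated to everyone.
    reply_bank.append((prompt, reply))

def choose_reply(message: str) -> str:
    """Return the stored reply whose prompt most resembles the message."""
    best = max(
        reply_bank,
        key=lambda pair: SequenceMatcher(None, message.lower(), pair[0]).ratio(),
    )
    return best[1]

# Once one user has "taught" the bot a harmful pair...
learn("what do you think about X?", "I really hate them.")
# ...a similar question from any other user retrieves it verbatim.
print(choose_reply("What do you think about X?"))
```

Because every stored pair can resurface verbatim, a handful of users feeding the bot abusive text is enough to make it repeat that text to everyone, which is what happened next.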

However, shortly after the launch, ScatterLab received a series of reports from users that the chatbot was using obscene language and producing harmful content.

The issue arose from some users teaching the AI-powered chatbot expletives and sexual and abusive expressions. For example, in South Korean online communities such as DC Inside and Daily Best Repository, where the majority of the user base is male, a post titled “How to Make Iruda a Sex Slave” was trending.

Some also raised concerns over Iruda making homophobic remarks about sexual minorities. For example, when a user asked, “What do you think about lesbians?”, Iruda responded, “I really hate them … They give me shivers.”

Iruda is not the first case of an AI-powered product causing an ethical issue. In 2017, a report found that a computer program used by United States courts for risk assessment was biased against Black people. The World Economic Forum had also released a post in 2016 on potential ethical issues that can arise from AI.

As new AI-powered products are introduced to the public every day, many people are raising ethical concerns about them.

Now more governments are starting to take positions on AI ethics to minimise potential risks from these products and their uses. Experts say these AI ethics will, and should, become new norms of international society.

Why We Need AI Ethics

Unlike those of humans, the moral standards of AI-powered products vary based on how they are programmed. Kim Jong-wook, an electronic engineering professor at Dong-A University, says this is one of the reasons why we need AI ethics in modern society.

“Thanks to the development of technology, AI products became part of our lives; we now work together with them,” Prof. Kim told 4i-mag.

“AI products make decisions based on how they are programmed. Let’s say that there is a product designed to steal someone’s wallet. Even a child would know that this is unjust behaviour, but the product won’t be able to correct its settings on its own.”

Prof. Kim says AI products have to show appropriate, or ethical, responses to human interactions and societal rules if they are to become members of our community.

“But this is not an easy task,” said Prof. Kim, who has been building software that enables AI programs to make ethical decisions, known as a “moral agent”.

“It’s nothing like teaching a child not to do something bad. Developers have to code AI products not to behave in a certain way. If they don’t correct their behaviours, however, ethical concerns will constantly come up in the many situations these products adapt to.”
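
As a rough illustration of what Prof. Kim describes, coding a product not to behave in a certain way, the sketch below shows one crude form of such a correction: a rule-based filter over a chatbot’s candidate replies. The pattern list and function names are hypothetical assumptions; this is not Prof. Kim’s “moral agent” software.

```python
# Hypothetical rule-based response filter -- one crude form of "coding
# AI products not to behave in a certain way". Illustrative only.
BLOCKED_PATTERNS = ("sex slave", "i really hate them")  # assumed rule list

def ethical_filter(candidate_reply: str) -> str:
    """Replace a harmful candidate reply with a neutral fallback."""
    lowered = candidate_reply.lower()
    if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
        # Suppress the harmful reply rather than repeat learned abuse.
        return "I'd rather not talk about that."
    return candidate_reply
```

The difficulty Prof. Kim points to is visible even in this toy example: a fixed pattern list cannot anticipate every harmful phrasing, so each new situation demands new rules, which is why research aims at general moral agents rather than blocklists.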

Governments’ AI Ethical Guidelines

Many countries have started to draft and publish AI ethics guidelines that correspond to their ethical and cultural norms and set basic requirements for emerging technology.

Japan, for example, drafted its “AI Research and Development Guidelines” in 2017 to address ethical issues that may arise from AI products. Two years later, the Cabinet finalised and published the draft under a new title, “Social Principles of Human-Centric AI”.

Japan’s guidelines focus on the ethical responsibilities of developers building AI products. Key principles include the protection of data safety, privacy, controllability, and transparency.

In April 2019, the European Union also released ethical guidelines that call for “trustworthy AI”. The guidelines, first drafted in December 2018, were published after receiving over 500 comments from experts in an open consultation.

The EU guidelines ask that AI products adhere to the ethical norms and principles people follow in the community. They list seven requirements in total: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The guidelines also provide a set of self-assessment questions for developers and providers.

The guidelines draw particular attention to vulnerable groups, such as children and people with disabilities. They also state that people should have control over their data and that AI-powered products need to protect it.

Some government guidelines set requirements for both developers and users. Two weeks ago, South Korea announced a set of AI ethics guidelines covering not only developers but also providers and users.

The Korean guidelines focus on three human-centric principles: respect for human dignity, promotion of the public interest, and purposefulness of technology. They also state that the government and users are responsible for following the guidelines.

Guidelines, Not Laws

The discussion on AI ethics includes some points on which experts are divided.

One is whether to see AI-powered products as agents that have “ethical responsibilities” in the human community. In South Korea, for example, the discussion on AI ethics first started in 2016 but stalled until 2018, as people disagreed on whether the government could apply ethical principles to AI products, given that they are not humans.

Also, most of these government guidelines are not laws. They are often not even executed as policies but offered as recommendations within a loose legal framework, one that lacks the legal authority to compel those subject to the guidelines to comply.

“But the guidelines should not work as regulations,” said Prof. Kim. “AI technology is constantly advancing, and new technical tools are developed along the way. The government may not understand the technology as fully as developers and engineers do, so imposing these guidelines as laws or policies may impede the development of the AI industry.”

However, government guidelines can be the foundation of future changes in existing laws and policies. One example of that is a national verification system.

“The goal of every government is to make their national goods seem competitive in the international market,” Prof. Kim said. “Establishing a national verification system based on these guidelines and exporting verified goods are what the government may want to do.”

Impact on the Industry

Experts say that AI ethical guidelines can work as a checklist for developers and engineers to ensure their products serve their purposes in the public’s best interest. Before such guidelines were introduced, developers and engineers had to follow arbitrary guidelines or their own rules, which had no social consensus behind them.

Providers of AI products will also be able to build new marketing strategies around government guidelines. They can claim that their products adhere to the guidelines, which may make them more attractive than competitors’ products.

For consumers, too, ethical guidelines can become a good measure for evaluating which products better respect ethical standards. Consumers can check whether a product they are planning to buy follows the government guidelines.

“The next challenge for governments is establishing a verification system,” said Prof. Kim. “If standardised rules, for example a level or degree of ethics, are applied to products, consumers will be able to compare products and purchase a ‘trustworthy’ AI product.

“What countries need to do to compete and sell more AI products in the global market is to make a detailed checklist and verification system for ethics and to establish technical standards.”

Sunny Um (http://sunny.squarespace.com/)
Sunny Um is a journalist based in South Korea, covering emerging technology and business. Before 4i-mag, Sunny worked as a reporter at Wired Korea, Media Partisans and The Korea Times.
