Wednesday, July 24, 2024

“We can really make artificial intelligence into something beautiful in society. We must.”

by News Room

Moniek Buijzen is Erasmus Professor of Social Impact of Artificial Intelligence and Professor of Communication and Behavior Change at ESSB. She is the initiator of the AICON project, which connects art, science and people in society and experiments on a small scale with how artificial intelligence can be implemented responsibly.

What do you think is the biggest societal challenge of the rise of artificial intelligence?

“Every technological revolution is accompanied by major social changes. The invention of the steam engine and the industrial revolution accelerated the transition from a feudal system to a capitalist one. Roles and power relations shifted. The same is happening now.

“The development of artificial intelligence is inevitable; we cannot stop it. The question is how we can ensure that it is implemented in a way that is worthy of people and the planet. Artificial intelligence offers great opportunities and possibilities, but also carries risks. For example, artificial intelligence is already burdening the environment, because the servers it runs on consume water and energy. The costs and benefits are also unevenly distributed across society. Because the technology is based on existing data, it acts like a huge magnifying glass for what is happening in society.”

Are you saying the algorithms are biased?

“No, it is the results that may be biased, not the technology itself. It shows what is wrong with society. A common example: if you ask an AI to create a picture of a bank manager or a professor, you get a picture of an old white man. In the Netherlands, we have already seen this go wrong in practice with the benefits system. That is not because the technology is sexist or racist, but because the algorithm is based on what was fed into it. It’s a systemic problem. But it’s also an opportunity, because it can reveal things that aren’t right faster and more clearly.”

Illustration: Bas van der Schot

How do you see the role of big technology in the development of artificial intelligence?

“The enormous power of big tech is a real risk. These companies build AI applications with the sole purpose of making a profit. That is dangerous, because public, human and democratic values can become subordinate to it.

“For example, social media algorithms are designed to keep you on the platform as long as possible. The company makes money by making consumers addicted to the product. Companies also won’t shy away from misinformation if it means users stay on the platform longer.

“Privacy and integrity are also public and individual values that are at stake. ChatGPT is built on the work of artists and writers, which is processed and used without attribution. That violates copyright. In addition, it ingests user data, which may include very intimate personal information. People are not always aware that they are feeding data into the system when they use it. But even if you are aware of it, you cannot use these applications without sharing your data. And you might wonder to what extent it is even possible to opt out completely in this society.”

Does science offer a counterbalance to this?

“Technological development is mainly driven by big tech. There are research institutes where researchers are creating safer and more ethical systems, such as the Erasmian Language Model and Bloom, from the European BigScience initiative. But big tech companies attract good researchers with high salaries; offers they can’t refuse. The public sector cannot compete with large companies in funding development.

“On the societal side, the necessary innovation does take place in science. In any case, it has become clear that we need to steer the social changes brought about by artificial intelligence in the right direction. This is reflected in where research money goes and in what public organizations do, such as our national Public Values in Algorithmic Society project. We have had major warnings: the role of automated risk-selection systems in the benefits affair has really woken people up.”

Can governments protect citizens from the power of big tech companies?

“Our democracies have laws and regulations that reflect what we consider acceptable and what we do not. But big companies have enormous resources to evade these regulations. For example, in the Netherlands and Europe we have Kijkwijzer. This now applies not only to television but also to social media. Yet the big players in big tech don’t follow it. They have the power to say that they don’t agree with it and that they, as a company, will decide for themselves what is acceptable or dangerous and what is not. That is undemocratic and very dangerous.

“The EU has far-reaching laws regulating companies, such as the EU Artificial Intelligence Act and the EU Digital Services Act. These laws are now being criticized for focusing too much on regulating content rather than the companies themselves. This makes it easier for companies to avoid liability by saying that users post content, not the company.

“Regulation is really important. Individuals can’t compete with these companies, but governments can. Some see EU regulation as a restriction on individual freedom. But it’s about restricting the freedom of commercial parties.”

Some fear that robots will take over their jobs. Is the fear justified?

“Jobs are indeed disappearing and new types of jobs are emerging. A new divide will open up between people who can control AI and people who can’t. Artificial intelligence also leaves a lot of residual human work behind. My colleague Claartje ter Hoeven (Professor of Organizational Dynamics in the Digital Society, ed.) studies this. Many people in the industry do the leftover work that a machine can’t do. It’s often not the most inspiring work; it’s poorly paid and unprotected. Think of people annotating images to train a machine, or people screening AI-generated text for offensive content. This work has been outsourced to African countries and has already been compared to sweatshop labor.

“The fear of change also exists in creative fields, such as art or journalism, but we also see how these sectors take advantage of opportunities. Our AICON project includes the artist Peim van der Sloot. A symbiosis has really developed between him and the AI. In his Future Jobs project, he created advertising posters about future jobs using ChatGPT and Stable Diffusion.

“If you see artificial intelligence as a partner that can strengthen your own skills, great things can happen. That won’t happen if you assume AI will take over your job while you sit back.”


Illustration: Bas van der Schot

You support proper management of the introduction and development of artificial intelligence. What is needed for this?

“The only way to achieve an acceptable implementation is co-creation. Everyone needs to participate, think along and be a part-owner: not just the rich, not just big tech and not just governments. Together we need to define the conditions under which we want artificial intelligence in society, so that everyone benefits from it; not just humans, but the entire ecosystem in which we live. That requires a different way of thinking.”

The development of artificial intelligence seems to arouse both very utopian and dystopian feelings in people. Why is that so? What do you think about it yourself?

“Yes, I think that mixture of amazement and aversion toward AI is recognisable. On the one hand, I’m impressed with what this technology can do. Personally, I’m now very happy with ChatGPT, which accommodates my limitations due to long covid. I’m really impressed with the texts it produces. Medicine, too, is finding ever more effective applications, for example for identifying cancer and brain diseases.

“But like a lot of people, I sometimes feel uncomfortable interacting with a robot. The idea of machines taking over humanity is a classic fear. I love science fiction that deals with this theme in all sorts of apocalyptic ways, like Foundation and Westworld. But people with real technical knowledge disagree about how realistic that is. So I don’t know whether there is an existential threat in the technical sense.

“Even if there is an existential risk in the future, we should not focus on it now. That distracts from the social risks we face, which are far greater, more urgent and more tangible. Maybe I’m too naive, but I believe that if we deal with these risks properly and define the terms of artificial intelligence in our world together with everyone involved, we can really make it into something beautiful. We must.”
