Sunday, May 12, 2024

“Stop using generative artificial intelligence if the risks are unknown”

by News Room

Technology and legal expert Bart van der Sloot shares ten points of attention for the responsible use of generative artificial intelligence

If the risks of a technology are unknown, it should not be used. This so-called precautionary principle should guide the rapid rise of generative AI. It is one of ten points that Dr. Bart van der Sloot, associate professor at the Tilburg Institute for Law, Technology and Society (TILT), makes in his book “Regulating the Synthetic Society,” published Monday.

To protect society from the harmful consequences of generative artificial intelligence, laws and regulations must be adapted, Van der Sloot argues. His book, available for free as a PDF, comes recommended by experts such as European Court of Justice Judge Maja Brkan and Yale University Professor Luciano Floridi. Van der Sloot emphasizes that generative artificial intelligence is developing at an unprecedented speed. “The possibilities for applications like ChatGPT seem endless, but so are the dangers. How should this technology be regulated to manage the risks and maximize the opportunities?”

He continues: experts predict that in five years, more than 90 percent of all digital content will be wholly or partially generated by artificial intelligence. In such a “synthetic society” it may no longer be possible to determine what is real and what is not. Although generative AI is still in its infancy, the technology can already produce content that is indistinguishable from the real thing. The impact of this new reality on democracy, the rule of law, the press, and personal relationships may be unprecedented, he says.

ChatGPT and Deepfakes

In his book, Van der Sloot shows how synthetic technologies such as ChatGPT, deepfakes, and augmented and virtual reality work, and discusses applications in education, healthcare, and entertainment. He also explains how generative artificial intelligence can be used for fraud, exploitation, and identity theft, and explores deeper effects: a society that no longer shares a common truth (a post-truth society), the privatization of the public sphere, and the loss of individual autonomy and social trust.

Ten points of attention

Ten points that, according to Van der Sloot, require attention if generative artificial intelligence is to be used responsibly:

  • Information position: There is an urgent need for more knowledge about artificial intelligence, both among citizens and regulators. Market participants need to be better informed about the existing rules that apply to artificial intelligence.
  • Strategic autonomy: Public alternatives to essential, privately run AI services should be considered, along with broader export and import restrictions.
  • Technical measures: The introduction of technical criteria, bans, and moratoriums (e.g., on voice cloning) should be investigated.
  • The precautionary principle: The precautionary principle (if the risks of a technology are unknown, it should not be used) may need to be applied to generative artificial intelligence.
  • Effectiveness: The effectiveness of an artificial intelligence application must be demonstrated before it is deployed.
  • Social benefits: The societal benefits and cumulative effects of synthetic technologies must be weighed within the regulatory system.
  • Legal changes: Changes to the legislative framework should be considered, including the criminalization of virtual rape.
  • Oversight: Consideration should be given to establishing a specialized AI oversight authority with broad powers and resources.
  • Cooperation: More cooperation is needed between regulators and government agencies in countries that respect the democratic rule of law.
  • Sanctions: Regulators’ powers to impose fines and award compensation to citizens should be sufficient to have a deterrent effect.
