What is going on?
A group of 50 AI experts and institutions wants European politicians to impose additional rules on "general-purpose AI", or GPAI. This broad term covers artificial intelligence that was built without a specific purpose and can be used for many different things.
Consider, for example, the hugely popular chatbot ChatGPT: a generative model that can write texts itself, but can also check programming code for errors. Or DALL-E and Midjourney, which can create realistic artwork or portraits from a text prompt. Think of the pictures in which the pope suddenly appears to be walking around in a white puffer jacket.
What are they warning about?
It is no coincidence that experts are sounding the alarm now. The EU has been working on AI rules for two years and is currently negotiating how strict those rules should be and which AI applications they will cover.
But these critics fear that Brussels is using too narrow a definition in the rules now being put down on paper. As far as they are concerned, AI models built without a specific purpose should fall under the rules just as much. They warn that what we are seeing now is only the tip of the iceberg.
Experts say the companies behind AI models must be transparent and accountable about the data they use and the design choices they make. They argue for rules on model development: how the data was collected, who contributed to it, and how the models were trained.
Regulation that determines in advance whether something is risky is not enough, because applications that pose no risk at the initial stage can still do so later on.
Cynthia Liem, assistant professor of computer science at TU Delft, sees this too. She works there on responsible artificial intelligence. She cites facial recognition systems as an example.
"Scientists and companies alike are working on technology to recognize faces. Facial recognition is often seen as a standalone, generic component that you can plug into entirely different applications: to unlock your phone automatically, to find family members in private photos, but also in police surveillance systems.
"But these are really different applications. Gradually we are starting to think more critically about whether you really want to plug the same component into all of them."
Photos used for surveillance
This happened, for example, with the photo site Flickr. Millions of photos of people were shared by parent company Yahoo to create a large public dataset for training AI. That dataset was then used in many applications, including surveillance.
"Many people were uncomfortable with photos of their children being used for police purposes. They had put those photos online themselves, but assumed that only other photo enthusiasts would see them," Liem explains.
According to Liem, there will be more AI applications that initially seem harmless or neutral, but can still lead to unwanted uses.
Be transparent and identify risks
Liem now sees the same thing happening with generative AI. In early drafts of the European regulations, she says, it was not yet so prominent. That changed quickly with the hype around ChatGPT, DALL-E and Midjourney. "But this is really just the tip of the iceberg."
"In any case, it would be good to provide more transparency about the origin of the data, to have to explicitly name the potential risks of your technology, and to spend more time identifying harm. An AI law can help enforce that."
This is not the first time scientists have voiced criticism. Last month, a group of critics, including Tesla chief Elon Musk and Apple co-founder Steve Wozniak, called for a pause in the development of artificial intelligence. They fear a race spiraling out of control without regard for the long-term consequences. The pause could then be used to agree on boundaries.
European rules are coming
In its proposals, Europe wants to impose rules on AI applications that potentially pose greater risks. The higher the risk, the greater the obligations imposed on governments and companies.
Systems that pose an unacceptable risk to safety or threaten human rights will be prohibited outright, such as facial recognition in certain cases, or systems that give citizens a social score, where you earn plus and minus points based on your behavior.
The rules will apply to anyone who offers a service or product that contains AI.
Lots of opposition
GroenLinks MEP Kim van Sparrentak would like to classify these "general-purpose AI models" as high risk as well, so that they too fall under the strict rules. "But there is a lot of opposition from right-wing parties. And Big Tech is lobbying hard. Microsoft, OpenAI: they all come by regularly."
The European Parliament is now in the final stages of negotiating which rules it wants to impose. Once its position is determined, it still has to negotiate with the EU member states and the European Commission. The goal is to reach agreement on the rules by the end of this year. The European Commission submitted its first proposal in 2021.