It’s too risky to allow AI to grow without oversight or control. That’s what the EU’s AI Act is for. In this post, we’ll look at what it entails and how it is going to affect web development going forward.

The European Union has officially passed the Artificial Intelligence (AI) Act. As we saw with the General Data Protection Regulation (GDPR), it’s not just nations and consumers in the EU who are affected by local legislation and regulations. So, when this goes into effect in the summer of 2024, what does this mean for those of us developing digital products?

This guide will cover what the AI Act does, who it affects, the penalties associated with it, how AI risks are assessed and how to prepare for it if you work in web design or development.

What Is the EU AI Act?

The AI Act is a new law passed in the European Union. It aims to accomplish three objectives:

  1. Protect consumers from the risks and threats associated with AI.
  2. Keep businesses from building or using AI-powered technologies that put their organizations or their end users at great risk.
  3. Empower AI innovation but steer it in an ethical and responsible direction.

While this regulation is titled the AI Act, it’s more of a human rights act. The primary purpose is to keep people safe.

And for those of you who’ve used platforms like ChatGPT or web chatbots and think the technology is innocuous, it is… for now. But these technologies are growing at a rapid pace. Even if we use them for mostly harmless purposes at the moment, who knows what they’ll be used for in the future?

The considerations we took into account when initially adopting AI may not apply to other use cases or industries. This is why regulations are needed, so we can evaluate the changing risks over time and help keep users safe.

How will the legislation help us do this?

The act’s guidelines aim to keep safe, traceable and transparent AI technologies on the market, albeit under close monitoring. Anything that could potentially lead to a harmful outcome will be banned.

There will also be stiff penalties for any companies that are found to violate the law. A company found to be non-compliant can be fined up to 35,000,000 EUR or 7% of its total worldwide annual turnover from the previous financial year, depending on the severity of the violation.

Keep in mind that it’s not just Providers, the companies that create AI systems, that are subject to fines. Importers and Distributors that help sell AI systems and Deployers that use AI in their work (like web developers) are as well.

Will the AI Act Only Affect the EU?

In short, no.

The AI Act may be an EU law, but so was GDPR, and it had global implications. If you’re building digital products with AI capabilities, and those products will be used by any business or consumer in the EU, then you’re subject to the guidelines and penalties outlined by the AI Act.

Plus, the EU isn’t the first governmental body to try to regulate AI. That said, it is the first to pass legislation that defines what AI is and that attempts to ban dangerous AI from entering the market or being used by businesses.

Other countries and parts of the world have their own AI regulations in the works. For example:

In South America, Brazil has been trying since 2019 to pass legislation that would minimize the harms caused by AI. Argentina and Colombia have both issued frameworks for the ethical use of AI.

In 2023, China published rules for generative AI platforms like ChatGPT. The rules focus heavily on AI security, as well as on monitoring algorithms that are capable of influencing public opinion.

Although Australia has yet to put anything into action, their proposed regulations sound similar to the EU’s AI Act. The same goes for Africa.

So even if you’re not currently affected by this EU law, new legislation is planned for different markets around the world. At some point in the future, your digital products’ use of AI could violate new AI regulations. And, more importantly, put your users at risk.

Evaluating the Risks of AI

Not all AI is equal with regard to the AI Act. Some AI applications will be left alone once the law goes into effect. They’ll be monitored, but they won’t be banned.

Let’s address the four risk levels of AI so you know how this law will impact the products you build.

Unacceptable Risk

AIs that are deemed a threat to people will be outright banned. Prohibited artificial intelligence includes the following:

  • AIs that subliminally or overtly manipulate or deceive people and cause them to take an action that would lead to serious harm.
  • AIs that exploit weaknesses of certain groups of people with the intent of driving them to act in a way that’s against their own self-interest and safety.
  • AIs that use social scoring that leads to unjustified and unfavorable treatment of certain people.
  • AIs that use real-time biometrics for the purposes of law enforcement in public spaces. There are certain exceptions, like in the search for victims of human trafficking.
  • AIs that use risk assessments and profiling to predict if someone will commit a crime.
  • AIs that build a facial recognition database by scraping the web or CCTV footage for people’s photos.
  • AIs used in educational institutions or workplaces that infer the emotions of people within these buildings.

Some of these banned AIs probably won’t have much effect on the work you do on the web, namely the last four, which deal with law enforcement, policing and institutional surveillance. The first few, though, are quite relevant and sound similar to the reasons why dark pattern lawsuits are on the rise.

That said, it’s important to understand every type of AI that’s now considered unlawful, because as the technology and what you do with it on the web advance, these use cases may become relevant to the work you do down the road.

High Risk

AI applications that violate or negatively affect fundamental human rights or safety are deemed high risk under this law. There are two types of AI products targeted as high risk.

The first are AIs used in products covered by the EU’s product safety laws. Products subject to these laws include things like medical devices, cars and toys.

The second are AIs used in products or sectors that must be registered within the EU. These include areas like critical infrastructure, education, immigration management and law enforcement.

Before any of these products or services can be put on the market, they will first need to be evaluated for AI risks. They will also undergo continual monitoring.

Note: The main category I’d be concerned with is toys. While mobile apps targeted at children might not technically be classified as toys currently, they could be down the road. Much will also depend on what the AI’s function is.

So, if you’re developing gaming apps, educational apps or anything else that kids “play” with, keep your eye on changes or clarifications to this law to see if they have an impact on what you’re building.

Limited Risk

AIs that people directly interact with are generally classified as being limited risk. Chatbots and AI-powered search engines fall under this category.

While the risks are lower with this type of AI, providers and deployers are now required to disclose when content has been created by AI. Fail to notify users that they’re interacting with an AI or that the content has been manipulated, and you could face penalties.

The AI Act’s transparency obligations now require the following:

Rule #1: If a user directly interacts with an AI system, it must be 100% obvious that it’s artificial intelligence. If it’s not, the system must state that it is an AI.
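
As an illustration, here’s a minimal sketch of how a web chatbot could satisfy this rule. The copy and the function name are hypothetical; the Act requires a clear disclosure, not any particular wording or markup.

```typescript
// Minimal sketch (hypothetical markup and copy): label a chat widget as AI
// before the user engages with it. The AI Act requires the disclosure to be
// clear, but it does not prescribe this exact structure.
function renderAiDisclosure(chatContainer: HTMLElement): void {
  const notice = document.createElement("p");
  notice.setAttribute("role", "note");
  notice.className = "ai-disclosure";
  notice.textContent = "You are chatting with an AI assistant, not a human agent.";
  // Prepend so the notice appears above the first message, before any interaction.
  chatContainer.prepend(notice);
}
```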

Rule #2: AI systems that produce text, images, videos or audio must mark their output so it can be detected as synthetic or artificially manipulated. The marker also needs to be machine-readable.
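
The Act doesn’t spell out exactly what “machine-readable” means in practice. Standards like C2PA content credentials are emerging for robust provenance, but as a low-effort illustration, generated media could be tagged with explicit metadata. The attribute and meta names below are assumptions for the sketch, not mandated identifiers.

```typescript
// Minimal sketch (hypothetical attribute and meta names): flag AI-generated
// images with metadata that tools and crawlers can detect.
function markAsAiGenerated(img: HTMLImageElement): void {
  // Machine-readable flag on the element itself.
  img.dataset.aiGenerated = "true";

  // Page-level disclosure; "generator-disclosure" is not a registered meta name.
  const meta = document.createElement("meta");
  meta.name = "generator-disclosure";
  meta.content = "Some images on this page were generated by an AI model.";
  document.head.appendChild(meta);
}
```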

Rule #3: If an AI is used to detect emotion or categorize biometrics, users must be notified of such. Personal data must be processed in accordance with the law as well.

Rule #4: When using an AI to create a deep fake, you must alert users that the content has been artificially generated. If the content is part of a larger piece of art, design or story, you don’t need to disrupt the experience with the disclosure. However, it needs to be included somewhere nearby (like the bottom of the page).
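
One way to do that on the web is to keep the disclosure in a caption directly beneath the media, so the piece itself stays uninterrupted. This is a sketch of one possible pattern, not prescribed wording:

```typescript
// Minimal sketch: wrap generated media in a <figure> with a nearby disclosure.
function wrapWithDisclosure(media: HTMLElement): HTMLElement {
  const figure = document.createElement("figure");
  const caption = document.createElement("figcaption");
  caption.textContent = "This content was artificially generated.";
  figure.append(media, caption);
  return figure;
}
```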

Rule #5: Users must be notified the first time they interact with an AI system or see artificially generated content.
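
A simple way to handle a first-time notice is to record that the user has already seen it. This sketch assumes a localStorage flag and a caller-supplied display function; how you surface and persist the notice is up to you.

```typescript
// Minimal sketch (hypothetical storage key): show an AI notice only on the
// user's first encounter, persisting the acknowledgment in localStorage.
const NOTICE_KEY = "ai-notice-acknowledged";

function maybeShowFirstTimeAiNotice(show: (message: string) => void): void {
  if (localStorage.getItem(NOTICE_KEY) === "true") return;
  show("Parts of this experience are powered by AI.");
  localStorage.setItem(NOTICE_KEY, "true");
}

// Example usage with a plain alert; a real site would render a dismissible banner.
maybeShowFirstTimeAiNotice((message) => alert(message));
```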

Minimal Risk

AIs at this level come with low or no risks at all. While users will encounter them, there’s little to no chance for harm, manipulation or deceit to occur. An example of this is an AI-powered spam filter.

Even if these are the only types of AIs you use when developing digital products, you might still decide to adhere to the regulations and rules laid out in the AI Act.

How Is the AI Act Going to Impact Web Development?

Unlike GDPR, we don’t have years to become compliant with the new regulation. The first provisions, the bans on unacceptable-risk AI, kick in just six months after the law takes effect. So, this is something to start working on now, whether you’re building your own AI products or integrating these systems within the websites and apps you make.

Another way in which this differs from GDPR is that it’s going to require an ongoing commitment. There’s no simple cookie consent notice you can post to your website and be done with it.

Here’s what you need to do to prepare to deal with the AI Act going forward:

First, figure out if it’s going to impact you. If you’re building websites or apps (or have in the past) that reach customers in the EU, then yes, you are responsible for monitoring and managing your AI.

Next, you’ll need to amend your current design and development strategy. Consider the following as you modify your approach to digital product development:

  • If you’re creating your own AIs, review the risk levels to determine which one it falls under. Then follow the guidelines that correspond with it.
  • If you’re deploying AIs, choose your AI providers wisely. Regularly reevaluate the provider and review news related to them so compliance issues don’t arise.
  • If you’re deploying a third-party AI system within a website or app, review the risk levels to determine which one it falls under. Make sure the provider is compliant.
  • Regularly monitor any AI system integrated within your products (see the inventory sketch after this list).
  • When using AI-generated content or systems on your website or app, denote it as such if it’s not obvious to anyone who encounters it.
  • Add an AI Act notice to your Privacy Policy page. Let users know how AI has been used and what you’re doing to keep them safe.
  • Consider other issues that AIs pose besides just the human threat risk.
  • Make ethics a priority in web design and development, even if your AIs don’t pose a serious risk or you’re not using AI tech at all.
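
To make the monitoring and reevaluation points concrete, here’s a minimal sketch of an internal inventory of the AI systems in your products. The record shape and risk labels loosely mirror the Act’s tiers; every name here is an illustrative assumption, not a required format.

```typescript
// Minimal sketch (hypothetical record shape): track the AI systems in your
// products so risk levels and review dates live in one place.
type RiskLevel = "unacceptable" | "high" | "limited" | "minimal";

interface AiSystemRecord {
  name: string;         // e.g., "support-chatbot"
  provider: string;     // who supplies the model or service
  riskLevel: RiskLevel; // your assessment against the Act's tiers
  lastReviewed: Date;   // when compliance was last checked
}

const inventory: AiSystemRecord[] = [
  {
    name: "support-chatbot",
    provider: "ExampleAI Inc.", // hypothetical provider
    riskLevel: "limited",
    lastReviewed: new Date("2024-06-01"),
  },
];

// Flag any system that hasn't been reviewed within the given number of days.
function overdueForReview(records: AiSystemRecord[], maxAgeDays: number): AiSystemRecord[] {
  const cutoffMs = Date.now() - maxAgeDays * 24 * 60 * 60 * 1000;
  return records.filter((record) => record.lastReviewed.getTime() < cutoffMs);
}
```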

With the EU AI Act now in place, everyone involved in deploying artificial intelligence to the general public is responsible for what happens with it.

It’s like integrating a website with a third-party app that then gets hacked. If your customers’ personal data is stolen, it’s not just your third-party partner that’s responsible for the lost data, money, privacy, etc. You’ll also be to blame and liable for fines associated with the infraction.

Even though the AI Act is EU-based, it’s something every web developer needs to take seriously. You may not need to add AI consent notices to your sites and apps the way you did with GDPR, but ongoing transparency, responsible usage and data security all matter a great deal now.


The information provided on this blog does not, and is not intended to, constitute legal advice. Any reader who needs legal advice should contact their counsel to obtain advice with respect to any particular legal matter. No reader, user or browser of this content should act or refrain from acting on the basis of information herein without first seeking legal advice from counsel in their relevant jurisdiction.


About the Author

Suzanne Scacca

A former project manager and web design agency manager, Suzanne Scacca now writes about the changing landscape of design, development and software.
