If you’ve been using AI tools like ChatGPT, Claude, or Midjourney, or really anything that helps with writing, art, or coding, you might have heard about big new rules being rolled out in Europe.
Throughout July 2025, the conversation around the EU’s AI Act has gotten louder—not just in policy circles, but across tech forums, startups, and even classrooms. Some companies are complying. Some are pulling out of the EU altogether. Others are just… stalling. What’s going on? And should we—as students, users, or developers—care?
Let’s walk through it.
—
What is the EU AI Act?
The EU AI Act is basically Europe’s way of saying: “Let’s set some ground rules for AI before it gets out of hand.” It was officially passed in early 2024 and is the first major law in the world to comprehensively regulate AI.
The Act sorts AI systems into four risk levels, from minimal up to unacceptable. The higher the risk (like AI used in education, surveillance, or healthcare), the tighter the controls. It also pays close attention to what’s called General-Purpose AI: the kinds of models that can do a bit of everything (like GPT).
So far, this law has been rolled out in stages, and July 2025 is a key milestone. Here’s why.
—
What happened in July 2025?
On July 10, 2025, the EU published a new set of guidelines for companies building general-purpose AI: the General-Purpose AI Code of Practice.
This document isn’t law—not yet—but it’s important. It outlines what companies should do if they want to stay on the EU’s good side, especially with tougher rules arriving soon. Things like:
Be transparent about how AI models are trained (there’s a sketch of what that could look like after this list)
Respect copyright when using data
Take safety precautions seriously
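To make that transparency point a little more concrete, here’s a minimal sketch of what a machine-readable training-data disclosure could look like. To be clear, this is hypothetical: the class name, fields, and values are invented for this post, not an official EU template (the actual Code of Practice comes with its own documentation forms, not code).

```python
# Hypothetical disclosure record; every field name here is invented for
# illustration and is NOT an official EU or Code of Practice format.
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    model_name: str
    provider: str
    training_data_summary: str        # public summary of training data sources
    copyright_policy_url: str        # where the provider explains copyright compliance
    safety_measures: list[str] = field(default_factory=list)

disclosure = ModelDisclosure(
    model_name="example-gpt",                       # made-up model
    provider="Example AI Ltd.",                     # made-up provider
    training_data_summary="Publicly available web text plus licensed datasets.",
    copyright_policy_url="https://example.com/copyright-policy",
    safety_measures=["red-team testing", "incident reporting process"],
)

print(disclosure)  # in practice, something like this would be published with the model
```

The point isn’t the exact fields; it’s that “be transparent” ultimately turns into concrete artifacts like this that companies have to produce and keep up to date.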
Starting August 2, 2025, the AI Act’s transparency obligations for general-purpose AI become legally binding. That’s why July felt like a deadline, and the pressure’s showing.
—
How are companies reacting?
Let’s just say: mixed feelings. Some companies signed on or signaled they would, including OpenAI, Google, and Microsoft. They’re agreeing to follow parts of the Code and seem ready to work with the EU.
Others pulled back—like Meta, which declined to join, saying the rules weren’t clear enough and might hold them back. Startups, especially, are struggling. A few have openly said they won’t launch in Europe at all. For them, complying with these rules might mean hiring more legal staff or rebuilding parts of their system — and they just can’t afford that.
This has sparked some backlash. People are asking: Is Europe making it too hard to innovate? Or is it just protecting users before things spiral?
—
What does this mean for students?
Even if you’re not running a company, this shift can affect you — especially if you use AI for school, research, or daily productivity.
Here’s how:
Some tools may get limited or blocked in Europe, either by choice or to avoid legal trouble.
International students in the EU might start noticing differences in how tools work — some features could be unavailable compared to other countries.
Schools and universities might also change how they handle AI in the classroom, focusing more on safe use or updating policies to stay compliant.
If you’re studying AI or planning to work in tech, this is also a sign that knowing the legal side of AI is becoming part of the field itself.
—
Anything else worth noting?
A few things, actually:
Some companies might start creating EU-only versions of their products (there’s a rough sketch of how that works after this list).
You might see more AI tools include details like “this model was trained using publicly available data” or “this tool follows the EU Code of Practice.”
Other countries, including Indonesia, Canada, and even the U.S., are watching to see how this goes. Europe might be the first, but it likely won’t be the last.
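And if you’re wondering how an “EU-only version” actually happens under the hood, it’s often as mundane as feature flags keyed by region. The sketch below is hypothetical: the country list, feature names, and flags are all made up for illustration, not taken from any real product.

```python
# Hypothetical region-gating sketch; countries, features, and flags are
# invented for illustration, not from any real product.
EU_COUNTRIES = {"DE", "FR", "IT", "ES", "NL"}  # abbreviated list for brevity

# Features a provider might disable (or swap for a compliant variant) in the EU
FEATURE_FLAGS = {
    "advanced_image_generation": {"disabled_in_eu": True},
    "chat_assistant": {"disabled_in_eu": False},
}

def feature_enabled(feature: str, country_code: str) -> bool:
    """Return True if a feature is available to a user in the given country."""
    flags = FEATURE_FLAGS.get(feature)
    if flags is None:
        return False  # unknown features stay off by default
    if country_code in EU_COUNTRIES and flags["disabled_in_eu"]:
        return False
    return True

print(feature_enabled("advanced_image_generation", "DE"))  # False: gated in the EU
print(feature_enabled("advanced_image_generation", "US"))  # True elsewhere
```

Which is why the same tool can feel different depending on where you log in from.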
—
What’s next?
As of now, the Code of Practice is voluntary, but enforcement is coming. Companies that don’t follow the transparency rules once they become binding in August could eventually face fines (under the Act, up to 3% of global annual turnover or €15 million, whichever is higher) or restrictions on operating in the EU. More guidance and monitoring will be handled by the EU AI Office, and whether or not more companies sign on to the Code might shape how the Act is enforced in the long run.
For the rest of us, this is a moment to watch. Whether you’re a student using AI to get work done, a future developer, or someone thinking of studying abroad—this shift is one of the biggest global moves in AI regulation so far. It’s not the end of AI in Europe—but it may be the beginning of a new, more cautious chapter.