Why EU AI Regulations Are Acting Too Slow
- zakchester
- Mar 26
- 4 min read
If you read my last blog post (which I greatly appreciate), you may have noticed a disclaimer underneath it stating that AI was used in the creation of that work. Whilst the written ideation was entirely my own, there were points at which I optimised, tonally and grammatically, how I put forward ideas with the help of ChatGPT. This post, however, is fully written by my own flawed, human hands. (As a thought experiment, one could compare the two posts.)

The reason I did this was not just for efficiency of the writing process, although I admit that was part of it. It was in light of the EU AI Act, specifically its stance on transparency. The act, which I will link at the end of this post, will require the disclosure of AI-generated content by August 2026. So for my previous blog post, rather than me disclosing the use of AI out of my own sense of ethics, it would be a legal requirement. I believe this to be a fantastic step forward; since the advent of AI-generated content, I have considered this type of legislation a necessity.

What bothers me, however, is that it may be too little, too late. By the time this legislation takes effect, unless significant investment is put into AI detection tools, the exponential rate at which AI technology is developing may leave legislators in the dust. In this post I will discuss why regulators are acting too slowly (in my opinion, this legislation should have started when the first iteration of ChatGPT mass-proliferated) and what can be done to improve the detection of AI-generated content. I would also like to preface this article by stating that these are all my own opinions and conjectures: I have projections for a potential future, not certainty that these things will take place.

Why the EU AI Regulations Are Acting Too Slow (In Relation to Transparency)
Firstly, the elephant in the room: what events of global significance are happening between now and the time the legislation is enacted that could be influenced by AI? The first thing that comes to mind is election tampering. Below I will link a list of every significant election between now and August 2026; the one that really grabs my attention is the Canadian election in October. Given the currently tenuous relationship between America and Canada during Trump's presidency, and the US's history of overturning foreign governments, it could be a tumultuous election if AI, with its capacity for deepfakes, is added into the mix. This is just food for thought; I don't claim to be an expert in geopolitics, but I wanted to give a tangible example of how AI could be used in the near future to alter world events.
What I believe is far less speculative, however, is that opportunists will use the small window before the legislation solidifies to profiteer as much as possible and stay ahead of the game. AI detection programs are not always reliable, and it wouldn't be a stretch to say that profiteers of AI will attempt to develop systems that run circles around the detection tools. The window also lets them capitalise whilst the going is good, pumping out as many products as possible to people who may not be aware of the coming legislation: anything from deepfake software to entire works created by AI, thereby muddying the waters for when the legislation takes effect. If a critical mass of AI content has already been produced, attempting to force creators to classify when they've used AI becomes somewhat redundant.
How to Improve Detection of AI Generated Content
Given the nature of the legislation, it is of the utmost importance that AI detection meets a gold standard. This section is dedicated to what I believe could be implemented to ensure the enforcement of this legislation is sound.
A standardised method of building AI that allows for easy detection could be one way of ensuring robust enforcement. Right now, chatbots are designed to read as human as possible. If, rather than humanising AI, developers were required to give it a tone of voice that is specifically identifiable as AI, the detection process would be far simpler. Whilst this narrows the scope of what AI is capable of, thereby reducing profits, I believe narrowing the power of this technology is a positive rather than a negative: it reinforces the idea that AI is a tool rather than a replacement, or a pseudo-human intelligence. It would also make it far simpler to audit technology companies and how they build AI. Just because we can make AI almost indistinguishable from human intelligence does not mean we should. This does not mean reducing its intelligence, but making its output easily distinguishable from human output.
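To make the contrast with today's statistical detectors concrete, here is a minimal sketch of the idea, assuming a hypothetical mandatory disclosure marker. The marker format, `AI_MARKER`, and both function names are my own invention for illustration; nothing like this is specified in the EU AI Act or any real tool.

```python
# Hypothetical illustration: if AI-generated text carried a mandatory,
# machine-readable provenance marker, detection would become a simple
# lookup rather than unreliable statistical guesswork.

AI_MARKER = "[AI-GENERATED:"  # invented disclosure token for this sketch


def tag_ai_output(text: str, model_name: str) -> str:
    """Prepend a standardised disclosure marker to AI-generated text."""
    return f"{AI_MARKER}{model_name}] {text}"


def is_ai_generated(text: str) -> bool:
    """Check for the standardised marker instead of guessing from style."""
    return text.startswith(AI_MARKER)


tagged = tag_ai_output("Here is a summary of the article.", "example-model")
print(is_ai_generated(tagged))                  # True
print(is_ai_generated("I wrote this myself."))  # False
```

The point of the sketch is that enforcement shifts from probabilistic classification of style to verifying the presence of a required, auditable signal, which is what a standardised "identifiable AI" mandate would buy regulators.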
I understand the above proposition isn't popular in a capitalist society bent on maximising profits and efficiency, taking every material concept to its logical conclusion. So my alternative solution would be to invest heavily in AI detection tools. This is an obvious suggestion; however, AI companies seem more interested in reaching general intelligence than in making sure the path toward it, and its aftermath, is safe. For this legislation to be iron-clad rather than a flowery commitment, as it should be, there needs to be further investment in detection tools.
Supporting Literature and Media: