Matt Hewett

Our view: digital regulation from Europe and the UK


European Parliament
Photo by Guillaume Périgois on Unsplash

On 15th December the latest versions of the UK's Online Harms regulations and the EU's Digital Services Act were released to the public. Both documents focus on the regulation of large online platforms and the safety of their users.


Regulation of technology and technology platforms is a good thing, and we welcome efforts in this area. However, each of these documents suffers from being a bit too vague. The framing of the Online Harms document in particular is problematic, which we discuss in the following section.



The UK: online harms regulation


The government put forward its final response to the consultation it kicked off back in April 2019. The current plan is to introduce the legislation next year.



As Rowland Manthorpe put it: “I just wish I knew what an online harm actually was”

What do you think is harmful? And do you think everyone else shares your view of what counts as harmful?


Quite a lot of the confusion in the online harms framing stems from the fact that ‘harm’ is generally part of the language of consequentialism. To better understand this, we need to take a quick look at the difference between deontological and consequentialist statements (stay with me). In deontology, the morality of an action is based on whether the action itself is right or wrong under a series of rules. For example, you might say “it is always wrong to lie.” This is a deontological statement because, regardless of the consequences, you must never lie.


In consequentialism, the morality of an action is based solely on its consequences. An example of a consequentialist statement is: “it is wrong to make a person unhappy”. However, every person is different. It might make Jake happy to receive an ice cream, but Sabrina may be unhappy because she is lactose intolerant. If you only have a consequentialist framework without any deontological basis, it becomes paralysing to act at all. The online harms framing does not escape this: there are many areas where what counts as ‘harm’ will differ from person to person.


We believe the document feels a little arbitrary in places. It is not obvious why some things have been included and others have not, and this has been true throughout the consultation process.



European Parliament: Digital Services Act (DSA)


We prefer the framing of the DSA because the main aim is the “protection of users’ fundamental rights”. At Hattusia we believe that tech regulation should have human rights at its core and so the DSA initially feels like a very positive step.


The document also strengthens the commitment to transparency: transparency reporting moves from a voluntary action to a binding obligation. To paraphrase Access Now, it creates a specific set of responsibilities for “very large online platforms” to recognise the systemic human rights violations that arise from their data-harvesting business models and huge power asymmetries.

The DSA is not without its flaws. It establishes that a systemic risk assessment is necessary; however, similarly to the Online Harms document, it relies largely on self-assessment by the platforms, with limited opportunity for independent public oversight.





We would like to reiterate that we need more regulation of internet platforms. However, the framing of that regulation matters a great deal: different concepts need to be treated very differently in order to have an impact that works for society and for our societies’ values. There is still a lot of work to do in this field.


hello@hattusia.com




Stay up to date with the world of tech ethics by signing up to our newsletter here
