Social media: How the UK is putting safety first
- Nikkie Kitching
- Jun 17, 2019
- 6 min read

Whether you consider yourself an Instagrammer or Snapchatter, chances are you've stumbled across content that may not have taken your fancy. It might have been offensive, indecent or just downright disturbing.
Whilst most adults with access to social media have the good sense to scroll past inappropriate content, younger users may take it far more to heart. Gone are the days when children were gifted toys and books; these have largely been replaced by smartphones and tablets. As this trend continues, children as young as 10 have full access to the internet, and with it the possibility of encountering inappropriate material. A case that has dominated the news recently is that of Molly Russell, a 14-year-old who took her own life after viewing Instagram posts depicting self-harm and suicide.
Molly is just one of many youngsters harmed by the detrimental effects of social media. Faced with cases like hers, it is easy to point fingers at social media companies and demand they regain control over their platforms. Whilst regulating social media might seem a little contrived, it is something countries are looking into more closely, and one country in particular has taken a significant step.
A collaboration between the UK Home Office and the Department for Digital, Culture, Media and Sport has resulted in the publication of the Online Harms White Paper, which takes a stand on social media with the purpose of ensuring the internet is a safe place to visit. It is worth noting that the UK is the first country to propose a regulatory framework of this kind. The paper sets out the following main points:
- An independent regulator will oversee the activities of social media companies and ensure they comply with the rules on what content may be published
- Companies will owe a "duty of care" to ensure their users are protected
- Penalties will be imposed on those who fail to protect their users
The White Paper expands into various subcategories including terrorism, hate crime and cyberbullying, to name a few.

The UK's Health Secretary, Matt Hancock, recently met with representatives from social media giants Facebook, Twitter, Instagram and Pinterest in order to gauge what is needed to tackle the most harmful content on their respective platforms.
Hancock also went on to say that a number of charities have agreed to work with social media platforms to ensure that such content is handled in line with their values. Samaritans, a charity dedicated to the prevention of suicide, will play a part in this scheme. In terms of content, Hancock said that Samaritans is "committed to either remove it or prevent others from seeing it and help vulnerable people get the positive support they need."
What are other countries saying?
Germany introduced a similar measure a few years ago, paying close attention to hate speech. The country has gone so far as to implement fines of up to €50m for platforms that fail to remove manifestly illegal content within 24 hours, with more complex cases allowed up to a week. The law is known as NetzDG, or in English, the Network Enforcement Act.
Australia has also recognised the need to tighten its social media laws, in response to the Christchurch mosque attack, which was live-streamed on Facebook. In an effort to improve matters, the government aims to penalise platforms that fail to remove "abhorrent violent material", defined as content showing people engaging in terrorist attacks, rape, kidnapping and murder. An individual uploading this content can face up to three years' imprisonment or a fine, while a corporate body can face fines of AU$10.5m or 10% of annual turnover, whichever is higher.
What about my right to freedom of speech?
As with any proposal, there will be people who agree and people who disagree. Whilst the UK appears to be making gradual steps towards safety and security, companies and free-speech campaigners may think otherwise.
According to Article 10 of the European Convention on Human Rights, incorporated into UK law by the Human Rights Act 1998, "everyone has the right to freedom of expression." In social media terms, this means that wherever there is an opportunity to post or comment, you are free to say whatever comes to mind. However, the right is qualified: it "may be subject to such formalities, conditions, restrictions or penalties...in the interests of national security, territorial integrity or public safety..." Applying this rule to social media has proven difficult, with companies allowing their users to be, well, social.
Consider how we use social media today: not only to connect with others but, for some, as an online newspaper. News organisations know they can reach a larger audience online, so it makes sense for them to turn to sites like Facebook and Twitter to post breaking news and spread awareness. Naturally, this comes with comment sections where people can freely post their opinions, and since the news more often than not reports on negative issues, it tends to attract negative comments.
Even if we put news aside, social media allows us to share links to our own pages, which then appear on our friends' feeds as well. Some of us may post about sensitive and controversial topics with only the intent to educate. A post is, however, open to interpretation. If someone were to post about Black Lives Matter, it might educate one person and anger another. So again this begs the question: are we exposing users to a negative influence, or merely spreading awareness? It has been suggested that we need to analyse content carefully to ensure it does more good than harm. If a post about Black Lives Matter merely explains the movement and how it came to be, it is informative; if it is a video compilation of barbaric police behaviour towards black people, it may encourage further violence.
The aforementioned examples indicate the difficulty in balancing the regulation of social media against free speech. With millions of posts and comments, it would be almost impossible to delete every post or comment with a negative connotation. That said, refusing to take people's views into consideration limits our progress as a society; governments and organisations rely on the response of their people to achieve democracy and order. The trick will be finding the fine line between what needs to be said and what can be misconstrued.
Final thoughts
With a mass of information readily available online, it can sometimes be difficult to filter out the good from the bad. Facebook has responded by saying that millions of terrorism-related posts have been taken down since the Christchurch attack, and Instagram has recently introduced a pop-up offering support to users who search for certain harmful hashtags. Whilst social media companies are trying their best to remove negative content, there are still questions as to how far we can go in censoring information.
With respect to the White Paper, there are of course grey areas to consider. Whilst it outlines the areas of concern, it does little to address how enforcement will be funded or sustained in the long run. There is also vagueness around which parts of the internet the regulator will have access to and how it will define a "duty of care". The proposals have the potential to backfire, but this will come down to how far we are willing to go with restrictions and the overall response of the public.
As the UK looks to leave the EU, questions also arise as to how the law will change going forward. True, there will never be a universal law that can be applied to every form of social media, but when it comes to removing harmful content, the UK appears to have taken a first step in the right direction. The paper is currently in a 12-week consultation period and encourages responses from companies to help ensure the internet is a safe place for the youth of today.
For more information on the changing climate of social media, check out the links below: