TIGA’s Summary of the Government’s full response to the Online Harms White Paper

15 December 2020 | Press Releases

On 15 December 2020, the Government published their full response to the Online Harms White Paper. TIGA have produced a summary of the key aspects of the response relevant to the video games industry.

Part 1: Who will the new regulatory framework apply to?

  • Companies will fall into scope if their services:
    • (a) host user-generated content which can be accessed by users in the UK; and/or
    • (b) facilitate public or private online interaction between service users, one or more of whom is in the UK.
    • This covers a broad range of services, including (among others) social media services, consumer cloud storage sites, video sharing platforms, online forums, dating services, online instant messaging services, peer-to-peer services, video games which enable interaction with other users online, and online marketplaces.
  • The government recognises that some businesses and services present a lower risk than others and that any approach must be proportionate to the level of risk and companies’ capacity to address harm. Specific exemptions have been introduced for low-risk services. For example, reviews and comments by users on a company’s website which relate directly to the company, its products and services, or any of the content it publishes, will be out of scope.

Part 2: What harmful content or activity will the new regulatory framework apply to, and what action will companies need to take?

  • Definition of harm: The legislation will set out that online content and activity should be considered harmful, and therefore in scope of the regime, where it gives rise to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals. Companies will not have to address content or activity which does not pose a reasonably foreseeable risk of harm, or which has a minor impact on users or others. Harms to organisations will not be in scope of the regime.
  • A limited number of priority categories of harmful content will be set out in secondary legislation. Some categories of harmful content will be explicitly excluded, to avoid regulatory duplication. This will provide legal certainty for companies and users and prioritise action on the biggest threats of harm.
  • A number of harms will be excluded from scope where existing legislative, regulatory or other governmental initiatives are already in place. The following will be excluded from scope:
    • Harms resulting from breaches of intellectual property rights
    • Harms resulting from breaches of data protection legislation
    • Harms resulting from fraud
    • Harms resulting from breaches of consumer protection law
    • Harms resulting from cyber security breaches or hacking.
  • The duty of care: The duty of care referred to throughout the Government’s response consists of two parts. The first part relates to the duties on companies and the second part relates to the regulator’s duties and functions. The appointed regulator has been confirmed as Ofcom.
    • The primary responsibility for each company will be to take action to prevent user-generated content or activity on their services causing significant physical or psychological harm to individuals. To do this they will complete an assessment of the risks associated with their services and take reasonable steps to reduce the risk of the harms they have identified from occurring.
    • The steps a company needs to take will depend, for example, on the risk and severity of harm occurring, the number, age and profile of their users and the company’s size.
    • Companies will fulfil their duty of care by putting in place systems and processes that improve user safety on their services. These systems and processes will include, for example, user tools, content moderation and recommendation procedures. The proposed safety by design framework will support companies to understand how they can improve user safety through safer service and product design choices.

 

  • Safety by design framework: The Government’s forthcoming safety by design framework will set out what ‘good’ looks like for safe product design, and will be developed with industry experts.
    • The safety by design approach can apply from the conception stage of a new business onwards. User safety must be considered when designing the functionality of an online product or service, and the approach also extends to setting an organisation’s objectives and culture so that they fully support safety by design.
      • Examples of a safety by design approach include: default safety settings; clearly presented information; positive behavioural nudges and user reporting tools that are simple to use.
      • The framework will contain high level design principles to guide product design and development work, practical guidance for implementing safer design choices and effective safety features and examples of best practice and case studies on service design.
      • In January 2020, the Minister for Digital Infrastructure announced that the government would be developing legislation to protect citizens and the wider economy from the harms that can arise from ‘smart’, Internet of Things (IoT) or ‘internet-connected’ devices that lack important cyber-security measures. This work is underway with a view to introducing legislation as soon as parliamentary time becomes available.

 

  • Codes of practice: There will not be a code of practice for each category of harmful content. The codes of practice will focus on the systems, processes and governance that in-scope companies need to put in place to uphold their regulatory responsibilities.
    • The regulator will decide which codes to produce, with the exception of the codes on child sexual exploitation and abuse and preventing terrorist use of the internet.
      • Due to the seriousness of the harms, and to bridge the gap until the regulator is operational, the government has published interim codes of practice on how to tackle online terrorist and child sexual exploitation and abuse content and activity.
    • The government will set out high-level objectives for the codes of practice, and the regulator will ensure that its codes of practice meet these objectives during drafting.
    • Ofcom will consult with relevant parties during the drafting of the codes before sending the final draft to the Secretary of State for Digital, Culture, Media and Sport and the Home Secretary.
    • Ministers will have the power to reject a draft code and require the regulator to make modifications for reasons relating to government policy. Parliament will also have the opportunity to debate and vote on the objectives, and the completed codes will be laid in Parliament.
    • The interim codes are voluntary and are intended to bridge the gap until the regulator is operational and ready to produce its own statutory codes on terrorism and child sexual exploitation and abuse, building on the work of the interim codes.
    • The government will work with industry stakeholders to review the implementation of the interim codes so that lessons can be learned and shared with Ofcom, to inform the development of their substantive codes.

 

  • Anonymous abuse: The legislation will not put any new limits on online anonymity. Under the duty of care, all companies in scope will be expected to address anonymous online abuse that is illegal through effective systems and processes.

 

  • Misinformation and disinformation: Companies will need to address disinformation and misinformation that poses a reasonably foreseeable risk of significant harm to individuals (e.g. relating to public health). The legislation will also introduce additional provisions targeted at driving action to tackle disinformation and misinformation.

 

  • Seeking redress: Users must be able to report harm when it does occur and seek redress. They must also be able to challenge wrongful takedown and raise concerns about companies’ compliance with their duties.
    • All companies in scope will have a specific legal duty to have effective and accessible reporting and redress mechanisms. This will cover harmful content and activity, infringement of rights (such as over-takedown), or broader concerns about a company’s compliance with its regulatory duties. Ofcom’s codes of practice will set out expectations for these mechanisms. The government expects the codes to cover areas such as accessibility (including to children), transparency, communication with users, signposting and appeals. Expectations on companies will be risk-based and proportionate and will correspond to the types of content and activity which different services are required to address.
    • For example, the smallest and lowest risk companies might need to give only a contact email address, while larger companies offering higher-risk functionalities will be expected to provide a fuller suite of measures.
    • The government will not mandate specific forms of redress, and companies will not be required to provide financial compensation to users (other than in accordance with any existing legal liability). Forms of redress offered by companies could include: content removal; sanctions against offending users; reversal of wrongful content removal or sanctions; mediation; or changes to company processes and policies.
    • The regulatory framework will not establish new avenues for individuals to sue companies. However, the existing legal rights individuals have to bring actions against companies will not be affected.
  • Age Assurance and Age Verification: The regulator will focus on ensuring that companies whose services are likely to be accessed by children have good systems and processes in place to protect children. This includes providing terms and conditions and user redress mechanisms that are suitable for children, as well as more transparency about how services are providing greater protection.
    • Under the Government’s proposals companies will be expected to use a range of tools proportionately, to take reasonable steps to prevent children from accessing age-inappropriate content and to protect them from other harms. This includes, for example, the use of age assurance and age verification technologies, which are expected to play a key role for companies in order to fulfil their duty of care.
    • The government would not expect age assurance technologies to be used in every case to block children from content or services; where appropriate, they should instead be used to protect children within a service and to enhance a child user’s experience by tailoring safety features to the age of the user.
    • The government will not be mandating the use of specific technological approaches through the legislation to prevent children from accessing age-inappropriate content and to protect them from other harms. The government expects that the regulatory framework will drive the take-up of age assurance and, where appropriate, age verification technologies.

 

  • Differentiated expectations on companies: The differentiated approach can be summarised as follows:
    • All companies will be required to take action with regard to relevant illegal content and activity.
    • All companies will be required to assess the likelihood of children accessing their services. If they assess that children are likely to access their services, they will be required to provide additional protections for children using them.
    • Only companies with Category 1 services will be required to take action with regard to legal but harmful content and activity accessed by adults. The approach is designed to protect freedom of expression and mitigate the risk of disproportionate burdens on small businesses.
    • Definition of Category 1 services: Category 1 services will be determined through a three-step process. First, the primary legislation will set out high level factors which lead to significant risk of harm occurring to adults through legal but harmful content.
      • These factors will be: the size of a service’s audience (because harm is more likely to occur on services with larger user bases, for example due to rapid spread of content and ‘pile-on’ abuse); and the functionalities it offers (because certain functionalities, such as the ability to share content widely or contact users anonymously, are more likely to give rise to harm). Second, the government will determine and publish thresholds for each of the factors. Ofcom will be required to provide non-binding advice to the government on where these thresholds should be set. The final decision on thresholds will lie with the government, to ensure democratic oversight of the scope of the regulatory framework.
      • Ofcom will then be required to assess services against these thresholds and publish a register of all those which meet both thresholds. These services will be designated as Category 1 services and be required to take action against legal but harmful content accessed by adults. Ofcom will be able to add services to the list of Category 1 services if they reach the thresholds, and to remove services if they no longer meet the thresholds. If a company believes its service has wrongly been designated as Category 1, then it will be able to appeal to an appropriate tribunal. Ofcom will also be able to provide advice to the government if it considers a change to the thresholds to be necessary.

 

  • Risk assessments: Companies providing Category 1 Services will be required to undertake regular risk assessments to identify legal but harmful material. As part of their duty of care, companies in scope will be expected to consider, as part of their regular risk assessments, the risk of online harms posed by their service, including the risk presented by the design of their service and its features. Companies will be expected to reassess the risk of online harms if they are planning significant changes to their services.
    • Companies will be required to undertake regular child safety risk assessments to identify legal but harmful material on their services impacting children, covering the priority categories set out in secondary legislation.
    • They will need to make clear to users what is acceptable for such content on their services, and how it will be treated. Companies will be expected to consult with civil society and expert groups when developing their terms and conditions. This will encourage the adoption of terms and conditions that meet user needs and build on existing best practice on how to effectively tackle different types of harmful content and activity.
    • Following the risk assessment, companies will be required to take steps to address the risks they have identified. This will be key to them fulfilling their duty of care to their users and delivering a higher level of protection for children.
      • The regulator will set out the steps that companies should take to address the risk posed by their services, and ultimately will have the power to assess whether the steps taken are sufficient to fulfil the company’s regulatory requirements. Failure to fulfil the duty of care may result in the regulator taking robust enforcement action.
      • The decisions taken by a company on the design or functionality of their service will not exempt them from needing to comply with other regulatory requirements. For example, all companies in scope must comply with information requests from the regulator.
      • To ensure the future regulatory framework is well equipped to deal with the longer-term challenges presented by disinformation and misinformation, the regulator will be required to establish an expert working group on disinformation and misinformation.
      • The government has also committed to publishing a safety by design framework, as outlined previously.

 

  • Illegal content and activity: Companies will need to ensure that illegal content is removed expeditiously and that the risk of it appearing and spreading across their services is minimised by effective systems.
    • The government will set priority categories of offences in secondary legislation, against which companies will be required to take particularly robust action. These will be offences posing the greatest risk of harm, taking account of the number of people likely to be affected and how severely they might be harmed. Examples of priority categories of offences include child sexual exploitation and abuse and terrorism.
    • For priority categories of offences, companies will need to consider, based on a risk assessment, what systems and processes are necessary to identify, assess and address such offences. The regulatory framework will require companies to address illegal content and activity which could constitute a UK criminal offence or an element of a UK criminal offence and which meets the definition of harm, as set out above.

 

  • Freedom of expression: Companies will be required to consider the impact on and safeguards for users’ rights when designing and deploying content moderation systems and processes. This might involve engaging with stakeholders in the development of their content moderation policies, considering the use of appropriate automated tools, and ensuring appropriate training for human moderators. Companies should also take reasonable steps to monitor and evaluate the effectiveness of their systems, including considering the amount of legitimate content that was incorrectly removed.
    • The regulatory framework will also require companies to give users a right to challenge content removal, as an important protection for freedom of expression.
    • Certain companies will also need to produce transparency reports, which are likely to include information about their measures to uphold freedom of expression and privacy.

 

  • One Stop Shop: The government will publish a ‘One Stop Shop’ with practical guidance for companies on how to protect children online. It will be designed as an interim tool to support businesses ahead of the regulatory framework coming into force. The One Stop Shop will support smaller companies in particular, providing practical advice to help them better understand child online harms and their existing legal requirements.

 

Part 3: The regulator

  • The Government have confirmed that Ofcom will be the online harms regulator.
  • The government will introduce a power to allow the Secretary of State for Digital, Culture, Media and Sport to issue guidance to the regulator, with clearly defined scope and use. This will enable the government to set out further detail on regulatory processes but will not stray into operational matters or seek to fetter Ofcom’s independence in how it operates the regime. The final version of this guidance will be subject to parliamentary approval.
  • The Secretary of State for Digital, Culture, Media and Sport appoints the non-executive members of the Ofcom Board, including the chair, and will work with Ofcom to ensure that the Board has the necessary skills and expertise as it takes on these new responsibilities.
  • The regulator will be accountable to Parliament for its regulatory activities, including specific aspects of the regime beyond primary legislation.
  • Ofcom will also be required to report on the impact assessments it has undertaken in annual reports to Parliament.
  • Ofcom will be given powers to raise the required income to cover the costs of running the online harms regime from industry.
    • Companies above a threshold based on global annual revenue will be required to notify the regulator and pay an annual fee. Companies below the threshold will not be required to notify the regulator or pay a fee. The threshold will be set by Ofcom, based on consultation with industry, and will be signed off by Ministers. Companies in scope which fall below the threshold will still have to comply with all their other regulatory responsibilities. The regulator will, in consultation with industry, prescribe the details of the notification process, including the information required from the company at the point of notification.
    • The total amount of fees to be charged to industry will be in proportion to the costs incurred by the regulator in operating the online harms regime. The fees to be paid by individual companies will be tiered. The intention is that the regulator will calculate the fees based on two metrics: a primary metric of global annual revenue; and a secondary optional metric based on company activity. The details of the second metric will be determined by the regulator and could be calculated using criteria such as the presence of specific functions on a service. The metrics used to calculate the fees will meet the strict criteria of proportionality, affordability and objectivity.
  • The government will work with Ofcom to ensure that the regulator is able to work effectively with a range of organisations. This will be delivered through a range of means including co-designation powers, memorandums of understanding, forums and networks.

 

Part 4: Functions of the Regulator

  • The regulator will set out what companies need to do to fulfil the duty of care, including through codes of practice.
    • The regulator will have the power to require certain companies in scope to publish annual transparency reports, which will empower users to make informed decisions about which services they use.
    • The regulator will be able to access information about companies’ redress mechanisms in the exercise of its statutory functions and will accept complaints from users as part of its horizon-scanning and supervision activity.
    • The regulator will have a super-complaint function, and will accept super-complaints when there is substantial evidence of a systemic issue affecting large numbers of people, or specific groups of people. The regulator will also establish appropriate mechanisms for user advocacy, to ensure users’ experiences and concerns are being heard and acted upon.
    • The regulator will have a range of robust enforcement powers to tackle noncompliance by in-scope companies providing services to UK users, to ensure the effectiveness of the regime.
  • The regulator will have powers to require additional information from companies to inform its regulatory activity, including additional powers to support investigations.
  • Regulatory action should be undertaken in line with the principles of the regulatory framework, which means it should be delivered in a way that:
    • Is based on the risk of content or activity online harming individuals, where it gives rise to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals
    • Is reasonable and proportionate to the severity of the potential harm and resources available to companies
    • Provides a higher level of protection for children than for adults
    • Protects users’ rights, including to freedom of expression and privacy online; and safeguards media freedom
    • Promotes transparency about and accountability for the incidence of and response to harm
    • Supports innovation and reduces the burden on business
    • Is delivered by putting in place appropriate systems and processes
  • Ofcom will have a duty to consider the vulnerability of children and of others whose circumstances appear to Ofcom to put them in need of special protection when performing its duties.
  • The regulator’s role and functions will include:
    • Setting out what companies need to do to fulfil the duty of care, including through codes of practice
    • Establishing a transparency, trust and accountability framework.
    • Requiring all in-scope companies to have effective and accessible mechanisms for users to report concerns and seek redress for alleged harmful content or activity online, infringement of rights, or a company’s failure to fulfil its duty of care.
    • Assessing and responding to super-complaints.
    • Establishing user advocacy mechanisms to understand users’ concerns and experiences.
    • Taking prompt and effective enforcement action in the event of non-compliance, when it is appropriate and proportionate.
    • Providing support to start-ups and small and medium-sized enterprises to help them fulfil their legal obligations in a proportionate and effective manner.
    • Promoting education and awareness-raising about online safety to empower users to stay safe online.
    • Undertaking and commissioning research to improve our understanding of online harms, their impacts on individuals and society and how they can be tackled.

 

  • Ofcom, as the independent online harms regulator, must already pay due regard to encouraging innovation and promoting competition in relevant markets when performing its duties, as set out in section 3(4)(d) of the Communications Act 2003. A comparable duty to pay due regard to promoting innovation in relation to online harms will be put in place by the new legislation. This will be underpinned by a new statutory duty requiring Ofcom to publish information setting out how it will encourage innovation with regard to online harms.

 

  • Transparency reports: Companies providing Category 1 services will be required to publish reports containing information about the steps they are taking to tackle online harms on these services. The Secretary of State for Digital, Culture, Media and Sport will also have the power to extend the scope of companies who will be required to publish transparency reports, beyond Category 1 companies, by setting additional thresholds based on factors such as the functionalities and the audience of the service.
    • To ensure that the transparency framework is proportionate and reflects the diversity of services in scope, the transparency reporting requirements will differ between different types of companies. Ofcom will consider companies’ resources and capacity, service type and audience in determining what information they will need to include in their reports.
    • What types of information will transparency reports cover?
      • Information about the enforcement of the company’s own relevant terms and conditions, which should reflect the regulator’s codes of practice.
      • Information about the processes that the company has in place for reporting harmful content and activity (including in relation to illegal harms), the number of reports received and the action taken as a result.
      • Information about the processes and tools in place to address illegal and harmful content and activity, including, where appropriate, tools to identify, flag, block or remove illegal and harmful content and the processes that companies have in place for directing users to support and information.
      • Information about the measures and safeguards in place to uphold and protect fundamental rights, ensuring decisions to remove content, block and/or delete accounts are well founded, especially when automated tools are used, and that users have an effective route of appeal.
      • Where relevant, information about evidence of cooperation with UK law enforcement and other relevant government agencies, regulatory bodies and public agencies.
      • Information about measures to support user education and awareness of online harms and strengthen users’ media literacy, including through collaboration with civil society, small and medium-sized enterprises and other companies.
      • Information about tools for users to help them manage harmful content and activity.
      • Information about the process and steps an organisation has in place to assess risk of harm at the design, development and update stage of the online service.
      • Information about other steps that companies are taking to tackle online harms and fulfil their obligations under the online harms framework, including to deliver a higher level of protection to children where a platform is likely to be accessed by children.
      • The government’s report on transparency reporting in relation to online harms can be read in full.

 

  • The regulator’s enforcement powers will be:
    • The power to issue directions and notices of non-compliance.
    • Fines of up to £18m or 10% of annual global turnover, whichever is higher.
      • The regulator will produce guidance on how penalties will be decided. The guidance will be based on the regulator’s operating principles, including proportionality and the extent to which harm was caused to children.
    • Business Disruption Measures, Level One: The regulator will have the power to take measures that make it less commercially viable for a non-compliant company to provide services to UK users. It will have the power to require providers to withdraw access to key services. If providers do not comply, the regulator will be able to enforce through a court order.
    • Business Disruption Measures, Level Two (serious failures of the duty of care): The regulator will have the power to take measures that block a non-compliant company’s services from being accessible in the UK, by requiring the withdrawal of services by key internet infrastructure providers (e.g. browsers, web-hosting companies, app stores, online security providers or Internet Service Providers).

 

  • Criminal sanctions for senior managers: The government will reserve the right to introduce criminal sanctions for senior managers who fail to respond fully, accurately, and in a timely manner, to information requests from the online harms regulator. This power would not be introduced until at least two years after the regulatory framework comes into effect, based on a review of the impact of the framework. The sanction would be a last resort, only to be used if industry failed to meet their information sharing responsibilities. This approach balances industry concerns with many stakeholders’ support for the proposal as a way to drive culture change.

 

Part 5: New measures

  • To support the further growth of the UK safety tech sector, the government will:
    • Deliver the Safety Tech Innovation Network, the world’s first forum for safety tech providers to collaborate and promote their work;
    • Deliver a new £2.6m project to prototype how better use of data around online harms can lead to improved Artificial Intelligence systems, and deliver better outcomes for citizens;
    • Organise a series of events, including a Safety Tech Unconference and Expo, to raise awareness and showcase the best of safety tech to potential buyers;
    • Help to organise trade missions to priority safety tech export markets;
    • Collaborate across sectors, including with the UK Online Safety Tech Industry Association (www.ostia.org.uk), to identify opportunities for innovation, adoption and promotion of safety tech;
    • Explore ways in which best practices in online safety can be included in standards and guidance for buying, building and reusing government technology, such as the Technology Code of Practice;
    • Develop a Safety Tech Sector Strategy, to guide future priorities for sector support.

 
