Chair, John Carr OBE, asked: “From the outside, people might assume that it should be easy for platforms to verify age. Why is age assurance so difficult?”
Verifying someone’s age is not as simple as it might sound, as I think we have seen in the contributions of all the previous speakers. It is generally acknowledged at a global level by policymakers, regulators and, as we have seen, researchers that age verification is a multifaceted issue for which there is, at present, no agreed solution; that in itself speaks to the inherent challenges involved. These difficulties have been acknowledged by certain regulators in the European Union and the UK, who recognise that there is unlikely to be a one-size-fits-all solution for all the issues that age verification presents in different contexts. If I may borrow an expression you just used, John, ‘context is everything’, and in this area that expression fits perfectly. I would like to briefly recap some of the challenges that Facebook, and also some previous speakers, have identified in this regard, which might be grouped into three buckets.
- The first is that there are different equities at play that need to be balanced. We need to establish how to achieve sufficient accuracy in verifying age while neither excluding the millions of individuals who may be unable to prove their age nor inappropriately limiting children’s digital rights. This balance has not yet been achieved across industries or in regulators’ proposals.
- The second bucket is the opportunity to adopt a risk-based approach in order to strike the balance mentioned above. When assessing the various equities at play, including the impact on fundamental human rights and freedoms, we need to take into account the principle of proportionality (which the GDPR refers to by a term that can be misleading, namely ‘data minimisation’). Applying the proportionality principle requires identifying the equities at play and assessing them in the relevant context using the risk-based approach that many of the speakers have been referring to. For instance, the strictest level of age verification should be reserved for the areas posing the greatest risk. In the two extreme examples that Professor Livingstone used (checking the weather and visiting pornographic sites), the difference in risk is obvious, and that difference should therefore also inform which kind of age verification mechanism, or mechanisms, we apply in the context at hand.
- The third bucket is human behaviour. As Revealing Reality pointed out, some young people and their parents bypass the age assurance measures that organisations have put in place, using “workarounds”, to borrow Revealing Reality’s term. This resonates with my personal experience: I have friends whose children are below the minimum age for social media, which is clearly stated in the terms of service, and it is the parents themselves who open their children’s accounts. So we need to ask all stakeholders why this behaviour happens. Finally, another issue I found interesting in Revealing Reality’s presentation, which may be linked to the former one, is that some parents want the discretion to decide for themselves when their children are mature enough to access content or services; this adds another layer of complexity for industry when assessing what the best or most appropriate measure, or measures, might be.
This is just a recap of the challenges that we and others have identified, and there are probably more. They reflect why this conference is so aptly named: it seems this is all about finding the balance and, I will add, in every context.
Chair, John Carr OBE, asked: “I wonder what you think about establishing industry-wide standards, so that we can make it as easy as possible for parents and children to navigate these tricky questions? Otherwise the risk is that different companies adopt different solutions that might work for individual companies but will not help the space as a whole.”
John, you have pointed out something that is extremely important. I believe that an organisation, and Facebook is certainly no exception, quite the opposite, should look at what it can do with what it has, including the data it holds. And, of course, the data held by each organisation may be very different. One resource that could be particularly useful at Facebook is artificial intelligence: this is one of the technologies in which Facebook has invested and continues to invest, including in how certain age estimations can be improved with the kinds of signals we may have. As Jennifer was saying before, some of these technologies are still in their infancy in the age assurance area, but they are already promising resources for estimating the age of under-18s. Another issue I would like to highlight is that we put a lot of emphasis on age assurance (for good reason) but, as Professors Livingstone and Van der Hof also pointed out, age assurance is just one tool among others. We need, in each context, to identify the relevant risks at hand, e.g. the kinds of content and behaviour that children could be exposed to, and the kinds of responses that may be provided. The response cannot, and should not, rely on age assurance exclusively. Age assurance can be part of the response, but not in isolation. In the case of Facebook, there are multiple measures and safeguards in place in addition to age assurance systems. For instance, we have Community Standards, which establish rules on unacceptable behaviour for all users and which are enforced through artificial intelligence combined with human review as well as user reports. Therefore, even if age assurance proves insufficient, content prohibited by the Community Standards, such as nudity or pornography, would not be displayed to users.
In addition, the establishment and enforcement of the Community Standards are coupled with advertising restrictions or prohibitions that advertisers must respect (for example, regarding alcohol). Furthermore, Facebook also implements additional safeguards related to privacy and safety settings, for example safeguards aimed at limiting the ability of adult strangers to connect with young people, an issue that other speakers raised. Therefore, when we address age assurance, we should see it as one element among a myriad of measures and safeguards that might be considered in order to limit, as much as possible, the risks that have been identified. At Facebook, there is no business interest or incentive to create or perpetuate an unsafe environment for any user, and this holds for adults and, even more so, for children. Now, coming back to another aspect of your question, I believe the industry is also considering how to avoid placing too much friction in front of the user, because to ensure success, user frustration should be avoided. If individuals need to go through very different age assurance processes on each and every company’s or government’s site, that would probably not be the best experience for individuals, including our children. And this is one of the key values of this project (euCONSENT): it is aligned with our philosophy of having an open dialogue with other members of the industry, regulators and families. As others have mentioned, parents and young people should also play an active part in this conversation: it is very important that we also factor in the behaviour that we, as families, have, or want to have, regarding how our children learn to behave in this digital environment. Therefore, we welcome a multi-stakeholder approach, also because each of us has limitations.
For example, an organisation might be more effective using certain kinds of information for certain age thresholds and not for others (for instance, our platform is not designed for people under 13 years old). Other providers might be much better positioned than we are in this respect. I believe we have an opportunity to explore how we could have these conversations, perhaps by combining the “super powers” that each organisation can bring.