Advisory Council to Google on the RTBF - London Meeting 16th October 2014

ERIC SCHMIDT: We didn't exactly welcome that decision. The tests set by the court for what should be removed are vague and subjective in our view. A web page has to be quote "inadequate, irrelevant, or no longer relevant, or excessive," unquote, to be eligible for removal. And we're required to balance an individual's right to privacy against the public's right to information. All of that feels counter to that card index idea that I just described.

But at the same time, we respect the court's authority. It was clear to us very quickly that we simply had to knuckle down and get to work complying with the ruling in a serious and conscientious way. So we quickly put in place a process to enable people to submit requests for removal of search results, and for our teams to review and take action on those requests in line with the court's guidance.

So to give you a sense of scale, we've had nearly 150,000-- that's 150,000 requests throughout Europe, involving more than 500,000 individual links or URLs, each of which must be assessed individually. Now in practice, quite a few of these decisions have been straightforward in our view.

Let me give you some examples-- a victim of physical assault asking for results describing the assault to be removed for queries against her name, or a request to remove results detailing a patient's medical history, or someone incidentally mentioned in a news report, but not the subject of the reporting.

Those are fairly clear cases, we thought, where we just remove the links from search results related to the person's name under the guidance of the court.

Now similarly, there are plenty of clear cases which are on the other side that we decided not to remove. A convicted pedophile requested removal of links to recent news articles about his conviction, or an elected politician requesting removal of links to news articles about a political scandal he was associated with. Those seem very clear on the other side to us.

But there are many cases which seem to be in the middle. Requests that involve convictions for past crimes-- when is a conviction spent-- how many years, ever, whatever? What about someone who's still in prison today? Requests that involve sensitive information that may, in the past, have been willingly offered in a public forum or in the media. Requests involving political speech, perhaps involving views that are considered to be illegal or in breach of the law-- these types of requests are very difficult indeed. And they raise very tricky legal and ethical questions.

And it is for these gray areas that we're seeking help from the advisory panel-- both from, obviously, the data protection regulators and anyone else, including people in this audience. And what we want to do is sketch out the principles and guidelines that will help us to take the right decisions in line with the letter and spirit of the ruling, especially since that ruling is likely to last for quite some time.

All of these meetings are being livecast. The full proceedings are being made available on the council's website, which is google.com/advisorycouncil, after the event. We invite anyone and everyone to submit their views and recommendations via this website. And everyone's input will be read and form part of the council's discussion.

At the end of this process, and after we visit all seven cities, there will be a final public report with recommendations based on these meetings and input via the website, which we intend to publish by early 2015. And obviously, if the council members dissent from some part of the majority's report, they're welcome to do so, and we'll publish that as well.

So what I think we should do is introduce the council members, whose expertise speaks, I think, for itself.

Jose-Luis Pinar-- did I get that right-- is a former Spanish DPA and a professor at CEU San Pablo University. Professor Luciano Floridi, Professor of Information Ethics at Oxford University nearby. Peggy Valcke, who's a professor of law at the University of-- is it Leuven? And Sabine Leutheusser-Schnarrenberger, who was the Federal Justice Minister in Germany for eight years.

And it turns out that Lidia will be joining us in a minute, and I'll introduce her. And her official title is she's the former Director of Trust for Civil Society in Central and Eastern Europe. She's on my right. David Drummond, Jimmy Wales, Sylvie Kauffmann, and Frank La Rue could not join us in London today because they had other commitments. But they're going to be following along what we're doing.

So we have seven experts today in the field. And on my left, we have Miss Emma Carr. It's Miss Gabrielle-- let's see-- it's Guillemin. It's Mr. Chris Moran. And on my right, we have Alan Wardle, Julia Powles, Dr. Evan Harris, and David Jordan. And we'll introduce each of them as we go along.

So everything's being livestreamed. Everything's on the record, and all of that.

So with that, I thought what we should do is just get started. This first session will run from now till about 12:30. The first four people will present their case, if you will. We'll take a break, and then we'll do the remaining three cases in the second session, which should end around 3:00. And what we have asked our experts to do is keep their comments to 10 minutes, so that the panel can then ask a whole bunch of questions.

And then obviously, after this, we're going to have questions from the audience. And the way that works is you give us a submission of your question. There are cards that you can fill out. Just fill out the Q&A card. And we're going to try very hard to answer each and every question.

I thought what I would do is start with you, Emma, if that's OK. She joined Big Brother Watch as Deputy Director in February 2012, having worked for political campaign and research groups, and she became the Director in August 2014. She has a Master of Science in Public Services Policy and Management from King's College London, so from here. And she has a particular interest in cyber security-- cybercrime-- which formed the topic of her dissertation. Emma, you have the floor.

EMMA CARR: Thank you very much, and thank you very much for having a representative from Big Brother Watch in these proceedings today.

I'd like to start, as an introduction to my more focused remarks, by saying that, although we believe it's laudable in its attempt to enhance the right to be forgotten online, I fear that the European Court of Justice's ruling has placed the burden in the wrong areas and badly misinterpreted the meaning of the right to be forgotten.

As you all know, search engines such as Google do not host or create information, and compelling them to censor legally and factually accurate material is, I believe, the wrong approach. Instead, the problem should be tackled at source, with the websites that carry the information being required to remove those references. Having made that clear, I think, my thoughts in this presentation will focus on questions around those who wish to have their data removed, the responsibilities of the sources or publishers of that information, and the future implementation of the ruling.

So the section on requesters has been put to us with a broad definition around what can and can't be classified as public actions and who can truly have that right to be forgotten. In many ways, the best approach in this area is not to set a list of criteria that must be met before a link is taken down, as I believe it could lead to a tick-box approach that would be counterproductive. Instead, due to the serious nature of removing those links, each case should be treated on its own individual circumstances.

Another important point to make is the difficulty of defining the wider public interest, which may go beyond a single individual and what they may want. A serious problem here also is that the line between public and private individuals or behavior is becoming increasingly hard to draw. Rapid advances in technology and social media have meant that the concept of separate spheres for public and private actions has in many ways become obsolete. Any judgment over what constitutes public behavior should hinge on what the public interest is. In the simplest terms, would the public or the private debate benefit from a piece of information being published?

The difficulty in properly separating public and private actions-- particularly on the internet-- is a clear example of why that test should be applied on a case-by-case basis. Each request for removal will undoubtedly come with its own unique circumstances and legal and moral dilemmas. The question around public figures and the right to be forgotten adds a further dimension to this debate. Although this is another very broad question, where something is a matter of public record, for example appearing on [INAUDIBLE] within the United Kingdom, it's very difficult to apply the right to be forgotten.

Often, to allow public actions to be forgotten would be an undesirable outcome. In many ways, it once again comes down to each specific case and action. In areas that directly affect members of the public, it would be hard to justify a right to be forgotten. However, in a private arena where actions only affect the individual and his or her family and friends, then it becomes logical that, like anyone else, public figures should have the right to be forgotten.

The particular danger in this ruling is that it risks putting an individual's desires before society's interest in gaining the full range of facts about an incident. For members of the public to fully understand a situation, they need transparency, and this ruling risks creating gaps in the amount of information that is easily available.

So turning to the second theme, which considers the source of information: one of the key elements is that a clear process is implemented so that all interested parties fully understand their responsibilities once a request has been made to remove a link. Now, this may manifest itself in the Article 29 Working Party's guidance, and indeed, the simplest way to do this would be to implement a workable and clear set of guidelines that can be followed.

As has been previously argued in response to the ruling, it set a dangerous precedent when it made search engines responsible for legal content that appears on other websites. Ideally, if an individual believes they require removal of information, then the first step should be to contact the source in question. And if they do not receive a satisfactory response, then the next step is to appeal to the relevant data protection authority, which in the UK is the Information Commissioner's Office.

The role of the publishers in this process is also an interesting one, and it would seem only right that the publishers should be given opportunities to justify why that information is being published and argue why it should remain. It's also important that they have the opportunity to refer their decision to a court, or that some form of judicial oversight is in place. An analogy that's been made about the lack of such a body is that it's akin to giving individuals the ability to walk into a library and force it to pull books. This reinforces the concern that the power may rest disproportionately with the individual at the expense of the wider public.

So turning to the issue of future implementation of the ruling, the idea of a common understanding and a common approach can also be applied. The idea of a common framework or removals list that can be shared between search engines should seriously be considered. And it's important that the ruling, however imperfect, is interpreted and applied on a consistent basis. Both of these tools can be used to establish a common best practice system in order to speed up the process and to ensure that each request is treated effectively and fairly. 

There are also major questions to be asked around the applicability of the law in a global or cross-territorial sense. Even with the introduction of common guidelines, it will be difficult to ensure that they are abided by. So to conclude, what is evident is that search engines should not be responsible for ensuring the right to be forgotten is adhered to. The only way this can be effectively achieved is by placing the onus on the websites that actually host that information.

However, the ruling being what it is, there are severe difficulties with separating public and private lives. What is now important is that the public interest is fully taken into account, and this can help to inform a decision as to whether or not a link should be removed. It's also important that some form of judicial oversight is brought into this system, and this can be helpful to all parties, enabling disputes to be resolved fairly and effectively.

Thank you very much.

ERIC SCHMIDT: Thank you very, very much. Thank you very much. Do we have comments from the panel? Who would like to start? Ladies first. Go ahead, Sabine.

SABINE LEUTHEUSSER-SCHNARRENBERGER: Thank you very much. You also raised a lot of questions. We need answers, and you helped. So let me ask you-- you said that the search engine should not be responsible. So we need, in your opinion-- I understand you right, I hope-- a procedure in which it is not the search engine that has to decide, but perhaps others. Can you give us some concrete ideas for this procedure?

And my second question is about the importance of the right to privacy in the digital era. I had the impression, following your statement, that perhaps this is a right we can forget because it belongs to the past, and now in the future we don't need this right to privacy. Is that right or not?

EMMA CARR: I think it's incredibly complicated, clearly. I think one of the first statements on this ruling was that this is not the right to be forgotten. This is the right to rewrite history, in our opinion. For me, the right to be forgotten is: if I'm a member of a social media network and I want to leave that network, I should be able to delete that information. They shouldn't be able to hold onto it indefinitely in case I decide to return. That, for me, is the right to be forgotten.

And I am by no means a data protection legal expert, nor am I a lawyer, but it seems to me that our data protection laws need to be seriously updated to give more rights and power to the individual if we're going to start allowing them to remove things online. And again, the idea for me-- and certainly for my generation-- is if somebody posts something about me online-- again, via a social media network-- what rights do I have to then get that removed if I see fit?

Again, that's not the right to rewrite history, which this ruling is. What we're talking about here is removing accurate, legal content, and we're asking an intermediary who has not been responsible for creating that content to remove it. Now, again, I'm not a lawyer, but that doesn't seem to be a proportionate or sensible way of going about things. And perhaps the people who are giving evidence today who represent content creators will have something to add to this.

But in everything that Big Brother Watch advocates, it's an independent judicial system-- whether that be this right to be forgotten ruling or general surveillance and privacy issues. I think having somebody who can independently apply a public interest test would be the best thing for both the intermediary and the public. Again, we haven't seen the Article 29 Working Party's guidance, and it'll be very interesting to see the proportionality that's applied there, and the fairness that I think needs to be applied both to the intermediaries and to the public.

But now we do have this ruling. I think the most important thing is that we see an incredibly transparent system from Google and other intermediaries as to how they're going to apply this, and in what cases it has been applied. I think part of the problem that we've seen since the ruling is that a lot of the media interest in this has been somewhat skewed. So when Mr. Schmidt gave the example of a pedophile's details being removed, quite often it's turned out not necessarily to be the case, and potentially it's the abuse victim, or somebody who's made a comment in a website comments section-- it's been them who's wanted it to be removed.

So actually, much more transparency around that would've been welcomed, for fairness both to the public interest as it's been applied to that case and to the intermediary, because we don't want this to be a case where the public think that this is now somewhat of a free-for-all, which, of course, it's not.

ERIC SCHMIDT: Can I just ask-- so in the case where you have somebody who's a truly innocent victim who's been maligned in something unrelated to them, your position would be don't take it down?

EMMA CARR: So an abuse victim, for instance, or---

ERIC SCHMIDT: Someone who's referred to-- let's use the example that's ambiguous, at least in our view. Someone is named, right? But they themselves are not the victim nor the perpetrator, and really don't want to be part of this narrative, and they request taking down the whole narrative. What is Big Brother Watch's view on that question?

EMMA CARR: Well, my instinct would be that this is why it's wrong for the intermediary to be the one taking this down. If the onus were on going to the publisher, they could potentially remove that reference without removing the entire article. That's not the situation that we have, and so therefore I think your lawyers would have to make that call, and I think you'd have to make that call in connection with the local data protection regulator as well. But that's not by any means an easy question to answer, and certainly not one that I, without any legal experience, can answer easily.

ERIC SCHMIDT: Peggy?

PEGGY VALCKE: Thank you, Mrs. Carr. You seem to treat removing content and removing links as a synonym. Did I interpret that correctly?

EMMA CARR: Yes.

PEGGY VALCKE: And can you explain a bit more why you consider that a synonym?

EMMA CARR: I see it in itself. If you're removing the link to something-- let me rephrase that-- you can access information other than by accessing it via Google. Just because it's not on Google anymore doesn't mean it doesn't exist. With all due respect.

[LAUGHTER]

If it's still in print-- if it's still on the BBC website, for instance-- if it's still in the library it still exists. Therefore, we question why the right to be forgotten as this ruling has put the onus on the intermediary.

PEGGY VALCKE: But as far as I understood, the ruling is about if you search on a person's name you need to remove certain links. So the page is still in the index and everything. It's just that the court requires links to be broken. So if you can still find the content on the basis of other search queries, do you still consider that as a problem?

EMMA CARR: Yeah, I think that-- as Mr. Schmidt said at the beginning-- you expect to be able to go to Google, and if you type in something and you're looking for factually accurate and legal information, you should be able to find it there. I have no problem with that. And this is why it should be down to the publisher, and to judicial oversight that says it's no longer relevant or in the public interest for it to be there. It can be removed from a particular article without removing the article completely. I think that's a fair representation of the right to be forgotten for an individual, and a fair reflection of a public record. So asking Google to take it down or to remove access to a link-- even though they may allude to the link being there or the content being there-- it's not helping anyone, as far as I'm concerned.

LUCIANO FLORIDI: Actually, this happens to be a follow-on from the previous question. Thank you for your presentation.

I was surprised by your reply when you said that removing a link and removing the related information are synonyms, as in 'Paris' equals 'the capital of France.' Because if they are, then there is no difference. And therefore, your argument saying one is better than the other-- if they are synonyms, there is no difference. So I suppose, just for the sake of argument, that they're not. Let's assume that they're not for a moment.

What I'd request your view and comment upon is the following. Suppose that they are not. And if they're not, then isn't it-- and I'm trying to be devil's advocate here-- isn't it a little bit better as a compromise to remove just the link, rather than the actual information, so that if we change our minds we can still link it again? If we don't change our minds, the information is not so easily available. So some protection to the individual is guaranteed, whereas if we remove the information once and for all, we're not just removing the card from the catalogue. We're actually burning the book. And that seems to me a much more radical intervention.

Now as I said, I'm trying to be devil's advocate. I don't want to show too many of my cards, but you can imagine that my heart doesn't stay exactly with this particular argument. But I would like to hear your opinion about that, please.

EMMA CARR: I feel more uncomfortable with an unaccountable, non-legal body making a decision to remove a link-- or potentially reinstating that link and potentially removing it again, with potentially even less accountability-- than with an individual going through a judicial process, fighting their argument, allowing the original publisher to make their argument and fight their cause of why it should remain, and then that judicial process can make a decision about whether it's in the public interest or not, once and for all, rightly or wrongly. I think that's a far more accountable and transparent process than making the intermediary responsible, to be judge and jury as to whether content should be linked to.

ERIC SCHMIDT: More questions from our panel? Thank you very, very much.

I'd like to introduce our next expert, Mr. David Jordan. David is the Director of Editorial Policy and Standards at the BBC, having previously served as Controller of Editorial Policy and Chief Political Adviser. Before those roles, he was deputy editor of On the Record and Panorama, which I'm sure our audience know-- I think the BBC's flagship current affairs show. He founded and edited a range of political programs on BBC television and radio, including "The Westminster Hour" and "The Week in Westminster" on Radio 4, and "People and Politics" on the World Service.

David is a member of the Steering Group of Public Broadcasters International, The Organization of News Ombudsmen, and the British Committee of the International Press Institute. Mr. Jordan, would you like to proceed?

DAVID JORDAN: Thank you, and thank you very much for your invitation today to participate in your discussions.

As you said, I'm the Director of Editorial Policy at the BBC, and my role is to try to ensure that all programming and content is consistent with the BBC's values as expressed through the BBC's editorial guidelines, which are approved by our governing body, the BBC Trust.

One of the basic values is freedom of expression, a pillar of the democratic values to which the BBC's charter and agreement commit us. BBC News is the biggest news website in the UK. The BBC's online archive accrues daily, but the BBC also has a commitment to digitize its analog archive over time and make it generally available, making our collection of searchable online archive material ever greater. Much of this will be news or factual output, which inevitably contains individual data.

In this context, I don't think it will be a surprise to you that the BBC has for some time been receiving requests from individuals asking for material about them to be taken down. To help deal with these requests, the BBC's editorial policy team has issued guidance to our program and content-makers. This is called the Removal of Online Content Guidance, and is available on our public-facing editorial guidelines website.

The BBC's fundamental starting point is that, unless content is specifically made available for a limited time period, the presumption should be that material published online will become part of a permanently accessible archive and will not normally be removed. To do so, as others have said, risks erasing the past and altering history. And the threshold for removing or amending broadcast or online material is high. We will only do so in exceptional circumstances-- for example, legal or child protection issues, harm and distress, fairness, straight dealing, and accuracy.

Each request for removal is investigated. Consideration is given as to whether other steps can be taken to address the issues raised, short of unpublishing-- to use an ugly but appropriate term, I think. We are transparent to our users about any changes or revocations we've made, unless there are legal or editorial reasons which preclude this. And the BBC has a comprehensive and transparent three-stage complaints process, which goes right up to our regulator, the BBC Trust, to which requesters can appeal our decisions.

The BBC does not agree with the ECJ decision to place the burden of responsibility for unsearching online material-- to use another ugly word for an ugly process, in my view-- on Google and other search engines. It seems to us that inadequate attention was paid in the ruling to the public's right to know, their right to remember-- in other words, their freedom of expression as opposed to the individual's right to be forgotten.

Additionally, the BBC believes the decision to revoke or amend online material should principally be a decision for the publisher. Editorial teams need to decide on revocations with all the facts available, something Google and other search engines cannot do because, as has been admitted, they cannot have all the background and all the facts. Nor do we believe that it's right in principle to restrict search. But we are where we are, until the forthcoming EU Data Protection Regulation is able to redefine the limits of the so-called right to be forgotten. And in the meantime, it falls to Google and other search engines to preserve the freedom of expression, the right to know, as well as implementing the right to be forgotten.

But you asked that we should address some particular questions and concerns to this committee, and we approach question one from a rather different perspective than that in which the question seems to be pitched. We should not be finding reasons why material should remain on the public record, but rather, in the context of each case, justifying exceptionally why material should be unpublished. We place particular emphasis on the preservation of the public record, as did the last speaker. We're notified by Google when any of the BBC's online material is unsearched. And we're very grateful for this courtesy, which at the moment does not seem to be extended by other search engines.

But an analysis of the first 46 cases in which BBC reports were unsearched by Google shows that the majority involved matters of public record: court cases or analogous circumstances. In some cases, the BBC had already rejected the same request. In others, it seems that a mistake has been made. For example, Google searches for a recent murder conviction now bring up the standard message: some results, et cetera, et cetera. The relevant news article is not found. Worryingly, what we think has happened in this example is that somebody with the same name has asked for material to be taken down, and inadvertently, links to other stories with that name have also been unsearched.

In our view, these public hearings are in the public interest. They should be readily available. British justice requires that justice should be done and be seen to be done. This applies to both convictions and acquittals. The danger with the current system, whereby people are going direct to Google, is that the public record is being hidden. One instance notified to us relates to the trial of Real IRA members, two of whom were subsequently convicted. The report could not be traced when searching for any of the defendants' names. It seems to us difficult to justify that in the public interest.

We've already made clear, in answer to question two, that the BBC believes the decision to remove or redact material should be the prerogative of the publisher. But given the duty placed on Google and other search engines by the ECJ decision to respond to requests to unsearch material, we welcome the decision of Google to notify publishers. We welcome the recent changes to Google's process to allow additional information to be supplied, which may reopen the decision to unsearch.

But unfortunately, the usefulness of this approach has been circumscribed by the lack of a formal appeal process, the lack of information about the nature of the request to unsearch or the origin of it, and the lack of information about the search terms which are being disabled. Some, at least, of this information would be necessary for publishers or others to make adequate use of a formal appeal process, if there were one, or of the process which Google has introduced allowing further information to be supplied and the decision reconsidered.

In our view, it would be desirable for publishers to be consulted before Google takes a decision on whether or not to remove a link to an online article. The publishers have the full background to the story, insight as to whether a request has come in previously-- and if so, why we took the decision not to comply, or whether certain changes were offered or made-- and whether it's a new request. We would be able to advise on the issues and, if appropriate, suggest possible amendments that could prevent the unsearching of the article.

Seven out of the 46 BBC articles for which we've received removal notifications involved individuals who had previously asked us to take down the entire piece, or at least to remove their name. If Google came to us first with clear information as to the nature of the request and the identity of the requester, we'd be able to provide a full briefing on the issues and background information. We could also investigate whether there's a compromise solution short of unsearching. Currently, we're simply informed that our article is being removed and, whilst we can now submit any additional information, this is being done in ignorance of all the factors that I mentioned earlier.

Unlike the person asking for the article to be removed, we don't have a right to appeal. On the other hand, the individual making the request can appeal to the data protection regulator.

In answer to your question three, to this end amending the online forms would help, and we'd recommend the following changes. Forms should include a clause saying that the subject agrees to the information being shared with the publisher-- i.e. waiving their right to anonymity-- and making it clear that the publisher will be advised of the nature of the request. However, in our view, the form should include the proviso that the publisher will keep this information confidential.

The form, in our view, should ask if the subject has already raised this issue with the publisher and, if so, for a brief outline of what was requested and why, and what the outcome was. The form should also ask for more detail about the request-- exactly why the requester wants the information removed. Did they consent to it being published at the time? Are they now embarrassed, or are there serious reasons for them wishing to be unsearchable, and if so what are they?

And is the information elsewhere in the public domain? In one case where we were asked to remove information, it had been put elsewhere in the public domain by the subject themselves. Is the presence of this article having an adverse effect on their life, and if so, how? And if they've been convicted of a crime, has their conviction been spent? These changes would make the system more transparent and allow Google to take better-informed decisions based on all the facts.

I should say, finally, that in the interest of transparency, until this very unfair judgment has been changed or revised, or until we find a way of working better with Google to ensure that there's more general agreement about which articles should be unsearched or even unpublished, we will publish a simple list of all the removed or unsearched articles affecting the BBC on one page of our news website, provided they meet and comply with the BBC's criteria for unpublishing.

Thank you.

ERIC SCHMIDT: Thank you, David. Comments from our panel? Go ahead Peggy.

PEGGY VALCKE: Yes, thank you very much, Mr. Jordan. I'm wondering whether your suggestion that the publishers should be contacted before links are removed is limited to your context being professional media, or whether you see this as a general principle?

DAVID JORDAN: Well, I think it would be helpful if it were a general principle. Of course, I'm not aware of on whom the greatest burden of these requests is currently falling. Only you would be aware of that. Only you would be aware of where the greatest problems are to be found.

In the case of the BBC, since we're aware of the number of requests that have been made to unsearch our material, we know that we can cope with that workload. I can't know for sure that other websites, which may have fewer resources, would be able to cope as well as we can. And so that's clearly a judgment that needs to be made in the context of what burden would be placed on each publisher by doing that.

But I rather suspect that the load is quite widely spread, and that most organizations will be able to cope with that request. I think in principle it would be enormously helpful, and that would give you far more information with which to make your decisions. And I have every sympathy with the difficulty that you face of having to deal with these things in many cases sort of part blind to all of the information that would be best for you to make the decision on.

ERIC SCHMIDT: Jose-Luis?

JOSE-LUIS PINAR: Yes. Well, we can agree or not agree with the ruling-- as Ms. Carr doesn't agree with the ruling-- but the case is, you have to apply it. And in this sense, do you have a concept of what 'public person' can mean? What does it mean? Because it could be very interesting to apply. Do you have your own criteria on how to define what public figure means?

And the second one-- just if you can answer this question-- have you had any kind of conversation or contact with the Information Commissioner about how to apply this ruling in your specific case, the BBC?

DAVID JORDAN: In answer to the second point, we haven't had any specific contact with the Information Commissioner about how to apply the ruling in the case of the BBC, because as it's applied to search engines, we haven't felt it necessary to get in touch with them. But we are in contact with them constantly over the application of data protection rules to journalism.

And we are active in the UK context and in the EU context in trying to make sure that there are carve-outs for journalism in relation to data protection issues and the right to be forgotten and other issues. So we're active in this sphere, but we haven't specifically discussed this aspect with them.

On your first question, I think that's a very difficult question, which no doubt is why you asked it. The definition of a public person I think is not one that's susceptible to a hard and fast rule, and nor should it be thought that public persons are in themselves not due some consideration for their privacy.

So, simply, to define-- I think the definition and the suggestions that have been made in the ECJ's ruling in relation to the notion of public figures and public life present an enormous difficulty, because it's impossible, in my view, to generate an algorithm or something else that defines that for you.

It's a case of dealing with each individual article or piece of information in relation to the particular person concerned. And I think that in itself creates enormous difficulties and complications for anybody trying to make these judgments. But I could come back to the point that the only people capable of having all the information necessary are the publishers of the piece, not the search engine, which is performing the function that Eric Schmidt outlined in his remarks at the outset.

ERIC SCHMIDT: Sabine? Comment?

SABINE LEUTHEUSSER-SCHNARRENBERGER: Yes. You have a lot of experience dealing with removal requests. Can you imagine making a distinction between trivial cases on the one hand and more serious cases on the other hand? Because the procedure will take a long time if in every case the publisher or webmaster or editor has to give a statement and can exchange arguments with the search engine. So perhaps we have to make a distinction between trivial and nontrivial cases. Is that something you could imagine? Is that practical against the background of your experiences or not?

DAVID JORDAN: Well, I think in theory-- I hadn't thought about that point before-- but I think in theory it would be possible for a search engine, when they get in touch pre-decision, to say to a publisher: in our view, this is a trivial case, and we intend to remove the links to this within the next 36 hours unless we hear back from you to the contrary.

And then on things that were more difficult, to take the view that we would have a little bit more time to allow you to respond in case there are things that are relevant to the decision that we're about to make. So I can see how you could do that. It's always then possible for a publisher to come back and say, well actually we don't think this is a trivial case. In our view it's not trivial.

Because of the following reasons, there's more to this than you may imagine. That gives that opportunity. But I think it would be possible for you to give an initial view, this is trivial, we're going to do this unless you come back very quickly with some reason why we shouldn't proceed, whereas this is difficult and we will take more time over it.

And I appreciate that if you're going to involve the publisher and you're going to have a real conversation with the publisher about it and give some opportunity for the publisher to get back to you, the process is going to take a bit longer. And it is, from your point of view, it's a resource-intensive thing, it is going to be possibly more expensive and involve more work on the part of the search engine. I appreciate those things are going to be the case.

LUCIANO FLORIDI: Thank you. That was very helpful. Thank you very much. I have a small factual question, which I'm perfectly happy for you not to have the answer to. It's just a personal curiosity. And then a more complicated one. The factual question is: you said that the BBC deals with removals on a regular basis and has a procedure and so on. Would you have some sense of the order of magnitude of how many removals, say, per year, more or less? And I'm perfectly happy if you don't know; I just wonder, to have a comparison with what's happening with Google.

DAVID JORDAN: There's no comparison with what's happening at Google. I mean, the number of requests has gone up since the ECJ decision, paradoxically, because initially we thought they'd all come to you. But that hasn't been the case. The number of requests has doubled. But we're talking about tens of cases a week, fundamentally, to our news organization, which has been increasing steadily over time-- not the thousands and thousands of cases that you're dealing with.

LUCIANO FLORIDI: So it's a really different order, completely different order--

DAVID JORDAN: Remember, the BBC has certain editorial standards, and newspapers or publications that deal more with gossip and with material that intrudes more greatly into people's private lives might very well be receiving many more requests. We have very, very clear guidelines about intrusion into private lives being justified by the public interest, mainly in relation to investigations and so on and so forth. So I'd imagine that other publications and other broadcasters might have more cases to consider.

LUCIANO FLORIDI: Can I go for the difficult question? And the difficult question is the following. Your view about the possibility that, as we progress-- suppose for a moment that we decide: publisher first; if that doesn't work, search engine next; if that doesn't work, data protection agency or court next; if that doesn't work, European Court of Justice.

Suppose that that is, for a moment, the normal procedure for an individual to have some information removed. Now, what would be your view about making this procedure not just an automatic thing? Basically: I first ask you; it doesn't work. I make the same request to Google; it doesn't work. I send it to the data protection agency; it doesn't work. I send it to the European Court of Justice. Where basically the outcome at each step doesn't make any difference to the next step.

I hope my question is not terribly confusing. You see what I mean, if it doesn't work at step one, when I go to step two, should the fact that it didn't work at step one make a difference or not? What is your view?

DAVID JORDAN: Well, it should be taken into account. I would welcome a sequential approach of the sort that you're outlining-- publisher first, then search engine, and then through the legal process if that was required. I'd welcome that. But I think that each stage should know what the previous stage's ruling was and why, in order to make sure that if it did get to the search engine, say, from the publisher, you were well aware of what all the facts of the case were and why the decisions that had been made were made. I think that would make things a lot simpler.

That sequential approach would seem to me to be much better, though I fundamentally disagree with the search engine part of it. Just talking to colleagues before we started this session, I was ruminating on the different national approaches to this issue. And to ask a pan-European organization like Google-- or a pan-world organization like Google-- to draw up criteria that would apply across a Europe that takes radically different approaches to privacy in many nations--

The approach to privacy in the UK is very different from the approach to privacy in France. And the approach in Germany is very different. And to get a system that copes with all those national differences, which would be a search engine problem, it wouldn't be a publisher problem-- or not to the same extent at any rate-- is I think asking an awful lot.

ERIC SCHMIDT: Lidia?

LIDIA KOLUCKA-ZUK: Thank you very much. That was very, very interesting, especially as today in [INAUDIBLE], a Polish daily, there's an article by Professor Sadurski, a well-known professor of constitutional law, about the new role of media and press, and the new definition of media and press. Because in fact today everybody can be a journalist, especially within the framework of the new regulation that has been discussed in Poland.

He assessed in this article that it will be extremely difficult to implement any ruling, including the one that we are discussing today, because all the standards that you mentioned in your statement are defined in a different way-- in a very controversial way, I would say.

But I would like to refer to the previous question about the definition-- sorry, because I know that this question is complicated, but somehow maybe we can narrow it-- and this is my question: whether we can understand, or whether we can try to define, the public figure through the public activity.

And I just would like to ask you whether it would help if we discussed the public activity-- when we talk about what is public, it should refer to the public activity, rather than the public figure or the definition of the public figure. I'm not sure whether that's clear, but I'm trying to narrow this definition by referring to the public activity. How would you define the public activity? Would it be helpful or not?

DAVID JORDAN: I think you would take into account the nature of the public activity in deciding whether somebody was a public figure. We would always go that way around. But I find this incredibly difficult, because it's easy and obvious if you're talking about an elected politician at a national level. That's relatively straightforward. But supposing you're talking about a school governor in the UK-- a voluntary position, something you do to sit on the Board of Governors of the school; you help to run the school, you don't put yourself up as a spokesperson for the school or anything of that nature, you're just simply investing your own personal time in helping. Does that make you, because you sit on that board, a public figure in all circumstances?

Perhaps in certain circumstances, if you've entered into the debate about the nature of education in the UK, perhaps that would justify you being called a public figure. But supposing all you do is to restrict yourself to issues around whether or not the quality of the school dinners in that particular individual school is good enough for your pupils or something of that nature, does that make you a public figure? Does simply being a governor make you a public figure?

It's where you get down to those lower-level public activities that I think it becomes very difficult to decide, without linking the activity to the status and deciding on the basis of both.

ERIC SCHMIDT: One quick final question. Can you give us a generic example of a really hard decision under your policies for the BBC? What's an example of a gray-area decision that you struggled with? Without the specifics, obviously.

DAVID JORDAN: I think an interesting area is ASBOs. In this country, young people can be issued with orders that prevent them from, for example, going into certain areas of a town at a particular time of night or something. This would generally be applied to people who are guilty not so much of crime, but of serious anti-social behavior, where they've been hanging around congregating on a street corner somewhere, causing a lot of noise or disturbance or upsetting people, that kind of thing. So you're not talking about serious levels of criminality.

But of course, in order for that order to be effective, it has to be made public. And you're talking here about young people-- very often about people under the age of 16, defined as children in our case. So what happens when, for example, somebody actually becomes a better person and stops doing that kind of thing and moves on, or moves from the area-- these are examples we've had-- moves from the area that they used to live in, where they were in trouble a lot, had difficulties in their personal life, parents were divorcing, that kind of thing. They move to another area. And then their school friends in the new area can Google them and up comes the ASBO, and all of their past is sort of visited upon them in those circumstances.

That's a matter of public record, where in general we're very against altering the record, as against a piece of quite intrusive personal information. That's where the real difficulties lie. And that's where we try to find a solution that's based not on taking stuff down and unpublishing it, but on amending it in a way that makes it less searchable but doesn't alter the public record-- so, for example, by getting rid of the surname. This is something you can't do. In the instance that I'm thinking of, we took the surname of the individual child away, and therefore it was much less searchable. If you knew it was there-- if you were doing a research project on ASBOs in the UK-- you would find it, but you wouldn't find it if you simply put the name into a search engine. That sort of approach is something you can't take, and I think it's a more subtle approach than you're able to apply in your circumstances.

ERIC SCHMIDT: Thank you for that. Why don't we move on to our next expert, Gabrielle Guillemin. She's a senior legal officer at Article 19, an international free speech organization based in London. She has been leading the organization's work on internet policy issues since 2011. She is a member of the UK Multistakeholder Advisory Group on Internet Governance, and an independent expert attached to the Council of Europe Committee on Cross-Border Flow of Internet Traffic and Internet Freedoms. Prior to Article 19, she worked as a registry lawyer at the European Court of Human Rights for four years, and she was called to the Bar of England and Wales in 2006. Why don't you go ahead, and thank you very much.

GABRIELLE GUILLEMIN: Thank you very much, and thank you very much for this opportunity to address the council. Let me begin by saying that Article 19 shares many of the views and concerns that have already been expressed by Mr. Jordan and Mrs. Carr, so I apologize for any repetition. You'll have received Article 19's written comments on the implications of the Costeja judgment, so I'll confine my remarks to three main points.

First, I'd like to highlight what we see at Article 19 as an unfortunate development in the law of data protection. We are concerned that, by holding that search engines are data controllers for the purposes of the data protection directive, the effect of the Costeja decision is to considerably and unreasonably broaden the reach of the protection of personal data beyond its original intended purpose. In particular, we think it is obvious that the inadequate, irrelevant or no longer relevant test was not conceived to address the practical issues that arise in the context of so-called right to be forgotten requests.

More generally, we think that the judgment is symptomatic of a more general development whereby the line between data protection, privacy and defamation is becoming unhelpfully blurred. And here I'd like to distinguish between, first, data protection and the right to privacy. Whereas the right to data protection is widely understood as a subset of the right to privacy, we think that the scope of the two rights is significantly different. Whereas privacy generally protects information which is private, data protection concerns the protection of personal data, that is, data about a person which may be either private or public. This is, in our view, a significant distinction which presents serious difficulties where the protected interests may diverge. And again, this is apparent when the information is made available in the public domain.

Looking secondly at the distinction between data protection and reputation, here we're seeing that data protection is increasingly relied upon by individuals as a means to protect their reputation, either as an alternative to, or in addition to, the established principles of defamation law. But again, defamation and data protection protect different things. The purpose of defamation law is to protect people against false statements of fact which cause damage to their reputation, i.e. diminish the esteem in which other members of society hold them.

By contrast, data protection enables individuals to request the erasure of information which is truthful so long as it is inadequate, irrelevant or excessive. So the test is different. It is lower and easier to overcome, and does not involve the defenses that would be available under defamation law. So it may be the case that the journalism exception under data protection law kicks in and that this prevents actions against newspapers. But we think that the development of the law itself in this regard is wrong in principle. 

A third distinction I'd like to make here is between the right to privacy and reputation. Like data protection, the right to privacy can be used to prevent the dissemination of accurate information of a personal nature, such as photos taken surreptitiously in a private home. The effect that these facts have on the reputation of the individual concerned is immaterial. The deciding factor is whether the plaintiff has proven wrongful intrusion in his or her privacy. What we're seeing, however, is the European Court of Human Rights, for instance, increasingly finding the right to reputation as part of the right to privacy.

Overall, we think that this development isn't helpful because concepts that should be properly regarded as being distinct for a good reason are becoming unhelpfully muddled. The second point I wanted to make is about archival information, and here I'd like to begin by stating that we think that the starting point and the starting presumption should be that information in the public domain shall remain in the public domain. 

Looking at information of an archival nature, again we think it's helpful to look at it as different categories. So for instance, looking at news archives. Here we think that the position is clear, and this panel has already heard that the European Court of Human Rights was very clear that it was not the role of judicial authorities to engage in the rewriting of history by ordering the removal from the public domain of all traces of publications which have in the past been found by final judicial decisions to amount to unjustified attacks on individual reputations. So we very much agree with what Mr. Jordan was saying earlier.

Now if we look at personal information that may be held by public archives, I think there are interesting things that can be learned from that. Usually public archives are managed by public bodies, which are subject to data protection laws. So they would already apply data protection principles when looking at whether some information should be disclosed. And here it's interesting to see that, in relation to personal information, what public archives look at is whether the disclosure is of sensitive information that would cause substantial damage or harm to the data subject. We think that this is a very important distinction.

And here, when looking, for instance, at the code of practice on archival information of the National Archives of Scotland, they are very clear that the test of substantial damage is not one of mere embarrassment or discomfort, nor is substantial distress sufficient. Actual harm is required. So we think that in the broader context in which Google finds itself, so to speak, it is a test that would be helpful in weeding out unmeritorious claims.

Now my third and final point is about the process on how to handle the right to be forgotten requests. Let me begin by saying that we think that the process is likely to resemble somewhat the notice and takedown system under the e-commerce directive. I want to be very clear that this is a system that Article 19 opposes for well-known reasons. First of all, the procedure itself very often is unclear. The individual whose content is sought to be removed is not systematically informed that a request has been made to remove that content, and as a result freedom of expression suffers.

But going back to the process under the so-called right to be forgotten, here we very much agree with what Mr. Jordan was saying. We think that it's absolutely crucial that the data publisher is informed that a request has been made, so that they can put their case to the data controller when it is making a decision on any particular request. And if that decision is that the search results should be delisted in relation to a particular name, then the data publisher should have an opportunity to appeal that decision to a data protection authority, or ideally to the courts or an adjudicatory body directly. We think it's absolutely essential, and not just a matter of good practice.

And here we would look at the concept of positive obligations under European human rights law. This concept was developed, and very much underpins in some ways, data protection law, because it forces member states to adopt rules that regulate relationships between private parties. And here we think that in a situation where the data controller is making a decision that interferes with the free expression of the data publisher-- or indeed of the person receiving information and looking for information which they may think is relevant-- the state has a duty to put in place rules that give an effective remedy to the person whose right to freedom of expression has been interfered with. So I'll close my remarks here. Thank you very much.

ERIC SCHMIDT: Thank you very much. Do we have some comments from our panel? Sabine.

SABINE LEUTHEUSSER-SCHNARRENBERGER: Yes, thank you very much. First, can you give us more concrete criteria for what counts as sensitive information? And second, would the publisher have the right to go to a local court under the British legal system after the removal of a link by a search engine? Does the publisher, under current British law, have a right to go to a local court or not? Or does it depend-- or is there no legal basis for this?

GABRIELLE GUILLEMIN: Thank you. In relation to your first question, when I mentioned sensitive information I think it was by reference to data protection law itself, which usually concerns medical data, for instance-- in that context. And in relation to your second question, in some ways perhaps Mr. Jordan would be better placed to answer this, but I don't think that there's currently a right, under the current law, to go to court when there's been an interference, essentially by a private actor, with freedom of expression.

DAVID JORDAN: Sadly I'm not sure you are asking the right person that question. I should have said-- I'm not a lawyer. So I'd have to take advice on that. I don't think so, but I would want to be safe and certain.

ERIC SCHMIDT: OK. More questions? Luciano.

LUCIANO FLORIDI: Fine. An invitation to tell us a bit more about something that you mentioned which I found very interesting. At some point you mentioned embarrassment and discomfort, and if I got your point right, you said, well, that's not really enough, is it. We want to have something that is legally described as harm, and therefore proportionality. If there's real harm, then we take that request seriously. If there is no real harm, but just the fact that you don't like it, that's not good enough. That was my understanding. But could you tell us a little more about that proportionality and harm, and how that relates specifically to the actual decision taken by the European Court of Justice, where harm must have played a role at some point, I guess?

GABRIELLE GUILLEMIN: Well, precisely-- this notion of harm, which you find in defamation and the protection of reputation, or in intrusion upon privacy, you don't find in data protection law, because there it's all about relevance or irrelevance, and it's by reference to the data subject.

So, as Mr. Schmidt was mentioning earlier, it is very subjective. Also, when the information is already in the public domain, the data subject may think that that information is not relevant, but someone looking for it might think that it is. Therefore, the mere fact that the information is embarrassing should not be the criterion. There should be some harm to the data subject, him or herself.

LUCIANO FLORIDI: Just to press that point a little bit further. And again, trying hard to disagree with you-- not easy, but trying hard. What if the embarrassment-- we know that all this comes in degrees, and social embarrassment at some point becomes a social stigma; it becomes losing your job because you can't bear it anymore.

So is there a way of putting some kind of threshold beyond which the embarrassment, discomfort, unpleasantness of that particular link showing up again and again on that Google page becomes social harm? I mean, basically you have to resign. I'm referring to a particular case that was discussed by the BBC during a program, where a teacher basically had to resign because she could not face the classroom anymore, because the students were constantly looking up those pictures of her-- legally available-- from when she was younger, and so on. So do we have a way-- probably not, I don't know, but I'd like to hear your view-- do we have a way of understanding when embarrassment, discomfort, unpleasantness becomes harm?

GABRIELLE GUILLEMIN: But I think that in that sense, a better remedy surely should be defamation, because that's what we're talking about. It's lowering the esteem of the person in the eyes of other members of society. And in order to develop the notion of what that harm means, I think that the courts would be much better-placed to determine when that threshold is being crossed.

ERIC SCHMIDT: Just as a follow up to that. The problem in the defamation case as I understand it is that if you then follow the legal process, there's even further defamation of yourself by making the defamation claim, because then it gets widely covered. So many people in these situations decide to not pursue their legal redress for fear of further publicity and further embarrassment. This is a case where the two rights conflict in a very complicated way. Do you have an opinion on how to solve that legally?

GABRIELLE GUILLEMIN: Well I think that if someone is pursuing a claim in defamation it's precisely because on some level they want to be vindicated. So if you're pursuing litigation you're taking that risk. It depends on what's more important to you ultimately.

JOSE-LUIS PINAR: Yes, thank you. Just two questions. The first one: you said that the publisher must be informed in any case-- that it's not just a matter of practice but a general principle. Is that so even without the consent, or against the wishes, of the data subject? Perhaps the data subject doesn't want the publisher to know that he's asking to remove some information, because the publisher, after the removal, can publish the previously delisted information again on another web page. And the second one: in taking the decision of whether or not to remove some information, could the distinction between privacy and data protection be a criterion, even taking into consideration that we, and the ruling, are talking not about privacy but about data protection-- data protection being a fundamental right recognized in Article 8 of the European Charter of Fundamental Rights?

GABRIELLE GUILLEMIN: On the point of notification, I think it is absolutely crucial, again, for the data publisher to be able to give their side of the story. I think it's a strange idea not to want to contact the data publisher, because in a way they could do even more for the data subject, if they agreed that the information could actually be removed, which goes beyond in some ways what Google or other search engines would be doing. And in this sense, it seems that Mr. Jordan's proposal of having a sort of confidentiality agreement, as the case may be, might be a good way of resolving these particular concerns. As regards the second question, on removal, I'm not sure if I've understood your question correctly. But it seems to me that, despite what I said, in a way we have a situation where now all these concepts have been linked together, so that notions of privacy will inevitably come into the balancing exercise. And I think that's what we're already seeing in practice with the types of requests that are being made.

JOSE-LUIS PINAR: The question is-- perhaps I don't understand. I understood that you have distinguished between privacy and data protection. Do you think that, in deciding whether to remove, it's necessary to distinguish whether we are dealing with a question of privacy or just a question of data protection?

GABRIELLE GUILLEMIN: I think that in the context of data protection, the information we're talking about may already be public. So I think that's an additional difficulty, if you like. I think there would be more of a claim if the information was private to begin with, because then it would be easier to establish some kind of wrongful intrusion or harm. Whereas when the information is already public, and the data protection framework still allows data subjects to have it delisted, I think that's an example of where data protection might be going too far.

PEGGY VALCKE: Thank you very much, Ms. Guillemin. I have the following question. We've heard already about press archives, and you gave the example of public archives. Both are considered to fulfill a kind of public duty; there is the interest of the public in having records and archives which are intact. So the guidelines that have been developed by these archives set the standards quite high for changing data in the archives, having things removed, or making things less searchable. I perfectly understand that, because there is this duty, this public task, involved.

But if I look at the Transparency Report, the website most affected by the requests for removal is Facebook. Can you say the same about Facebook? Because a lot of personal data comes from sources which do not have this task of making sure that archives remain intact, and which therefore do not contribute to the general interest in the same way. Or do I see this wrong? Thank you.

GABRIELLE GUILLEMIN: I think my general point is that the test that's been developed in that particular context would be helpful for search engines in considering the types of requests that they get in the context of the right to be forgotten. Of course, public archives and newspapers, you could say, have a public function on some level. But I think that what we're seeing is that the search engine gives access to information in the public domain. So individuals can decide what they think is relevant as part of that, and they should be able to get access to that particular information.

So when looking at whether or not this right should be restricted, it should only be under exceptional circumstances, and subject to the kind of test that's been developed in the context of public archives.

PEGGY VALCKE: It's a very concrete question. You said embarrassment is not sufficient to have certain data removed or changed in a public archive. I understand that; there should be something more than simple embarrassment. But I don't follow that embarrassment is not sufficient to have something removed-- no, not removed, but to have a link kept to information on a Facebook page that comes from a person I might not even know. I don't see the public interest underlying your argument for keeping that link.

GABRIELLE GUILLEMIN: Well, if you look at it this way: for historians, even trivial facts might become relevant, and actually become even more relevant over time. So although the public interest is a very important test, it depends really on how you look at it. And in this discussion that we've been having-- and of course data publishers have a vital role to play in that discussion-- it's also important not to lose sight of the individual exercising his or her right to freedom of expression in seeking the information in the first place. So in that sense, if the information is already available in the public domain, I may find that this information is relevant, and I find it interesting. The data subject might think that it has no public importance, but I can also make a judgment as to whether or not I think that it is important. Which is why, when the information is already in the public domain, in looking at whether or not access to that information should be restricted, I was suggesting that this test of substantial harm might be helpful.

ERIC SCHMIDT: I think we've run over, but this is important. Perhaps we can-- thank you very much for that. And let's introduce our next expert, Dr. Evan Harris. Dr. Harris was a Liberal Democrat member of Parliament from 1997 to 2010, so 13 years. And during this time he took a particular interest in human rights, and especially freedom of expression. Having been responsible for the abolition of the UK laws of blasphemy, seditious libel, and criminal defamation, he founded the libel reform campaign which led to last year's Defamation Act. He's a medical doctor, was previously a longstanding member of the Medical Ethics Committee of the British Medical Association, and helped to write policies on confidentiality of medical data. He's now a trustee of Article 19, which as you know campaigns for freedom of expression, the right to information, and for journalists worldwide, and is the associate director of Hacked Off, which campaigns for a free and accountable press, in which sole capacity he is speaking today.

EVAN HARRIS: Thank you. What I wanted to do is set out our position, then talk a little bit about Google's position, because I think it's important for the independent panel to reflect, or hear reflections, on who they are giving advice to and what their interests are. I want to talk a little bit about the nature of the ruling, although not necessarily repeat arguments that would have been heard legally before the judgment was made, and then talk about the process, particularly this issue that's arisen about whether the publisher should be notified prior to or around the time of removal. I think it's useful for me to make a declaration of transparency: I've never worked for or been funded by Google in any way, but I do use Gmail, and Google is my search engine of choice, and I find it very valuable and helpful.

So our position is that we represent victims of press abuse-- this is Hacked Off-- and that covers both unlawful material that appears in newspapers, and also things that are not unlawful but are breaches of the code, the code that the newspapers themselves have agreed to sign up to. And it is the failure-- well, there are several reasons-- but it was the failure of those who signed up to the code to stick to it, or for it to be enforced, that has caused the proposed changes in press regulation. And therefore, what we're aware of are cases which don't involve anything unlawful, but things that we wouldn't in fact want to be unlawful, such as intrusion into private grief. We're not suggesting that be made criminal or a new tort, but if newspapers say we don't do that, they should respect that.

We don't proactively go out around this so-called right to be forgotten, but obviously if people mention it to us we refer them to the Google form. So our main remit is not to seek out people who can benefit from this judgment. The other thing that is critical to note-- and I want to come back to this when I discuss what Google's role might be, or how it might improve its performance in one of these areas-- is to recognize the bias in media coverage of this issue.

And I don't know how you're selecting your witnesses-- you've had three so far that are generally opposed to the ruling, and maybe that will be corrected this afternoon-- but any academic study of press articles would find probably a ratio of 9:1 opposition to the ruling, and to the rights that are supposedly due to be protected by that ruling. And that's the right of the publishing world; they have a vested interest. But I do think-- and I'll come on to this-- that Google has a duty to ensure that factual errors are corrected when they concern Google's business. No other business, I don't think, would allow plain factual errors to remain on the record, unless it suited their business.

The right that we're talking about, which I don't think has been referred to so far, might be considered to be set out in Article 8 of the EU Charter of Fundamental Rights, and that is that everyone has a right to the protection of personal data concerning him or her. Such data must be processed fairly for specified purposes, and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified, and compliance with these rules shall be subject to control-- ultimately, I think they mean-- by an independent authority.

And so it's no good wishing that away; it's in the EU Charter of Fundamental Rights and was the basis of the law. So testimony that you've heard regretting the fact that it exists-- particularly from publishers against whose commercial interests that right might be seen to conflict-- well, that's by the by. It's there. And I think Google was right to recognize-- whether it liked the judgment or not, and it clearly didn't, because it was opposing it; it was a judgment against it-- that, at least until the law is changed, it has to comply and be seen to comply.

So I just want to say a word or two about Google's position, and I'd like to be frank if I may, Mr. Schmidt. I hope you won't mind, and I hope you'll take it on board. I've already said that I-- we-- recognize that Google has approached this in the right way generally, by saying there's no good cavilling about the judgment. And I'm sure Google is spending a fortune on lobbying for the law to be changed; that is its right as a huge multinational company and a player.

But I think it's only fair to acknowledge that Google has a vested financial interest in its card indexing, as you described it, being as extensive as possible. It's a legitimate interest, but it is a commercial interest. It's not being done for charity, and it's not being done as a public service per se. Income is derived from its utility, represented by the extent of its use. Now sometimes Google is quite happy to unindex results when it suits it. It has made a decision, commercially, that it wants to abide by copyright law. So it delinks millions-- not just tens of thousands or hundreds of thousands, but millions-- of pages without bleating about it, because it's deemed it right to comply with that law. And indeed when it comes to images of child abuse-- quite rightly, and no one would argue with this-- it delinks those as well. And there may be other local laws that it chooses to comply with. So it's not a new concept. One might be surprised, from what's gone before, that Google can decide to delink. It's not breaking new ground; it's just that this particular judgment is against its legitimate but nevertheless commercial interest.

So I think it's important to recognize that. Now, Google's motives have been impugned. For example, it's been said-- I think unfairly-- to have deliberately overinterpreted the judgment, and therefore removed search results for search terms beyond what it should. I don't think that's the case. I think that's a pure misunderstanding, out of ignorance, by the media who are reporting this, and indeed by commentators, of what a search term is. And you'll see in article after article in the British media, and maybe other media, that someone who's merely mentioned-- the wife of a pedophile, say-- asks for her name as a search term to be delinked, as is her right, and it's portrayed in the press as removing or deleting the article, which it isn't, or as the pedophile having made the request, which he didn't. And I think Google could and should do far more with its huge resources to seek to correct the record when newspapers get it wrong. Good luck. But at least attempt it. And it's almost every newspaper, I'm afraid to say.

On the question of the ruling, my view-- this is our view-- is that we wish more had been said about the importance of protecting the public interest in the right to information and freedom of expression. So on that, it's no surprise that I agree with Gabrielle. And I hope that in rulings such as the one recently in Holland, this will come out-- that public interest criteria will be read in. And I hope Google will make sure that it doesn't simply deindex certain search terms when it feels there may be a public interest, and that it will be prepared to see that tested in the domestic courts, so that there's a body of case law.

I think it's also important to recognize that there is no nirvana, and I really want to make this point. There's no paradise where you can just go to the publishers and they will remove articles, or remove words or names within them. The BBC is wholly exceptional as one of the very few regulated-- statutorily regulated-- publishers. Every other form of the press-- and this is a good thing-- is rightly not statutorily regulated, and of course the rest of the internet is almost entirely unregulated. So it's not realistic to say, oh, people can go to the publisher. That's even before you consider that many of the publishers will have their own reasons, sometimes malicious, sometimes other reasons, for seeking to continue to publish it.

Now, finally, this question of process-- and I'm happy to take questions on some of your questions about process. I don't think it's going to be possible, despite this question having come up, for Google, without the consent of the person making the request, to simply contact the publisher and say this person has given us this search term. That strikes me, with my limited, non-legal knowledge of the Data Protection Act in this country, as a clear breach: you've got no right to process that information and share it without consent. That's just a fact, and I don't think there's any interpretation of the EU directive that would give you that right, because of the consequences that may flow-- and I've got an example of that.

The second question: could you say, with consent, we will share it with the publisher in the way that David Jordan has described-- where, as David Jordan said, the publisher would of course agree not to publish it, not to say, ah, this person's asked, here we go again, and republish the contentious matter to the world? How would you enforce that? You'd probably-- and if you want to go down this path, this is something you could do-- seek to have a contractual agreement with certain publishers where they agree to do that. I don't think that will be a large number of publishers, but where that is the case, why not. So that's something that could be explored. But it's not going to apply, in my view, to the majority.

The final point I want to make is that we've already seen the problem of notice being given. So the Oxford Mail, which is a newspaper as it happens from my old constituency, was told by Google that a particular URL was being delinked, and it was about a man who eight years ago had a conviction now spent under UK law-- a spent conviction-- for shoplifting, and the request had come for the article to be removed. Well they worked out that it was probably from the person themselves, and simply republished it, saying this is a terrible ruling and so on. And now maybe he has a cause of action, because there's no public interest arguably in that data being processed, so that probably doesn't even pass the journalistic exemption. And that's just the Oxford Mail, which is not a malicious publisher. I want to make that clear. But there will be people out there who, if notified, will republish. And it's not realistic to expect individuals to go to law, at least in this country, which is extremely expensive.

ERIC SCHMIDT: Thank you very much, Dr. Harris. Comments or questions from our panel? While they're contemplating your testimony, just one clarification, since you were very direct on the copyright law. Google operates under something called the DMCA, and there's absolute clarity there: we are governed by the DMCA and we must follow it. And the procedures that we're following under this new ruling are not dissimilar from the procedures that we followed for the DMCA. A little background for people who don't know: the Digital Millennium Copyright Act is a legal mechanism that provides a safe harbor for intermediaries when copyrighted information that has been stolen, or is not supposed to be in somebody's hands, is taken down.

EVAN HARRIS: Yes, I'm aware. I was mainly making the point that this is not a novel concept, the removal of search results, as it were. And I'm not saying you've ever said that, but if you read the press, you'd think the sky was falling in for the first time.

ERIC SCHMIDT: Comments or questions? Luciano.

LUCIANO FLORIDI: I think it would be impolite to leave you without any comment at least, especially since you mentioned the Oxford Mail. So the comment was-- it was a moment of not surprise but astonishment at your emphasis on the commercial nature of Google. Not because it's not true, but because it's blatantly, obviously, plainly, undoubtedly true.

So I was just trying to understand better the reasons behind the fact that you highlighted that with so much emphasis. Because when someone says 2 plus 2 equals 4, and says it in Times Roman bold 24, I look for a reason. And it's not normally Bertrand Russell trying to explain to you why it's not so important. So why are you telling us something that we cannot possibly disagree with in any sense? What's the reason behind it? Is there more that we need to understand here?

EVAN HARRIS: Well, someone I know, Paul Bernal-- who I understand is also known to my colleague on the right here, Ms. Powles-- has written a number of articles pointing out what's often been overlooked. And this is the point I was making about the media coverage of this, which is where most people hear about this issue-- they don't go and read the judgment, they read it through the lens of the media. In this country, they read it through the BBC and the newspapers. And some of that reporting, especially by "The Guardian," has generally been good. But some of it is full of factual errors.

And one of the things that hasn't been mentioned-- and I'll send you, if I may, the article that he's brought out-- is the fact that what's underplayed is the power of Google as a player-- the fact, for example, that they will no doubt, as is their right, be lobbying for changes in this law, that there are people in the administration in the US and in the UK, who are former employees, who are very high up now in political roles. And it just seemed to me strange that that was unmentioned-- not a criticism. I didn't mean it as a criticism. I hope it wasn't taken as such.

And he makes a number of points in those articles which have not been widely seen. And I think there is this perception-- and I was just next door at a conference on surveillance-- the UK media is sometimes inconsistent. That's a shock. When it comes to, for example, the government holding data-- which is not being republished, just held-- then we feel very strongly about it. But when private companies-- for example, media corporations-- hold huge amounts of data, the media doesn't feel so strongly about it. And sometimes it's left to people who are sort of thorns, as it were, to point out that there is another way of looking at it, not just through the prism of the media.

And I did want to say that the people who are directly affected, who are using this so-called right to be forgotten, are not likely to come into public fora to advertise their requests. I know some who have found stuff there, particularly in relation to newspapers, because that's the work we do. And I'd be happy to try to arrange a private meeting for you with them if you felt that was appropriate. But you're not going to capture-- it's difficult for you to capture-- the perspective of the people who are seeking, for whatever reason, to have their information deindexed. So I'm happy to work constructively to try to find ways to make sure that you hear their perspective.

ERIC SCHMIDT: Thank you. I'm sorry. Go ahead. Peggy and Sabine-- whoever first.

SABINE LEUTHEUSSER-SCHNARRENBERGER: You mentioned these people who are coming to you, whom you are dealing with. These are, in general, in your terms, private figures as regards this ruling-- not public figures?

EVAN HARRIS: No. Public figures-- successful people-- have lawyers. So they don't come to me, who am, I would say, underpaid and under-powered. They go to lawyers. And they can take action, at least in tort, for unlawful acts. And they can also try to get action-- although it's very difficult-- through the code, through press self-regulation. Don't get me started on its failings, but there you are. The people that come to us are those who can't afford access to justice in this country so easily, and who ask us to assist them with their cases. So we generally represent ordinary people who, for example, are victims of crime, or their family members.

And an example I want to give you is-- this happened a couple of years ago. There was an accident-- road traffic accident in Switzerland. A child died. The newspapers in this country published photographs of the child's, I think, 10-year-old sister grieving, laying flowers at the site of the accident-- stolen, they claim, from their Facebook site. And they've been seeking to get redress on that.

And that's the sort of person we tend to act for. Is it unlawful to take a picture from a Facebook site, if the privacy settings aren't right, and then publish it to the world in a newspaper or on a newspaper website? Probably not. Is it ethical? I don't think it is. People like that may now benefit, in respect of sites which are wholly unregulated, by having the personal data of them grieving taken down. Now, I disagree that it should be for the reader to decide whether that's in the public interest. This is a grieving 10-year-old child who has that right. And it's hard, without this-- I guess that's the basis of the ECJ judgment-- for her to have that privacy restored to her, except by deindexing.

ERIC SCHMIDT: Peggy, you had a comment.

PEGGY VALCKE: Thank you, Mr. Harris. You mentioned earlier that you would be happy to tell us a bit more about the process. We've heard so far from the experts that it would be appropriate to involve the publishers, even before removing links, and hear their opinion. Whereas you are of the other opinion, if I interpret you correctly. So I would like to hear more about your view of the process, especially now that you've mentioned this example of a newspaper figuring out for themselves who might have filed a request, and then republishing the article. So what, then, is the right approach?

EVAN HARRIS: So my point was that it shouldn't be taken-- and I'm sure you wouldn't-- from the witnesses you've had, that every publisher is like the BBC. There are vanishingly few publishers like the BBC, which is why it's the BBC. And therefore you cannot rely on being able to obtain an agreement to keep this information confidential, even if you have the consent-- which, as I say, I think you would probably need, though I'm no lawyer-- of the person making the application, to pass their data to the publisher. So where you can get an agreement, you could do it.

Where you can't get that agreement-- an enforceable agreement under contract, I suppose, then I'm not sure it's going to be that easy. I know you do, at the moment, notify people that a URL has been delisted. I've given you one example of a non-malicious newspaper choosing to use that opportunity to republish information about a spent conviction. Now I don't know how you avoid that if you generally notify publishers.

Now having said that, I do think there's merit in a lot of what David Jordan said about the need for more information to be captured on your referral form. So you could ask, for example, for them to say whether they feel they're a public figure. I don't want to go over the list again-- I know we're running over-- but there's much of that that would be a benefit. Whether you can, under the terms of the judgment, require them to do so, I think, would be difficult.

ERIC SCHMIDT: Let's get a quick comment from Luciano on that, and then--

LUCIANO FLORIDI: Really, for our own understanding and processing of all this-- are you suggesting, therefore, that the Google search engine should not inform the publisher? Is that what you're suggesting as a way forward? Am I getting it wrong?

EVAN HARRIS: No. I'm very careful not to go that far. I've identified that there may be consequences from doing so, that you shouldn't take the example of the BBC, which is careful, as the basis.

LUCIANO FLORIDI: I understood that.

EVAN HARRIS: So what I'm trying to do is raise the point that there may be adverse consequences, which could involve unlawful breaches-- unwittingly, I think, in the case of some of the publishers I've mentioned-- that could embroil you in further legal action as a consequence of your processing information that's identifiable by virtue of the nature of the story-- particularly if that person has done as other witnesses have said and gone to the publisher beforehand. This happens: they go to the publisher, ask for the article to be taken down, and it's not. Then the request goes via Google. And then the publisher says, ah, it's that person, right.

LUCIANO FLORIDI: Allow me. That's clear. What I don't understand is what is the alternative? I understand what you're saying. What is the alternative? What do we do if you are right-- which I think you are-- what's the alternative?

EVAN HARRIS: Well the alternative is not to notify publishers with whom you cannot get an agreement. And I'd be interested to know what your conclusion is.

ERIC SCHMIDT: Thank you. We've run over, but again, important. Let's take a 20 minute break. And then we'll return for three more fascinating comments.

ERIC SCHMIDT: [INAUDIBLE] such as Google, Facebook, Twitter, and Reddit. He's an advocate of democratizing audience data in the newsroom, and is one of the core team responsible for the creation of the Guardian's bespoke real-time data tool, used by more than 700 staff a month. Chris, please proceed.

CHRIS MORAN: First of all, thank you for inviting me. By way of introduction and giving some context, I need to say that I'm not a legal expert, and also I clearly have the least impressive biography of anyone on this esteemed panel. I'm a key member of the editorial team, and as the introduction stated, I'm responsible for the audiences who come to us from beyond the borders of our own website. Coincidentally, that means that I'm also the key editorial stakeholder in Webmaster Tools. I was the first person in the Guardian to know that Google had begun implementing the right to be forgotten ruling.

Part of my job at the moment is to maintain a database of those complaints, and to quickly identify the complainants, which is a trivial matter. Then to give some context for our key editorial staff. So I want to talk about our editorial response, our current room for maneuver in response to the rulings, and perhaps a little about the various responses we've seen from other publishers as well. I'd also have to say that we'd echo a great deal of what David Jordan said earlier.

My initial position when I saw the first removal was bullish, reflecting the Guardian's default position that as a news organization, Article 10, the right to publish information, is where we start. As an editor and someone familiar with the workings of Google, the ruling seemed to me unenforceable, potentially undesirable, and could lead people to believe that they were being offered a protection that was mainly illusory. We also had no context, no formal right to reply, and no initial way of entering a conversation with Google about each ruling, although we'd been happy to see the implementation of a more effective feedback form.

So this seemed even more of an issue considering the chaotic nature of the first few removals. There was a well-reported example on the first day, of the Scottish referee. He had a number of articles removed from a number of different news organizations. But before the end of the day, those articles were all reinstated in the search listings for his name. We weren't informed of that reinstatement, and that made it more difficult for us in the media to divine exactly how Google was implementing the ruling. It also led to a great deal of confusion and misreporting. There have been other reinstatements since then, which we've only identified by regularly rechecking ourselves.

There are also huge issues, as Evan mentioned, around understanding what the removal actually means. It isn't unpublishing. Much of the early response misunderstood this. But there was also a lack of transparency from Google about the implementation, which I personally found befuddling. So it took some time to get official word on whether an article was removed only for an exact name search, or whether that isn't the case.

So with no recourse to appeal, my inclination at that time, like the BBC and the Telegraph, was to republish the pieces in some way-- perhaps as a simple list in an article. There were also conversations at the Guardian about whether we should do this as a Twitter feed, although that was later not pursued. Currently we are not republishing anything.

We're also an organization that's always taken a good line on protecting personal privacy. We're protective of our historic archive, but there are very rare occasions when we erase people's personal details, especially where the information is personal, not important to a story, and causing distress. A good example is parents who write about a child's peculiar problems, and the child later seeks to have that piece deleted. This is the kind of issue that our readers' editor often has to deal with, rather than our legal team, and it's something that's increasingly dominating his day-to-day business.

To give some more context on the other side, it's important to say that so far we've only seen 27 removals from search of Guardian content. To balance that, in the last seven days we've received over 20 million page views from Google, over half of that to content that is more than a week old. And that's not counting a significant portion of unknown referral that is also very likely Google as well. Google is, in a very real sense, the front page of our whole archive.

As time's gone on and we've seen the sheer variety of the removals, and other organizations' responses to them, I've moved more to a position that each of these requests needs to be evaluated on a case-by-case basis. To illustrate that, I'm going to give a few examples of specific ones, obviously anonymized.

One general thing that we've seen is the case of a person, often with a very specific name, making a trivial contribution in comments or as an interviewee to an article, which has then dominated their personal search results-- partly because of their specific name and partly because of our domain authority within Google. Now here you could argue that this is a perfect fit for the ruling. The Guardian itself probably wouldn't remove the article, because of the triviality of the contribution balanced against the value of the piece as a whole. So its removal only as a search result for that name makes it a more palatable prospect, you could argue. But the question there might be whether this ruling is the correct organ for people who are effectively engaged in the activity of just cleaning up their personal brand online.

Another regular example, which is more troubling, is a person who's committed a crime and requests removal from their search results of a piece which is a contemporaneous report of the trial. We would never remove that kind of piece ourselves, but the list of requests we've seen within that group is very varied, and here are two examples. A case where a perpetrator requested a removal for a particularly unpleasant crime. The report included details of their main victim and members of their family, which, if we were to republish it in any way or draw attention to it, would be hugely damaging for those other individuals, making them effectively collateral damage.

The other case is one where a private figure with recognized mental health issues assaulted a person in the public domain in a very specific way. The perpetrator requested three removals-- a court report, a comment piece supportive of the victim, and a later news piece in which the victim expressed anger that the perpetrator had been allowed to plead guilty to what they saw as lesser crimes, was then put on probation, and at that point the victim began receiving anonymous letters harassing them. This is a troubling example, which requires context and thought before being even partially scrubbed from the record.

In conclusion, we strongly believe that as a news organization with a track record of public interest reporting, and also of protecting personal privacy where appropriate, we have the right to reply on a case-by-case basis, and that the more transparency there is around each complaint-- for example the name of the complainant, context from the complainant-- the better placed we are to make representations. We also believe the context we could offer pre-removal is in fact essential to Google making a far more balanced decision in each case. I also believe that some of the more ill-considered responses from the news sector, which Evan alluded to earlier-- admittedly born out of a great deal of impotent frustration-- would be much less common if there were more consultation. We would echo that we believe Google isn't the correct organization to make these kinds of decisions. Thank you.

ERIC SCHMIDT: Questions and comments from the panel? Go ahead, Jose-Luis.

JOSE-LUIS PINAR: Thank you very much for your very interesting presentation. Do you think that now, at this very moment, Google is essential for the freedom of information and for the freedom of expression?

CHRIS MORAN: I guess I'd answer that by referring again to how significant Google is in the discovery of content in our archive. It is entirely-- you could argue it is almost entirely responsible for that. Our front page obviously doesn't often reflect older content. So in that case I would say yes. It's absolutely essential.

JOSE-LUIS PINAR: Excuse me-- essential, but it must be absolutely objective, and must be the mirror, or the intermediary, of all of the information that is on the web, because otherwise it could stop being an essential or objective tool for the freedom of expression.

CHRIS MORAN: I agree with you, but I'd also echo there Evan's comments that Google also removes things for a wide variety of other reasons, too, so the integrity of the archive could be questioned.

ERIC SCHMIDT: Peggy?

PEGGY VALCKE: Thank you, Mr. Moran. I have some technical questions with regard to press archives. I assume you have guidelines on how to deal with requests from people who have been mentioned in reports-- whether to amend or not amend, remove or not remove, certain details in the report. Are those internal guidelines, or are they publicly available? And are they identical to the guidelines in other press outlets? And the second question: what kind of responses can you give to such requests? Is it thinkable that you, in the metadata or the robots.txt file, give an instruction that a certain article remains findable, but not for certain search queries, such as the name of a person? Thank you.

CHRIS MORAN: Thank you. The way that the Guardian administers these kinds of decisions is via the legal team, but also by the readers' editor I mentioned, who is an independent ombudsman. I believe his guidelines are available online. He also writes a weekly column discussing exactly these kinds of details, so it's important to him that the processes are transparent.

In terms of the technical aspect, it's an interesting one. We don't do anything to the metadata; we would make changes at the journalistic level. It's an interesting idea. It would worry me a bit, because I think that the piece should stand up for itself, if that makes sense. The journalism itself should probably deal with any of these issues, rather than some kind of technical get-around. And just looking at the ruling in itself-- the simple fact that you can search in Britain on google.com for the same name and find it-- you hit those kinds of problems.
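
[For reference on the technical point raised here: the standard webmaster controls operate per page, not per search query, so there is no existing directive a publisher can declare that keeps an article findable in general but hides it only for one person's name. A minimal sketch of what the mechanisms mentioned can actually express-- the article path below is hypothetical:

    # robots.txt -- blocks crawling of the whole URL, for every query
    User-agent: *
    Disallow: /news/2006/example-court-report

    <!-- robots meta tag in the page's HTML head -- drops the whole page
         from the index, again for every query -->
    <meta name="robots" content="noindex">

Per-name delisting of the kind the ruling requires is applied inside the search engine itself, not through anything the publisher can declare.]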

PEGGY VALCKE: So are you of the opinion that, to answer this problem, there should be a deletion not only of the local link, but also of the link from google.com?

CHRIS MORAN: Well, it's more specifically around the right to be forgotten, I guess. I mean, yeah, if you remove the article itself, or you amend the article itself, you hit none of these problems. That's why I said in my introduction that the ruling itself could give people a false sense of security in terms of what they are removing.

LUCIANO FLORIDI: This is a practical kind of question. We heard-- and I think with plenty of justification-- why publishers perhaps should not be informed, unless there's some kind of agreement with the person in question, or a contract, or something else. And I think that-- as a philosopher, I can see the point. There are definitely very good reasons on that side.

At the same time-- that's the beauty of philosophy-- I can also see plenty of good reasons why the Guardian would like to know whether a link has been removed, and wouldn't like to have to discover, accidentally or by systematically doing its own searches through the search engine, whether something has been removed or not. So there's a lack of transparency.

But the point is that we're reaching this particular crossroads, where at some point someone will have to decide: do we or do we not inform the publishers? I mean, what do we do? Do we go in one direction-- let's not inform the publishers? Do we go the other? Or, as I heard, do we inform only some kinds of publishers, the good publishers?

And that is also a mess. Like, oh, the BBC, the Guardian-- you can trust them, you can inform them, they will keep it confidential; the other ones, maybe not. Now, these are a variety of options, but sooner or later we will have to cross that particular bridge. Which way would you like to go?

CHRIS MORAN: So first of all, there is no way that we would practically be able to discover that a deletion had happened without you informing us. There is none-- unless we were, every day, rechecking every single link for every single search term in the index, which is clearly impossible.

I think there was a bit of confusion earlier as well, I felt, around what everyone was saying about informing publishers before the link was removed, and the suggestion that that might lead to people responding. Well, that's happening already. It wouldn't matter whether it was beforehand or at the time.

There is a specific example from another organization-- and again, I think Evan mentioned it-- which involved the pedophile, where I could tell within three seconds of googling that the pedophile had not requested the deletion. I believe the Mail went to Google and asked who had requested it, and they still went ahead and spoke to the victim's father. That kind of response from the press, I think, is awful.

But you can't condemn-- I mean, it's not possible to say that the BBC and the Guardian are better in this respect than any other organization. We're all different, and we all take quite different approaches to it. So you can't say one is better or worse than another.

So frankly, my recommendation would be engage with us. Because I think you'll remove that sense of impotence which was driving a huge amount of the early responses through the press to this ruling. You know, they were just angry and they wanted to fight against it. But at least if there is some kind of engagement with us about individual links, perhaps that will solve the problem instead.

ERIC SCHMIDT: But following your logic-- if every time we notify the press, which we do, and want to do, and need to do unless we're prevented from doing so, the press then simply republishes everything, then how does that square with the court's intent, which we're trying to follow?

CHRIS MORAN: Well I mean, you know ultimately that's still each individual news organization's decision. I'm not saying it's a good decision to republish. I mean, it's interesting because people are responding to it in different ways. So the BBC are maintaining a partial, I think, list? David? Of what the removals are? The Telegraph are listing all of them in a single article.

DAVID JORDAN: Well, we will have a complete list, other than those that we ourselves think would qualify for some form of removal.

CHRIS MORAN: Yeah.

DAVID JORDAN: So take the case of children, for example.

CHRIS MORAN: Indeed.

DAVID JORDAN: That's what we will do.

ERIC SCHMIDT: Go ahead, Peggy.

PEGGY VALCKE: Yes, if I may ask your opinion on whether the criteria for removing links should be identical to the ones that you use to accept requests with regard to your online archives? Because the way I understand the ruling, the court considers search activities to be different from publishing activities.

They expressly mentioned it-- the effect, or the harm, results not so much from the initial publication as from the fact that it keeps coming up prominently in search results for a person's name. It's like a megaphone, or a magnifier. So the court basically said, take away that magnifier but leave intact the original source. So why, then, would you have to use the same criteria?

CHRIS MORAN: I don't necessarily disagree with you. There is a clear difference between removing an article, or significantly amending an article, and it not being findable in a search for a specific name. I personally think there is a clear difference there. At the same time, I think it would be rather useful for Google to know our views, and to get more context on the initial reporting.

ERIC SCHMIDT: Thank you very, very much, Chris.

CHRIS MORAN: Thank you.

ERIC SCHMIDT: Our next expert is Julia Powles. She's a lawyer, currently undertaking a Ph.D. at the University of Cambridge. She's worked in private practice, in the Australian court and tribunal system, and at the World Intellectual Property Organization. Her work focuses on the intersection between law, science, and technology. Please take the floor.

JULIA POWLES: Thanks very much. I'm going to try to respond, in the spirit of discussion, to some of the comments that have been made and things that I've heard from the last 55 admirable experts that you've all listened to.

It seems to me that this council has two primary functions. The first is to provide independent, rigorous review of how Google is handling the 150,000 requests now received from individuals across Europe. What guidelines are being applied? What technical capacity exists? And are the concerns of individual internet users being addressed?

The second, slightly more hazardous, role is that, in association with this, I think the council will be making some comments on the broader landscape of data protection more generally. And I think I'd warn that a lot of the concerns we've heard this morning come from a particular place-- we've had five people who come from a press background, or who are particularly concerned about press issues.

That might somewhat skew comments that are made on the data protection framework, which is a complex and very wide, broad-reaching framework that impacts not only search engines but individual users and small and medium-sized enterprises.

Actually, having listened to these hearings, I think there are some questions that it would be really useful to put on the public record. So it's a bit unusual, but perhaps, if I may, I'd like to ask some questions of the council, to express some concerns that I and others have.

The first is whether the council-- in relation to that first point I made, about being able to provide independent review of Google's processing-- has seen some more fine-grained detail about the 400,000 URLs that have been actioned already by Google.

The processes that are being applied, and a breakdown, for example, of the number of cases that concern press articles-- and, within those, the cases where the subject of the press article is the person making the request. We've just heard that there are 27 from the Guardian and 46 from the BBC. That's a very small proportion of the 150,000 requests. So whether that information is something that the council is being exposed to, I would be interested to know.

ERIC SCHMIDT: Why don't you just go ahead and just list your questions.

JULIA POWLES: OK, I'll list my questions, sure.

ERIC SCHMIDT: Perhaps we can all answer them real quick after you ask.

JULIA POWLES: The second question then would be, if the council hasn't seen those details, whether it will be seeing them before it makes recommendations about how Google will be implementing the ruling.

The next question I had was whether Google has provided some of that detail-- which I think would be very helpful-- to the Article 29 Working Party, which is the group of European data protection regulators that is trying to provide recommendations apposite to the requests that people are coming forward with.

Again, if the Article 29 Working Party hasn't been given those details, I would be interested in knowing whether there's any intention to provide them before the working party gives its recommendations in November. A further question is whether Google has discussed technical alternatives to de-listing in implementing this ruling.

Another question is about the processes that have been implemented-- what has changed to speed up the rate of processing? From what one can detect from the public record, the rate of processing has increased quite significantly, and it would be interesting to know why. I completely appreciate that this is a new task, and it may be that the experience that's been developed could be usefully examined by outside parties.

Further, on the fine-grained detail, I think it would be useful to know how many of the actual cases concern perpetrators of crime, or victims of crime. Of the examples, I think there are 46 cases that I've heard of, from the transparency report and other statements.

And there are, I think, quite clear cases. And the reason I'm pushing these queries is that, from the discussions we've had with some of the Google employees, and from the statement you made at the start, Eric, there are a number of very straightforward cases here.

And it may be that what we're doing in these public forums is extrapolating and generalizing from the problem cases, and not considering the fact that 90% of the cases may be very straightforward-- concerning, for example, an individual who is incidentally mentioned in a press article.

In one of the examples that was given, I think in the New Yorker, the only information on Google about the person was the fact that their partner had been murdered. And that's a significant thing to see every time you Google your name. And I think that that's a real concern that can be very efficiently dealt with by a small request to Google that's processed efficiently.

And I think that would be my response to some of the comments that we had from [INAUDIBLE] and others: this sort of nirvana of going to a DPA or to a publisher to deal with some of these queries-- it may be that there's actually a very efficient customer service that Google can offer in de-linking information that is disproportionately harmful to an individual, rather than going through what is quite a Dickensian process of data protection authorities, courts, and very extensive battles. And I think the argument that, just because it's on the internet, we should therefore have to go through a significant process to remove it, is unhelpful.

A final question that I'd be interested to ask is in relation to the second aspect-- the questions that this council is addressing and the issues it's considering in terms of data protection law more generally: what would be the ideal outcome for Google of the EU reform process towards the new data protection regulation?

So those are my questions-- I would very much welcome some responses. If I can then address, in my remaining few minutes, some things that I think the council could very productively do, concerning names, games, and law.

The names issue-- I think everybody agrees, and it's a pity Jimmy Wales isn't here, because it's the one point on which I'm absolutely in agreement with him, that the right to be forgotten is an unhelpful name. A right to de-list, to data obscurity, not to be reminded so prominently of out-of-date information-- I think there could be a real service from the council in naming this properly.

I reiterate Evan's comments. I do think it's important to emphasize the power of Google in this process. And I cannot let pass the continued reference to Google as a card catalog or card index. I think that's quite unhelpful when we consider that there are six to eight million copyright removals a week, and that search engine optimization is a real practice that does affect the information that's received by individuals.

On the games and the law-- I think there are actually some very significant legal issues that should be addressed. One that is very pressing is that the ECJ ruling does not address the provision on sensitive data in the European Data Protection Directive, which prohibits, absent waiver, the processing of information about a person's sex life, political opinions, religious beliefs, or racial or ethnic origin.

Read strictly, this means that Google shouldn't actually be processing any information even about public interest figures who are convicted of crimes concerning their sex life. The example we've used in class has been: what if Rolf Harris came to Google and said, I want to de-link all the information about my charges and convictions for sex offenses? Strictly speaking, Article 8 of the European Data Protection Directive, absent Rolf Harris's consent, says that Google should not be processing that information.

And the final thing I'd say on the law is in relation to the point about notifying webmasters. I think it is inconsistent with Google's obligations as a processor of individuals' information, when they make a request, to pass that information on without a disclaimer that republishing it could cause potentially unnecessary and unwarranted damage and distress. That, in itself, is a manner of shirking the duties that Google has as a data controller. I think I'm out of time.

ERIC SCHMIDT: Thank you very much. Let me start and try to answer some of that, or maybe I could have the council help me.

LUCIANO FLORIDI: You can answer all of them if you like.

ERIC SCHMIDT: But Luciano, you're so good at answering the first questions. So as we understand it-- and when I say we, I'm referring to the Google lawyers, of which we have a lot, who have been through this pretty thoroughly-- we are pretty heavily restricted in the kinds of communication that, in our legal judgment, we're allowed to provide to everybody else.

So a lot of the details that you talked about, which would be, in my view, very, very interesting-- it does not look like we can broadly disseminate them as part of our function. This is our legal conclusion. And there are debates in the press and among legal scholars, as you heard earlier, about this question of even webmaster notification, which we are doing as part of our standard practice.

So as far as I know-- I want to actually answer your questions-- we've not been able to provide the council more detail, which I frankly think would be a very good idea, but we have not done that. And you asked, if not, would it be soon? My guess is that it's unlikely. The same goes for Article 29. We talk to them; we have the conversation.

You're a lawyer; you asked a very precise data question, and I don't think we have done that. So we've had extensive conversations with the Article 29 group, as well as the DPAs. And on the question about discussing technical alternatives to de-listing-- there aren't very many, because we've been found to be a data controller, so our function is to provide these lists, and we're ordered not to list them. You could imagine other solutions-- the most obvious one would be de-ranking-- but the finding is quite precise. And therefore we felt that it was relatively unambiguous.

You asked about improving the process. I'm sure we've made a few minor errors, which we corrected; Chris mentioned a few. But I think mostly this is just efficiency. The decisions are made by humans, not by computer. They have a set of guidelines. Without going into the details, because I don't think we're allowed to, I would say the vast majority fall on one side or the other, which was your suspicion.

And then there is the question of these hard cases, which is precisely why we have this panel. You and I may agree or disagree on each one of the hard cases. And I thought from this panel we actually heard a good number of good ways of thinking about it. And it might help if we were able to work more closely with the original publisher, which I'm not sure we can, because we're a data controller and they're not.

I mean, there are a lot of good ideas here. But I think most of the improvements that you've seen have been simply process improvements. I think at the moment we're sort of stuck with de-listing and manual review. We as a company would prefer an automatic way to do this, and we would all, of course, like the law to be more precise as to the definition of the terms that have been described.

You asked about data protection, the general legislation. I think it's better right now that we focus on where we are now; that's a sort of interesting theoretical question. The reality is we have this existing law, drawn from Article 8, which the court has found applies to us very clearly. And we just have to implement it.

And my assumption-- this is my personal view-- is that this will be true for quite some time. In other words, it would take a very long time for some other set of answers to emerge, and that's why the work of this council is so important. You know, it's not as if this is going to change in December or January. And the number of requests is only going to increase. The number of deletions and rejections, on an absolute basis, will also increase.

Now have I answered or attempted to answer your questions precisely? Did the panel have comments or questions or anything else? Luciano, I'm happy to have you--

LUCIANO FLORIDI: I'm going to go back after anyone else.

ERIC SCHMIDT: Everyone has agreed Luciano, you will make the first comment.

LUCIANO FLORIDI: Philosophers, they always talk too much. I fully appreciate-- and there's no rhetoric in this-- the reminder that about 90% of the cases have been completely straightforward. I mean, like, yes, yes, no, yes, yes, no, no. We have 150,000 cases; that leaves 15,000. Kassam Stadium in Oxford holds 12,000. Have you ever seen a stadium full of 12,000 people? It's huge.

And every single individual is a single life. Now, that's 12,000. We actually have 3,000 more; that's 15,000. That means we have 15,000 individual lives, or processes, or problems at stake. I think that's huge. And I don't mind the fact that the other 90% is dealt with easily. I want to know how we deal with these 15,000 people, there, sitting in the stadium, looking at you. And that I find hugely important. And I'm sure we are on the same page.
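To make the arithmetic above concrete-- a minimal sketch in Python, using only the figures quoted in this exchange (the 90% straightforward share is the estimate under discussion, not an official statistic):

    # Illustrative arithmetic only, based on the figures quoted above.
    total_requests = 150_000       # requests received across Europe, as stated
    straightforward_share = 0.90   # share assumed to be clear-cut yes/no decisions

    hard_cases = round(total_requests * (1 - straightforward_share))
    print(f"Hard cases needing individual judgment: {hard_cases:,}")
    # -> Hard cases needing individual judgment: 15,000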

In terms of not looking at every single request-- we're probably not allowed to, most likely. But I would like to say, personally, individually, that even if I were allowed, I wouldn't like to see them. Because I have no interest in solving every single case as that specific case. I'm interested in finding the general principles that will deal with the whole stadium fairly enough.

So I don't want to be exposed to the absolutely heart-crushing case of that individual, or the embarrassing one-- because I think that we should look at the abstract, the more general principles guiding our decisions. And that's why I haven't even asked. I didn't know we were not allowed; I didn't even ask to see the details, because I don't think that, for my own work at least, it would make any difference.

JULIA POWLES: To clarify, I'm not asserting that you should look at the individual cases. It's more to understand-- I think even I didn't have an appreciation of how many of the cases are those straightforward ones. And I think perhaps some of the arguments about how we always need to go to a publisher, we always need to go to a court-- some of those we've heard today-- are a bit unhelpful, in the sense that, actually, these can be dealt with very efficiently, according to guidelines.

And it's more that I think those guidelines should let the individual who comes to the Google form know: this is a routine matter, or this is not-- and if it is not, what process will be followed, so that they are empowered to know what they're going to be put through, and so that they're not putting information into a black box that might then end up republished in the Oxford Mail and cause further damage.

ERIC SCHMIDT: Well, it seems to me you're trying to make helpful suggestions to make our process more predictable, which is probably a good thing-- some set of those ideas is probably quite helpful, subject to the constraints of the decision. Are there comments from-- yes, Sabine?

SABINE LEUTHEUSSER-SCHNARRENBERGER: Yes. You are a professor, and so I would ask you-- we need a legal basis for our recommendations. We are now hearing a lot of different expert opinions. But then we also have to go deep into the legal aspects. Only on a legal basis can we make recommendations.

Now my question to you: is there any legal basis to notify the webmaster or publisher before Google's decision, without, or only with, the permission of the data subject? Is there any legal basis to notify the webmaster after Google's decision? And could there be a legal basis so that it is not Google that has to decide, but a third body, private or public?

JULIA POWLES: Thanks. I think it would be very helpful for the council to engage with-- I mentioned the sensitive data point, for example. Even how the form is constructed embodies a particular view of what the legal requirements are. And on the point about notifying publishers: Google has said, in its response to the Article 29 Working Party, that it does not consider that it's processing personal information if it notifies, for example, Chris as a webmaster and it doesn't identify the person. I think that's incorrect, because the person is still identifiable, which means that the processing duties are engaged.

Google said that even if that were the case, the notification is in pursuance of a legal obligation-- I think that's what they say. And I think the difficulty there is that Google's obligations are to ensure that there isn't further damage to the data subject.

The press has media exceptions. So it's passed on in this sort of "oh, we're just giving you this query, we're not identifying who it is" way. And, as Chris said, often without warning that the person who made the request might be just a commenter, not the pedophile in the example-- and it gets plastered all over the newspaper.

So I think that Google could-- consent would be an express way to do it. The form, I don't think, ensures consent when it says "we may notify webmasters." I don't think that gets around Google's legal obligations as a processor.

But I think that perhaps productively, one of the ways that the council could look at this is saying, we can have an efficient streamlined process for the 90% of straightforward cases. In the less straightforward cases, this is perhaps where your creativity could be engaged to think about alternative solutions.

Do we want to have contextualized results? Do we want to have an external third party-- an obscurity agency, or something-- that can work across search engines, that can draw in independent experts, and that engages publishers in considering these far-reaching questions, the kind of questions you're talking about, Luciano?

The impacts over time of some of these de-listing requests-- perhaps there, there could be recommendations about what legal framework would be required. I don't think the existing legal framework is sufficiently nuanced. And in fact, one of the issues the council might like to look at is the fact that the existing legal framework produces very far-reaching implications for the general publishing of content and the rights of data subjects to remove it if they consider it irrelevant or out of date.

SABINE LEUTHEUSSER-SCHNARRENBERGER: Yes, I would like to thank you for the very interesting points that you listed. They reminded us of some of the things that were discussed before-- for instance, your very legitimate question of how many hard cases we have to deal with.

I remember that there was an expert in Paris-- from Reputation VIP, it was-- who did his own kind of survey and found out that the hard cases are limited. But nevertheless, I agree with Luciano that we should take them seriously. In Warsaw there was an expert mentioning a technical alternative, like pushing results further down the list. But as Mr. Schmidt has rightly pointed out, in my view, the court doesn't leave a margin for that kind of creative solution. I mean, I'm speaking on my own behalf now; we haven't discussed this with the whole group.

You also raised an interesting point with regard to sensitive data. Thank you for that, and I would like to hear your view on it. As far as I understood the court's ruling, search engines can rely on Article 6-- so they process for further statistical, historical, or scientific purposes, right?

So aren't they entitled to assume that the underlying data is legal? That it has either been published under the journalistic exception or been published with the consent of the data subject, even when it relates to sensitive topics? Or are they under a duty to check all that? Could principles from the e-commerce directive perhaps come to the rescue? If you have any ideas, I would love to hear your views on that. Because it can't be the result of our current data protection rules that a service like a search engine is no longer possible because sensitive data might be involved somewhere. Thank you.

JULIA POWLES: I think this is an example of where there could be a really productive discussion between the advisory council and bodies like the working party and the European Commission. On my reading, just taking the law for what it is, I think that position is actually very difficult to sustain: I think the law does preclude search engines from processing sensitive data.

In the advocate general's opinion, he said that would be ludicrous, and I think most of us think it would be ludicrous if that were the case. But it's, I think, an example of one of the areas where European data protection law is inconsistent with the expectations and capabilities of the internet and internet users-- and where, having inherited a legal regime born 40 years ago, pre-Google, we now have the opportunity to adapt it for today. I think Gabriel's comment was that it's trying to achieve some of the functions of other areas of law-- defamation, reputational rights-- and that the system we have is the one all the countries of Europe could agree on, which is actually lacking a normative core.

And it has these categorizations of data-- general personal data and sensitive data. These sorts of broad categories with broad application are difficult to reconcile, not just with the activities of search engines, but with everyday use of the internet. And I draw the analogy to copyright.

There's a real problem, I think, for the rule of law when we have a system that's very far-reaching and that is routinely ignored. And I think there are some very good things in the ruling. Obviously I'm a supporter of the sorts of de-listing that Google has been engaging in when there's a real interest. But I think there are some broad issues with European data protection law. Looking at this as a whole, issues like sensitive data are an inconsistency that needs to be fixed in the new European data protection regulation, and the DPAs should address how the ECJ ducked the issue. So we don't know, and I think it is hard to square with the public interest exception.

ERIC SCHMIDT: I think we've sort of run over here. Any final, quick comments? Anyway, thank you very much, and let's move to our final testimony. As a reminder to our audience, we're looking forward to your questions. So make sure you get them in now so we can make sure we get them up here.

So our next expert is Alan Wardle. He's the head of policy and public affairs for the NSPCC, covering policy, government relations, and campaigns. His background is in government affairs: he led that function at the Local Government Association and, before that, at Stonewall, the lesbian and gay campaigning organization. While there he led Stonewall's lobbying and campaigning work, including on the Civil Partnership Act, and established their policy and research function. Prior to that, Alan was in the civil service, doing various policy, finance, and private office jobs for the Department for Work and Pensions, including serving as private secretary to Alistair Darling when he was Secretary of State. He studied law at the University of Glasgow and in Cambridge, and qualified as a solicitor. Alan is also a trustee of Centrepoint, the youth homelessness charity. Alan, please go ahead.

ALAN WARDLE: Thank you. I'm glad to be here this afternoon. I am from the NSPCC, the National Society for the Prevention of Cruelty to Children, which is the largest child protection charity in Britain. We've been in existence for about 130 years. One of the things we run is a service called ChildLine, a 24/7 helpline that children can contact with any problems they have. It's been around for nearly 30 years. I say that because I'm going to be referring back to some things that children tell us. Around 300,000 in-depth counseling sessions are carried out by ChildLine every year, and 2/3 of these are online now. And again, that informs what we do, because a lot of what informs us is what children tell us through ChildLine.

So, in terms of the right to be forgotten, it's new for us, as for many other people here, so we haven't yet fully developed all our policy positions. But what I thought might be helpful is to discuss this from the perspective of the child, and some of the principles that could be adopted when children are involved in these cases. The fundamental principle for us is the consideration of the child in any request. In terms of the principles the committee decides, the presence of children should be a factor that gives additional weight to any considerations that are made. And I think that applies both when things are done to children, for instance when they are the victims of crime, and when children do things themselves.

So why would we treat children differently? I suppose the fundamental point is we already do. In the offline world we have well-developed laws, practices, and procedures that recognize that children need additional protection because of their physical, mental, and emotional development, and where they are in their life stage. So, for instance, we routinely classify films according to age. And in court cases that are reported where children are involved, we routinely protect their identity, so their identities aren't revealed.

And when children themselves commit offenses, they are generally, in this country, rehabilitated earlier and given shorter sentences. It's interesting that the internet is unusual because, in many ways, its open nature essentially treats children as adults at quite a young stage. Many of the social networking sites, for instance, are largely governed by American law, so children can sign up to sites like Facebook at the age of 13 and are then essentially treated as adults, even though quite often they're not. But we do know, from what children tell us and from the evidence, that the implications of information that is available about children, carrying on into their adult life, can be deeply damaging to them if that is not dealt with properly and accurately.

So I'm going to touch on a couple of areas: one, as I mentioned, things that are done to children, and secondly, things children do themselves. Before I do that-- something we haven't touched on earlier-- we know that Google already does delist. One of the areas that is relevant for us is child abuse images, quite often known as child pornography, and the search function around that. Let me pay tribute to what Google has done here. Over the last couple of years we've worked with them, and with others around the world, to ensure that it's much more difficult for people to search for child abuse images online-- in terms of search functions, directing people towards help: if you have these urges, you should try these sites. That's not necessarily in Google's commercial interest, but we're grateful to them for doing it, and hope they will continue to support those very important activities.

And one of the issues that most concerns us is where people have been involved in crimes against children. Interestingly, the statistics show that already many requests have been made by people in this category-- people who've been convicted of possession of child abuse images, people who've committed offenses against children. That's not surprising. It is in the nature of many pedophiles to use these techniques: they will very much seek to operate as far as they can within the law, and to cover up as much of their activity as they can.

So the principles around risk management and around the protection of children being in the public interest are, I think, really important here. There are factors the committee will need to look at quite carefully. Clearly there's a balance to be struck, because there's a scale of offenses as well. At one level you've got convictions for serious sexual crimes. You could have minor convictions; you could have someone who was found not guilty; you could have someone who was charged but not taken to trial. You could also see unsubstantiated gossip and innuendo as well.

Also, again, as has been discussed, there are the different types of [INAUDIBLE]. We've got colleagues from the BBC and the Guardian here, eminent institutions, but we know a lot of these things happen on gossip sites and the like. So it is difficult, and a balance clearly has to be struck. But, as I said, I believe the presence of children should be an additional factor. Now, whether that is a presumption-- that anything where there's been a conviction or a case involving a child is automatically in the public interest and so is kept listed-- or whether it's an additional weighting or an additional factor to be considered, we believe that weighting should be there. And obviously how that's applied is important. I think we'd be concerned if, for instance, contemporaneous factual information was removed, even if someone was not convicted, or even if their conviction had expired.

Secondly, as I mentioned, there's the issue of what children do themselves, and how you approach this. We are increasingly aware of how children's brains develop. With teenagers, certainly, we know the grasp of consequences for actions is not fully developed; they can be impulsive, and we need to acknowledge and recognize that. Generally, as a society, we recognize that adults need to take responsibility for their actions. For children, I think we need to temper that a little, because we do know that actions they take and later regret can have considerable impact on them and their future if they keep coming up in search functions, and that needs to be looked at. We see it already reflected in other contexts.

So, for instance, children who commit crimes, as I said, are rehabilitated earlier. It's an interesting example that David gave earlier on about anti-social behavior orders: you see an unruly 15-year-old who's running around the neighborhood and then rehabilitates themselves. The order has expired, and they're a 20-year-old looking for jobs. Do we really think it's still a matter of public record? Should that be coming up every time their name is searched? I wouldn't think so.

And a particular issue for us, a phenomenon that's on the rise, is what's called sexting-- where children or young people, essentially, send sexual images of themselves. ChildLine, which I mentioned earlier, counseled about 1,300 children on this issue last year, a rise of 46% from the year before. So it's an issue that's going up and up. Mostly it's girls. Generally how this happens is they're in a partner relationship; it's expected that young people will send these images to each other; it goes wrong; it's sent around-- sent around the school, sent around social networks-- and it causes great distress and great alarm to the people it's happened to. If it's an image of someone under 18, these things can and should be taken down.

At ChildLine, we have a relationship with the Internet Watch Foundation where, if a child comes to us saying, this image is of me and it needs to be taken down, we can get it removed. But, of course, it's not just the image. It can be the content and all the written allegations around it, particularly on the social networking sites. We would always say we need to educate young people about the dangers of this-- that's all very well and good-- and we would encourage them to go to the data provider or the content host in the first place. And some do.

And I think one of the challenges in this area is that some social networking sites are very responsible and others are less so, in particular when they are in a jurisdiction that's far away. It can be very difficult to get them even to enforce their own terms and conditions. So in those sorts of cases you could definitely see a case for delinking-- something that happened to a young person as a child could have significant consequences for them going on.

The children who contact ChildLine-- it's interesting. The younger children are more concerned about their parents finding out, or that they're going to be bullied. When they get older, when they get to 16, 17, 18, they realize they're worried about the future, about their careers. So as brain function develops, they realize this has consequences for the future as well. It's quite an interesting thing to bear in mind. So again, the presence of children should be an additional weighting in these matters.

So, I recognize it's difficult, and there's always going to be a balance: the quality of the source, the nature of the report-- is it factual, is it gossip?-- the seriousness of the incident. These all need to be weighed up. But, as happens in many other cases, I think the presence of children-- protecting children, helping keep them safe-- should be an additional factor in any criteria the council recommends and develops. Thank you.

ERIC SCHMIDT: Thank you very much. Do we have some questions from the panel? Yes, Jose-Luis.

JOSE-LUIS PINAR: Yes, thank you. Perhaps, to reassure, to clarify: do you think that, regarding children, we can write two different criteria? The first one: if we are talking about crimes or offenses against children, there could be a general principle not to remove the information. And the second one: the child's interest as a general criterion for removing information relating directly to the child. Do you think those could be the criteria?

ALAN WARDLE: You could definitely see that for crimes against children there would be a presumption, perhaps. I mean, as we've seen in this country in the last couple of years, there has been a huge rise in interest around child sexual abuse; there have been some very high-profile cases here. And you will find that if you know the names of certain people who are gossiped about, though there's never been any substantiated evidence, you can Google their names and you'll find it alleged that they were at some care home 30 years ago, et cetera.

Now, in unsubstantiated cases like that-- there's never been a court case, never been an arrest, never been anything, and someone's name is there-- you could see, perhaps, that link being removed. But I think you would have a presumption, if it has been a criminal case involving children, in terms of the public interest in keeping children safe. I think that would be quite an interesting one, a useful one. But again, as you said, that's in terms of when things are done to children.

Again, it sounds a bit trite, but what is in the public interest isn't always what the public is interested in, and quite often those things can be confused-- particularly when children did silly things when they were younger that they now regret. Actually, I think we need to recognize that's not in the public interest. And that's where, perhaps, I would disagree with some of the points Gabriel was making. Minor facts, trivial facts about most people will not be of interest to historians 50 years down the line. Most people just want to get on with their lives now, and at the age of 21 would rather get a job and not be constantly embarrassed by some picture that was posted online of them when they were 17. So balancing that is quite important.

ERIC SCHMIDT: Go ahead, Lidia.

LIDIA KOLUCKA-ZUK: Just to clarify: I absolutely agree that links to information regarding crimes against children shouldn't be removed. But what about the situation where such a request is submitted by the victim, and the victim wants this information to be removed? Should that be an exception or not?

ALAN WARDLE: That's a very valid point, and something that needs to be looked at carefully. I think it would probably depend. Some cases may be of a sufficiently high profile-- the nature of the offense and the nature of the offender may mean that it's actually in the public interest that the case is still there. And in other cases, yes, it may be right and proper that the person's name is not there. Certainly in this country, generally the victims in such cases wouldn't be named.

And so if someone has been convicted of raping a child who's still alive, that child, generally, would not be named. So if someone was named, you could definitely see an exception for that. But I don't think it should be an absolute, though, because it could be, in some cases, in the public interest for that person's name to be there, to be linked to that.

ERIC SCHMIDT: Other comments? Peggy.

PEGGY VALCKE: Yes, thank you very much for your intervention. I would like to hear your view on one of the statements that an expert in Paris made, which was: we shouldn't remove these links to stupid things that youngsters have posted and regret afterwards, because otherwise they will never learn to work with the internet and understand what the impact can be. What do you think of that argument?

ALAN WARDLE: That's a harsh way of educating children, actually. Generally, we would advise lessons in school and parents talking to their children about these things. And actually, by the time children contact ChildLine, if something has been circulated or gone round, it has caused them enormous distress already; most children will certainly have learnt their lesson by then. So having an incident that happened when you were young carry on for a considerable period of your life, adversely impacting your career, your ability to get into university, your potential ability to get a job, et cetera-- I think that's not a very proportionate way of teaching children a lesson. So I disagree.

PEGGY VALCKE: If I may add to that: isn't it also our responsibility, as society as a whole, to perhaps put less weight on those kinds of stupid things that youngsters do? One of my colleagues at the university in Leuven mentioned that apparently in the US it's not cool to laugh at people because they have posted stupid pictures in the past. So social norms change, and perhaps it might no longer be necessary to remove links to that embarrassing material, if we all agree that you don't laugh at each other because you, or friends of yours, once posted stupid pictures. What's your view on that?

ALAN WARDLE: You're absolutely right, social norms do change. You speak to young people, and actually sending images of themselves to their partners or whatever is pretty normal, in a way that 10, 20 years ago you couldn't really do. So that is changing, and it may well be that in time we as a society decide it's different. And it may well be that some people are perfectly relaxed about pictures of themselves at age 17 doing embarrassing things, and they're quite happy for that to be up there.

But for others, again, it might cause distress and anxiety and upset-- say you've got quite an unusual name and it keeps popping up, time and time again. I think that should be a factor. But absolutely, social norms are changing how children interact with technology. For them there's no difference between online and offline; it's all how they live their lives. How they interact with social media is changing very rapidly, and I think, as a society, we need to keep up with that.

JOSE-LUIS PINAR: A little question: what should the age be for considering a person to be a child? 13? 14? Does it depend on the situation?

ALAN WARDLE: And certainly, in this country, we say a child's under 18.

JOSE-LUIS PINAR: No, I know, but under 18 is a minor from a legal point of view. But, for instance, in the draft regulation-- the European draft regulation-- I think a child is considered to be under 14. So it's very different, not the legal concept of a minor.

ALAN WARDLE: Yeah, I think I understand what you're saying. We see this already, sort of, on social networking sites. For instance, a lot of children can go on at 13, and a lot of them go on younger than 13 as well-- 10, 11, 12. Again, this is one of the points I made earlier: the internet quite often treats children as adults when they're not ready to be treated as adults.

I don't think the way to solve this is by delinking. That should be a remedy, but actually the most fundamental thing is education, and how we educate our children to stay safe online. We encourage parents to have conversations with their children at young ages. We know that from the ages of eight and nine, children are being given more freedom by their parents to go online. The internet is a very positive thing for young people, but parents do worry about who their children are speaking to online. Are their friends who they say they are?

So again, how do we ensure that we are schooling up our children and young people? How do we ensure that parents have the confidence to do this? Because we know a lot of parents say, oh, I don't know, the technology is so difficult, I just don't really understand what to do. How do we help parents understand that it's part of their parenting? Just as you teach a child how to cross the road, you teach your child how to interact on websites. It's a wider societal issue about how we teach our children-- the role of parents, the role of schools, the role of the internet industry as well, in helping create solutions to keep people safe. So it's something that's evolving.

ERIC SCHMIDT: Why don't we have our final comment from Luciano?

LUCIANO FLORIDI: So just a quick question following on from what you said, which was very interesting. Would it be possible, do you think, to develop an argument in favor of a public interest in making some information less available, or available with more hurdles-- including, for example, changing the age at which you may register for social media? Along that particular road, the educational task also becomes less difficult, because it is supported by a culture that sees a value in the removal of some information, or in making that information harder to get-- not just an unfortunate event. Is that something one could develop as a strategy, proactively as well?

ALAN WARDLE: Absolutely. And again, we don't think it should be the search engine's responsibility here. Ultimately, if there is upsetting content, or terrible bullying, or images of children, et cetera, online, it should be dealt with at source. It shouldn't be for Google to be delinking it or pushing it further down the search results; that content should be removed.

As I said earlier, regrettably some companies are more responsible than others. Some are very good at moderating content, very good when you report concerns: they act on them swiftly and take that content down. Others don't. And I think there are discussions at the European level as to how we can try and enforce that more readily, so that, whether it's children or parents unhappy about something that has been posted, it can be looked at, it can be moderated, et cetera. Ultimately, it shouldn't get to the stage where it has to be done through a delinking process by search engines; it should be dealt with at source. But having that as a backup in those cases is, I think, important, because not all tech companies are as responsible as some of the others.

ERIC SCHMIDT: Well, I want to thank you again, Alan, and I want to especially thank you for fighting for the rights of children-- something we all care a lot about. We have time for some questions from the audience. If we know the name of the questioner, I'll read their name; if it's addressed to somebody, I'll say so, and then I'll read the question. Now, the first one is from Nick Reynolds, and it's a question to Eric Schmidt. What does Google think about spent convictions? I.e., should people who have committed low-level criminal offenses have the right not to disclose those convictions to prospective employers?

So, this is an example where we're asking for advice from this council. The laws actually differ between countries. In the UK, there's something called the Rehabilitation of Offenders Act; in other countries it's not as clear, and in the US it's somewhat unclear. So these are good examples of the hard questions for the panel.

The next question is from Nicole Carbonara, a question to me: in one of the previous meetings, someone stated that the ECJ's ruling may not be final. Is Google contemplating investigating this option?

The short answer is no. As we read this-- as we understand the law in Europe and as we understand this law, it is final. It cannot be appealed and needs to be followed. And we intend to do so. And that's why, again, we're having this process.

The next question is from Stephanie Bodoni, who is a reporter for Bloomberg News. The question, to Eric Schmidt: would you extend removals to google.com, as some European privacy regulators have said they would like to see?

So we've looked at this, and as we read the law, the European Court of Justice ruling applies to the EU, which is the jurisdiction of the court. And of course we have now done 150,000 reviews and so forth. The google.com domain is actually US-targeted. What happens is, when you come to Europe, your default access is to the .uk or .de or .fr domain, or what have you. And since the court focused on European users, we're going to focus on those domains. A very small percentage, less than 5%, of European traffic goes to .com. So 95% or more-- I don't know the exact numbers-- goes to these local sites, and that's where the action is.

Next question. Catherine Williams from the UK National Archives. It's a question to all panel members. Who is representing the cultural heritage sector, and who is acknowledging the concerns and issues for archives in particular?

Maybe someone other than myself could answer that question? I'll read it again. Who is representing the cultural heritage sector and acknowledging the concerns and issues for archives in particular?

SABINE LEUTHEUSSER-SCHNARRENBERGER: Well, I suppose that the panel is limited, so we could not have a representative of every sector on the council, I assume. But we heard from-- I'm checking my notes from Madrid-- I remember that in Madrid there was one of the experts, Milagros del Corral, who represented--

JOSE-LUIS PINAR: The former director of the National Library of Spain.

SABINE LEUTHEUSSER-SCHNARRENBERGER: So we listened carefully to her comments and we will take them into account.

ERIC SCHMIDT: Julia. Go ahead.

JULIA POWLES: There is a very helpful contribution, I think, from the British Library as well, which has said something from a UK perspective.

ERIC SCHMIDT: OK. Shall we keep going? The next question-- let's see if I have these in the right order-- is from Luca Shavoni, a question to Eric Schmidt. What is the financial impact of the European court's ruling on Google? I.e., what costs does Google incur while complying with it?

I wanted to comment on the dialogue that occurred earlier and say that there's a presumption that this is an economic issue for Google, and I can assure you that it's not. We make very little money from name searches and advertising on names; it's minuscule. So this is about service quality and answering questions and so forth. It has nothing to do with actual revenue or ads or anything like that.

And so the primary cost to us is the cost of what appears to me to be a permanent staff of people who will be doing the removals. It's not at all obvious to me that this can ever be automated, and therefore, as the burden of reviewing requests grows, the cost will grow. We won't go into the specific numbers, but that's the specific cost. There's obviously also some cost for this event and the legal reviews and so forth.

Next question, from Charles Miller to Eric Schmidt. Could Google have complied with the European court's ruling by saying it was happy to remove links, but only after a court had decided whether each case had merit or not-- in other words, that Google would put up a defense for the link remaining up? In other words, could we have challenged everyone? That is one way to understand the question.

And I can assure you that that's not a pragmatic approach for us. It would be very expensive, it would tie down the courts-- it's just a bad model. Much better to do it the way we are; I'm quite comfortable with doing this.

The court is clear that the primary burden of assessing requests defaults to search engines. It specifically says that the burden falls on Google and on our competitors. So-- and I will extemporaneously comment-- when you hear people criticizing Google for doing this, my answer is: the law forces us to do it. If you are advocating that we not follow the law, then go have a conversation with whoever makes the laws in your country, which is by far not Google.

So, next question, from a gentleman, Arpun-- is it Gonguly? It's also to me. A recent "Telegraph" article showcased some of the right to be forgotten requests that Google entertained, received, and rejected. My question is: when you get millions of requests, how do you automate the ethics of these requests?

The answer is, we don't know how to. We would if we could, because we like to automate things; it makes everything more efficient. But we have real people reviewing every request, and every URL in every request. And then, again without giving the specific numbers, there are things which are no-brainers, and there are things which are in the gray area. And then we have policy experts in each of those areas who attempt to make these decisions.

I don't know this, but I would suspect that there will be hard cases where the person is unhappy that they couldn't get it off Google, unhappy that they couldn't get it off the original publisher, and then they will pursue legal action. My guess is-- and the lawyers here will probably concur; you're the law professor-- that over time a body of law emerges from the court filings that helps define these things. That's the way I suspect this works.

But in our transparency report we published the following numbers, so you know. In Europe, 58% of URLs were not removed and 42% were removed-- so slightly more not removed. And in the UK, 65% not removed and 35% removed-- so roughly 2/3 not removed and 1/3 removed. That gives you a sense of how the decisions are made.
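As a minimal illustration of how those percentages relate-- the rates are the ones just quoted from the transparency report; no URL totals are assumed:

    # Only the removal rates were quoted in the session; no totals are assumed.
    not_removed = {"Europe": 0.58, "UK": 0.65}

    for region, rate in not_removed.items():
        print(f"{region}: {rate:.0%} not removed, {1 - rate:.0%} removed")
    # Europe: 58% not removed, 42% removed
    # UK: 65% not removed, 35% removed (roughly 2/3 versus 1/3)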

Another question, from anonymous, directed to Eric Schmidt. In the European Court of Justice case, the court held that the publisher of a newspaper benefited from the journalistic exception. Did Google argue that its publication was also for a journalistic purpose? After all, the questioner says, newspapers make money from ads, too.

No, we did not raise that issue. We view ourselves as an indexer and a pointer to sources; we don't create content. So we're fairly clear where we are in that process. We did not ask for such an exemption.

This is a question from someone who calls themselves T, and it's for Luciano and myself. Removing one link, Luciano, will result in the publication of that removal, resulting in an adverse effect. How can Google guarantee that links are not removed just once, but constantly, to avoid the Streisand effect?

And for context if you have not heard this term, the Streisand effect refers to the singer who was upset that pictures were published of a home that she owned. And you can look it up on your favorite search engine. And what happened was she litigated and ultimately I believe lost the case. But the publicity caused a great deal of discussion and viewing of the photographs of her home. Luciano.

LUCIANO FLORIDI: Yeah, that seems inevitable, especially with the Spanish gentleman who shall not be named. Of course, now everybody knows who he is, and we know more about his financial proceedings than we ever dreamed. So that seems to be a counter-effect. Unfortunately, it's not just a property of this particular ruling. Sometimes when you try to do the right thing, doing the right thing has counter-effects that undermine doing the right thing.

Now, this is not a way of saying that the European Court of Justice did the right thing; it's just saying that it followed the logic of protecting someone's privacy through the removal of the links. And if we follow one of the alternatives-- in other words, as seems to be the general perspective, Google or the search engine informs the newspaper in this particular case-- then the newspaper will tell people that that thing has been removed, and there's more searching.

Today, if you search for a name and surname, you will sometimes find at the bottom-- if it is not a public figure-- a notice that links or references might have been removed. That immediately triggers some curiosity. Now, is that because that person asked for the removal, or is it just incidental, because Google is doing it systematically for anyone who's not considered to be a public figure? I don't think we have a way out. At the moment, we're stuck with this particular problem. Hopefully some members of the panel will come up with a brilliant solution; I don't quite see how this paradox can be solved.

ERIC SCHMIDT: You're going to have to discuss this in your deliberations.

LUCIANO FLORIDI: I think so.

ERIC SCHMIDT: The next question is from Holly Woodhouse, question to Eric Schmidt. Are requests processed, considered, and decided manually or automatically? And how much time is spent deciding each request? They're done manually, and we take the time to review each one. Next question, from Maria Martinez Caron. It's to the panel. How would you manage removal requests?

She actually has two questions; I'll do the first one. How would you manage removal requests, for any reason, from a non-public individual who in a few years becomes a public person? Question to the panel. I'm dying for your answer. Luciano, you're always first-- but I think Julia would like to answer the question. Let's have Julia answer.

JULIA POWLES: Yes, OK.

ERIC SCHMIDT: And then while-- go ahead.

JULIA POWLES: Going back to my point, I think it depends what the case is. I mean, if you're mentioned peripherally in an article because you witnessed an event or something, then it wouldn't matter if you're going to be a public figure. But if it's one of these controversial cases, I think that's all the more reason we'd want to keep track.

ERIC SCHMIDT: Go ahead.

LIDIA KOLUCKA-ZUK: I cannot agree more. But I think this is a question we ask ourselves every day, working as members of this council. And I think that will be a matter for our further discussions during our private meetings after the hearings, right?

LUCIANO FLORIDI: There's a general lesson to be learned from this, at least for me, which is that the ruling seems to be treating the internet-- at least on my understanding-- as a static thing, as something that we have and now have to deal with. The truth is that technologies, as we all know, are developing so quickly that, for example, someone who's not a public figure becoming a public figure, going out of being a public figure, and then going back to being a public figure is just normal.

Therefore the technology will follow, and we have a problem about a dynamic approach to linking and de-linking, appeal and not appeal. As you can imagine, it's a mess, and that's why we're having this debate. But it's not trivial, and the ruling is not coping with the reality of a dynamic context; it's no longer as if we're publishing a book and it stays in the library.

ERIC SCHMIDT: Let's continue. Her second question is-- oh, I'm sorry. Go ahead, Evan.

EVAN HARRIS: Just on that question, because it's obviously one we've all heard of, that someone seeks to expunge from Google searches evidence that they committed fraud, and then, once they've done that, stands for Parliament, OK? [INAUDIBLE] not me.

ERIC SCHMIDT: Speaking as a former member of Parliament.

EVAN HARRIS: I was a medical student; there may have been things that I did as a medical student. But that's why I think it's unfortunate that the judgment, which as I said I have concerns about, did not go into more detail about this public figure question. Because what matters-- and certainly this is the case in some English and Welsh jurisprudence-- is whether the matter is a matter of public interest.

So if someone who is not a public figure wants to remove a record of fraud which isn't spent, then the fact that they're a private individual is secondary: it's a matter of genuine public interest. And therefore, if you look at it that way, rather than the over-simplistic question of whether someone is a public figure or not, you can protect the public interest against people expunging things from search indexing and then standing for public office.

ERIC SCHMIDT: Let's ask her second question. Are links being deleted from Facebook pages that are listed in Google results because Facebook users are insufficiently protecting their privacy through Facebook's tools? Otherwise, why would Google be indexing those links? And this is a question for the panel. In other words, if there's Facebook information in Google which is subsequently deleted through one of these requests, is that because the Facebook users are not protecting their information?

LIDIA KOLUCKA-ZUK: OK, whatever we say will be controversial. I think to some extent, yes. But this is a very complex issue. We can say that people do not read the agreements they sign with banks, and people do not read the privacy policy provided by Facebook. And this is a huge problem: people do not protect themselves on the internet, and they are not aware-- this is a subject of public debate right now-- that every activity of individuals on the internet is recorded somewhere. Especially if they use a social network like Facebook, they should somehow be aware of the consequences as well. But this is a huge issue, and I think it is the subject of a never-ending social or educational campaign on how to protect ourselves on the internet.

JOSE-LUIS PINAR: I think this is very important, because this is a problem of awareness. It's a problem of privacy policies, and of two or three very important concepts regarding privacy: privacy by design, privacy by default. All the social networks must take these problems into consideration, because I think this is very, very important for better privacy. But the problem, in this case, could be a problem not only for Facebook, but also for the data subject.

ERIC SCHMIDT: Go ahead, Chris.

CHRIS MORAN: Sorry. Could Google perhaps tell people-- because the number of complaints about Facebook is so large, could Google not, when there's a complainant, point them towards the way to deindex their profile from Google?

ERIC SCHMIDT: A lawyer would have to answer that question, given that our behavior is governed by the fact that we have been found to be a data controller-- and I don't know whether Facebook has also been found such, et cetera, et cetera. Let's move on; we have a number more questions I'd like to get through as quickly as we can.

Professor Howard Williams of Strathclyde University has asked six questions of Eric Schmidt and the panel as well. How do you justify using business practices to implement a judicial decision? Hm. Well, we are a business, so we have no choice.

How does Google balance commercial interests versus the right to be forgotten, when the judgment explicitly says business interests must not be furthered? I believe I said very clearly that we have almost no revenue or advertising associated with the information at issue here. I can tell you, I've been in many discussions about this in which there's been no discussion whatsoever of revenue, impact, advertisers, money, or cost. I don't personally know what the costs of this are. And whatever the costs are, we have to bear them, and we are doing so. That's the reality of this.

Next question from Professor Williams. How safe is the reliance on "in the public domain" as a definition of "public interest"? I'm not sure I understand that question. Let me repeat it for the panel: how safe is the reliance on "in the public domain" as the definition of "public interest"? Go ahead.

PEGGY VALCKE: I think this has been mentioned already: it's not because the public is interested in an issue, and it's in the public domain, that it is in the general interest. I think it's safer to turn to the standards that have been developed by, for instance, the European Court of Human Rights to assess what can be considered in the general interest. There's abundant case law, mainly in the context of the press, on whether a press article-- it could also be books or internet websites; there is case law on those-- contributes to a discussion or debate of general interest. So we can certainly draw inspiration from that. I hope that answers the question.

ERIC SCHMIDT: Go ahead. Real quick.

LUCIANO FLORIDI: So consider that a definition goes both ways: water is H2O, and H2O is water. In this case, there's no such thing. Whatever is in the public domain doesn't have to be in the public interest, and what is in the public interest doesn't have to be in the public domain. So it is not a relationship of definition at all.

ERIC SCHMIDT: Let me-- there are three more, and then we have a number of other important questions. Once a conviction is spent, is the level of the court a factor in deciding whether to delink or not? This is a tricky area, and something we've asked the panel to advise us on. Why does your web form allow 1,000 characters? Why not 100 or 10? More is better. Final question: why does Google not provide guidance on its interpretation of the key elements of the decree-- relevance, out of date, et cetera?

We are constrained by the decree as to what information we can provide. If your request is rejected-- which, as I indicated earlier, in the UK happens 2/3 of the time-- we do give you, personally, the reason it was declined. And we have, through our transparency report, given some limited information. I'd like to keep going.

We have a question from anonymous to David Jordan and Evan Harris. David and Evan, there you are. Defining "public figure": does a public figure have to be an original public figure, or can someone, through his or her behavior, become of public interest and thereby public-- i.e., a criminal in a spectacular crime?

EVAN HARRIS: I refer the questioner, anonymous, to the answer I gave just before: that I think "public figure" is to an extent a red herring. It should be-- and this is why I regret the Court of Justice didn't go into this in more detail, didn't discuss Article 10, didn't even mention Article 10 of the ECHR, and I agree with ARTICLE 19 on this-- that where something is a matter of public interest, which is not the same, as Luciano said, as being of interest to the public, then that should be a factor to be weighed against the interest, one might say right, of the subject. So I think it's the wrong question. If that's the question you're being asked, you could have a whole conference on the exact definition of a public figure.

DAVID JORDAN: I agree with that entirely.

ERIC SCHMIDT: OK. We will continue. We have three more questions from the audience. From Gabriel Hughes, to Gabrielle Guillemin or anyone on the panel. The recent European Court of Justice ruling declared Google to be a data controller in respect of its search engine. Is this incorrect, and if so, why? In other words, does the panel or do our experts believe that the court's decision was incorrect with respect to the data controller question? This is addressed to you. Do you want to start?

GABRIELLE GUILLEMIN: Yeah. Well, as I said earlier, we don't think that Google should have been considered a data controller. In our view, they're merely a conduit for disseminating information, so they shouldn't be considered a data controller.

JOSE-LUIS PINAR: I agree. I think the construction of the Advocate General is perhaps better: it considers Google not as a data controller, but as a third party that in fact processes some kinds of data and yet cannot be considered a data controller. But the fact is that, according to the ruling, Google is a data controller.

ERIC SCHMIDT: Yes. And we can confirm that the court did find us a data controller. Two more questions. Go ahead, Evan.

EVAN HARRIS: This is very important, because people sometimes get confused about what has been found. I don't disagree with anything that's just been said. In European law, certainly in English defamation law, and I believe the same applies at the ECHR, search engines are not a publisher. And that's very important for defamation. There have been cases about snippets-- Google snippets being sued on.

And with my libel reform campaign hat on, it's absolutely critical that that is maintained. So often people say this is a change because suddenly a finding has been made against them, but this is a different law, on data protection. And as Jose said, we are where we are: you've got a judgment from the final court. But I think people are confused, and they think that that case law has been changed by this decision. It has not, fortunately.

ERIC SCHMIDT: Let me continue. This is a question from-- is it Benedicte Pavion?-- to a member of the council. The question: you said everyone can be a journalist. Yes or no? Agree or disagree? Sabine?

SABINE LEUTHEUSSER-SCHNARRENBERGER: No. You can become a journalist, yes, but not every person is born a journalist. And if someone is merely watching something, he is not thereby a journalist. You need experience. You have to go through training and so on. So not every individual is a journalist.

LIDIA KOLUCKA-ZUK: It was me who mentioned that, and it was a quote from the article. And it is not the case that Professor Sadurski, who wrote that sentence, actually believes it. I mean, today in Poland there is a discussion going on about the new regulation of the press law, and the standards provided in these regulations are very low.

So it is, I would say, an ironic [INAUDIBLE] to say that everybody can be a journalist. I strongly disagree with it, and I fully agree with Sabine. You can call yourself a journalist only if you really are one, meaning that you have proper experience-- maybe not formal education, but proper experience-- and you know what it means to be a journalist.

ERIC SCHMIDT: Thank you. And our final question is from anonymous, addressed to Eric Schmidt. Should we, the public, move to searching google.com rather than, say, google.co.uk, so as to avoid edited or removed information? I am not recommending that. However, we have reported that some people do it. Second part of the question.

Why did we start late? Well, Google does not necessarily start exactly on time, and we did not end exactly on time either. But we're very close. I want to thank, first, the audience for sitting through literally four hours of important and sometimes difficult discussion. I want to thank our experts, who took time out of their busy schedules to talk to us.

And I want to especially thank our panel, for whom this is the seventh-- sixth?-- the sixth of seven days they have devoted. They're doing this out of a love of getting things right, and for nothing else. And I for one cannot wait to hear the answers to these questions. Literally, call me or email me or text me the moment you figure these things out, because the sooner you figure them out, the quicker we can implement them. I want to thank you all so much for this, and thank you all.

[APPLAUSE]