Advisory Council to Google on the RTBF - Brussels Meeting 4th November 2014
DAVID DRUMMOND: Hello everyone. I think we can get started. Welcome to the Brussels meeting of the Advisory Council to Google on the Right to be Forgotten. This is the seventh and final stop on the council's seven-city schedule. I'm David Drummond. I'm the senior vice president at Google and the chief legal officer of the company.
So I want to start by saying that we've always seen search as a card catalog for the web. So when you search on Google there's an unwritten assumption that you'll find the information that you're looking for. So as you might imagine, when the European Court of Justice handed down its ruling in May obliging us to deliberately omit information from search results for a person's name, we didn't exactly welcome the decision.
The tests set by the Court for what can be removed are vague and subjective. A web page has to be inadequate, irrelevant, or no longer relevant, or excessive to be eligible for removal. And we're required to balance an individual's right to privacy against the public's right to information. All of that feels a bit counter to the card catalog or the card index idea.
But at the same time we respect the ruling, we respect the Court's authority, and it was very clear to us that we had to quickly knuckle down and figure out how to comply with the ruling in a serious and conscientious way. So we quickly put in place a process to enable people to submit requests to us for the removal of search results, and for our teams to then review those requests and take action in line with the Court's guidance. Now to give you a sense of the scale of all this, to date we've had over 150,000 requests from across Europe involving north of half a million individual URLs, or web pages, and each of those has to be assessed individually.
Now in practice, most of the decisions we have to make are pretty straightforward. For example, a victim of physical assault asking for results describing that assault to be removed for queries against her name, or a request to remove results that cover a patient's medical history, or someone incidentally mentioned in a news article but not the actual subject of the reporting. Those are fairly clear cases in which we remove the links from the search results related to that person's name.
Similarly, there are plenty of cases where it's pretty clear that we shouldn't remove the link. A convicted pedophile who's requesting removal of links to recent news articles about his conviction, or an elected politician requesting removal of links to news articles about a political scandal he was associated with. These are real examples. But there are many cases where taking the right decision is much more difficult.
Consider the following, requests that involve convictions for past crimes, when is a conviction spent? And what about someone who's still in prison today? Requests that involve sensitive information that may, in the past, have been willingly offered in a public forum or in the media. Or how about requests involving political speech, perhaps involving views that are generally considered illegal or actually breach the law.
These kinds of requests are often difficult, and they raise tricky legal and ethical questions. And it's for these gray areas that we're seeking help both from Europe's data protection regulators and from the members of this Advisory Council. We hope that they will sketch out principles and guidelines that will help us take the right decisions in line with both the letter and the spirit of the ruling.
So all of the council's meetings have been live cast. Today's no exception. The full proceedings will be made available on the council's website, which is google.com/advisorycouncil, all one word, and that will be available after the event. Now the council invites anyone and everyone to submit their views and recommendations via the website. We'll read all of the input, and that will form part of the material for the council's deliberations. Now following today's meeting we'll begin our work on a public report that will contain recommendations based on the meetings as well as input via the website. We hope to publish that report by early 2015, and council members will have the ability to dissent from any of the findings.
So with that introduction, let me introduce the council members themselves, whose expertise I think you'll find speaks for itself. So joining me today from my right, your left is Frank La Rue, the UN special rapporteur for the promotion and protection of the right to freedom of opinion and expression of the UN HRC. Next to Frank is Sabine Leutheusser-Schnarrenberger, who many of you I'm sure know. She's been a member of the German Parliament for over 23 years and has served as the German Federal justice minister for eight years, I believe.
To my right is Lidia Kolucka-Zuk, former director of the Trust for Civil Society in Central and Eastern Europe. To my left we have Peggy Valcke. This is her hometown meeting, and she's a professor of law at the University of Leuven. Then we have Jose-Luis Pinar, former head of the Spanish DPA and professor at CEU San Pablo University.
Next to Jose-Luis we have Sylvie Kauffman, editor at the French newspaper Le Monde, which you all know. And finally, we have Professor Luciano Floridi. He's a professor of information ethics at Oxford University. So we have eight experts who will be presenting today to the council, and they're here in front of us on this great stage.
So to my left, we have Mr. Stephane Hoebeke, we've got Mr. Robert Madelin, and then we have Jodie Ginsberg, Mr. Philippe Nothomb. And then on the right, he's yet to arrive but I'm assured he will be here, Mr. Paul Nemitz. Then next to him we have Hielke Hijmans, then Mr. Karel Verhoeven, and then Mr. Patrick Van Eecke. So we will have fuller introductions of each of them before they present their remarks.
So just in terms of process, today's meeting will be conducted in English, but there'll be presentations in both French and English, I understand. So headsets are available if you need them, and you can listen in French or in English. As I said, we're streaming the meeting live to the Google videos channel on YouTube. Transcripts of the meeting will also be available in English and French.
So we expect the first session to run for about 90 minutes, from 12:00 to 13:30, with our first four experts. Then we'll have a 30-minute or so break, and then we'll hear the other four experts in a second session from approximately 14:00 to 15:30. We'd like the experts to keep their presentations to 10 minutes, and we have a clock to remind you, so that we can make sure we stay on schedule.
We also hope to take some questions from the audience after the presentations are concluded. So if you'd like to submit a question, please do so by completing the Q&A card that you may have received when you came in, and drop it into the boxes around the room. Now we may not be able to answer all the questions, but we'll definitely try to get to as many as we possibly can, time permitting. We'd certainly like to hear your thoughts on these matters as well.
So with that, let's go to our first expert. And first up is Mr. Patrick Van Eecke, and he is a partner and head of the internet law group at the Brussels office of the global law firm DLA Piper, a very small law firm with two or three offices around the world. That's a joke. It's very large. He's also a professor at the University of Antwerp. He teaches European information and communications law.
He's a data protection officer of the University of Antwerp, and is the chairman of the university's advisory council on personal data protection. He's the author of several legal articles and books on computer crime, electronic signatures, electronic contracting and privacy, so all things digital. Mr. Van Eecke, you have the floor.
PATRICK VAN EECKE: Thank you Mr. Chairman and good morning. Thank you very much for this opportunity to share some ideas with you about how to put the right to be forgotten into practice. I'm sure you're no longer interested in knowing what my opinion is on the decision of the Court of Justice; rather, you would like us, or me, to dig into the practical consequences of this decision and how a system can be put in place allowing a search engine like Google to act in compliance with the decision. In other words, you would like me to be solution oriented. Is that correct? Yeah.
Well, when preparing this statement you asked us to take very specific questions into consideration relating to the issues to consider when evaluating a request, what a removal request form should look like, et cetera. And in my statement I will elaborate a little bit further on these questions and how to find, in my humble opinion, a proper balance between respecting the Court's decision and the rights and obligations of all stakeholders: not just the search engine but also the data subject, the content publisher, and of course the users of the search engine.
My proposal is based on three elements, three building blocks, if you could call them. First of all, I think it's a joint responsibility of all stakeholders involved and not just the search engines. I indeed believe it's not just Google or any other search engine who should bear the burden of implementing the right to be forgotten. This joint responsibility has an impact on the procedure, and I will come back to this a little bit later.
The second building block is, I believe it's not Google who should decide whether or not to remove a link from the search results. I think it should be an independent arbitration body consisting of a pool of a few hundred panelists, maybe. And the third building block is about the criteria. I think the independent body should base its decisions on criteria which have been pre-defined but which leave enough flexibility for well-founded decisions by the arbitrator. And I would like to elaborate a little bit further on these three building blocks.
So the first one is: implementing the right to be forgotten is a joint responsibility of all stakeholders involved, and this should be reflected in the procedures. As to the publisher, the content owner, in my opinion, the fact that they make their website searchable, or at least decide not to make use of the opt-out mechanism using robots.txt and similar files, makes them jointly responsible. A website owner has the possibility to choose to be indexed or not, and if they do, it makes them, or it could make them, a joint controller, or at least it could make them accountable for the fact that the content is picked up by search engines. They want the website and the content to be found.
And what does it mean in practice, for this right to be forgotten procedure? Well, first of all, I think the content providers should decide whether or not they want to make their content searchable. Secondly, if they do so, they should think twice before making information identifiable. For example, a newspaper which would today decide to make its old archives available on the internet has, I think, a responsibility to think about whether or not these old newspaper articles could have consequences for the future. This would be in line, in my opinion, with the data minimization principle.
Thirdly, the publisher should set up a kind of notice and takedown mechanism allowing data subjects to make use of the right to be forgotten at the source itself. Interesting, for example, is a very recent court case here in Belgium where the Court [INAUDIBLE] decided that it was the Belgian newspaper that should anonymize the article, and not the intermediary, the search engine. And a data subject, I think, should first try to enforce his or her rights at the source before he or she can complain to a search engine.
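The opt-out mechanism Professor Van Eecke refers to is the standard robots exclusion convention. A minimal sketch of what a publisher would serve might look like this (the `/archives/` path is just an illustrative example, not from the transcript):

```text
# robots.txt, served from the site root (e.g. https://example.com/robots.txt)
# Asks all crawlers not to crawl the newspaper's archive section
User-agent: *
Disallow: /archives/
```

A publisher can also keep a single page out of search results with a `<meta name="robots" content="noindex">` tag in the page, or an `X-Robots-Tag: noindex` HTTP header. A site that uses none of these mechanisms is, in his argument, implicitly choosing to be found.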
And this brings me, actually, to the data subject, the requester. I believe your current form is far too easy. The procedure is currently triggered by simply filling out a form and adding a scan of your ID, which just takes a few minutes. The threshold for a data subject to initiate a procedure is, in my opinion, too low. Referring to my earlier comment, I believe they should, for example, have to show that they unsuccessfully initiated a request to the publisher of the information.
And what is at stake? Namely, balancing the right to be forgotten with the right to freedom of information. It's far too important for just clicking on a Submit button. So a proper procedure should be put in place, and I will come back to that in just a few minutes.
But first, the third stakeholder, which is you, the search engine. I believe the search engine should not be involved in deciding whether or not to remove a hyperlink, as in such a case you would be both party and judge at the same time, and this would be against the basic principles of justice in a democratic society. My proposal would be that after a removal request has been received, this would trigger a few actions.
And the first one would be that the hyperlink would be put on hold by you. It could be frozen so that you cannot click through any longer but you can still see it, accompanied by a warning that this link is under dispute, under scrutiny, until a decision has been taken. And then secondly, an arbitration procedure should be initiated, which actually brings me to my second building block.
And what is that second building block? Well, I believe that an independent arbitration body should be set up which would have to decide on each of these cases. This body should be funded by the search engines. It should consist of independent arbitrators coming from different regions and having, of course, a good reputation. The procedure should be fully online. The publisher should be informed about the proceedings and should be invited to become a party too. Also, in the case of platforms, think of Facebook, et cetera, they should have the possibility to be invited and to have their say in the proceedings.
The decision should be applicable to all search engines that have joined this arbitration system. And of course, the possibility to go to court afterwards would always remain as a kind of appeal procedure. Cases would be published, anonymously of course, so that case law can slowly be built up. And as you know, a successful example of this kind of approach is domain name arbitration, where 27,000 cases have already been dealt with. So it's the same kind of magnitude as you are currently dealing with as a search engine.
Third building block, final thing: which criteria should this arbitration body use when deciding upon a removal request? Well, this is exactly my third building block. And I believe it's up to the arbitrator to decide whether or not removal should take place, based on a set of criteria which are clearly set out in a dispute resolution procedure and which are known to everybody. These criteria should be common principles, and they should not be dissected into a kind of list of questions for the arbitrator to answer with yes or no, because I think law is not a binary science; it cannot be translated into an algorithm. And I know you guys from Google can do a lot with these kinds of things, but still, I think it's not going to be possible to have just a binary yes/no checklist.
It should be the arbitrator, who is the judge, who decides based on the criteria that you set up, based on his experience, her experience, and maybe earlier cases whether or not a removal request should be granted. And I think that slowly case law will develop and, hopefully, distill some common principles on how to judge upon those criteria. Some of these criteria could already be found in the court decision itself, such as the basic principles of when it's inadequate, irrelevant, no longer relevant, excessive in relation to the purposes. These are the main criteria.
And the Court continues, saying that there should be a balance between public interest and private interest. And you should decide: is this a public figure or not? And you could complete these kinds of basic criteria with other elements, like which type of information is involved, or what's the source, what's the format of the information?
I believe it's the role of you, of the Advisory Council, to identify those criteria and to propose a procedure for such an independent arbitration body. Thank you for listening to my suggestions. I'm really looking forward to receiving your comments, and I'm sticking to the time, which is very uncommon for a law professor.
DAVID DRUMMOND: Well thank you very much Mr. Van Eecke. Are there questions from the panel? Peggy, then Sabine will be next.
PEGGY VALCKE: Thank you very much Professor Van Eecke. Patrick, this was really insightful and very helpful. I would like to continue just a little bit on this idea of the independent arbitration body. When you say independent, what does that mean? Independent from the market actors but also independent from the state, or what exactly do you have in mind? Do you see a role for the DPAs in that body or not? Thank you.
PATRICK VAN EECKE: I'll [INAUDIBLE] part of the question. I don't think there should be a role for the data protection authorities there. It should be an independent, not-for-profit association that should be set up, funded, I think, by the search engines at first. And maybe, though I think it's going to be more difficult, it could also be funded through fees for initiating procedures: a kind of minimum threshold could be put forward, which could afterwards be reimbursed from the funds. So that it is really independent, not just independent from any kind of stakeholder but also with its own sufficient financial resources.
SABINE LEUTHEUSSER-SCHNARRENBERGER: Yes, I have a general question. You just described your position that you have first to go to the source, to the owner of the website, and so on, and to try to remove the content itself. Is that in compliance with the Court decision? I think the Court decision mentions really clearly that the requester has both possibilities: to go to the search engine, on the one side, and to go to the webmaster, to the journalist, to the publisher. And even if the content is legal, you have the right, as the requester, to go to a search engine to request the removal of a link. So I would like to hear some arguments for why your approach could be a possibility to resolve these problems we have in dealing with the decision of the Court of Justice of the European Union.
PATRICK VAN EECKE: Thank you for that question. Indeed, the Court makes that kind of distinction between the source of the information itself and, of course, the referral information, and clearly focuses on the latter. But still I think, when going through the different removal requests, very often I notice that it's not just about the hyperlink itself; there is an issue with the information.
And I think that nowadays it often comes down to looking for the weakest link. And at this moment, I think, it's the intermediary who is the weakest link, the one you can get action from. So even if the issue does not relate to the hyperlink itself but to the source, I think you should first direct that action towards the content provider.
I'm not saying that it should be exclusive, because that would indeed be against the Court decision. But I think that as a data subject, as a requester, you should think twice before pushing the Submit button, and ask: is this really what I want, and isn't the issue really more with the source itself than with the hyperlink?
SYLVIE KAUFFMANN: Should there be an appeal process or stage after the arbitration body that you propose? And do you see a role for the judiciary somewhere there, or if not, why?
PATRICK VAN EECKE: Thank you Mr. Kauffmann-- Miss Kauffmann for that important question. Indeed, I think there should be a kind of appeal procedure, which we have also seen in other kinds of alternative dispute resolution mechanisms: if you do not agree with the decision of the arbitrator, you can actually go to court. It's not really an appeal procedure as such, but you can continue the debate, a second time, before the court of the jurisdiction which would be competent.
It could be, in that case, that instead of going to court you would, for example, go to the data protection authority in that jurisdiction. I think there we should give the liberty-- we should not decide upon those possibilities. It's up to the member states, or it's up to the jurisdictions, it's up to the data subject to decide whom they would go to in case they do not agree with the independent arbitration decision.
DAVID DRUMMOND: OK. Listen, Lidia, why don't you take the next question.
LIDIA KOLUCKA-ZUK: Yes. I just would like to make sure that I understand the concept of the arbitration court. Because if we want this idea to work we have to be very precise. So my question is: if you propose that this body, this arbitration court, should be set up by the search engine, can you envision that it is set up by a few search engines or just by one, like Google? And should this body be endowed? I mean, could you just elaborate on how we should secure the funding for that? Thank you.
PATRICK VAN EECKE: I think, in an ideal world and in the long term, it should be a kind of mechanism that is so attractive that each search engine would say: this is the way to go forward. Instead of setting up our own mechanisms we would just subscribe to that kind of system. And by paying a kind of fee they would be able to provide funds to that mechanism. And I think the more search engines become party to that kind of agreement, the more powerful that kind of independent body could become.
And if you look at it in practice, it's actually a very lightweight body in the sense that it's just a kind of secretariat, because the arbitrators are just independent people from all over the world, hundreds of people. Of course, we need to make sure that they pass a kind of test of being independent, et cetera, and provide the right certificates. But those are the kinds of procedures that we've seen before in other dispute resolution mechanisms.
DAVID DRUMMOND: OK. Frank you're next.
FRANK LA RUE: Thank you. Two questions. I have difficulty with one more basic question, which is whether the right to be forgotten is a right in itself or not, or whether what we're talking about is privacy. Because I do respect the right to privacy, which is a recognized human right. But I still have my serious doubts about the right to be forgotten, or as someone called it in one of our consultations, the right to change your mind or to regret that you ever did something or put something up. Because if you look at freedom of expression, you have the right to seek and receive information, and that right is easily undermined if erasing information becomes a very easy challenge to make. That goes against the grain of a fundamental right.
I look at this from a human rights perspective, so I believe in privacy, strongly in privacy, and I think that information that is harmful or malicious should be withdrawn. But that's not the same as saying that anyone can erase anything, because there is a collective right to seek information. To erase it is to rewrite history, and a society should be able to acknowledge its history, with its good and its bad parts. That goes for individuals, that goes for the peoples of a nation. I'd like to hear your opinion on that.
And then secondly, on this arbitration body. I think it's a good idea to have an independent body, but if it's set up by search engines, what makes it independent? If you truly want to have credibility there should be some level of separation between the search engines and this arbitration. Ideally, it should be a state body. But even if it's a private sort of institution-- in some countries you actually have private arbitration services, which is fine-- there should be no connection to the actors in the decision-making process: not those that make the request, nor the search engines, or the publishers, or any other stakeholder in this process. To truly have something absolutely independent, it should have some degree of recognition by the court system, even though it's not a court, and its decisions should be appealable to a court. Otherwise the independence, I think, would be questioned, no?
PATRICK VAN EECKE: Thank you Mr. La Rue for your questions. On your first question, I fully agree with your opinion on that, that indeed, to me as well-- at least that's how I understood your statement-- the right to privacy, that's a fundamental right. But to me, the right to be forgotten is not necessarily a fundamental right in itself, and that's why I think that we should--
First of all, we shouldn't make it too easy to make use of that right because it's not a fundamental right. Secondly, it's a very complex situation. That's why I think a search engine, on its own, should not be deciding whether or not the right to removal should be given or not. So that's why I think an arbitrator, or an independent arbitration body should work on that. So I indeed, I fully agree with your opinion on that.
When it comes to the independent nature of that arbitration body, I certainly didn't want to say that the search engines should control it; however, the search engines should initiate such a body. And you can do it perfectly by putting it in the statutes: it should be an independent body without any kind of reporting line, et cetera, to any kind of search engine. So it is initiating it and then letting the boat go.
DAVID DRUMMOND: OK. Thanks. I think we have two more quick ones. Luciano, you're good? OK. We'll let Jose-Luis ask a question.
JOSE-LUIS PINAR: This is just to clarify two or three points. First one, coming back to the independence of this arbitration body: how do you guarantee the independence of the body from an economic point of view? I mean, who is to finance the body? Just to clarify this point.
Second one: is it a national, European, or international independent body? And the third one: is a sort of code of conduct necessary to apply and to specify the criteria you mentioned? What would be the binding nature of this code of conduct, and who is to write it, to decide the criteria in this code of conduct to be applied?
PATRICK VAN EECKE: Thank you Mr. Pinar. You all ask very difficult questions. But I think it's, for sure, the role of the Advisory Council to find proper solutions for that. I do believe, again, we should look at what's happening already in, for example, the domain name system.
There we see that there are two, three, even more independent arbitration mechanisms in place. One of them, as we know, has been initiated by WIPO, and it operates under the auspices of the World Intellectual Property Organization. So I can imagine that there could be existing not-for-profit associations who would say: well, we can take care of that secretariat, and we need to get funded, and we will make sure that the organizations who fund us do not have an impact, do not influence the decision making.
It should be transparent, of course; the dispute resolution mechanism should be very open, transparent, made public. And especially when you're working with hundreds of people, arbitrators from all over the world, you can make sure that you find that balance, because no one is able to influence all these people. And I think it could be one of the solutions to put forward.
DAVID DRUMMOND: OK. Well thank you very much Mr. Van Eecke. We are going to move to the second expert, and that is Mr. Stephane Hoebeke. Mr. Hoebeke has been a legal counselor at RTBF, here in Belgium-- we know that does not mean the right to be forgotten-- obviously, the public broadcasting organization of the French community of Belgium. He's been there since 1989.
He's also the coordinator of media education. He has a master's in law, is the author of several articles on media law, and has co-authored a book on press law and press freedom. Mr. Hoebeke, please proceed.
STEPHANE HOEBEKE: Thank you. [SPEAKING FRENCH]
INTERPRETER: Thank you for the invitation. I'm very proud to be able to speak here in this wonderful place. Of course, historically, this was part of the Belgian radio and television broadcaster, in both languages. And the building was sold some years ago, for financial reasons, and so there was a move to other facilities. So this is a historic, audio-visual place, and we can see how times are changing. Here we are talking about Google.
What about the right to be forgotten? Having listened to the speaker and the questions, I'm glad that I'm not the last person speaking today because we would, practically, have said everything already. It was very interesting to listen to because what we see is that, depending on the points of view and the legal angles, you can say one thing but you can say exactly the opposite as well.
So as the legal expert at the RTBF and a media expert, I'm both against and for, because I think the first thing to put a question mark against is not the consequences of the ruling, or whether a new body should be created. No, the first thing is to argue very strongly against this right to be forgotten itself. Why?
Well, it has more or less been touched on. It can be argued against strongly. There's no agreement on it yet. The Google Spain ruling is there, and Google, as an institution which respects the law, is supposed to respect such a ruling. But is there a direct effect or not?
Now what about national legislation? Does it have to be adapted? Well, we know that there is draft legislation for a regulation in this area, with the right to be digitally forgotten possibly being incorporated.
Now we dealt with this early on in Belgium, in relation to the judicial past of a person. Well, of course, this is a serious issue. If a person has been convicted and has paid his debt, has been sentenced to so many years in jail, and has come out of prison after that time, that person should be given a second chance and be able to go back into society.
So for a person's criminal past, according to some case law-- of course it's case by case-- there is a right to be forgotten; we say, OK, we're not going to dig that up again. Now that having been recognized, it was in relation to specific cases. There were often arguments against, because it also meant that a publisher, or some sort of press body-- the RTBF, for instance-- was involved in such cases.
Now a Belgian court decision applying the law on personal data gave a completely opposite judgment some months later. This was in Brussels, not in Liège. And the judge said, very clearly, that the law on personal data, Belgian law, was not applicable to newspaper archives or journalistic archives. This is also true at European level, because there is an exception for journalism at that level. Therefore, the court did not grant the request of the requester. And this referred to press archives.
Now we have this idea of journalism, and I must add to that the concept of freedom of artistic expression. These are two large areas of exception which we find in European legislation nowadays, already there in the regulations or laws on data protection. Now journalism can't be defined in a restrictive way.
There was a ruling from the EU Court of Justice a few years ago which says that this idea of journalism does not refer only to professional journalists. It's not restrictive; it more or less refers to anybody who's communicating information or ideas. Well, this is a very wide interpretation of the notion of journalism, and I agree with it.
And it's found, again, in Belgium-- for instance, in the arbitration court's ruling, which is the highest court in Belgium, which interpreted the notion of media-- particularly in the law on the secrecy of sources-- in a very, very wide way. It's not only the professional media which are covered by this, the written press or the RTBF or others, but simply a person who's surfing the web, someone who has a blog and gives his opinion or some information. Now this is a very important development in the law on freedom of expression: freedom of speech not simply limited to professionals but extended to anyone, any citizen. This is a development which is essential and which is there now in law.
Now, as we've said already, in the tension between freedom of speech and the right to be forgotten, there is literature that we can look at here, and there's Article 10 of the European Convention on Human Rights. Article 10, of course, has very solid and very ample case law, in which we can see indications, particularly referring to archives, digital archives. The "Times" ruling, in 2010, showed the importance of digital archives in relation to Article 10 of this convention. These archives are part of Article 10: it covers not only imparting information but also receiving it, if you want information.
To limit freedom of speech, to be able to limit press freedom or media freedom, a court would have to invoke a pressing social need. If you look at the Google ruling, it talks about a legitimate motive. It's more than that. In order to be able to restrict Google, or a press editor, or Le Monde or the RTBF, for instance, one needs an absolutely imperative, pressing social need.
As was said already, it's not somebody just waking up one morning and thinking: hey, I said this 10 years ago, I put this photo on the net, and I've changed my mind today. Perhaps I have a different marriage partner, whatever; I just want that deleted. No, it's not a whim. It's not simple regret or remorse. There has to be an extremely strong need. And it has to be stronger than freedom of speech and the other rights.
So this means there will be very, very few cases in which this can be applied. It's not that there is a door which is wide open. It's a door which is only very slightly open-- or, mainly, completely closed.
So one has to apply, and respect, a very rigorous analysis. Now I hope I haven't gone over time.
Two points in relation to Google. Of course, if I'm in the [INAUDIBLE] position of a content editor-- a person who designs programs, distributes information and ideas-- well, we certainly don't want Google on its own to decide, for instance, to withdraw an article from the RTBF, or to delete something from the archives or from electronic search, some program of the RTBF.
No. This would be a type of censorship, or self-censorship-- simply censorship-- and it would impinge on our freedom of speech. So we could not agree that Google would be a sort of private judge ruling on freedom of speech. That's a very important point.
And the last point I'd like to emphasize is media education. It's fundamental now, when we talk about the right to be forgotten, to reframe this so-called right in relation to the right of freedom of expression. We are in an information society now. People need to understand they don't have some sort of absolute magic right which allows them to place photos and information on the net and then, at any old moment, decide to withdraw it.
If people have published an idea or information, it has an impact. And we are in a society which is not silent. It's a transparent society in which people exchange. Thank you.
DAVID DRUMMOND: Thanks very much, Mr. Hoebeke. We have questions. Sylvie, why don't you start?
SYLVIE KAUFFMANN: [SPEAKING FRENCH]
INTERPRETER: Two questions. You say we don't want Google to decide on its own. Well, who else? Do you agree with Patrick Van Eecke's suggestion, or do you have another one to make? Secondly, do you make a distinction between text and images-- that is, the format one requests to be forgotten?
STEPHANE HOEBEKE: [SPEAKING FRENCH]
INTERPRETER: What I was saying about Google being the sole judge of the relevance, and so on, of information-- here I can't give you a ready-made solution. Reacting to the first presentation, I would not like to see the creation of some sort of arbitration or administrative body. We in Belgium-- and I think you do in France too-- have so many so-called supervisory authorities and bodies.
If we were to create a new one, even an arbitration body, which would decide on the relevance of all the requests-- if Google is already receiving tens of thousands of these, can you imagine? Who is going to manage such an organization, even if it's arbitration? Somebody asked: who's going to pay? To me, it seems something that cannot be implemented, quite apart from not being able to be paid for.
If Google is notified by somebody who says they would like a link to the RTBF removed, then I think it's perfectly normal that there be automatic contact between Google and the RTBF, or whichever other editor or publisher there is, so that they look at the matter substantively.
Of course, there can be exceptional cases. Should a correction take place? Should information be added? Because often, behind the right to be forgotten, there is in fact-- we see this in case law-- more a right to add information. Things have changed. And what people are really asking press bodies is to add more information, so that the original information remains but more is added, perhaps with an additional link.
And on the possible distinction-- the question about text and image-- well, my personal opinion, and I think case law also says this, is that no, there isn't really a distinction to be made between the two. Freedom of speech, of course, develops and evolves as technology develops. The fundamental principle of freedom and the right to expression holds, though, whether it's for speech, text, image, or sound.
DAVID DRUMMOND: Luciano. Then Jose-Luis.
LUCIANO FLORIDI: Thank you. Actually, I'd like to thank both speakers for their presentations. A quick comment, and a concern, on which I hope you may wish to comment, if you'd like.
The quick comment is the following. Normally, academics-- I'm not sure about politicians, but academics know that the best way of killing any project is to set up a committee to take care of that project. We do that on a regular basis to make sure that nothing happens and nobody can complain about it, because there's a committee. So sometimes, it's dangerous to establish a new body to supervise things, because that new body makes sure that things don't get anywhere.
If that is the background, the concern is the following. What would prevent a search engine, presented with the alternative of contributing to, paying for, and working with an advisory body-- a new body-- from opting for the opposite alternative, which is: let's accept all the requests that come in? It's just a handful of requests. 100,000, 200,000, half a million requests-- it can be done automatically at very low cost, and, well, Europe will have a gruyère of a web. So be it. They asked for it; that's what they get.
That's my concern, if the alternative is to set up a body. And I would like to ask your opinion on this particular problem, the financial side of it. Isn't it more convenient just to say yes to everything that comes in? And so be it.
STEPHANE HOEBEKE: May I ask you to reformulate your question in one simple question?
LUCIANO FLORIDI: So if the alternative is: please pay to set up a body that will take care of this, [INAUDIBLE], we can do this automatically. We can just say yes to every request that comes in. It's a click away-- a concern that was expressed quite rightly. You ask, you get. From today onwards, Europe will have name-and-surname searches which are lacking some information. It's not a big deal. Google doesn't lose any money.
STEPHANE HOEBEKE: The two solutions, to me, are not sufficient. The first one I discussed before. And the second-- yes, if Google only automatically says yes to every demand to withdraw, then it will cause prejudice to editors and to other people, not only the editors. [SPEAKING FRENCH]
INTERPRETER: I'm continuing in French, says the speaker. The question actually isn't limited just to the RTBF, "Le Monde," or whatever. Every citizen is involved. If the right to be forgotten is recognized-- if it is given to an individual-- then that would mean that other people who may have been part of the same situation are placed under an obligation to remain silent. They would no longer be allowed to talk about it.
So logically, one has to be talking about this in terms of principles. What was said exists. OK? Things change. Why would one want to behave as if this thing never happened?
We have to realize, of course, that there are certain reservations. The information that was published by a certain person, or by a professional journalist, of course must be correct. It must be solid. We're not talking about defamation or perhaps incitement to hatred-- those cases, of course, are different. In such cases, we could imagine that yes, it is important to withdraw certain information.
But I'm not talking about those. Those are the exceptional cases where there is actually an offense. And there is a very strong reason. And the Court of Human Rights would recognize that.
So this body-- I can't really see how it would work financially, either. How could it survive? Now another solution would be some sort of a purely automatic system-- well, without any human beings behind it. Both of them I think are going in the wrong direction.
I would prefer a system whereby, when Google is informed about a case that, after having been filtered already, looks pretty serious, it then involves the editor, if there is an editor involved. In that case, the press editor or publisher must be consulted.
JOSE-LUIS PINAR: Yes. If I understood you well, do you think that the same criteria on the balance between privacy and the freedom of expression should be applied both to a blogger, for instance, and to professional media?
STEPHANE HOEBEKE: To me, they are not-- [SPEAKING FRENCH]
INTERPRETER: There's no reason-- no really relevant reason-- today, in 2014, seeing how technological developments are heading, to say that we should treat a professional journalist differently from a blogger. Of course, assuming that we're not in the area of incitement to hatred or whatever.
Professional journalists, on the other hand, also don't have the right to write just anything at all. These laws apply to everybody, professional or not. Whether you're anonymous or a blogger, a Facebooker, or whatever, or a professional journalist, or an author, perhaps-- maybe a famous novelist-- everybody is under the law.
Of course, there are certain more fine details, but in principle, the laws are the same for all.
LIDIA KOLUCKA-ZUK: I would like to ask the following. Do you see, in addition to the cases you already listed as exceptional, other cases that we can consider as presenting a need for removal-- for links to be removed-- stronger than the right to be informed, the right to information? For example, if a request is submitted by an adult to remove links to information he or she posted in the past, as a kid.
STEPHANE HOEBEKE: Yes. [SPEAKING FRENCH]
INTERPRETER: So this is really where we're dealing now with judicial questions. There are various legal norms, like a statute of limitations, amnesty, or even rehabilitation-- these are legal norms in democratic societies. But these don't mean that the past is deleted. The past continues to exist.
But coming back to this past-- if today some information were to emerge which actually comes from five years ago-- perhaps a message posted on Facebook by someone who is now a government minister. The fact that he posted messages which are pretty dodgy, as you could say, even if this comes out five years later, can still be relevant.
Someone who was acquitted-- the fact that they were acquitted doesn't mean that they can delete what happened to them. I mean, they were in court. They were acquitted. The information is there. Why would you delete the possibility to research and find out that this person once had to face trial?
Examples. For instance, if your employer looks you up and says, well, I can see that you were sentenced to three years in prison on a certain date. If the employer confronts you with a conviction, or even an acquittal, and uses that in order not to give you a job, there is the problem. And this really is a problem of educating employers, because this is not really allowable. Here, the person could take the employer to court, because it is a form of discrimination.
SABINE LEUTHEUSSER-SCHNARRENBERGER: Yes, the ruling is only dealing with the removal of links. You mentioned-- or your concern is censorship or something like that. My question is, why is the removal of a link a kind of censorship? Because the content still remains.
STEPHANE HOEBEKE: [SPEAKING FRENCH]
INTERPRETER: Now, Google is in a dominant position. And if it's Google deciding to de-index articles, to delink, then of course this attitude has an impact on the flow of information. This is just the way things are.
Now here perhaps we're getting outside the substantive topic. If you are a press editor, or an audio-visual editor or publisher, one could then perhaps say that censorship is taking place, because Google's activity is creating links. In Europe, this is based on Article 10 of the Convention; in America, I think, it's the First Amendment of the constitution.
Google has the right to impart information, and we have a right to receive that information. If we are not receiving the information, this could be a violation of our right to receive information.
DAVID DRUMMOND: OK. I think we'll keep going. We're over time. Do you want to-- very quickly.
FRANK LA RUE: This will be very quick. I think it's important to follow up on the press issue, and precisely following up on the earlier question-- two things. I think the press plays a very important role, and it falls under freedom of expression. But it certainly is not the only sector that enjoys freedom of expression or access to information.
One could argue the same thing for historians and researchers and writers, who are not necessarily journalists, but who are documenting history or documenting events. And the question would be precisely that. If just taking away the link is making access to written information more difficult, isn't that a breach of access to information? Should everyone have easy access to all historical information?
And just as a footnote, I found it interesting that last night, in the news on television, I saw that in today's Congressional elections in the United States there are four Congressmen, or candidates for Congress, running who have criminal records. And that did not affect their possibilities of election or reelection at all. It seems that information is important. And oftentimes people, as a matter of public interest, would rather have the information than erase it, no?
DAVID DRUMMOND: OK. I think we'll take that as a closing comment to that session. Let's move on to the third expert. Thanks, Mr. Hoebeke.
The third expert is Mr. Karel Verhoeven. Mr. Verhoeven is the Editor in Chief of "De Standaard," a leading Flemish newspaper. Please proceed.
KAREL VERHOEVEN: Mr. Chairman, members of the Council, thank you very much for the opportunity here to present our view in this delicate matter. I will speak to you from the side of practice-- practice of publishing.
And I would like to focus on two issues-- namely, first of all, the relationship between us publishers and search engines like Google, and the very different responsibilities that we have towards the ultimate source of information, which is our archive. And secondly, I'd like to tell you a bit more about the policy that we have developed about removal of information from our databases.
We are quite a bit more pragmatic in this, certainly in comparison to Mr. Hoebeke, and let me tell you why. Cultural entrepreneurs throughout Europe like to talk about the cultural exemption. And I think that the notion of cultural exemption applies well to the kind of position in the information market that we as a newspaper hold in our linguistic community.
So as you said, we are the leading quality newspaper in the Dutch-speaking part of Belgium, which has 6 million inhabitants. We have a venerable history, but we are also thriving. Our circulation keeps on growing; we're now selling about 100,000 copies daily, and we are about double the size of our nearest competitor. Digitally, too, we're doing well, taking more than our fair share of the market.
Now, why do I tell you this? Because we hold a slightly different position towards Google than, for example, an English-language newspaper would. When readers are looking for specific information, they will not only Google, but they will also try our database directly. And so to many, our archive serves as a sort of direct source of information. So whatever Google decides on the removal of links, we, for the time being-- it may change quickly-- but for the time being, we do not have such strong feelings of infringement on our freedom of expression. Our main worry lies elsewhere, and that is the integrity of our archive.
Now, given the position that our news medium holds in our linguistic and intellectual community, changing that archive quickly feels like really amending history. And that's not nominal; it's real. Because we were one of the first newspapers in Belgium to digitize our archive, it's quite large. And it's also of critical value to researchers and to our readers. So to us it is also a matter of principle that we do not accept the destruction of certain information in our archive, and that we go to great lengths to defend it.
But of course, we also receive requests almost daily to remove information from our archive. And we do acknowledge the right to be forgotten. We do acknowledge that. Because this right is called in Dutch [DUTCH], which is the literal translation of the French term that has been used here, [FRENCH], which was used at first. And the correct translation of this [DUTCH], [FRENCH], is actually the right to oblivion.
Now, oblivion cannot be enforced by law, obviously. Law cannot dictate to us to forget something. But we feel that a more correct approach is to redefine it as a right not to be mentioned anymore, which then essentially comes to us, or can come to us, as an obligation to delete or destroy information.
Because we are not blind to the fact that thanks to very powerful search engines such as Google, our archive is not just an archive. It's a continuous publication. And the time lag is wiped out. So in new circumstances, old information can be more damaging than it was at the time of publication.
We issued our first policy concerning this issue in 2008. And seven years later, we do not have tailor-made answers. There is not a limited set of types of requests; there are always more exceptions that we have to take into account carefully.
Let me tell you about the set of rules that we have developed in the past two years. And let me also tell you that the Council of Journalism, a self-regulating body created in 2002, has issued a set of norms and regulations. But they are so vague that they hardly serve as an instrument for us. So we have been developing our own practice.
The first rule in that practice is that if the information was erroneous, it obviously must be corrected. [FRENCH], as they say in French. Second, if new information comes to light that seriously changes the original article, we have to edit it to add the new information. So the first article still remains relevant in the chain of information, in the chain of events, but we have to link it, at least, to the articles that follow. Or if there's substantial new information, we sometimes add it directly.
The classical example is the hearing in a criminal case where the public prosecutor demands a severe punishment-- a large article-- but then there is no report, or only a very small one, on the person being acquitted. We have to establish the links. Most importantly, most of our cases concern anonymity. And that's a very interesting one, because our rules as to the explicit mentioning of names have changed over the past few years.
Our internal guidelines now strictly follow those of the Council of Journalism, whose rules have also changed. The rules now are that-- certainly with minors, but also with victims and suspects-- we grant anonymity. And when an older article in which we did not grant anonymity is questioned, and the current guidelines would now prevent us from mentioning the name, we erase the name, using just the first letter of the family name, or delete the place of residence, and make sure the person is anonymous.
The fourth case-- and that's the most difficult one-- is where the person concerned has changed his or her view of life, philosophically or politically. For example, a beauty queen who is hampered in her professional career by references to beauty contests asks not to be mentioned any longer. Or someone does not want to be associated with a past relationship, or no longer adheres to a past faith.
For example, we recently had a claim by someone who wrote an op-ed article about sex. He was a monk in a Buddhist community. He left the community and became a civil servant, which implies confessional neutrality. He agreed to the solution we proposed, which was that we omitted the reference to his being a monk and his adherence to the Buddhist faith. On the other hand, we refused to remove references to someone who stood for election and now wants to erase his political career.
And that brings us to the point of what is private and what is public, and what makes someone a public person. First of all, obviously, playing a role in public life-- politics, the economy, the cultural and social sphere. We are most strict with people who assume or have assumed public office, and especially those who stand for public office: our politicians.
We are also strict with regard to persons who are expected to set an example of virtuous citizenship. And I know that it might be difficult, but I'll give you an example. The first time we were confronted with an international firm specializing in such cases, [INAUDIBLE] International, it concerned an article from 2002-- the case came to us much later-- regarding the conviction of a well-known university professor and lawyer, [INAUDIBLE], in Belgium, for forgery and other financial crimes.
We answered that the combination of the severe facts for which he was sentenced and his status did not allow us to withdraw the article. We proposed that we could eventually revisit the decision if his fine had been paid and all his victims had been compensated. We did not hear from that case again.
So finally, being known, being important, is not in itself sufficient. Let me give you one more point, because I see that I'm running out of time. Criminal cases are the most important and most impassioned; we already dealt with that. Let me give you three reasons why we do not erase information from criminal courts.
First of all, when serious crimes move public opinion. Secondly, when the cases have not been entirely dealt with. Or third, when we're dealing with swindlers, people who commit numerous crimes and will likely be committing new ones. But-- and this is an important one-- once a case is closed, once someone has served a prison sentence, we are willing to anonymize cases to a large degree.
Let me come quickly to a conclusion. Both Google and the newspaper media share a concern, which is to safeguard the right to information. That's obvious. But the ruling of the European Court obliges Google to block certain gateways. Up until now, the information itself has hardly been banned. And we are aware that some are trying to defend the possibility of also removing the content itself, not only from the internet but also from the original sources. And that would mean the loss of integrity-- of our journalistic integrity.
And that is why we are actually happy with the arrangement whereby we disclose our archives to everybody, but the entrance to them can sometimes, as a result, be narrowed-- for example, with regard to Google. Thank you.
DAVID DRUMMOND: Thank you very much, Mr. Verhoeven. Do we have questions from the panel? Who wants to start? Frank, go ahead.
FRANK LA RUE: Thank you very much. This allows me to go back to exactly one of the questions I was trying to ask in the past panel. I find there's an argument to be made that it's only the link to information that is being eliminated, and not the sources of information. But I think it will be a matter of time.
If we allow the link removal and make it more difficult to research information, when will this be applied to other factors of information, given that internet technology is moving fast enough that everything will eventually facilitate access to the original sources, and the fine line between publisher and intermediaries will become less defined? Isn't there a feeling of risk?
And secondly, the cases you accept for oblivion-- and I like the term-- seem to be correct. But again, I'm a bit worried about the definition of a right. What you were mentioning are, for me, permanent obligations. For instance, the non-mentioning of children and adolescents is an obligation to protect their privacy, and it should be by law in most countries. Therefore it's not a new right; it is something that has existed all along. Or it's an ethical question, to publicly correct erroneous information. And in some cases, there can be criminal intent in presenting false information or false images.
So all of those issues are correct, but they fall into either criminal law, child protection, or the ethics of journalism. It doesn't seem to me that they fall into the definition of a new right, of something new-- except, probably, the last one, on whether or not to handle those criminal-law cases publicly. I don't know if you would agree on that.
KAREL VERHOEVEN: Thank you very much for this challenging question, because it goes to the heart of the matter. I think we're dealing here with an extension of a previous obligation-- speaking from my point of view, an obligation to report correctly and to minimize unnecessary damage. And that's the spirit that we apply to these matters. We often deal with them with our legal department together with editorial staff, asking: what are we going to do with this? Because it's halfway an editorial choice as well.
With regard to your first remark, the risk of removing links: I think there is no such thing as objectivity in Google. Change the algorithm, and you get different results. And it's often being manipulated. So the right to remove links, or not, is often talked about as if Google were a fixed body with fixed results. But Google changes all the time, whereas your archive is something fixed and non-changing. I think there is a major difference between the two.
DAVID DRUMMOND: Jose-Luis?
JOSE-LUIS PINAR: Yes. Don't you think it would be necessary to have specific criteria regarding information on children or minors and handicapped persons? Minors, children, information about children, and information about handicapped persons-- that specific area.
KAREL VERHOEVEN: Not more specifically in our archive than in our daily practice of reporting. So obviously, we deal with those categories differently-- certainly minors in criminal cases, very much so. But not differently with regard to the way we deal with our archive than the way we deal with our daily reporting in the newspaper.
SYLVIE KAUFFMAN: Have you had cases in which requests were rejected by you and the requester then went to the search engine, for instance, to have a link removed?
KAREL VERHOEVEN: We often see that applications are being made both to us and to Google.
SYLVIE KAUFFMAN: And do you know about the outcome?
KAREL VERHOEVEN: I haven't extensively researched it. We sometimes wonder about the criteria that Google uses to block URLs. But we haven't really gone into the matter systematically. Sometimes-- for example, during the last few weeks, Google blocked a URL to a small photograph on the regional page about a car crash. Non-identifiable subject, non-identifiable car-- it would not fall into our category for being blocked, yet Google did block it. So there certainly is a difference in practice between what Google does and what we do. And it will lead to more clashes, and Google will serve as a sort of court of appeal to our decisions, undoubtedly, or vice versa.
PEGGY VALCKE: Yes. Thank you, Mr. Verhoeven. Thank you very much for your interesting views. I would like to understand a little better how you deal with this in practice. So I have a couple of very concrete questions.
The first one, actually, goes back to the example you just gave. You said, well, we see in practice that sometimes links to certain websites have been removed from Google's search results, whereas under our policy, these would not be removed. But probably the person who made that request has no problem if people go to your archive and search about, let's say, car accidents in a particular area or region, but does have a problem with the fact that this comes up if his or her name is typed in. That person doesn't want to be confronted with that anymore. So do you agree that there could be different situations, different perspectives, in which you're OK with information coming up in one context but not in another-- i.e., in this case, when somebody types your name in Google Search and this comes up?
KAREL VERHOEVEN: [INAUDIBLE] point of view, I would certainly agree. Because conducting a search in Google is a different intention than conducting a search in the archive of a newspaper.
PEGGY VALCKE: So would you then also agree that you can't just copy-paste the criteria and policies that the press have developed for their online news archives to the search context?
KAREL VERHOEVEN: I do agree. It's a different context, yet we could inspire each other much more than we do now.
PEGGY VALCKE: Yeah. I agree with that.
And if you say you sometimes anonymize articles, remove certain information, how far do you go? Are you talking about a name, or also about streets?
KAREL VERHOEVEN: We're talking about a name. But recently-- just last week-- there was a case where an entrepreneur tried to erase his name from at least 20 articles on our website and in our newspaper. Obviously, we did not go into that. But you see how the more publicity there is around this new rule, this possibility to erase information, the more people come up with requests. Even if they have been in the media for five years, they will request that everything be removed. So it could lead us very far, and that's a very worrying evolution.
PEGGY VALCKE: And if you remove information from the publicly-available pages, do you somewhere keep the original article?
KAREL VERHOEVEN: We always have our original, the PDF files of our original newspaper. What we do not do is remove links ourselves in our archive. We never propose it, and we never do it. Because then, basically, the information becomes unfindable. We do not have the practice of 10 years ago, where a historian would have many different skills and ways to look for information. Now you conduct a search. And if you remove your information from the search of your archive, it's unfindable and hence nonexistent information, even though the original information will always appear in the PDF of the newspaper.
But for example, with web publications, you have a different situation.
PEGGY VALCKE: OK. And then about-- if I may continue-- about the relationship between the publisher and the search tool. If you are notified that certain links to some of your web pages have been removed, what was your internal reaction at first? Did you have the same reaction as many newspapers had, especially in the UK-- we have to be consulted, and they can't do this without us?
KAREL VERHOEVEN: Well, as I said at the beginning, it's slightly more pragmatic. We would like to be consulted. But on the other hand, we try also to be pragmatic in the way we deal with extra committees and extra workload. It becomes a hefty workload for a newspaper.
And we try to go into the matter in each individual case. So often, I take about two, three, four cases a week that come to my desk. I'm not willing to take 25-30 cases a day, because then I'm dealing more with history than with contemporary news. So we have to be quite pragmatic on those grounds.
Yet from the point of view of rights, and also because we deal with information on more or less the same grounds, it would be a good thing if Google would ask us for advice. Because we know what we're dealing with. And often, it's impossible for Google to know what exactly is going on.
PEGGY VALCKE: And then a very final question-- your policies around anonymizing information in online news archives-- are these publicly available somewhere? Or is it internal policy?
KAREL VERHOEVEN: It's internal policy, and we try to make it publicly available, yes.
PEGGY VALCKE: OK, thank you.
KAREL VERHOEVEN: Yes. We try to communicate explicitly with the reader what the engagement precisely is with the information and how we deal with it.
PEGGY VALCKE: Mm-hm. Thank you so much.
DAVID DRUMMOND: Luciano, you have a quick question.
LUCIANO FLORIDI: I'll try to make it quick. Do you think that the ruling should apply to all kinds of search engines, including search engines that run on a single website-- in your particular case, or-- well, without mentioning your particular publication-- say, the search engine of the website of the BBC, for example?
KAREL VERHOEVEN: That's a very tough question. I think, as we said earlier, that conducting a search in Google or on another search engine is a different type of activity or a different intellectual quest from searching in a strict area where, for example, our archive is for those who pay for the newspaper. So you are already in a different, more secluded environment. And it would be very far reaching if search in a secluded environment, such as a newspaper, would fall under the same rules as a general search, such as Google.
DAVID DRUMMOND: OK. Well, thank you very much, Mr. Verhoeven. I think we're going to move, to make sure we stay on time, move to our final expert. I should note also that, Mr. Nemitz, we welcome you to the meeting. It's good to see you. Thanks. We'll be hearing from you in the afternoon.
Our final expert for this session is Mr. Robert Madelin. He is the director for the Digital Agenda for Europe at the Directorate General for Communications Networks, Content & Technology of the European Commission. He has been a British civil servant since 1979.
Robert has served in the Commission since 1993. He first worked on trade and investment policy, from 2004 to 2010 served as director general for Health and Consumer Policy, and since then has been working in his current portfolio. Robert, please, you have the floor. Thank you.
ROBERT MADELIN: Mr. Chairman, thank you very much. I will leave to my more learned friend and colleague, Paul Nemitz, the detailed legal exegesis, as far as the European Commission is concerned. But I'd like to talk a bit from practice, as Karel Verhoeven was saying, as a regulator but also thinking about internet governance. Because I think that what's most welcome about your company's initiative is not just to talk about the ruling but to raise issues which are rather broader and of greater impact than only the ruling or only one company.
And I think, therefore, for me, it's important to think of the Google case as an example of what a responsible, as well as a law-abiding, internet actor should think about doing, and to think about it also a bit as a locus of deliberation about the governance of knowledge in the digital age. Let me lay out a couple of principles, and then I'll talk about governance. The principles, I think, that should govern a response to a ruling such as this are to be holistic, symmetrical, and future-proof.
If I take holistic and symmetrical together, the right to be forgotten has to relate somehow to the right to be found. And in Europe, Google will find as many people saying, as some have said in this hearing, why does the algorithm squeeze me out, as those who say, why does the algorithm find me. And the needle is moving in our societies as to where we want that balance to be struck.
Similarly, there are as many people complaining that they can't use pseudonyms on Facebook as people whose wish to use only a pseudonym is overwhelmed by having their real name in Wikipedia, et cetera. So the issue of identity, anonymity, pseudonymity, and naming is, I think, a much more open space than it's been in Western societies in the last 1,000 years. Similarly-- and this, I think, is a very important link-- what's the link between our response to a right to be forgotten, our management of terrorism and hate speech, and our management of access to indecent material? I personally think that we will get to a better place on internet governance when these issues are seen in a similar way. And you may well find during the five years that have just started of the Juncker Commission that when we come back to look at e-commerce, and mere conduits, and the legal framework for these things, we try to look at them together.
Future-proof, for me, means especially thinking about the governance of knowledge. And as a sort of partly amateur historian, it troubles me to think that databases are fit for purpose if they are more hidden than the average data in the world. I mean, I think there's a problem here: if a book is locked away, as in "The Name of the Rose," it still exists. Yes, but I can't find it. If I can't find it, how does it exist?
The whole future of a big-data world is going to be confused, to say the least, if we don't understand what we find and what we don't find. So transparency of intention is going to be extremely important. Explicit statements, at least in this area, about what the applied policy will be, so that those knowing what they're looking for understand what the results don't tell them about available knowledge.
If I then say that future-proof is going to be difficult, because there are geographical and chronological variations in our expectations of the internet, firstly, the statute of limitations as a matter of law, varies jurisdiction by jurisdiction and in societies. We've gone all the way from the "Mark of Cain" and the "Scarlet Letter" to the right to be forgotten. The needle is moving.
And I think we are going to have to have a more stable, broader debate about where we want the needle to stop in the short term. It's never going to stop moving. But right at the moment, it's much less clear than it was even 5-10 years ago. And in the areas of data management that we get asked to look at, what's the difference between protecting a neighborhood and victimization of people? What's the difference between free speech and trolling on the internet? We actually don't have the answer.
In the particular field of media freedom, where I couldn't say it better than my neighbor, I think we're going to see a continued debate about whether functional media freedom means the right to print a newspaper and put the facts in the archive forever, or also some expectation that those facts will be accessible through the normal means of access whereby citizens expect to find out what's going on in the world. And the meaning of freedom in this context is problematic.
If I then talk a bit about governance, I think that we're facing in this case what the G8 in Deauville in 2011 said we should try to avoid, which is a global issue being managed jurisdiction by jurisdiction. And so I feel the pain of companies who have to manage that. But we failed as governments to achieve any overarching framework of rules, which gives such companies clarity.
I think the minimum we expect is obedience to the law in every jurisdiction. I think in addition, if a company is law-abiding but also responsible, there must be some explicit known corporate standards below which the company will not go. So mere law-abiding behavior in each jurisdiction is not enough. And I think in the last 50 years, if you look at the OECD norms for multinational corporations, it's never been thought to be enough since the establishment of the UN.
So it won't be enough, in my view, for Google to say, as a company, we obey the law in every jurisdiction. Firstly, there will be an expectation, as under the MNC norms of the OECD, that if in jurisdiction A, you're setting high norms, you don't then set low norms in a weaker governance jurisdiction somewhere else in the world without at least making it very clear what your own bottom minimum standards are. I understand that's challenging. But I think that's the expectation against which multinationals are running.
I don't quite agree with Mr. La Rue that state solutions are needed. I think in the area of journalism, in the area of advertising, there's a well-established set of co-regulatory, self-regulatory mechanisms in the toolbox. And I think they are compatible with arbitration-like or consultative mechanisms.
And I think that one way of reconciling global operation with variable territorial expectations is an arbitral process with a consultative element, whereby disagreements or difficult cases go into a process that acquires expertise. But that expertise is rooted in the territorial context from which the complaint arises.
And the final point I would say is, how would you keep all of that honest? And I don't think you have to go to the G8 or the UN. I think the Internet Governance Forum is an admirable locus, where the collective of global internet actors could, from time to time, expose their current experience and behavior and seek feedback from the multi-stakeholder community. Thank you.
DAVID DRUMMOND: Well, thank you very much, Mr. Madelin. I'm sure we have questions. Who'd like to go first?
SYLVIE KAUFFMANN: Yeah, I can.
DAVID DRUMMOND: Sylvie.
SYLVIE KAUFFMANN: You mentioned the rules that exist in the world of journalism and the toolbox that has actually been developed-- we've had a few examples before. Is this an avenue you would like to see us going down? Do you mean that publishers would definitely have a stronger say in this process? Or that search engines should take example from these rules, or out of this toolbox?
ROBERT MADELIN: I think I rather meant the latter, that the ways in which journalists and advertisers self- and co-regulate in mature jurisdictions like European Member states can be an example that could be followed by a self-appointed or self-designated collective of big internet companies. That would be a very cost-effective way of creating a framework of accountability above corporate, autonomous responsibility but below any call from companies such as Google for more laws everywhere. But I did not mean that journalists, publishers, as a class, should particularly sit in judgment on search as an activity.
SYLVIE KAUFFMANN: Are you comfortable with the fact that search engines should have the role that has been given to them by the courts-- the European Court of Justice?
ROBERT MADELIN: I think it must be a philosophical question. Because no high court could define the responsibilities of actors in the European jurisdiction. Philosophically, however, I believe that the court has done no more than something which thinkers around the internet economy have been calling for for some little time, which is an explicit strategy of responsible behavior on the part of those who own power in the internet space.
Because it's not just a philosophical vision. My profound policy conviction is that if the powerful non-state actors do not satisfy public opinion that they are essentially doing the right thing, there will be increasing calls for territorial legislative intervention, which will mess up the global internet economy. So that's a philosophical and a policy statement, because the law, of course, is the law, as Mr. Nemitz will explain at length after lunch.
FRANK LA RUE: Yeah. I'd like to clarify and to make a question. I didn't say that all forms of arbitration have to be institutional or state sponsored. The point was the independence from the search engines or from anyone who will be defined by the result of the arbitration.
And in that sense, will you still uphold the final possibility of the last resort to appeal to a court in any possibility? So my question would be, I think the possibility of a private arbitration is fine, as long as it is truly independent, which was a question. And therefore, how to build this arbitration process should be a multi-stakeholder procedure.
And I do believe in self-regulation. Even in the question of press issues and press ethics, many journals have an ombudsman, for instance, of their own or have their own form of verification as a private way. I think this is a good practice. But there has to be clear standards, and there has to be a clear procedure, and in this case, an independent procedure.
ROBERT MADELIN: So just to comment, I mean, I think that by definition, the law is the law and, therefore, provides the backstop, if you can persuade the court that it has jurisdiction in your case. In this area, it seems to me-- though I'm not a lawyer, so I look nervously at Paul as I talk-- that the court has invited those who publish information to think about a bit more than defamation and libel and to look at something else. OK. So we now learn that there is a lower threshold of discomfort for the individual above which they can then go to court.
I think that the experience of self-regulation, at least as we've lived it around advertising in Europe-- so it's a more regulated area than journalism-- suggests that if one company says, trust us, we have an ombudsman, that creates a lower degree of objectivity-- I won't use the word "independence"-- than if you have several companies, or in a territory you have a process which is set up and independently administered by the self-regulatory organizations of the European Advertising Standards Alliance. So we have independent ring-holders. But country by country-- and over time-- you will see in Europe a trend towards increasing the numbers of non-industry insiders in the arbitration conversation. And I think that's a good thing.
If people are interested, there's an 11-point, how-to-do-it set of commandments which the outgoing European Commission adopted as what makes self-regulation trustworthy. So there's a good academic body of knowledge, is all I'm saying. And the notion of independence which is proper for a judiciary may be a bridge too far in terms of feasibility and cost for a self- or co-regulatory mechanism.
PEGGY VALCKE: Mr. Madelin, I was very happy to hear you arguing for a holistic approach. And I was wondering to whom this was actually addressed? Was it addressed to market actors or towards the legislator? And if it's the latter case, are there any initiatives that are planned by the European Commission? And will they affect the ongoing data protection reform? Thank you.
ROBERT MADELIN: So your last question, Mr. Nemitz is the person who can answer. I mean, to be frank, this is day two of the first working week of the Juncker Commission. So we can, in practice, only speculate-- and even if we knew the answers, we couldn't tell you.
In part, yes, the bureaucrat is being solipsistic. So I hope that we will take a holistic approach. But I actually believe that it would be a big missed opportunity if a company such as Google got too much bedded down in the legal department's compliance framework and said, how can we show that we comply vibrantly with this ruling?
I think the questions about what are our global standards as a company, and how does this interact with the other areas where we are being challenged to be responsible, seem to me important. And I think-- I don't sit on the board, of course-- it seems to me it would reduce the transaction costs of improving the internet world if this is taken as an opportunity to think about solutions which could be applied in a broader sense. If, for example, we invented an arbitral solution with consultation and territorial rootedness as a response to a right to be forgotten ruling, it would work for beheading videos. It would work for indecency. It could work for other things.
DAVID DRUMMOND: Other questions? OK. Well, I guess that will conclude the first session. We were scheduled for a 30-minute break. Let's try to get back here, say, five after the hour, so about 25 minutes. And then we'll move forward.
For the audience, this is a good opportunity to submit some written questions for later after the second panel. Thank you very much.
DAVID DRUMMOND: Let's get started again. We have a second panel of experts. And the first expert is Mr. Hielke Hijmans. Mr. Hijmans has for many years worked at the office of the European Data Protection Supervisor in Brussels and, in that capacity, has been responsible for legislative opinions of the EDPS, the cooperation with national data protection authorities, as well as for working on the public access to documents, and the representation of the EDPS before the Court of Justice. He's also worked on the reform of European Data Protection legislation and on the relations between the EU and the US in these areas.
Mr. Hijmans is currently on sabbatical. And he's writing a thesis. And his research focuses on privacy and data protection and aims to encourage thinking on how to give governments better instruments to effectively protect constitutional values in this era of globalization and continuous innovation. So Mr. Hijmans, you have the floor for 10 minutes.
HIELKE HIJMANS: Thank you very much. I thank you very much for the opportunity to say some words to this panel. I would like to mention nine issues which I think could be useful, hopefully helpful, for this panel. I'll start quite generally and then come up with some more practical suggestions.
My first point-- privacy and freedom of expression remain equally important. This is background which should be clear, in my view. I recall that this is what the Court of Human Rights in Strasbourg said in the Springer case a few years ago, declaring that the right to privacy and the freedom of expression deserve equal respect.
In this case, the court states that, as a rule, in a specific situation, the right to privacy and data protection will overrule not only the economic interest of the search engine but also the interest of the general public to receive information. The argument I will develop is that despite this wording of the court, privacy and freedom of expression remain equally important. And I think that could be good guidance for the work of your council.
Second point, the changed reality. The court takes into account a changed reality in the information society that impacts privacy and data protection-- in particular, what the US Podesta Report, written for President Obama, calls "the persistence of data." Data, once created, is effectively permanent. Moreover, it's ubiquitously available, so it can be accessed through sources other than the website of the publisher. Nothing new for you here.
But it's important that everything stays. And what I think should be addressed, re-addressed-- and that's for the Court as well-- is that data can become more damaging over the course of time. Another thing I would like to emphasize is that data protection law never meant for data processing or publication to go on for an indefinite time. It's explicitly said that data are published or processed as long as they are relevant. If they're not relevant anymore, they shouldn't be there anymore. That's the idea. Of course, that doesn't work exactly anymore. But that was the idea.
This reality impacts the balance between the fundamental rights-- my third point. This is clear in the case of Mr. Costeja. It's clear that in his specific case, his privacy right was at stake. What I think, more in general, is that what the Court does is basically try to restore a perturbed balance, taking account of a changed reality-- a practical way of ensuring that privacy and freedom of expression can both be enjoyed. It's not, as I heard before today, a right to rewrite history.
My fourth point, the effect on the freedom of expression. We all know this freedom is guaranteed by the Charter of Fundamental Rights of the Union, Article 11. And search engines have, as mentioned everywhere, an important task in developing an information society, as a condition that freedom of expression can really be exercised, can be enjoyed.
I argue that since they have this important task, they also have a task in ensuring that the limits of freedom of expression are respected. The removal of the link gives effect to this task. And how should we do it in a practical way? Not with great words talking about the right to be forgotten, but in a practical way in which all the different elements can be reconciled, protecting individuals where needed.
And a point I would like to make here, also in reaction to what I heard before: this can go far beyond what [INAUDIBLE] are illegal. Other information must also be deleted at a certain moment because the information is indecent, or simply because it's embarrassing for the person. If the information is embarrassing, that can, under certain circumstances, be enough to delete it.
Balancing is needed. And that's not so easy. We all know that. And that's why we are here. And I think this balancing is also far more difficult than the balancing which is foreseen in the Data Protection Directive in the famous Article 7f to which the Court refers. This is more difficult.
I'll give you a few practical ideas. In the first place, a public figure is mentioned but, in my view, should not always be exempted. We had an example here at the European Parliament which led to a court case, where the participation of members of the European Parliament in a private pension fund was at stake. And there, you could argue that because there's such a clear link between what they do as members of Parliament and their participation in this pension fund, there is a right of the public to know. But that could be completely different when it is about the health problem of a member of the European Parliament. I think, there, the balance is different.
The other thing is, what are public figures? I think it could be good advice for your council to limit the group of public figures a bit. Because in the current internet age, everyone is, to a certain extent, public. If you google anyone, you can find their name. That makes them already public to a certain extent. I exaggerate, but that's the point I would like to make.
I also saw in the questionnaire you sent the question of time limits for which data should be available. Well, that's a difficult question in general, I think. But maybe some guidance could be given-- one, two, three, five years. I find it too difficult to really give an idea. But I think that is something you should look at as well.
A good point was made before about historical records. Historical records should normally remain, but should not always necessarily be directly accessible through a search engine. And the last point, which I take from the earlier discussion on practical things: when it comes to criminal cases, acquitted persons really have a right to be forgotten.
[AUDIO OUT] although I don't want to mention that there's a right to be forgotten. That's what I want to say.
The next point-- is this balancing a task for a search engine? To a certain extent, that's a hypothetical question, because the Court said it is a task of the search engine. But I argue, in addition, that it is also a task for governments to give guidance. And I think the data protection authorities and maybe, at a certain moment, the European Commission or others have a role to play. However much I like the idea of dispute resolution, I think a private system of dispute resolution in cases like this is not workable, not sufficient, because of the public element, which is part of this whole discussion.
My seventh point-- transparency towards the web publisher, another point which was raised. If I understand correctly, the web publisher is informed by Google of every request. In general, I think that is a wise approach.
Also since, in many cases, a web publisher can express a view on the public interest of the link. Or, alternatively, the publisher may decide because of this complaint to end the publication itself-- in general, I think, a good thing. But there may be circumstances where informing the web publisher would jeopardize the interests of the person asking for the delisting. So my advice would be: no automatism. Practically, maybe a way forward would be to ask in the request form for the requester's permission to forward his request to the web publisher. An idea.
I still have one minute, I think. My eighth point-- I'm almost there-- the territorial scope of the decision. That's another issue which has been discussed a lot. I would say it is reasonable that, in principle, only EU residents can ask for removal-- but for a different reason than that the Data Protection Directive would otherwise not apply. Everyone can apply according to this directive. But normally, I would say that people outside of Europe don't use the service of an EU establishment of Google. And solely for that reason, they are outside the scope of the decision.
Another issue in this respect is the approach that a URL would not be deleted from Google.com. And here, I have serious doubts whether that's a good way forward. This approach, in my view, does not give sufficient effect to the judgment. The "effet utile," as we call it in European law, of the judgment is not respected in this way, I think. To put it in other words, the aim of the judgment is circumvented by giving such limited scope to the delisting, where you can so easily find the names anyway. That's something I would like to put up for discussion.
My last point-- and that's in four seconds-- guidance. Yes, I think guidance should be given. And I hope that that will be done in a sufficient way, first by the Article 29 Working Party, probably, but also at a later stage in law. Is the new data protection legislation the right place for that? Maybe it is. But if we start discussing everything about the right to be forgotten in that framework, I'm afraid the regulation will never be adopted. So from a pragmatic view, maybe don't mix these two things.
DAVID DRUMMOND: Great. Well, thank you very much, Mr. Hijmans. Do we have questions? Let's start with Sabine.
SABINE LEUTHEUSSER-SCHNARRENBERGER: Yes. Thank you very much for your interesting statement. I can say I probably share a lot of your opinions. But I have a question.
You mentioned [INAUDIBLE] the group of public figures. That is really a problem. How to limit the group of public figures? Have you any idea for criteria? Politicians, entrepreneurs, who else? I think it's difficult to have a list with yes and no-- this is a public figure, this is not a public figure. But perhaps you can give us some ideas, some criteria.
HIELKE HIJMANS: Well, I think the first group you should include is, in any case, people who stand for public office, who stand for an elected office. That could be the politicians. The second group could be, possibly, persons who are well known in the media in their country. That could make sense.
For other things, the problem is-- indeed, it's very difficult. Because normally, you would do this in a very context-specific way. You would say, someone is only a civil servant, so he's not a public figure. But a civil servant, at the same time, can also be an important football player. Let's put it like that.
So it is quite complicated to make a list. But it should have to do, I think, with the public's right to know. And there, you come close to what I would say are the heads of state, et cetera, and people who have more important political functions, but also in big enterprises. I would put it like that. The right to know should be decisive. But how to make this list-- I would love to do it at a later stage. But I don't have a list ready.
SYLVIE KAUFFMANN: You mentioned the issue of acquitted persons. And you seemed to imply that they have a right to-- not to be forgotten, but to be--
HIELKE HIJMANS: Delisted.
SYLVIE KAUFFMANN: Erased. Of course, the various countries in the EU have various attitudes to this and even various legislation. So how would you suggest that this could be harmonized?
HIELKE HIJMANS: That's true. I think, in general, the differences are not so much about acquitted persons. It's more about criminal records-- how long they will stay available, et cetera. There is a big difference in harmonization among the member states.
When I talked about acquitted persons, I had in mind a case of the European Court of Human Rights on data protection, on a different issue-- a DNA file in the United Kingdom, the so-called Marper case-- where it played a role that persons who had been charged with a crime but later acquitted were still kept in this DNA database. Something like this could play a role here. It's not so easy. But I think, in general, there should be a general presumption-- that's what I would like to say-- that people who once have been charged with a crime but have been acquitted can use this right to be delisted, in general, of course.
DAVID DRUMMOND: Luciano and then Jose-Luis.
LUCIANO FLORIDI: Thank you. I'm trying to understand your position more carefully, and I have the following question for you. Suppose tomorrow a new search engine is created by a European startup, and the new search engine is such that it allows all searches except by name and surname. It does whatever Google does today, but it does not allow that particular search-- end of story. If you enter Luciano Floridi, it will give you a blank. Would you find that a satisfactory solution for the European market, from the European perspective, or not? And whatever position you have, what would be your justification? Would that be a good solution or not?
HIELKE HIJMANS: Well, whether this is a satisfactory solution is one thing. The other thing is that if you do so, you would definitely give effect to the judgement of the Court. But of course, the issue will probably be that this particular search engine will not survive for very long because it will not be very competitive with Google, I'm afraid. So from that respect, it's not a very likely way forward.
But yes, if a search engine decides to do so and claims that by doing so they fully respect privacy, I think there's nothing wrong with it. But I'm sure that the market will ensure that the media and the public use other search engines which allow search by name.
LUCIANO FLORIDI: I completely agree with you. Therefore, we could have that particular search engine in Europe and, say, the press could use other search engines around the world, say for example, from another country to do their business. So that we would basically create a market within Europe where there is competition. And there will be a fight.
HIELKE HIJMANS: Well, the thing is, of course: a market, yes. Competition, I think, perfect. That really should be done. But what we talk about in this context is the protection of the individual. And if one search engine protects the individual to the ultimate and gives enormously strong protection which goes far beyond what the Court asks, but another search engine doesn't protect at all, is that, in the end, a result which is satisfactory? And there, I say no.
DAVID DRUMMOND: Jose-Luis and then Peggy.
JOSE-LUIS PINAR: Yes. [INAUDIBLE], you talked about the necessary balance between fundamental rights, freedom of expression, privacy [INAUDIBLE]. But what about the balance between not two fundamental rights but, on the one hand, a fundamental right like privacy and, on the other hand, just the interest of the general public? I mean, the decision makes considerations about the balance between the fundamental right of data protection and the preponderant interest of the general public. What do you think about this balance between not two fundamental rights, but one fundamental right and just the general interest?
HIELKE HIJMANS: I think this is an extremely interesting question. Especially from the perspective of EU law, I would say, what the Court did, one could maybe say that the Court did it in this way because the case was brought to it as a case on the Data Protection Directive. It was not brought to it as a request by the National Judge to balance two fundamental rights. So that's what the Court did in practice.
In general, there's another point I would like to make. Indeed, a balancing between two fundamental rights on the one hand, and on the other hand a balancing between a fundamental right and a general interest like, for instance, an economic interest, does not go in exactly the same way. A balancing takes place in both. But I think the Court will be-- and here I speak as a European lawyer-- less strict in its balancing between a fundamental right and a more general interest; it will allow certain limitations to the fundamental right, but not to the same extent as when the other interest concerned is also a fundamental right.
I hope I'm clear, but there is a difference in the level of scrutiny by the Court. That would be my answer as a European lawyer. I hope that's satisfactory.
PEGGY VALCKE: Mr. Hijmans, thank you very much. I have two questions. The first one is a short one; the second one is a bit longer. Let me start with the more simple question.
You referred to the necessity to remove links not only from the European websites of Google, but also from google.com, because if they don't remove links from google.com as well, it might undermine the effet utile of the ruling.
Does that imply that you would suggest that search engines can wait and see if there's actual evidence that people massively turn to google.com if they don't find certain information anymore? Or is it your viewpoint that already today links should be removed also on google.com?
And what does it then imply for other regions in the world doing the opposite, or also trying to impose their standards here in Europe?
HIELKE HIJMANS: I argued that it would indeed be the best solution-- I cannot say it's the only acceptable solution under EU law; I'm not a court-- that links on google.com were also removed, for the reason that it's otherwise too easy to get to the information itself. If people automatically turn to google.com, then the effectiveness of this decision would be circumvented. That was the point I wanted to make.
Also because I think that people might indeed automatically go to google.com in that case, if that ever becomes known. Second point: I think Google-- but I'm not Google-- is perfectly capable of ensuring that this deletion only takes place where Google is accessed from European territory. I think that's technically not a problem. But I'm not Google.
PEGGY VALCKE: Something we can ask. The second question refers to the Court ruling. Basically, the requirement in the data protection directive is that you process data fairly and lawfully, in such a way that the data are adequate, relevant, not excessive, et cetera. But the Court only looked at the situation of-- only referred to-- removing links.
Do you see it in line with our European rules if search engines would allow data subjects to add certain information to the search result in order to put data into perspective so that it's no longer to be considered as being processed in an excessive way or in an inaccurate way?
Because this is an idea that also comes strongly from the side of the press industry-- written press and audio-visual press. They say, we don't want to remove any information unless it's really, really necessary, but rather we are in favor of adding context or adding new information, updated information, to our previous articles so that it's, again, accurate or updated.
So could this, in your view, also be a solution for search engines, or is the only solution for them to remove links from search results? Thank you.
HIELKE HIJMANS: My view on this would be that it is perfectly OK if a search engine leaves that option to the person requesting removal of the link, but it cannot be presented as the only possible alternative to removal. So if you give, as a search engine, the possibility to choose, I would say that could work, but not if it's only adding information and no removal.
FRANK LA RUE: Yes. Three very quick questions. I keep hearing the issue of privacy and freedom of expression, and I keep asking-- and I ask you-- because I don't think that this is what we're dealing with now. Privacy is a recognized right with recognized standards in the ICCPR and certainly in Europe, and freedom of expression as well.
And for me, they're not in contradiction. They actually go hand in hand. There cannot be full freedom of expression without privacy. I wrote a report on this to the Human Rights Council. I think what we're dealing with here is something different, which is not a breach of privacy, because no one would be in favor of a breach of privacy in that sense.
As I was commenting with Mr. [INAUDIBLE] before, we start from the standard that certain protections are good journalism and are established by law: the protection of children and their identity, the protection of people with a disability or any other form of vulnerability, the protection of witnesses, in some cases. That is normal. I mean, that is a given, so we're not adding anything new.
What worries me here is the arbitrariness of anyone being able just to ask for something of their past to be deleted because, all of a sudden, they feel it has become embarrassing. Don't you see-- am I correct in making this difference or not?
And secondly, with the acquittal, I fully agree. Anyone who was prosecuted and acquitted, the whole information should be erased. The acquittal is precisely the decision of a court that there was no evidence, or no sufficient evidence, to have a criminal trial. So perfect. That is, by law-- there's nothing there.
It is different, though, in the case of people who were prosecuted, sentenced by a court, and fulfilled their sentence, in which case there is a slight difference between criminal law and human rights. Because obviously we want these persons to be rehabilitated, to go back into society, but from a human rights perspective it depends on what the crime was. Because there is a principle of non-repetition if the crimes had to do with human rights violations or serious victimization of individuals, like crimes against humanity, in some cases.
But other crimes could have been, say, systematic rape or sexual assault, in which case the offenders may have fulfilled their term, but the State still has an obligation to make the information public to protect possible future victims. I don't see a problem with that, and I don't see that anyone could say, I want to have that deleted.
And finally, to add to this whole thing about public figures, I agree that it is difficult to separate the public and the private-- but even public figures have some degree of privacy, in the sense that they have the right to protect their families and their children from offensive scrutiny that is not necessary. In their own life, they may not have any privacy. And there was a famous Mitterrand case about his operation and whether the press had a right to know what he was going to be operated on or not. And I remember this was a big legal case in France.
But the idea is that what is irrelevant today about an individual may be very relevant later if he or she wants to be a candidate, wants to take public office or play a public role. So we are dealing with history here. And the final comment: yes, it is a search engine link, but today we search for information or historical information-- we do historical research-- with the internet.
We don't go, as before, to the libraries and go through the 100 books to see what we can find. We use electronic methods. Aren't we limiting research? Not only a question of individual perspective, but also limiting the rewriting of history or the documentation of history and the non-repetition of certain periods of history and crimes that were committed?
HIELKE HIJMANS: These are quite some questions. Let me try to be brief.
In the first place, I don't think we are rewriting history. What we are talking about here-- and maybe it's best to stay with what we are talking about at this moment-- is the links provided by search engines. That's what we talk about. Not about taking away information from archives.
And I'm sure that there will be other methods to find this information. On the public figure-- the other point which I wanted to make, in any case, is about acquittal, or people who have served their sentence. We have extensive law in all the member states about criminal records, and about criminal records that should be deleted after some time, especially if you allow people rehabilitation.
If, despite the fact that the criminal record is deleted, this information can still simply be found through a search engine, then the deletion of the criminal record does not make sense anymore. There are of course other circumstances, but that would be my direction on this.
In general I would say there is not a natural contradiction between freedom of expression and privacy, even if that's probably for another discussion. I would state that privacy is needed to have full freedom of expression: people first must be able to think about things in private before they can express themselves properly.
So I hope that gives an answer. Not to everything, but a little bit.
DAVID DRUMMOND: OK. Thank you. Thank you very much, Mr. Hijmans. We're going to keep moving along here to stay on time.
Our next expert is Jodie Ginsberg. Jodie is the CEO of Index on Censorship, an international organization that promotes and defends the right to freedom of expression.
Jodie joined Index on Censorship from the think tank Demos. She's a former London bureau chief for Reuters and worked for more than a decade as a foreign correspondent and business journalist. She was previously Head of Communications for Camfed, a nonprofit organization that works in girls' education.
Jodie, the floor is yours.
JODIE GINSBERG: Thank you very much. I feel slightly that Mr. La Rue in his last comments has stolen some of my speech. But nevertheless, I will plow on. Index on Censorship, as David has said, is an international organization that monitors, reports on, and campaigns against censorship-- not for censorship-- and against threats to free expression globally.
We're funded by a number of public and private organizations, including the European Union and Google. It's important to state at the outset, as a number of others have done today, that Index is not anti-privacy. We believe-- as many others have said-- that the right to privacy is the vital corollary of the right to free expression, and that legislation is needed to protect privacy in the same way it is needed to protect free expression.
Nevertheless, we are concerned by elements of the Right to be Forgotten ruling, and I want to address three points related to that this afternoon. The first point concerns clarity, the second transparency and accountability, and the third, which is related to the second, is recourse.
Before I come to these three areas, I want to address the question that a number of panelists have asked this afternoon, which is whether the right to be forgotten constitutes censorship. It only involves removal of links to content but doesn't demand that content be expunged altogether, and people argue that because of this it doesn't constitute censorship.
We at Index would disagree. Making information difficult to find is one of the hallmarks of a censorship regime. In both the online and offline worlds, it's actually incredibly difficult to remove source material altogether, so the majority of censorship works by making things harder to find: by redacting content, banning or destroying copies of texts, arresting or threatening authors of material.
With this in mind, we believe that this ruling, with its extremely broad definitions on the kind of material to which links should be removed and the kind of people who can request their removal, opens the door to censorship of material. And as such, is both a threat to the public's right to free expression, but also their right to information.
As I said, we have three areas of concern with the ruling. The first relates to its vagueness. Irrelevant, outdated, or excessive material is an impossibly wide definition and leaves far too great a scope for interpretation-- and indeed, differing interpretations between search engines.
Others have suggested that detail can be extrapolated as cases come to light, but given the other major challenge in this area-- the lack of an oversight and transparent appeals mechanism for a public interest challenge-- this leaves a considerable period of uncertainty. We are encouraged that the European data protection authorities, as part of the Article 29 Working Party, have said that they will publish guidelines soon.
But in general, we believe it will be extremely difficult, and indeed impractical, to arrive at exhaustive definitions. And it would be more effective to focus on questions, such as those of harm, which I'll come to shortly. There may also be ways to think about defining guidelines to protect media freedom using a number of cases that we've seen at the European level.
We accept the ruling makes some attempt to balance the right to information and the public interest, but, again, this is particularly vague wording. Consideration of what in the internet age constitutes a public figure is open to wide interpretation. This isn't just a practical issue, but also a broader philosophical one concerning the term "public interest."
Defining private individuals and public interest in the era of social media is extremely complex. Facts that an individual deems no longer relevant or outdated may indeed seem so-- until that individual later chooses to apply for public office or to become a director of a company, where such information may indeed be extremely relevant.
I was going to say something here about the rehabilitation of offenders, and like a number of others here, I would say that the current ability of searchers to find information about individuals whose convictions are spent seems to us incompatible with laws on the rehabilitation of offenders, such as those in the UK.
And there is a potential clash here, but I would suggest tackling such questions through existing legislation, not through blanket laws that attempt to constrain the internet as a whole. In general, though, we believe that hiding the past is not the means through which societies flourish.
It is hard to think of cases that could not have been dealt with under these existing arrangements. Instead, by introducing this catch-all ruling, there is a risk that accurate, legally published content is whitewashed from history.
So I would conclude by saying that our main concern is that, increasingly, laws that are intended to protect privacy-- of which this ruling is one-- can have a potentially damaging effect on free expression, even when free expression is meant to be encompassed by that ruling. The European Commission has argued quite strongly in its recent myth-busters on this that the ruling doesn't allow for censorship.
But as I have suggested, there are many forms of censorship, and for us one of those is making information difficult, if not impossible, to find. As the media freedom representative of the OSCE has said, this ruling could make it increasingly difficult for journalists, for example, to perform investigative journalism if you essentially lock away information that would otherwise be publicly available.
Those are the most general points. I'm very happy to take questions on other specific areas.
DAVID DRUMMOND: Great. Thank you very much. Who'd like to ask the first question? Luciano.
LUCIANO FLORIDI: Thank you. Just two quick comments and a question. The first comment is that I completely agree with you about the difficulty of identifying public interest. And my concern for some time has been that there's a kind of naive logic that has been adopted here, whereby first comes the public interest, and then comes the information that satisfies the public interest.
Now, as we all know, in life it doesn't work that way. The two things are interactive. There is public interest because there is information there that makes something publicly interesting as well.
So if I captured your point correctly, I think there's a somewhat more sophisticated way of understanding what public interest is here, which we should take into account. But for the sake of time, let me just skip my second comment and go to the question.
As you may imagine, we have had some discussion with Jimmy Wales, who is not here and therefore cannot defend himself and so therefore we can attack easily, about censorship. No, more seriously.
There's a bit of a concern sometimes that censorship is quite a loaded word. As soon as you speak about any action that has got to do with information-- making it more opaque or stratified-- it's called censorship. At that point, the game is lost, because censorship is bad in and of itself. We cannot argue any further.
Is there a way of talking about modulating access to information in a more historical or stratified way, so that we don't have a single page that is being accessed throughout, without falling into the censorship vocabulary? Because I'm totally with you, as you may imagine, but I find it difficult to frame the question of handling information carefully without falling into, oh, that's censorship. Let's be careful.
JODIE GINSBERG: I think I would answer that by saying that, rather than talking about censorship per se-- and as I say, we would define making things more difficult to find as a form of censorship-- the answer is to take a more nuanced approach, so that essentially what you wouldn't be doing is deleting links. You might choose to display them in a different way. But I think there's a fundamental difference between that and removing the link altogether or removing content, for example, which we're not talking about here.
But there's a difference between that and altering your algorithm so that things appear in a different way. I think that's a slightly different conversation. But when you're specifically talking about hiding information, that to us is a form of censorship.
So yes, I think it's possible to have a more nuanced approach to the way in which information is presented that isn't necessarily censorship.
LUCIANO FLORIDI: I was asking for more. I was asking for a way of treating some delinking as not censorship. That'd be like talking about good cholesterol. No? No?
JODIE GINSBERG: Well, I think it comes back to my point about guidance around the kinds of content that you might want to delink and the reasons that you might want to delink that content. So we've already talked, for example, about content that might relate to young people or offenders and so on, but that's more about the kind of content you're talking about, rather than the principle itself.
DAVID DRUMMOND: Other questions? Peggy.
PEGGY VALCKE: Thank you very much, Ms. Ginsberg. I would like to continue a bit further on this harm test. How would you operationalize that? Is it a kind of objective test you have in mind, or a more subjective one? And I'm thinking about the situation that we're confronted with at the Flemish Media Regulator, of which I'm a member. There we can receive complaints about commercial communication on television just because you're a viewer. It's assumed that you're harmed if you see too much-- if a certain product placement or advertising spot bothers you, you can file a complaint with the Regulator, who then, of course, holds an [INAUDIBLE] hearing starting at that moment.
Here in this Court ruling, the Court also said, there's no need for the data subject to demonstrate any harm. I assume because you are the data subject, you are bothered by seeing certain search results showing up when people google your name. The harm is somehow assumed.
So if you talk about the harm test, what does that mean? Do you want the data subject to show certain harm, how information is affecting his or her life? And who's the judge on that? Is it the data subject? I mean, views of data subjects on what harms them, what affects their life, are so different.
JODIE GINSBERG: So I think the bar has to be set high. And I would agree with Patrick that at the moment it's far too easy simply to fill in a form and say this has upset me.
In the UK, for example, there's a great deal of debate around right to be offended on Twitter, for example. And actually the test for whether or not something has caused you harm is very high. So it would be a sense that you were being harassed, or you were being libeled or defamed, or it was making it impossible for you to do your job.
And I would agree with Mr. La Rue that simply saying, I didn't like this review of my latest book, please would you remove the link, is not adequate. And at the moment, the way in which the ruling is defined would potentially allow you as Google or any other search engine to say, that's legitimate. It's perfectly legitimate for an individual to decide that they didn't really like the review of their book or the review of their concert, or the way somebody criticized their hairdo, and therefore we should remove that link. And I just simply think that that's an insufficient level of test.
DAVID DRUMMOND: Great. I think I wasn't speaking into the mic-- thank you. We'll move along. The next expert is Mr. Paul Nemitz. Mr. Nemitz is the Director for Fundamental Rights and Union Citizenship at the Directorate-General for Justice of the European Commission. Data protection is one of the key responsibilities of the Directorate.
Before joining the Directorate-General for Justice, Mr. Nemitz held posts in the Legal Service of the Commission, the cabinet of the Commissioner for Development and Cooperation, and in the Directorates-General for Trade, Transport, and Maritime Affairs. He has broad experience as an agent of the Commission in litigation before the European courts, and has published extensively on EU law. He's also a visiting professor at the College of Europe in Bruges. So, Mr. Nemitz, please proceed.
PAUL NEMITZ: Thank you, David. And let me say, first of all, a few words about this panel here and its nature, to understand why we are here. Second, I would like to put in a few good words for the judgment of the ECJ, because I do believe we owe the same respect to the European Court of Justice as we owe to the European Court of Human Rights, or for that matter to the highest constitutional courts in member states, or as Americans owe to the Supreme Court. And third, I would like to address a few specific questions and finally challenge Google to take some of the debate to America.
Now first, this is of course an unusual situation in the sense that we are discussing big questions of the public good. And normally it is parliaments or governments which hold a hearing, and the likes of Google-- companies and civil society-- are the experts they listen to. In fact, a number of public authorities have refused, for that reason, to appear here. And I would just like to say why I came here today.
In Brussels, of course, we are used to big time lobbying activities. And as some have commented, these panels, in part, may be a good faith effort to find practical solutions to a problem. But in part, of course, also, they may be an element of passive aggressiveness towards our data protection rules and our jurisprudence in this context.
And so from that point of view, I would like to pay a compliment to Google for taking its lobbying activity public. It's actually good. It's better than what we saw at the beginning of the data protection proposals-- a lot of back-room, very coordinated activity. Here at least we see what's happening publicly, and the public can make up their minds on the different arguments being made.
So we at the European Commission believe, of course, that even when we are here at this panel, and just talking as experts to the ones who are called upon to make, it seems, decisions on the public good, this panel doesn't change the rules of the game, which is that in democracies, legislators legislate, and judges decide on how to interpret legislation. So with this in mind, let me put in a few good words for the judgment.
First, this judgment of course has to be read in context. One cannot just take this judgment and read it twice and then say, that's it. Let me give you an example.
When in Paragraph 97 of this judgment reference is made to privacy and data protection concerns not overriding the right to information for particular reasons, such as the role played by the data subject in public life, what does this "such as" signal? It means that the Court here has in mind the great existing jurisprudence on reconciling the right to privacy and data protection, on the one hand, with the right to freedom of expression, which is so essential in our democracy.
And let me just mention one judgment which I believe the Court probably had in mind here. And there are many-- there are thousands. There is a casuistry of jurisprudence in Europe, and probably also in the United States, on reconciling individual rights with freedom of expression.
But one of the judgments, certainly, which comes to mind here is the case of Von Hannover against Germany, a judgment of the European Court of Human Rights of the 7th of February, 2012, where the court says: where the right to freedom of expression is being balanced against the right to respect for private life, the criteria laid down in the case law that are relevant to such a situation are set out below.
And then comes a long development of criteria. What is the contribution of the document in question to a debate of general interest? How well known is the person concerned? What is the subject of the reporting? What are the content, form, and consequences of the publication? What are the circumstances in which the text was produced, or for that matter, the picture was taken?
So what I'm saying is normally, if a company like Google is confronted with a difficult question of law, it goes to the good law firms which we have. It goes to the professors. And it seeks professional advice.
If Google had done this, it would have received a list of all the codes of practice and all the jurisprudence which already exists on this issue of reconciliation. And my bet is, if you had outsourced dealing with this issue to some law firms in member states-- some good, midsized law firms-- you would have found practical solutions very quickly. But you chose not to do so, for your own reasons, part of which, I think, pertain to other aims. But everybody is free to judge on this.
Second, now, let me move on to some of the questions relating to the importance of freedom of speech and freedom of information in the jurisprudence of the Court of Justice. Just a few weeks before the Google judgment was handed down by the ECJ, the court came down on the 8th of April with the Digital Rights Ireland case. And I do believe one has to read these two cases together.
Digital Rights Ireland was a judgment which [INAUDIBLE] the data retention rules in Europe, which are very much like Section 215 of the Patriot Act in the United States. This is a converse situation. The Court here said, we don't want people's telephone behavior to be remembered eternally, or for a long time, for reasons of law enforcement.
And in this judgment, the Court said: imagine a world in which everybody using electronic communications had to always think that there was data retention, and what effect this might have on the use by subscribers of the means of communication covered, and on their exercise of the freedom of expression guaranteed by Article 11 of the Charter of Fundamental Rights. Paragraph 28 of the judgment. So here, the court [INAUDIBLE] a collection of data by the state-- admittedly a different situation-- for concerns of freedom of expression.
And when one looks at these two judgments, the concerns of freedom resonate also in the Google judgment, where the Court talks about the new quality of interference with private life. And as Hielke rightly said, being able to develop thoughts in private-- that's part of democracy. That's part of freedom.
We see that the Court also has this in mind. One needs to read Paragraph 97 together with Paragraphs 37 and 38, where the Court says that the aggregation and organization of information published on the internet is a new impingement on privacy, beyond the original publication, because it brings so many sources together so quickly.
And again, here the Court explains to us the impact this may have on the behavior of people if, in the future, nothing disappears anymore. So I think it is honestly not a correct reading of this judgment to say it does not understand the importance of freedom of speech and access to information in a democratic society.
Let me, finally, at the end, challenge Google to address the key challenge to freedom of speech in free societies. That is mass surveillance. That is what the NSA is doing. And if you want to credibly ensure freedom of speech: we see that the United States ranks much lower than Europe in the global rankings of freedom of speech.
And I do believe that the right to deletion of information-- the right not to be surveilled, whether by private or by public parties-- is key to freedom, fundamental rights, and democratic life. So I would hope very much that after this tour of Europe and its capitals, with a lot of PR effort and a lot of money spent, you go back to the US and do the same on NSA activities. Thank you very much.
DAVID DRUMMOND: Thank you very much, Mr. Nemitz. In response to your challenge, I'll just say that it's a very worthy one. And in fact, I've spent a great deal of my time trying to push for the reform of our government's surveillance practices. In fact, we're very much trying to get legislation to reform that. We've spent a great deal of effort, and a lot of my personal time. This is something on which we feel very, very deeply.
Of course, we can't do it alone. We require folks in Congress and in the administration to do it. They have failed so far. But we will keep up the fight, believe me.
SABINE LEUTHEUSSER-SCHNARRENBERGER: Thank you very much, also, for raising the great challenge regarding mass surveillance. And you know, I was very much engaged in the process against data retention. And perhaps you know, I took the first step in Germany to obtain a judgement of our constitutional court in this direction. But the decision from the ECJ was even better.
But now back to our ruling. We heard from Mr. Van Eecke the proposal to establish an independent arbitration body. You are well known for dealing with all of these rulings of the courts, and now you are working on the European Data Protection Directive and so on.
Would such a mechanism, in your opinion, be in compliance with the ruling of the ECJ, and in which way? A mechanism in which not the search engine decides, but, in its place, an independent body?
Or could it be another way-- the search engine has to decide, but beforehand the editors or other affected persons have the possibility to give a statement, so there are more facts on the table before the search engine's decision? Could you recommend one of these mechanisms to us, or give us some arguments for or against?
PAUL NEMITZ: The Court said that Google is a data controller, and therefore it has the duties under the data protection law. And it cannot discharge these duties by referring to an obligatory alternative dispute settlement mechanism. Now of course if the complainants who come to Google and say, I would like my link to be taken down, agree with Google voluntarily, then one might think about this.
On the question of asking for information from those whose links will be deleted, I don't think that there can be a mechanical rule. But this is a contentious issue. My personal view on this is that if the case is difficult, and not easily decided after a first analysis, I think it would be good for Google to equip itself with the information it needs to take a good decision.
Now this should not amount to a denial of the rights of the data subject. This could not be something which puts into question the effectiveness of the right to deletion. But I think that an element of dialogue-- either with the data subject, again asking questions for reasons, and maybe also, in a consensual way, then saying, well, maybe we should also ask the other side-- in some selected cases of great difficulty, where the public interest may be at stake, I would have some sympathy for that.
DAVID DRUMMOND: Other questions? Luciano, then Jose-Luis.
LUCIANO FLORIDI: Thank you. The question refers to something that we have already discussed today, the effectiveness of the ruling, which seems to be limited by the reality of the law.
Now some are suggesting that the logical consequence-- and just in case anyone has any doubt, I mean logical consequence; I don't mean to endorse that logical consequence-- would be to require-- force, expect-- that a search engine removes the link also from its non-European versions. And I think I understood today at some point, but I might be mistaken, that short of that, we could actually start making sure that European citizens do not have access to those versions of the search engines, which of course remove the links only for the other nations.
So we would have a double solution-- force Google to remove-- or ask, or demand, or make sure that Google removes-- links also from, say, Google.com, and short of that, to make sure that Europeans don't have access to Google.com. I would like to have your comment on that.
PAUL NEMITZ: I think the huge mistake in the debate is to assume that Google defines geography. Domains are not countries. So the word extraterritorial, I think, is misplaced here. It's a general principle of law for all states that when something happening under the jurisdiction of one country has impacts elsewhere, the affected country has a responsibility to deal with the matter in law. We apply this, and the US applies this, in competition law, for example. If Google merges tomorrow with Microsoft in the United States, it's all happening in the US. But it has an impact on our market. We apply what is called the effects doctrine. We will decide on that under our rules.
And the same applies in press law. Please read "Forbes" magazine-- just yesterday, a fantastic article explaining that under normal press law, when a book enters the United Kingdom, and the book comes from somewhere else, British judges have the right to decide the libel case. And it doesn't matter where the book came from.
And so I must say, for me, I would be very surprised if the Court of Justice were to decide in a future case that it is OK not to delete a link from google.com. And this case, of course, will come to the Court of Justice. Because under our rule of law, where everybody has access to the courts, not only Europeans-- we don't discriminate by nationality, as is the case in some areas of data protection in the United States, where Europeans don't have access to court. Here, everybody can go to court, even non-Europeans.
So if someone in Europe lives here, has residence here, has an issue with Google about taking down a link, and Google says, we'll do it for you, but not on google.com-- this person has a right to go directly to a court, or alternatively, to a data protection authority. And then this will go up the chain. And probably one day, it will come back to the European Court of Justice.
The original judgment already points in this direction. The original judgment says the law must have full effect, and there must be effective and complete protection of data subjects. So I don't see how the Court of Justice in a second case would say, but it's fine if Google maintains everything under google.com-- no way.
JOSE-LUIS PINAR: Yes, first of all, thank you very much for being here. I think it's very important for us to have you here in this meeting. I agree with you that the ruling on data retention is very important. And perhaps these two decisions are among the most important decisions on data protection in the history of the decisions of the European Court.
I have two or three technical questions. First, the ruling doesn't make any reference to the website editor, because since the Spanish court didn't ask the Court about the position of "La Vanguardia," there isn't any reference to the editor in the ruling. But that doesn't mean the editor hasn't any kind of participation in all of this debate. How do you see the relationship between the editor and the search engine, as the two data controllers, despite the ruling? And what must the data subject do? Must the data subject go just to the search engine, or also to the editor?
Second, I agree with the [INAUDIBLE] application of the ruling. But as a former Data Protection Commissioner, I have a lot of doubts about the practical application of the ruling in terms of enforcement. I don't know if a sort of international cooperation is necessary. Even in Article 29 work [INAUDIBLE], there is a very important point about cooperation, the internal European cooperation between all the DPAs. If we talk about the extraterritorial application of the ruling outside Europe, it's perhaps even more important to have this kind of cooperation. I don't know how this [INAUDIBLE] can apply the decision in other states, from an enforcement point of view.
And third, since the editor is a controller, and also the search engine, what about the transfer of information from the editor to the search engines? Is the consent of the data subject necessary to allow this transfer of information?
And finally, what do you think about the latest text of the draft regulation, Article 17? Because I think the text is different between the first text of [INAUDIBLE] 2012 and the latest. In the first text, the regulation talks just about the controller, but in the new text, there's a mention of the third party. Is the current text of the draft regulation in accordance with the ruling? Thank you very much.
PAUL NEMITZ: OK, that's quite a lot. First, on the difference between the search engine and the publisher, the ruling actually does refer to this difference, in a very explicit way, in paragraphs 85, 86, and 87. And it makes clear that both are controllers. They both have duties, but the publisher can avail itself of other rights than Google.
And the judges, I think, also proceed in a very pedagogical way-- because the Court of Justice has understood what the digital age is about. It understood it in the data retention judgment. And here it makes clear, in a very pedagogical way, that there's a new infringement, a new interference with privacy and data protection, by the search engine. Because the quality of the profiling of the person is completely different from what the archive does, or, for that matter, what a search engine would do only on the BBC site. That's a different matter.
So I think this difference is very well explained here. Both have obligations. So the data subject has the choice to turn to one or the other. But one may have to delete and the other not, because the other one may be able to invoke freedom of the press.
Second, enforcement-- I do not believe that there are any specific challenges to the enforcement of data protection decisions, compared to any other decisions of member states or, for that matter, of the Commission. I wouldn't worry about it.
A company like Google, which is listed on the stock market, has to comply with the law. In the same way that Microsoft pays the fines when the European competition authorities take a decision, Google must comply. So we do not need a special enforcement regime for data protection. The enforcement will be like any other decision of government addressed to a company. And there are many instances in which governments address decisions to companies established outside the jurisdiction of the authority taking the decision.
Third question-- I did not quite understand it. But if I understood rightly, you asked, could Google only talk to a publisher with the consent of the complainant who wants to have the link taken down?
JOSE-LUIS PINAR: The relationship between the two that are controllers--
PAUL NEMITZ: Yes. Use the translator, if you need to.
JOSE-LUIS PINAR: There's a transmission of data from the editors to the search engine. Since we are talking about two data controllers, there's a transmission. There's no relationship between data [INAUDIBLE] and data controller. So is it necessary for the website, the editor, to have the consent or another type of legal [INAUDIBLE] to permit access by the search engine to the data that are at the website, at the webmaster, at the editor?
PAUL NEMITZ: I would think not. There are other legal bases which would make it legal for Google, in the first place, to process the data. So thank you. Your last question?
JOSE-LUIS PINAR: The draft regulation.
PAUL NEMITZ: The draft regulation, yes. Ministers discussed the matter in Council. And I think there's a large consensus not to try now to codify this judgment in detail, because it would be premature, or, for that matter, to correct it or to go into more detail. So I think, on this issue, the concretization of matters will be in the hands of the Article 29 Working Party and the data protection authorities.
And then, as in press law, it would also be up to the instance courts, the ordinary courts, to develop their case practice on this.
DAVID DRUMMOND: Frank, you have a question quickly.
FRANK LA RUE: Well, I'm not so sure that quickly.
DAVID DRUMMOND: Wishful thinking.
FRANK LA RUE: Yeah. Two things-- I think it's very important to establish, and to agree with you on, two things. Number one is that the court decision must be obeyed. And there's no question of that, even if the court decision is weak and we may disagree with some of its arguments, as [INAUDIBLE] on censorship. But I think it's very important to uphold the rule of law. And we must insist that a court decision is a court decision. And there's no way out.
And secondly, I'd like to repeat that I think we're dealing with something different than freedom of expression and privacy. I fully agree with what you said. The biggest challenge to privacy today is surveillance and mass surveillance. I wrote a report on that presented to the Human Rights Council on June 3 last year. And that's still being debated today.
Yes, and that is a very direct and aggressive form of breaking privacy, a breach of privacy of everyone. And there are many other forms. So we all agree on that.
But I'm not so sure that what we're dealing with here is exactly what we understand, again, as privacy. I think privacy and freedom of expression go hand in hand, perfectly. And I say in my report that a lack of privacy has a chilling effect on freedom of expression. So effectively, it limits freedom of expression. We cannot have one without the other. Up to there, we all agree.
But privacy is the protection of those facts and information and data that should not necessarily be public and are only the concern of the individual. And that, for me, becomes dangerous if events have already been made public by the publishers and then someone tries to erase them in the future, because that is a challenge.
And maybe it's my logic-- I'm coming from Latin America. And the logic for us is that we're trying to reconstruct history after military dictatorships. And we're trying to recognize the victims, honor the victims' memories, recognize who the culprits were, establish who the dictators were and the whole line of command. So our logic is precisely to document history and make it public, which is why I wrote a report on the right to truth.
Because in the UN as well, in the Human Rights Council, there is the whole question that began with Louis Joinet and ended with Orentlicher, the Principles on Impunity. And basically, the principle is that truth is a fundamental element against impunity, even if there is no justice. So even if there is no trial, establishing the truth about horrible events, or about mistaken events, is important. So those are things that we cannot necessarily change.
Now, I understand that a search engine-- and I'm not defending Google; this would be valid for any search engine, I don't care who it is-- but I understand a search engine will probably use its own-- you mentioned profiling-- editorializing of its priorities.
But so does a publisher. This is why all newspapers have different accounts of the same event. And one could say one newspaper has one take on it, and another newspaper has another take on it. But I would shudder if anyone came to the decision that we then have to erase newspapers because they have a different opinion, and the editorial opinion of one newspaper is more subjective than the other's.
Freedom of the press and establishing history are based on the fact that they can all give their opinion of the events that happened and confirm those events. So I'm a bit worried. Defending privacy, yes; erasing malicious information, of course; protecting those sectors that already have protection, like children and others-- all that is already in human rights doctrine.
But the fact that someone changed their mind in the course of their life-- that's a different matter. Aren't we calling that privacy when it really is not privacy?
PAUL NEMITZ: Well, I can only repeat: when you read the judgment, the Court has no narrow notion of the public interest. It says, "such as the role played by the data subject in public life." So you know, there are many permutations where there would be a general interest. And I must say, the European Court of Human Rights, when it said the first thing to look at is whether the publication is a contribution to a debate of general interest-- that's really helpful.
Let me just give you the most recent example-- the "Washington Post," yesterday. The huge right-to-be-forgotten bashing, because an artist said, I want the critiques to be deleted. And the "Washington Post" says, see what it leads to.
Well, "Washington Post," hello, read the jurisprudence. The jurisprudence says one of the case groups which fall under contribution to a debate in the public interest is when it pertains to performing artists. So of course the request is not justified.
So what I'm saying is, before one starts the bashing, read the judgement in the context of the great case history which already exists in Europe, and which balances freedom of expression on the one hand and respect for private life on the other. And honestly, that's my last word, because Jimmy Wales always does it. And he's unfortunately not here, but please transmit my greetings.
It is fundamentally unfair to say that if these three lines from the judgment were applied by the Chinese, that would justify dictatorship. That is unfair. You have to read the judgment in the context of the large, established body of jurisprudence which already exists. And I think, if you do that, you will find very reasonable solutions.
A final word on the fair argument of colleagues concerned about censorship, on the imbalance of judicial protection-- it is true that only the data subject who wants to have the link taken down can go to a court or to a data protection agency or authority. But if Google takes down too easily, too quickly, the publishers cannot. But again, is that so different from the situation of the press?
If "The Standard" today decides not to report on this event, what judicial protection do you have? You have none. There is no right to be reported about. But what will happen then is that civil society and the press-- call it the fourth power; the press is the fourth power-- will say, "The Standard" has not understood the importance of this.
And it is good that NGOs and the press are after Google, and that they're checking very closely what takedown decisions are being taken, and that any lack of transparency is criticized. But in this mechanism which disciplines what Google has to do, we cannot only look at the three powers. We must look also at the fourth power. And I think, if we then take all of this together in a holistic approach, there is a good balance in the judgment.
DAVID DRUMMOND: OK. Well, excellent discussion. And I think we'll move on to the final expert, Mr. Philippe Nothomb. He's a corporate lawyer and legal adviser to the Rossel Group, which is a francophone Belgian media group. He's also the vice president of Copiepresse, a copyright collective which represents francophone press publishers, and the president of the Accreditation Commission for Professional Journalists. So Mr. Nothomb, please proceed with your presentation. Thanks.
PHILIPPE NOTHOMB: [SPEAKING FRENCH]
INTERPRETER: Thank you, Chair. I would like to continue this debate with you, of course-- it's indeed really interesting, what we've had. But I would like to come back to some concrete elements. And then I would like to conclude with a perspective on the future.
Now, the concrete elements: in our press group, since 2009, we have had about 1,000 requests for the right to be forgotten. 95% have to do with old data found in archives through search engines, data that contrasts with the present-day situation-- an embarrassing contrast. So 95% are found via Google and other search engines. And so far, we have always tried to come up with positive answers, but only 3% of requests have been granted. And we have also tried to refuse any anonymous requests when it comes to deletion of [INAUDIBLE] number.
We try to apply Article 9 of the 2002 regulation on the protection of private life in the media. Because it allows us-- and it allows the press-- to look specifically at journalistic purposes, and, when it comes to press archives, notably at the obligation to keep archives public-- archives that need to remain intact and which ought to be at the service of European citizens.
Therefore, we've always found that it was up to us to decide on the management of the questions we have been dealing with, which may be global questions. And because the search engine gives our information more reach, more widespread impact, it is logical that the European Court has also been reflecting on this and has given us a specific responsibility-- a responsibility that needs to be balanced between the right to free expression on the one hand and the protection of private life on the other.
We have also been working with the principle that internet publication is, for us, just the beginning, the start of something. We have old archives which are also being used. And the suppression of information is not a solution. We feel that one needs to add new data, which may correct or give more precision to already existing data.
And that is why we legal experts in the Belgian press, the francophone press, have tried to develop a kind of electronic right of reply-- the right of reply that used to be exercised on paper, by letter. But now it's an electronic approach, with direct impact, which allows people to modify information, or to correct incomplete or imperfect information that we have gathered in the past.
So there's quite a procedure that needs to be followed. And we have also-- for legal problems-- foreseen a right to electronic communication, which is open to individuals who were acquitted, had a trial revised, or got amnesty [INAUDIBLE]. The press, in general, deals with decisions by the higher courts, but does not always give information on the final outcome.
Now, it is difficult, of course, to do this properly. And we feel that if search engines cause problems because the information is clearly unjustified in the eyes of the claimant, then in this case, we will examine whether there is some kind of clear prejudice, and whether we need to apply specific criteria. The range of outcomes is to refuse the request, or to accept the deletion or the change. We rely on the historic role that we need to play.
Sometimes we cannot just delete. Sometimes it's another person who asks for the deletion. Maybe it's a government body-- we know that they are strongly protected, and they will always be protected when it comes to information. Or maybe it's about a person who has made the data public himself. Maybe a number of people are involved. And when it comes to de-indexing, it may affect the past. Maybe the information can also be found on Facebook and other social sources which the person may have approved for publication. Maybe what he said is not entirely credible, or maybe it could also lead to the indemnification of certain victims.
But if we feel that society induces us to accept the change, because we have this responsibility-- if a person is felt to be a victim of the facts, or if the author of specific facts has been reintegrated into society, for instance-- all those elements are to be taken into account together. These are the elements that allow us to decide, and we need to use them in a combined way. And if a person comes back into the news and we have not deleted the elements properly, then we could indeed revisit the changing of the information.
Now, another question was about the relationship between the editors, the search engines, and the government bodies involved. We all have to obey the law. And indeed, we also have to play our own role. The editor of the press needs to preserve his integrity. And according to us, he needs to be the final master of the decisions on deleting or not deleting, or changing, the information. Because he knows the circumstances in which the information was gathered, not just the contents.
To do this properly-- well, you have to go into the structures and also talk to legal experts. And the search engines, too, have to respect the principles formulated by the European Court-- whether it's in the general interest, whether it's about public organizations. And in order to do this, there aren't thirty-six solutions.
The editors and the search engines have to work together, or at least have to talk to each other. They have to exchange views. The editors need to send information back to Google when it comes to indexation. And our experience has shown us that the only one who can do this properly is the search engine-- it's Google. We have a number of technical problems if we try to do it ourselves. But the commissions for the protection of private life have to verify that we do our work properly.
They do play a real role-- not a global one. They do not intervene in the procedure. But they have to verify that the engines or the editors have played their role properly. And the final control rests with the courts. So the editors have to be involved in all requests concerning them. They have to come up with an opinion if Google needs to do something. And then we have to look at it-- not through the legal channels, but internally first. Thank you.
DAVID DRUMMOND: Thank you very much for that. Do we have questions? Peggy.
PEGGY VALCKE: [SPEAKING FRENCH]
INTERPRETER: Thank you, Mr. Nothomb. Thank you for your presence, and especially for having shared your guidelines that you have developed with this group. If you'd allow me, I'd like to maybe formulate my questions in English. Because it's a fairly relevant question, also for Mr. Nemitz.
PEGGY VALCKE: You want to stay in charge of the integrity of your archive? I understand that fully. But the court ruling also made it very clear that there is a separate responsibility of the search engine. And we need to read the case, indeed, in the context of other case law and also in the context of the underlying facts, which were in this case that the information at the source has been found to be legal. So basically, the problem that had arisen had to do with the profile that was created in the search result list.
What if the problem results from the inaccuracy or the outdatedness of the information at the source? Does the search engine, in your view, then have a kind of more secondary liability, meaning it could refer the data subject to the content publisher first? But then how do you align that with the ruling explicitly saying it's the search engine provider's responsibility? Do you have an idea on that? And doesn't a problem result from the fact that our current data protection rules only talk about the data controller as such?
And there's no subsidiary mechanism, a cascade mechanism, as we know it, for instance, under our Belgian constitution: if the author is known and resides in Belgium, you need to turn to the author; if that's not the case, you can turn to the publisher; if not, then the bookshop, and so on. So there's this cascade of persons you need to turn to. And we don't have that mechanism yet in data protection law. Should that then change? So that's also a question addressed to you. Thank you.
PHILIPPE NOTHOMB: [SPEAKING FRENCH]
INTERPRETER: Thank you. In fact, there are two responsibilities here, and they're different ones. That's why I insisted on our journalistic objectives. It is something that we need to try to safeguard for the future as well, even when looking at the European directive for next year. It is something that is extremely important-- the freedom of the press. But on this issue, we see that we may lose this protection.
And then the question is: who controls the content? People who use the internet can address Google or the editors. There is no prescribed track. If they address Google and the other search engines, a study will have to be made within those bodies, and decisions will have to be taken in view of their own criteria. But if problems arise with the protection, then they need to ask the editor for advice. That's how we see things.
But on the other hand, we may get a request ourselves which does not concern us or our contents. And then we can send it back to Google. So within this kind of framework, a kind of daily management is tailor-made to the requests we may get. And indeed, procedures will need to be rolled out at the end of the day. IT will have to help us streamline this and make it easier to deal with.
DAVID DRUMMOND: OK. Any other questions? Yes? I'm sorry. Sorry. Mr. Nemitz, yes please. Sorry. I forgot that you'd asked him the question. I'm trying to speed it up.
PAUL NEMITZ: I think what is at issue here, in the end, is trust in the internet and in the digital world. And we have seen successive efforts by Google to get out of our law completely. In the case, Google argued that the search engine comes from California-- therefore, only California law applies, and it is only the responsibility of California judges; we are not a controller. So the idea at the beginning of the case was: Google operates worldwide, no rights given to Europeans whatsoever, only American law.
Now we have heard here at least two questions where, I would say, the saga of trying to get out continues. One is: but please, not google.com-- that would be extraterritorial. Well, you have heard my answer. And the other one is: let's refer to others first. I don't believe that's right. I think it would be good if Google fully assumed the responsibility which results from its business model and the capacities of its technology, as the Court has described them in detail.
And legally, the situation is in any event totally clear. There is joint and several liability of both individually. The individual can choose where to go-- to both, or to one of them. But certainly, we should not allow-- under present law, I think it would be illegal for Google to say, but first go to the archive. And we should also not introduce such a rule. Why? Because the Court has very well explained to us this additional impact on privacy and personal data resulting from the huge capacity of Google to pull information together. So I think it's perfectly OK that Google has its own full responsibility in this context.
DAVID DRUMMOND: OK. So I think that ends the presentations-- and Q&A on specific presentations. We had allotted 30 minutes for questions. And we are just about at our end time, which was to be 4:00. We have a number of questions. We'd like to cover some of them. We know that at least one of our panelists has to leave. So we understand that. And any other panelists that have to leave, we understand that as well. But we'd love to take some of the questions from the audience and therefore, have us run over a bit, if that's OK with everyone.
So let me begin with the first question. Now, we've asked people to provide their name for the questions, although they are not required to, so many of them are anonymous. And they are typically directed to someone specific, or, in some cases, to the entire advisory council and the panel of experts. The first one, I think, is a quick one. And it's addressed to me: should Google only modify search results for queries limited to a person's name? For example, if I search for Mr. Smith plus bankruptcy, should results be modified as well?
That's an easy one to take care of. We do. We modify the search results if the name appears in the query, even if other terms appear alongside the name. So that's a straightforward one.
The next question, addressed to me and also to Luciano: is removing the link between a specific name search and a specific website on a commercial search engine equivalent to altering history?
Somewhat philosophical, I think. I would agree with Jodie Ginsberg that making something hard to find has an impact on access to information. But as to whether it's altering history, I would defer that question to Luciano.
LUCIANO FLORIDI: That doesn't sound like an easy question. But since I'm not a historian but a philosopher, I would like to say that, unfortunately, it depends on what you are delinking. If you are delinking something fundamentally important, obviously major events, that is modifying history. If you're delinking what you had for breakfast yesterday morning, I'm afraid that is not that important for history, no matter how much you care about that [INAUDIBLE]. So there is no single answer.
And as usual, when absolute questions are asked, there is just an absolute mess to receive in return. The question should be framed, I'm afraid, more in these terms: within that framework, within that particular historical context, does it or does it not make a difference to remove that particular link? And since nobody knows, and since, despite some views, the concept of relevance is philosophically absolutely opaque, some caution should be exercised. And I would lean towards being careful about delinking too much, rather than vice versa. I'm afraid that's all you're going to get from a philosopher.
DAVID DRUMMOND: OK. Something slightly more practical, although I'm not sure-- Another anonymous question-- and this is a jump ball, if you will, to the panel-- whoever wants to take it. Should we be able to use Google to conduct a background check on our fellow citizens? For example, should the criminal history of a person be immediately accessible through Google? Or should such information only be made available through formal channels? I know we have a number of people who probably have studied this question. Anyone care to take it? Sure.
ROBERT MADELIN: My view on this-- and others on the panel-- I think Stephane mentioned it during the meeting-- is that the availability of data may require us to change substantive laws. And, for example, prohibit as abusive certain uses of data, in particular against vulnerable consumers. So I think that the answer to this question may go to the implementation of the Right to Be Forgotten. Or it may be a signal to legislators to expand consumer protection against an abuse of available data. And in either case, the answer then could be no.
DAVID DRUMMOND: OK. Next question-- also anonymous. And this one's directed also to you, Mr. Madelin and also to Mr. Nemitz. Question. Currently there's a very limited role for self-regulation and codes of conduct in the draft data protection regulation. Do you believe that there should be a stronger role for codes of conduct in the application of the regulation? If so, how?
PAUL NEMITZ: That would be another seminar of a few days. But let's say we do believe, because the regulation is about fundamental rights, that the law is important, and not only as a baseline law-- that is the American approach. But then if one wants to add to it, there's space for self-regulation. And I would say in the present text, the space for self-regulation is very well calibrated.
ROBERT MADELIN: I would only say that there are many places in European substantive law-- in my own field of responsibility, the audiovisual regulatory directive-- where specific encouragement is given to regulators or, in that case, to national authorities to encourage self-regulatory solutions to things which otherwise might either be a cost to society but not really big enough to be litigated, or big enough to be litigated but then a source of inefficiency in the courts. So I think it's no dilution of substantive law to look to soft-law solutions as a means of clearing the way for the big, difficult cases that you can't fix in this easier way.
DAVID DRUMMOND: OK. Next question from-- it looks like we couldn't read the name of the person. But here's the question, and it's to the panel at large. Provided that the right to oblivion, or Right to Be Forgotten, survives, how will you guarantee that citizens trust metadata-- read, search engines and databases such as websites and archives-- if it's certain that they're incomplete and tampered with? Since this is about a right, who's shielded against whom, and why? Is it the individual versus society, or the society versus the individual? Any takers? Philippe.
PHILIPPE NOTHOMB: [SPEAKING FRENCH]
INTERPRETER: Yes, indeed. That's a question I have also been dealing with. It is obviously one of the main problems. We can think of ways of preserving data that could be consulted by the happy few, or by those who might pay for this, or by scientists or historians. But that's not what is being asked.
We have been asked to provide the maximum of information and to collect it-- and not just on paper. No, we're talking about the electronic collection of data. When it comes to paper, well, that's what we have. But I'm sure you could see it as a Swiss cheese: the next generations may not have the information available for the future. That would be catastrophic.
So I feel that, to the extent that data will be censored, we need to clearly indicate that. And we also need to indicate a way you can find the full information. But I hope this won't happen, because the objective of the press, of course, and also of search engines, is to come up with complete information-- information which has not been tampered with or censored. Thank you.
ROBERT MADELIN: So just two thoughts. The first is media literacy-- which again, our colleague Stephane mentioned-- should include the message that the absence of a signal is not a signal of absence. We shouldn't be naive and expect that somehow everything knowable will be within reach.
But the second point is what I said at the beginning about ISPs, Google, the whole world being more transparent in their intentions. One popular newspaper of my youth declared its intention on the banner: "All the news that's fit to print." I think what's very important going forward is that everybody who's applying this law educates the users of their services as to what exactly they're doing. That, I think, is crucial.
PAUL NEMITZ: I think the question starts from the wrong presumption, which is that we as individuals have the right to know everything about the other individual. And this is the mentality of people who say data is the oil of the future, data is the currency of the future. Because personal data, when it's about a human being, has a different nature than all other data. You have no right to see me naked. Sorry. Yes? Just to give one example: when it's about the human being, there is the right of the other which is at question, and that's what we have to respect. So the starting point of the question-- that the world is better if I can see everybody naked-- is wrong.
DAVID DRUMMOND: OK, next question is from Garrett van der [INAUDIBLE]. And it's to me. It's a somewhat technical question so I'll try to be quick. Will Google take down a URL if the name of the person is not on the website but in a meta tag?
Interesting you should ask that. Meta tags are basically information on a web page that a computer-- a search engine-- can see but that the user doesn't normally see. And normally we ignore those, because they are typically used by people who are trying to get their pages to rank higher in search results. So I don't actually think this Right to Be Forgotten problem will come into play, because we generally ignore the information in these meta tags. Interesting technical question, though.
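What makes meta tags invisible to readers can be shown with a short sketch: they sit in the HTML `<head>` and carry machine-readable hints, which is also why they were historically abused for keyword stuffing. The page below is entirely hypothetical:

```python
# Extract meta tags from a page using only the standard library.
# A browser renders the body text, not these tags, so a person's name
# can appear in a meta tag without ever being visible on the page.
from html.parser import HTMLParser

PAGE = """<html><head>
<meta name="keywords" content="john doe, bankruptcy">
<title>Local business news</title>
</head><body>An article that never prints the name itself.</body></html>"""

class MetaCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}
    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if "name" in a and "content" in a:
                self.meta[a["name"]] = a["content"]

parser = MetaCollector()
parser.feed(PAGE)
print(parser.meta)   # {'keywords': 'john doe, bankruptcy'}
```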
Next question. An anonymous one to the panel at large. Let's see if I can read this. If the Right to Be Forgotten is not a fundamental right and cannot be one by default, what should be the status of the right to be remembered? If stronger requirements and procedures are required to implement the right and to reach decisions, shouldn't there be equally strong requirements and procedures for gathering and combining information in a search result? I'm not sure. Anybody care to take a stab at that? I think it's getting at the difference.
HIELKE HIJMANS: As [INAUDIBLE] said before, this is the wrong perception. If you see the Right to Be Forgotten as a separate fundamental right that gives you a full right to be forgotten, then this makes sense. But if you see the Right to Be Forgotten as what we are actually doing-- that only in certain circumstances certain information may be deleted-- then this is not the relevant issue. And the right to be remembered is also something-- if I heard Paul Nemitz just right now-- is that really needed? Must everything about you always be remembered? So the question doesn't work. It extends to a context where there is no context, basically.
DAVID DRUMMOND: Fair enough. Make that the last word. Robert, you want to?
ROBERT MADELIN: Just very briefly. I'm not an expert in social discourse on this continent, but the right to know is not trivial. The right to know is not the right to see Paul Nemitz naked-- heaven forbid. But rather, we shouldn't go all the way down the slope and say that because our individual rights to intimacy are protected, somehow the public doesn't need to know. I think that right to know is an important glue of our civilizations as well.
FRANK LA RUE: In two words, the right actually should be called the right to access information, which is part of freedom of expression in its two dimensions. So we have a right to access all information of any kind, except that which should be protected for particular reasons, including privacy sometimes, or the protection of certain groups.
DAVID DRUMMOND: OK, go.
JOSE-LUIS PINAR: Just a comment. I think the Right to Be Forgotten is a fundamental part of a fundamental right. It's a fundamental part of the fundamental right to privacy. So there's no right to privacy without Right to Be Forgotten.
DAVID DRUMMOND: So I think that--
JOSE-LUIS PINAR: Perhaps itself is not a fundamental right, but it's a fundamental part of a fundamental right.
DAVID DRUMMOND: Another anonymous question to me. Can we ask to be forgotten by the photographer who's taking pictures of us sitting in the audience? The answer is yes, and please speak to the staff and the registration desk outside and they'll make sure that your picture is not included in anything that goes out for this proceeding.
Next question, from Robert Romaine. This is directed to Patrick, Peggy, and Philippe. It's sort of a two-part question.
Do you think that asking to remove a search result from a search engine affects freedom of speech the same way as filtering content by an internet provider, or the same way as removing information from a hosting service? That's part one. Part two is: do you think the Court actually performed a balancing of interests? The Court does not mention any competing right or the right to protect data-- not sure what that's getting at. The Court only mentions the public interest to receive information. I know others have addressed some of this before. Anybody want to take a stab at this? Either the addressees or others. Go ahead.
PEGGY VALCKE: Yes. With regard to the first question, my personal opinion is that removing links from search results does not have the same effect or impact as filtering content on a constant basis or removing the information at the source. And the reason is that the information is still out there, and normally you can still find it on the basis of other queries. We're still discussing internally within the group what the Court meant by searching on the basis of a name.
So currently the policy of Google is that a name plus other qualifiers will also result in removing certain links. This is something that we still discuss internally. So if you're still allowed to find information on the basis of the name plus other qualifiers, or on the basis of only the qualifiers, the information can still be found. So you can't really say there's censorship to the same extent as if you would remove the information at the source, or if it would be filtered out systematically even without an individual asking to have the path to that information removed. The second question, could you please--
DAVID DRUMMOND: Sure. It had to do with-- sorry-- the balancing interests.
PEGGY VALCKE: Oh, whether the Court indeed struck the right balance. When I first read the Court's ruling, it struck me that there was no explicit reference to articles protecting freedom of expression, Article 10 of the European Convention, Article 11 of the EU Charter. But as Mr Nemitz explained, there are references to the public's right to information. And the court also stresses that the directive itself should be read in the light of all human rights that have to be respected in the European context.
So yes, there is a less explicit reference to other interests and rights in this Court's ruling compared to other court rulings, and that [INAUDIBLE]. But that also has to do with the constraints within which the Court had to rule, meaning it had to judge on the arguments that were put on its table, judging in the context of the underlying facts. And as my colleague pointed out, the newspaper itself was not a party before the Court in Luxembourg. And there were no journalist associations intervening on a voluntary basis.
So this argument has not been made in this case, which probably explains why there's no such explicit reference. Does it mean we don't have to take it into account when implementing the ruling? Of course, we have to take into account all human rights. I mean, in everything we do, we have to respect all of them.
That's my personal opinion. So that's also something we will take into account in the report and make that very clear so that there's no misunderstanding about that. That's my two cents.
DAVID DRUMMOND: Very good answer. Another question, anonymous. It's directed to me and the Advisory Council. On what basis has the advisory council to Google on the Right to Be Forgotten been chosen? And what are the guidelines that have been given to the council?
I'll first just say that we were looking to set up a panel that was representative of a number of sectors-- geographic, experience, sort of different areas of expertise. And so we had a list of experts with expertise on privacy, free expression, fundamental rights, civil society, journalism, et cetera. And although there were some people who declined an invitation, we are thrilled to have the group that we have and that they've been so dedicated to working on this matter.
But I'll let others chime in if you'd like. To the extent that that's getting at interests and so forth, you should know that the Advisory Council members are not being paid for their service. We obviously cover some expenses, but this is truly work that they're doing because they're genuinely interested in the question and believe that implementing this the right way is a very worthy effort.
OK, next question. Let's see. I think we have time for perhaps one or two. I'm trying to get one that hasn't been covered already.
Well, here's one. This has been touched on, but I'll bring it up anyway. Does the statistic that only 5% of users ever bother to switch to google.com concern all Google search queries originating from the EU, or does it relate specifically to name searches? That's a narrow question, and the answer is: it's all queries. But I do think it raises the google.com question, which might be worth me commenting on a little.
I think that Mr. Nemitz in particular made some very strong points about the effects test and how you implement laws based on their effect on citizens in various jurisdictions. What the internet has challenged is that it's made it difficult to figure out how you do this on a worldwide basis. There are ways to do it. One is the domain approach, where you can say: look, we have a domain in this country that's targeted to that country, therefore we implement on that domain. That turns out to be the way that regulation of search has actually happened over the past 15 years, for the most part, and it has been accepted in most places. So in most European countries, when we have a removal request, we take it down on the local domain, and it's not taken down on google.com.
You can understand a different way of doing it. For instance, another way is to say that it's removed from all domains, and you put in place what's called an IP block. In either case, what you're doing is saying: we know the person is in Europe, in a European country.
In the first case, we're saying we're going to show that person by default the local site, which has all the removals associated with that site. The other way is saying that whatever domain the user is on, knowing where they're from, we will do what's called an IP block so that that person will not see the result. I think that's what you were alluding to, Mr Hijmans.
In both cases, they're imperfect. In other words, it's very clear that someone can go to google.com and not see the removals-- but the evidence is that very, very few people do it, which is why I think many courts and regulators have accepted this limitation. With IP blocking, it's also possible to get around it. And in many cases, computers don't actually know where you are, because of the service that you use and so forth.
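The two approaches described above can be sketched side by side: per-domain removal, where delisting applies only on the local European domain, and an IP block, where delisting applies on any domain when the client appears to be in Europe. The domain rule, the CIDR range, and the URLs below are all illustrative placeholders, not real allocations or Google infrastructure:

```python
# Sketch of (1) the domain approach and (2) an IP-based block.
# 192.0.2.0/24 is a documentation range standing in for "European IPs".
import ipaddress

EU_RANGES = [ipaddress.ip_network("192.0.2.0/24")]   # placeholder range
REMOVED_EU = {"http://example.com/old-debt-notice"}  # hypothetical delisting

def in_europe(client_ip: str) -> bool:
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in EU_RANGES)

def serve(domain: str, client_ip: str, results: set[str]) -> set[str]:
    # Domain approach: removals apply only on the local (EU) domain.
    if domain.endswith(".fr"):
        return results - REMOVED_EU
    # IP-block approach: removals apply on any domain for EU clients.
    if in_europe(client_ip):
        return results - REMOVED_EU
    return results

results = {"http://example.com/old-debt-notice", "http://example.com/other"}
print(serve("google.fr", "203.0.113.9", results))   # removed on local domain
print(serve("google.com", "192.0.2.7", results))    # removed via IP block
print(serve("google.com", "203.0.113.9", results))  # visible outside Europe
```

The sketch also shows why both are imperfect, as noted above: the first can be bypassed by switching domains, and the second by appearing to connect from a non-European address, for example via a proxy.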
So I think the real question is: in this global world, how do we do it? What's the best way to do it? While also acknowledging that there are authoritarian governments whose laws I think none of us want to let apply to the rest of the world.
If a government wants their corruption or something that they've done not to be seen in the West, we certainly don't want to allow that. So the question is just how you do it. And I think one advantage of the domain approach is that you at least set up this idea that there's a possibility in those authoritarian countries that their citizens have a chance to go see the actual information.
And you put the onus on those authoritarian countries to block it if they want to block it. But again, they're all imperfect, so I think the question is what's the right way. And certainly maybe IP blocking is one way that you could go. But I just wanted to make sure we had a full discussion of that question.
Well, with that, I think we've gone over time quite a bit, so I thank everybody for your patience. This has been a very lively and, I think, extremely informative conversation, and it ends the consultations on the Right to Be Forgotten for the Advisory Council. Now the council will get to the real work and put together a public report where we can hopefully crystallize some of these recommendations and some of the feedback.
So thank you, our experts, very much for contributing and making these presentations. And thanks to all of you who came and participated and asked your questions as well. Thanks very much, and good evening.