Advisory Council to Google on the RTBF - Warsaw Meeting 30th September 2014

DAVID DRUMMOND: Well, welcome, everyone. Thank you for coming. Welcome to the Warsaw meeting of the advisory council to Google on the right to be forgotten. This is the fourth stop on the council's seven-city [INAUDIBLE]. I'm David Drummond. I'm the chief legal officer and a senior vice president of Google.

I want to start by saying that at Google, we've always seen search as kind of a library card catalog for the web. When you search on Google, there's an unwritten assumption that you'll get the information that you're looking for. So you can imagine that when the European Court of Justice handed down the ruling in May, obliging us to deliberately omit information from search results for a person's name, we didn't exactly welcome the decision. The tests set up by the court for what can be removed are vague and somewhat subjective. The web page has to be inadequate, irrelevant-- or no longer relevant-- or excessive to be eligible for removal, and that's about all we were left with. And we're required to balance an individual's right to privacy against the public's right to information. All of that feels a little bit counter to this card index idea.

But, at the same time, we respect the decision. We respect the court's authority. And it was clear to us very quickly that we needed to knuckle down and get to work complying with the decision. And that's what we've been doing-- and to do that in a serious and conscientious way. So we quickly put in place a process to enable people to submit a request to us for removal of search results or links and for our teams to review and take action on those requests in line with the court's guidance. To give you a sense of the scale of all of this, to date we've had more than 135,000 requests across Europe, involving 475,000-- about half a million-- individual URLs or web pages. And each of those has to be reviewed individually.

Now in practice, quite a few of the decisions we have to take are fairly straightforward. For example, a victim of physical assault who's asking for results describing the assault to be removed for queries against her name; or a request to remove results detailing a patient's medical history; or someone incidentally mentioned in a story, a news report, but not the actual subject of the reporting. These are fairly clear cases in which we remove the links from the search results that are related to the person's name.

Similarly, there are some pretty clear cases in which we've decided not to remove the link. A convicted pedophile requesting removal of links to recent news articles about his conviction. Or an elected politician requesting removal of links to news articles about a political scandal he was associated with.

So there are clear cases, but many cases where taking the right decision is actually quite difficult. For example, requests that involve convictions of past crimes. When is a conviction spent? And what about someone who's still in prison today? Requests that involve sensitive information that in the past may have been willingly offered in a public forum or in the media. So there's definitely some difficult questions that we have to address. And indeed they raise tricky legal and ethical issues.

So it's for these gray areas, these tough questions, that we're seeking help. Both from Europe's data protection regulators and from the members of our advisory council. We hope that they will sketch out principles and guidelines that will help us take the right decisions in line with both the letter and the spirit of the court's ruling.

So all of the council's meetings are being live cast, and the full proceedings are being made available on the council's website, which is google.com/advisorycouncil. And those will be made available after the event. The council invites anyone and everyone to submit their views and recommendations through the website, and we will definitely read all of your input and put it before the council for discussion.

So, at the end of this process, after we've visited all seven cities, there will be a final public report on the recommendations based on the meetings and the input that we get from the website. We're shooting to publish this by early 2015. Hopefully we can meet the deadline. And the council members will have the ability to dissent from any of the conclusions, if they wish.

So with that introduction, let me introduce the council members who are here today. Their expertise speaks for itself. So joining me, we have, from right to left, Jimmy Wales, who's a co-founder of Wikipedia; Sylvie Kauffmann, editor of the French newspaper Le Monde; Lidia Kolucka-Zuk, who I'm sure many of you know, who's here in Warsaw; Peggy Valcke, who is a Professor of Law at the University of Leuven; and Professor Luciano Floridi, Professor of Information Ethics at Oxford University. Eric Schmidt, my colleague, Frank La Rue, Sabine Leutheusser-Schnarrenberger, and Jose-Luis Pinar, who are all members of the council, weren't able to make it today. But they are definitely tracking the proceedings online. So we have eight great experts today and we thank you for participating in the meeting.

I'm going to introduce them briefly, and I apologize in advance if I mangle some of the pronunciations, as my Polish is not where it should be. So, on my left, we have Mr. Igor Ostrowski, Ms. Magdalena Piech, we have Jedrzej Niklas, and Dorota Glowacka. And then on my right, we have Krzysztof Izdebski, we have Jacek Szczytki, and then we have Anna Giza-Poleszczuk, and we have Edwin Bendyk. So thank you for joining us.

I'm told that today's proceedings will be in English, thankfully for me. With presentations in Polish and English. So the experts will choose. So if you need a headset, they're available, and you can listen in in Polish and English. And as I said, they're being streamed live via the Google channel-- Google Video's channel-- on YouTube.

So our first session will be-- for the first four experts-- will last until about 2:30 or until we're done. And then we'll have a 30-minute break. And then the remaining four experts will present in the second session, with a target of ending about 5 o'clock. We'll ask the experts-- we have a little timer here-- we'll ask the experts to keep to the 10 minutes so that we can stay on schedule. And if we're successful in that, we'll be able to take questions from the audience. The best way to ask a question is-- I think there are cards being distributed. You can write your question on the card, and hopefully we'll be able to get to them.

So, with all of that, let's go to the first expert. Mr. Igor Ostrowski will make his presentation. Mr. Ostrowski is a lawyer in telecommunications and also a new technologies market expert. He's a graduate of the University of Warsaw and co-founder of the Poland Association and Digital Center Foundation Project. Former Advisor to the Prime Minister. Former Deputy Minister of Administration and Digitization. He's been involved in all these things that we care a lot about with new media and regulation and law for at least 20 years. And has advised television networks, cable operators, radio broadcasters, in Poland and other European states on regulatory and intellectual property issues. He's also the only Polish member of the multi-stakeholder advisory group at the Internet Governance Forum-- an important UN agency that deals with governance and development of the internet. So, Mr. Ostrowski, if you could proceed? Thank you.

IGOR OSTROWSKI: Thank you so much, and thank you so much for the invitation. Now, being a lawyer and having 10 minutes to talk about so many important issues is just such a nightmare. So I needed to choose one question that I'd like to address. And I decided on the scope of the removal: what should be the scope of the removal? Should it apply to all localized versions of a search engine, globally, or only to the local version at issue?

And the reason I chose this was that when I read these questions, the first thing that came to my mind was, I asked myself, are we still living in 1648? And it wasn't 1984, it was 1648. Because this was the year that a number of treaties were signed, as I'm sure a lot of you know. This was the year the Thirty Years' War ended and the Peace of Westphalia was concluded. It was then that the power which had been gained by Ferdinand III over the Holy Roman Empire was reclaimed and returned to what we call the "imperial states," which then gained the ability to independently decide on their religious worship. Each country decided how it wanted to govern issues of religious worship in its own territory. Well, some historians claim that this was the date that the idea of sovereign national states was actually fully set in stone.

I'm not going to go into a whole historical review here. A lot of things have changed since that date of 1648. The creation of the internet, just to name one. But one thing is certain: that the idea of sovereign national states is still uncontested. And speaking of the internet, it may well be the first invention that ever challenged the concepts of Westphalia as national sovereign states begin to understand that the web actually knows no boundaries. And countries-- or these national sovereign states-- now have the choice either to embrace it or try to challenge it.

And my personal view on the ruling of the ECJ, of the European Court of Justice, was that the ECJ took the latter approach. And unfortunately the consequences of this approach may be damaging to the internet as we know it. And during my term as a member of the multi-stakeholder advisory group that Mr. Drummond mentioned-- at the UN Internet Governance Forum-- I learned to appreciate the immense benefits of a single world wide web. And I am greatly fearful of its disintegration into several regional or national nets, with different filters or different firewalls guarding the boundaries of these national or regional entities.

Well, the Internet Governance Forum takes great effort to safeguard the idea of a single internet. And I would urge everyone here to consider their call. And one way to ensure that this disintegration of the internet does not happen is to limit the scope of the removal request to the local version at issue. And let me explain why. By determining that Google is a controller of personal data within the meaning of the EU directive-- which, by the way, as we understand, a number of experts and legislators, including the UK House of Lords, consider to be a mistake. But, those comments aside, the ECJ judgment does create a new reality for search engines in general: a system for processing personal data based on the location of the person who uses the search engine, not, for example, the location of the actual servers.

It may lead to exclusions. So, for example, European citizens may no longer be able to access certain web resources. This means that a portion of the internet will be masked for us, for European citizens, with possible chilling effects on free speech. Well, this approach is definitely in line with the Westphalia Treaties. Everyone determines their own worship. But, what's worse, one may reasonably assume that other countries will continue to embrace the Westphalia principles and determine that they have the sole power to rule over the activities of internet users in their jurisdiction. And Russia is just a perfect recent example, with the draft legislation passed last week.

Hence, in order to ensure these decisions are contained and that fragmentation of the internet does not spread, I believe, in a way paradoxically, it is necessary to limit implementation of requests to the local versions at issue. Because, unfortunately, we are still living in the world of 1648. What the Westphalians fail to understand is that the internet is not that easy to manage. I'm asking myself a question: is it at all possible to capture this task in the form of an equation or an algorithm? I believe not, as there is no single definition of "inadequate, irrelevant, no longer relevant, or excessive in relation to the purposes for which they were collected or processed." These terms are broad and vague, as Mr. Drummond has mentioned.

The additional problem lies also in national differences across laws in different countries in Europe. We don't have a single civil law system in Europe. Neither do we have a uniform concept of privacy under national legislations. Each country develops their own system. So, just to give you an example, in Poland, one can claim their privacy rights in a civil lawsuit. After this ECJ judgment, it may well be that Poles will face a change, in that respect. Instead of a civil law judge, our requests may be processed by companies. Some of them might be completely inexperienced. And then by national data authorities who apply administrative rules, administrative procedures, to deal with our case.

Well, for all these issues, I'm so eager to see what the outcomes of the so-called Article 29 Working Group will be-- that's the informal group of national data protection authorities. They intend to issue guidelines which will help deal with these new realities. So hopefully at your next meeting in Berlin, you may be able to advance a bit forward with that discussion. But what's on my wish list for the Article 29 Working Group is for that group to answer a question relating to the context of data processing for a given search result when a request is filed. Is that context global or local? Again, coming back to the question of how a request should be treated. The contextual link that was made by the ECJ related to the operation of a given EU entity.

So, for example, in the case of the recent ruling, it was a subsidiary of Google in Spain, which deals with marketing and sales duties, that was the contextual link to Google. Well, then, should we follow the same line of thinking when a request is filed? Paradoxically, the ECJ ruling-- if we read it literally-- requires all global companies to apply EU law globally. The only reason for this is probably that all global players have an EU presence, one way or another-- some form of establishment, whether it's for sales or marketing purposes. And that means that, contrary to the interpretations that we had up to now, this given establishment, a subsidiary, a representative office, does not need to actually process the data. And so the degree of involvement of that local entity is a factor that should be taken into consideration when a request is filed, if we are to apply this ruling on the basis on which it was issued. And that's something which I hope the Article 29 group will determine and be able to provide us with guidelines on.

The other, of course, dilemma is if we-- we definitely don't want EU law to govern all companies globally. But we also don't want to end up with a ruling that is not actually implementable. So I see that as one of the biggest dilemmas that we now face.

But looking forward to the bright side, just to end on a positive note, we have the data protection regulation, which is to be voted on in the near future. And I think we need to carefully observe the results of testing the right to be forgotten in practice. And if it is still at all possible, apply the results of this test before the creation of the new law and before the new law goes into effect. Well, we also need to take into account the rulings of the European Court of Human Rights, where emphasis is placed on the balance of the right to privacy and the right to information. Not supremacy of one or the other, but a balance. And a recent ruling of the Court of Amsterdam in the preliminary relief proceedings is a sign of hope. The Amsterdam ruling, I think, shows a perfect example of where we should be going with the balanced approach. So hopefully the future's looking bright. Thank you very much.

DAVID DRUMMOND: Well, thank you, Mr. Ostrowski. Do we have some questions from the panel? Counsel? Who wants to go first? Peggy, why don't you go first?

PEGGY VALCKE: Thank you very much, Mr. Ostrowski, for your very interesting input. I have a question with regard to what you said about limiting the scope of the removal of links to the local versions. Do you consider that as a-- do you see that as a general principle? Also, what about persons who are known outside their national boundaries, like EU celebrities, EU politicians? Do you think we should treat those requests-- or requests coming from those persons-- differently or not? Thank you.

IGOR OSTROWSKI: Thank you. Should I [AUDIO OUT]?

DAVID DRUMMOND: No, one by one, please.

IGOR OSTROWSKI: There are several, I would say, broad rights and issues at hand that we need to balance. So, of course, as the European Court of Human Rights mentioned, we have the right to privacy. We also have the right to information, where a balance has to be found. But we also have to look at the great tool we have created, the internet, which, I think, has never faced as much danger from fragmentation as it does today. And we see it coming from different angles. We see it happening with different suggestions made by member states of the ITU to revise the ITRs. We see it happening with national legislation closing boundaries. And I think that is also a very important issue that we need to balance out. And, for that reason, and in order to keep one internet and not 160 internets, I would opt for the local view. Even if a given request is made by someone who is known beyond the European Union, simply because otherwise the ECJ ruling becomes extraterritorial. It can be used outside the EU; it becomes a global ruling.

DAVID DRUMMOND: So, as a point of information, Google's current approach is to remove Europe wide or EU wide, so obviously we're listening to feedback about whether that's the right approach going forward. Kind of a middle ground. Next question. Luciano?

LUCIANO FLORIDI: Thank you very much, especially for the reference to the Westphalian system, which I think is very appropriate, to be honest. The question requires a small premise, which is that we tend to reason in terms of physical space, and that causes problems when we have to apply territorial laws to a place where there is no physical space-- I mean, the internet. But I wonder whether we could just consider the possibility, and I'd like to have your opinion about this, of moving into the logical space, which is represented by the internet.

And with a small analogy for people who may not immediately know what the difference is between physical and logical space: a chessboard is a physical space made of so many squares, but it doesn't matter how big it is. You can play on a small one or a big one; a huge one or a small one, that's the physics. But the number of squares, and how a pawn or a queen moves, is a logical issue, not a physical issue. So suppose we change chessboards and start moving toward the internet as a logical space. For example, just to be concrete, blocking IP addresses, which is no longer a physical thing-- that's a logical issue. Would that be a step forward with respect to the kind of problem that you were rightly highlighting? How do we balance the locality of the Westphalian system of law against the logical globality of the internet?

IGOR OSTROWSKI: Thank you, and I fully agree. I think that is the way to go. I don't actually see any other alternative. Especially as Lawrence Lessig once I think said, "code is law." And that is going to apply more and more in the future. So the logical space may need to take into account other factors than just the traditional 1648, let's say, tools that we still use today when we set legal norms, when we set new regulations. But I would just make one condition-- that there has to be some point of focus within that logical space. And I would very much urge for that focus to be based on a multi-stakeholder approach. We have just recently amended our laws in Poland. We created the Digitization Council, which I'm proud to chair, which is a multi-stakeholder approach to digitization and digital issues. Five stakeholder groups-- government, business, NGOs, tech community, academia-- coming together to decide on issues. Since I don't believe that each particular stakeholder group has enough capacity, knowledge, or background to actually decide on these issues. Thank you.

DAVID DRUMMOND: Thank you. Other questions from the panel? OK. So with that, thank you very much, Mr. Ostrowski. Let's move on to our second expert. And that will be Mr. Edwin Bendyk on this side. Mr. Bendyk is a journalist who currently works for the weekly magazine "Polityka." He deals with civilizational problems, issues of modernization, ecology, and digital revolution. He also publishes in the weekly magazines "Computer World," "Res Publica Nowa," "Political Overview," "Mobile Internet," and "Political Critique." He teaches at Collegium Civitas, where he heads the Center for Research on the Future. He also teaches at the Center of Social Sciences at the Polish Academy of Sciences. He's a member of the Council of Modern Poland Foundation and a member of the Council of the Green Institute. Mr. Bendyk, please proceed.

EDWIN BENDYK: Thanks a lot. Good afternoon, everybody. But I have decided to speak in Polish to be more precise with my arguments, so please take on your headsets.

[SPEAKING POLISH]

INTERPRETER: The first point, which seems to me key to this discussion, is the fact that the problem-- on one side the right to forget and, on the other hand, the right to access information and freedom of information-- cannot be solved by legal regulations and technical solutions. They are necessary but not sufficient. And I've got only two illustrations here, local illustrations. The first one: we've got to remember that we are having this debate in a country where the question of memory is extremely important for private, individual, and collective identities. And, also, due to that weight, it's also very political. And this politicization also concerns the debate regarding the quality of the sources of this memory. And this discussion of whether something is false, true, right, or wrong, [INAUDIBLE] and the internet is also the space where this debate is taking place. This shows that the stakes in this debate are high.

But I wouldn't like to concentrate here on the local issues. We had better look at our neighboring country, where in the autumn of last year a political revolution exploded. I have Ukraine and the events in Kiev in mind. And at the beginning, this was another Facebook revolution, received with euphoria. It was a new kind of revolution where the internet served a very important function. Every moment of this revolution was accessible and visible on the internet. On YouTube, on Facebook, everybody was able to see nearly each moment, each debate. And this euphoria expired when the revolutionaries discovered that they had become public figures forever. And the memory of the fact that they had played a public role couldn't be removed.

And the authorities, who sent them text messages, were reminding them that they were participating in something illegal. That led to an escalation of events and the use of force. And the only guarantee, according to the protesters, was victory. So they had to fight to the very victory, the very end. And so memory made it impossible for other solutions to apply.

And similar situations have happened in other countries where we have this sort of mass protest movement. So that's the context we have to remember if we think about the right to forget against the right to information. The right to forget, to be forgotten, understood not only as a technical issue-- a right to delete information about myself-- but also as the right to be forgotten in a cultural sense. This is one of the basic rights invented by humanity, and it's written into the most important texts of our cultures. Everybody in the religious systems had the right to reinvent himself, to start life anew. Even if somebody was a criminal, he had the right to end that chapter and to start as a new man. I would just recall here the story of St. Paul, who changed on his road to Damascus. And these examples could be many more.

So this is imprinted in our culture, this possibility of change and of starting something anew. And this was guaranteed not only by technology, which was very poor at the time-- because we remember who St. Paul was before he became a saint-- but by cultural and social norms which enabled that reinvention, that starting anew. And they came from a different function, a different situation of the human person in the social structure. It was enough, 100 years ago, to change one's place in that structure. For instance, to be drafted into the French army, and then it was enough to start life as a citizen of France, as a new person.

There is such a solution in the Polish legal system: a sentence is removed or expunged after 10 years. We have the right, after 10 years, to start a new life and be free of that sentence. And this situation has changed radically, not for technological reasons, but because of cultural and social changes. The sociologists call it culture becoming detached from structure, meaning that we decide about ourselves, and it is less and less our place in the social structure that decides who we are. We create our identity on our own, making our own decisions. And cultural factors also encourage the constant modification of our identity. And striving for control over it, and for change when the time comes-- this is good for us.

On the one hand, we've got the technical infrastructure which, paradoxically, helps us in that process to construct our identity and to communicate our identity in the way that the Ukrainian revolutionaries were doing. On the other hand, it creates an infrastructure of perfect memory, which is total memory about the identity of a human being. And the question is how-- well, we have this paradox between the potentially eternal memory of technical systems and, on the other hand, the striving of human beings to control, to create, to escape the limits of one's identity from the past-- how this dichotomy can be resolved at this moment. And we see it in discussions in our country and elsewhere, in a situation where cultural norms are weaker. And it's very much, much more difficult to use the service of forgetting in a cultural sense and to receive social acceptance for a sinful past. We can remind even 90-year-old people now what they were doing in their past, in their youth, in order to get some political effect.

So what can we do here? I think that there are interesting solutions in this debate about regulating this reality, introducing social and cultural norms: the proposals formulated by Viktor Mayer-Schönberger in 2009 in a very important book, "Delete." He showed that between the legal and technical solutions we can also build certain mechanisms which would limit the brutality of technology. And that way we would not lose the memory completely. And what's important for us-- facts from history, for instance-- would remain, but they wouldn't be made accessible to everybody with full technological force. There would be some barriers, some friction in between, which would mean that if there is a strong social will to access that information, the information would be there to be accessed. But the barriers would be effective enough that not everybody at any time could make use of it. There would be a mediation between the use of information and access to information.

And a different suggestion of the same author was that people who decide to publish information about themselves could use their right to define the term of validity of this information. They would be able to say, OK, I'm sharing this information, but for a specific time. Snapchat has got this-- 24 minutes, 24 seconds. So, similarly, people don't quite realize that after 10 seconds sometimes stuff disappears from Snapchat. So this is a solution that negotiates between technology and social norms. But how would these social norms be developed? I think that here the thesis that we keep mentioning is still valid. And that is a modification of educational and media-education programs, so that we ourselves, as users, would use these media in a more aware way and know the consequences of our decisions and of our presence in the media space. Thank you.

DAVID DRUMMOND: Thank you, Mr. Bendyk. Do we have questions? Sylvie, why don't you start.

SYLVIE KAUFFMANN: Thank you very much. I would like to ask you, who do you think would be qualified to conduct this mediation or negotiation that you mentioned between technological and social norms? If we go back to the court's ruling, it says that it falls first to the search engines to do it. In our previous hearings, this has often been discussed in various aspects. And this is quite a controversial element of the ruling. So I would like to have your opinion about this.

EDWIN BENDYK: [SPEAKING POLISH]

INTERPRETER: Unfortunately, there is no simple answer to that. The last resort, obviously, is the legal institutions, the courts, which would have to take the decisions from the point of view of the legal solutions. But this is not sufficient. We have to-- there's a question of norms here, and norms cannot be expressed through designed institutions.

It's rather an issue of how we can, in a given culture, build a consensus around certain values-- through, for instance, education, schools, introducing certain subjects into the educational system-- so that something can become a norm in a given society. Not an accord, in the sense of an accord of some company that controls the flow of information, but something that will become an institution controlling our behavior as a society.

I know that I'm speaking here in a very unspecific way, but the problem is that the desire to come out of this unspecificity and to propose concrete solutions would be, as Lawrence Lessig said, that the obsession with transparency now-- making the maximum of information accessible-- leads to an erosion of trust. And trust is the most important glue that binds society. Because we don't trust, we want to know as much as possible. But the more we know, the less we trust. So it's a vicious circle. So we have to apply different methods, ones that would restore trust and not expect full transparency, in situations like these.

DAVID DRUMMOND: Luciano?

LUCIANO FLORIDI: Thank you. I am a bit confused, so help me out. You had the reference to St. Paul of Tarsus. And I just couldn't quite remember where we read about the conversion. So I had to Google it for a second, and I found that it was Chapter 9, Acts of the Apostles. It's there as a record. We haven't removed it, there's no right to be forgotten as far as Paul is concerned. He was very proud of that.

You also mentioned the recent tragic events in a European country. And I had the luck of being in Rome yesterday, where the YanukovychLeaks National Project was presented. On February 22, volunteer divers found nearly 200 folders of documents in a lake at the residence of the former president of Ukraine. They had been thrown into the lake to destroy them by people escaping the compound. They've been saved, they've been scanned, and they are online, available to everybody, to show exactly what went on and to try to understand history.

So from the very past-- Paul of Tarsus, whom you referred to-- to the very recent, Ukraine, we're actually working towards memory, not towards removal. In favor of remembering. In favor of availability and accessibility. But you seem to be referring to these two cases as in favor of deleting and removing. So I'm confused. Help me out.

EDWIN BENDYK: There's too much of-- [SPEAKING POLISH]

INTERPRETER: I'm explaining: I said very clearly that we have different mechanisms of forgetting. A cultural mechanism-- and the example of St. Paul was an example of that. Do we remember who he was before he converted? This has not been removed from the sources; we can still find, even in Google, when the conversion took place. But culturally speaking, we accepted his change and we accepted Paul in his new role.

And the second example: this happened after the 22nd of February, after the victory of the revolution. If we didn't have that victory, the other side would have used that memory for different purposes. So my conclusion was that we have to-- in common, joining forces across all the social sciences-- invent how we can restore the institution of cultural forgetting. Of working with memory as a function of culture, and not only of technology. How can we accept the fact that people can change, and have the right to change, although we know perfectly well who they used to be? And our local Polish experience shows us that we have a serious issue with that.

LIDIA KOLUCKA-ZUK: I'm a little bit confused-- maybe it's just the Polish language-- but during the previous hearings we heard from our experts that somehow we should distinguish the right to be forgiven from the right to be forgotten. So maybe this is the correct term for the situation you just described. However, I would like to encourage you to be somewhat more concrete.

I know it's very difficult, but I think that this is the place and this is the time when we can philosophize for a moment. And if you could somehow try to use this opportunity and provide us with something very concrete-- maybe not recommendations, but your thoughts: what Google as a company can do. When you talk about your recommendations or your ideas about this mediation between technology and-- what Google, with its position and power and strength, can do. And maybe this is a dream, but let's dream for a moment. If you could somehow provide us with three concrete steps that Google can take in this regard.

EDWIN BENDYK: [SPEAKING POLISH]

INTERPRETER: Certainly one of the elements would be what we are doing now: the analysis of a specific example. Of this ECJ decision, which in some way interferes with this mediation process, offering certain solutions which create a situation where the information doesn't disappear, but access to it is made slower. The information is accessible, but not as easily as it used to be. This is one of the attempts; I don't want to assess this ruling.

Another example would be what I said referring to Schonberger-- Mayer-Schonberger-- the question, for instance, of the possibility to control, to manage the information about myself by introducing a use-by date, as it were. The automatic deletion of that information. It's an interesting idea. This is known from the legal system, where sentences, for instance, expire. Sometimes automatically after 10 years, if I don't decide otherwise, a piece of information that I have introduced into the system stops being valid.

And this kind of solution Google or other companies could introduce. Another thing is a stronger stress on media and cultural education, thanks to which we would understand better the consequences of these changes mentioned by Professor Luciano. Maybe we oscillate between the logical and the physical space, and we don't quite understand it. We have to understand it, though, and then many problems may be quite easy to resolve by introducing new cultural norms, which cannot be planned but could be an effect of our search.

DAVID DRUMMOND: Well, thank you very much, Mr. Bendyk. Let's move to our next expert. Ms. Magdalena Piech is a lawyer who works at the Polish Confederation Lewiatan, which represents the interests of almost 4,000 member companies of all sectors and sizes. She coordinates the activities of a working group established within the organization that discusses developments of the proposed central-- sorry-- general data protection regulation in Europe and its impact on companies.

She was involved in the debate on this proposal at both the national and the European levels. She's also a Ph.D. candidate at the Institute of Law Studies at the Polish Academy of Sciences, where her thesis focuses on ISP liability and the exemptions from it under the e-commerce directive. The floor is yours.

MAGDALENA PIECH: Thank you. In my presentation, I will try to translate the conclusions that we reached in Lewiatan into answers to the questions that Google asked prior to this meeting. I understand from those questions that, taking into account the nature of a search engine and the number of requests that it might face, Google is trying to establish some set of general rules that would allow it to examine the requests to some extent automatically, at least at the stage of initial evaluation.

And the question that I would [INAUDIBLE] on first is: is it possible to evaluate a request based on the content of the website that the link refers to? I can imagine that for certain types of content, for example public administration websites or online newspapers, we could presume that the general interest prevails, and therefore the answer to requests should be negative. It would be quite easy to say that, for example in the case of hate speech or child abuse, it is obvious that removal of a link will not harm the public interest. However, in most cases the answer to this question would be "it depends," which I understand is not very helpful.

Sometimes a public person's privacy should be protected. Comments made by a non-journalist may be significant for the public interest, and information about a private person may be important for a community if it's related to an individual's dangerous behavior. Therefore, in most cases it is not the content but the context of the information that will be decisive.

And this means that data subject requests must be examined on a case-by-case basis. They require an assessment made by a person who is not only familiar with regulation and national case law, but who also knows the reasons for the publication, the circumstances of the publication, and its time context. Because how can you otherwise estimate whether the information is still relevant in relation to the purposes of processing, and whether it's up to date?

What's more-- and this was mentioned before-- what should also be taken into account is something that we could call the social or national perception of certain issues like privacy or freedom of speech, since they should also influence the final decision. Taking this into account, I believe that in general a search engine is not in a position to evaluate the context. Especially in the case of legal-- I mean legally published-- information, it is not the right entity to evaluate whether the private interest prevails over the public one, and therefore whether removal of a link to the information is justified.

In principle, it is the publisher-- understood broadly as the source website-- and the competent authorities that can evaluate the context. I think that the publisher's role, in this broad understanding, is very important here, since he's the one who knows the context of the publication. What's more, he's the one who bears the responsibility for the publication, both under press law and under the general rules of civil or criminal law.

Thirdly-- and I believe this is very important to stress-- in principle, by making the information public on their website, a publisher wants to reach the general public. Not only do publishers agree to indexation, they also pay for being visible in the search results. Therefore, the removal of a link to their content would be against their interest. In fact, removal of links amounts to removing their content from the point where readers-- especially if we talk about journals-- will start to look for information. And of course, this will very often start with the name of a person that we want to find out more about.

Moreover, what I have a problem distinguishing, if I think about erasing links, is this: if we talk about freedom of information and expression, or the so-called journalistic exemption, how can we say that it should only apply to the websites on which the information was published? I believe that protection of freedom of expression should also cover the websites-- the websites and tools-- that facilitate access to information. And this would include search engines.

I believe for that reason that it is necessary to involve the publisher in the link removal procedure. I think it will protect the publisher's interests that I mentioned before, and will eliminate at least a part of the cases where the search engine would or should decide about the removal of links. This would also safeguard the principle that access to legally published content should not be limited, and that it is the publisher who's responsible for the publication. Upon receiving such information, the publisher could decide to remove the information from its website. Thus, the issue of removing links from search results would be solved, because as far as I understand the technicalities of search engines, the link would disappear from the search results accordingly.

What is very important, I think, to underline is that, upon receiving information that the content of the website is questioned, the publisher could also decide to update or complement the information at the source website. It could also, as far as I understand, choose the option not to have it indexed. And I think it should be stressed that, with regard to online publications, courts seem to follow this direction, and consider that it is more appropriate to update or complement the information than simply to delete it, as if we were to cut the information out of a newspaper.

And this, I think, would also solve the problem of removing links, because if the information at the source website is updated or complemented, maybe the search engine wouldn't have to decide about removal, because the source information would be up to date and adequate. On the other hand, I think that if the publisher objects to the removal, and we could say that the objection is well-founded, in principle the search engine should refuse to remove the link. In such a case, it should of course inform the data subject about the possibility of referring the case to the data protection authorities.

And one thing that I would like to mention is that, during our internal discussion, someone suggested that maybe a fast-track procedure could be created at, let's say, the data protection authority-- but probably not only there-- for that purpose.

Of course, I'm not answering all the questions that could appear in this procedure. But I believe that this is rather the role that the search engine, as an intermediary service provider, should play. And this doesn't mean, of course, that search engines shouldn't play any role in removing links to illegal content, which in practice-- especially given Google's current market position-- very often means, and can be the only effective way, to disable access to some illegal information.

I clearly understand that. But on the other hand I think that, to keep the search engine's position in line with other legislation, such as the e-commerce directive, and not to put too much responsibility on it-- but at the same time to protect other entities from arbitrary decisions-- the final decision should be left to the authorities. And to simplify, we could probably say that Google should remove links to information that is illegal, either because it's manifestly illegal-- for example, child abuse-- or because a decision of an authority or a court was issued. And the decision might vary.

It can be a decision that no one is allowed to publish or distribute specific information, or it can be a decision like the one considered in the Costeja case, where-- and I would like to draw your attention to this fact-- the court still hasn't decided if the information should be removed, if the link should be removed from the search results. As far as I know, the Spanish court is still struggling with the decision.

So I believe that the burden of making a decision should be put either on the publisher or the legal authorities. Thank you.

DAVID DRUMMOND: Thank you very much, Ms. Piech. Do we have questions? Jimmy, why don't you go.

JIMMY WALES: Yeah, so I mainly agree with your position, and I have argued in favor of a notice and counter-notice type of system, which we've been doing for copyright removals for a long time. But one of the frequent objections that I hear to that approach is the question of what is sometimes called the Streisand effect: the idea that if you notify publishers, they're very likely to run a story about being notified, thus bringing the information to even more prominence.

And therefore people argue that this kind of system works very well for copyright removals, but where the issue is privacy, it may actually cause more trouble than not. I don't agree with that argument, but I'd love to hear what your answer to it is.

MAGDALENA PIECH: First of all, I'd still argue that even the copyright notice-and-action procedure is very complicated-- in Poland we've been struggling, I think, for more than two years to fix it. I understand that this is very difficult. But on the other hand, data protection rules are even more difficult, because it's even harder to decide whether the publication of personal data is lawful than whether the publication or distribution of copyright-protected content is lawful.

But here we also have the journalistic exception at stake. And I think that, as was mentioned before, we should all learn that we are responsible for what we publish. And that goes not only for companies; it goes for us, too. So I think that maybe the awareness that we may be notified at some point about what we published would make us think twice before we put something illegitimate, defamatory, or simply illegal on the internet. Maybe I'm naive or idealistic, but I think that--

JIMMY WALES: But I mean I think the problem is not just about illegal or defamatory content, but content that's irrelevant, out-of-date, et cetera, which strikes me as a much more problematic issue for free speech reasons. So I just-- I don't think it's about that.

MAGDALENA PIECH: I don't have a clear answer, but I think that we should learn to argue for what we've published. Let's say, in the example that I gave, that a publisher is notified. He may say: I don't want to remove this information from my website, because I believe it's fair, adequate, and in accordance with the purposes that I published it for. But I understand that it may be against your data protection rights, or that your so-called personal-- in Poland we call it "personal goods"-- or public opinion about you may be endangered.

So I agree to choose this do-not-index option, or I agree to give you the right to put your comment on the-- I think that there should be more debate on the issue than simply putting the burden of solving all these complicated internet-related issues on Google. This is tempting, but probably too simple.

DAVID DRUMMOND: Thanks. Other questions? Sylvie.

SYLVIE KAUFFMANN: Your point of departure is that the burden shouldn't be put on the search engine. Can you explain why? I mean, there's the issue of over-burdening search engines with a task that originally they were not supposed to deal with, or had not planned to deal with. But do you see other reasons why, contrary to what the court has ruled, search engines are not qualified to do this?

MAGDALENA PIECH: So just to specify, I don't think that we should completely remove this burden from Google. Because it's obvious now that, at some point, Google will face the challenge and will have to make a decision. As is the case with, for example, forum moderators-- they also face those issues, as Google will. I only think that we should limit the number of cases where Google will have to make a decision, if they can be solved, let's say, internally, between the publisher and the data subject. Of course there can be many scenarios where Google will eventually have to make a decision. And the simplest one is: what if the publisher simply does not reply? Does it mean that the data should be deleted automatically?

I think that, of course, it depends. I was really struggling to give you the answer to this one, because I thought that you would ask about it. But I think that this is, in a way, each search engine's own decision. I mean, you can always say: I'm sorry, the publisher didn't reply, I will not remove the content, go to the data protection authority. And in this way, maybe the search engine will be seen as more reliable, or the search results will be more reliable.

But on the other hand, if a search engine decides to be more-- let's say, data subject friendly-- and grants all the requests, then I think the product that it will eventually offer will not be very attractive, because people tend to think that from a search engine-- however much the results may depend on our behavior online, and so on and so forth-- we get more or less objective information. And I think that this is something that we cannot solve. Maybe, if the search engine market were to evolve, it would be up to each company to decide what services its search engine offers, and up to consumers and internet users to decide which search engine they want to use. But this is all that I can give you as an answer.

DAVID DRUMMOND: OK. Other questions?

PEGGY VALCKE: Yes, I have a question actually to both lawyers, so also to Mr. Ostrowski. What do you consider as a search on the basis of the name, as the court has called it? I would like to hear your views on that. Thank you.

IGOR OSTROWSKI: Thank you, because that just absolutely proves my point: that can only be viewed locally. The term and the concept may be different in different countries, from a cultural perspective, from a cultural context. For example, in certain countries-- especially in Western Europe, I would say-- the term "name" has a very precise definition, a precise use; probably there are a number of laws to define it. But in other territories it's a very vague term that can actually mean a number of things-- just look at the example of Ukraine. So I would say that it really does depend on the territory, and there is no way of applying this-- and many other vague terms of the judgment-- in a pan-European context, because unfortunately we are culturally differentiated. Thank you.

PEGGY VALCKE: If I may add, before you get the floor: Costeja's debt-- is that a search on the basis of the name or not, in your view?

IGOR OSTROWSKI: In-- I'm putting my Spanish shoes on. And I'm saying yes, Costeja has a name. But I'm not a Spanish lawyer, so it would be very difficult for me to determine whether that is enough to determine-- I would have to look at the local legislation to ensure that that actually fulfills the definition set under the local legislation. Unfortunately I didn't bring my Spanish shoes with me, so I can't do that.

MAGDALENA PIECH: It's a difficult question. And I would say it depends. Of course we could say that, yes, the court's ruling is limited to name and surname, and as such we can simply delete links based on a search for the name Costeja, or Magdalena Piech, or whichever you like. And I agree that this probably wouldn't cause too much harm to search engines as such, or to the search results and so on.

But on the other hand, I think that if we want the ruling to be workable, and not only go by the first line that I mentioned, then in most cases we don't look only at the name and the surname; we know some additional information. Let's say I know Jan Kowalski-- that's a very, very common name in Poland-- and I know that he is the richest man in City A, for example. And it's not Warsaw; it's a smaller community. If I look for this person, I understand that under this reasoning, this link should be removed. But if I only search for the richest person in City A, I would probably find this same website anyway.

So this is not workable. And the question is whether we want to simply follow the ruling and try to argue that we are on the safe side, or whether we really try to take into account the consequences and the broader context. And I think that this example shows that simply saying that name-and-surname searches can be deleted, and that we're fine with such a request, is not the solution. Thank you.

DAVID DRUMMOND: OK. I think we need to move to the next expert. And that is Anna Giza-Poleszczuk, who's a sociologist combining a scientific career with work in the private sector and with non-governmental organizations. She's supported the Polish transformation, working for international companies as a market research head, advisor to the board, and as a brand communication specialist. Since 2000, she's been engaged in the Advancement of Citizens organization. She wants to bring her knowledge and experience to the university, and she's working on cooperation and communication between academia, business, and society. So Ms. Giza-Poleszczuk, please proceed.

ANNA GIZA-POLESZCZUK: Thank you. The more I listen to the other experts, the less secure I feel about what I'm going to say, but let me give it a try. I would like to address two points in my presentation. The first one is a general, philosophical one, and it concerns the essence of the problem, which is the notion of privacy and the right to privacy.

And then I will go to the very concrete problem, which is the problem of Google knowing what to do and how to do it in a neutral and simple way. Because we all understand that the issue is quite complicated.

So let me start with privacy and the notion of being a private person or living in privacy. I will refer here to the monumental work of French historians under the title A History of Private Life, which runs to five thick volumes and gives really a lot of insight into what the concept of privacy means. So the first point that I would like to make is that private and public are not opposites but two sides of the same coin. Each person is at the same time private and public, depending on the realm in which she operates at a given moment.

Both private and public-- and this is the second point that I want to make-- refer to the rules or codes of conduct that guide human behavior. But the difference is that the concept of private refers to social circles and social milieus-- so to notions of morality, good taste, codes of self-conduct, and so on-- while the notion of public, being public or being in public, refers to the general law.

So in this sense, each collectivity, let me say, is composed of a variety of social circles and milieus and communities that have specific codes of conduct, codes of dress, specific rules of behavior, and so on. But as long as they do not enter the field that is regulated by the general law, it is their choice whether they wear this type of dress or another one.

What is worth saying is that private does not mean purely individual. It means social, in the sense that it refers to certain habits, modes, and the morality of a group of people. And the third point that I would like to make is that, especially looking into all the dictionaries of the French language, the word private, or privé, means in general domesticated, like [INAUDIBLE], which means a creature that has been introduced into a certain kind of culture. So I would say that historical wisdom says that we need both law and morality, that we need both public and private, because no law can guide us or shape us into being kind and generous and sympathetic and helpful to other people. No will can force us into it. Only being domesticated in a certain culture of relating to other people can make us this way.

So just to summarize: each person is at the same time a citizen who has certain rights and certain obligations, and in this respect is public; and at the same time he or she is a private person belonging to various circles like family, neighborhood, community, and is there guided by their specific codes, which are not subject to law.

And what is also important is that, historically speaking, this relation between what is public and what is private is changing, in both directions. More and more behaviors that in historical times belonged to the private sphere are now regulated by law. Let me give you a very simple example. As children gained rights, a lot of traditional habits and old behaviors started to become public issues-- like, for example, beating a child. Or, you may be surprised, stealing: in the times of the French Revolution, and in Poland even later on, parents could send their child to jail to be imprisoned just because they believed it would be good for the child. Yeah? Now it has become an issue of public, general law, and nobody can behave like that.

So more and more of our conduct, of our behaviors, are now public issues. But at the same time, a lot of behaviors that were once regulated by general law have now moved to the private sphere-- like, for example, dress code, or confession, or religion, or our opinions, and so on. So we can't say that we are more and more under the control of either the public or the private sphere; I just wanted to say that it changes. So now the key question is: what does the web change, and what do search engines change?

And I think that the essence of the problem is that in the old days, all our misbehaviors that were subject to social control-- that took place in our social circle-- were known to a very narrow group of people, and we had a lot of means to become forgiven. If I was bad to my husband, I could do a lot of things to be forgiven. And the same with my neighbors and my community and so on, as in the case of St. Paul.

The issue with the internet is not only that now everybody can know that I-- I don't know, let me be very brave-- have beaten my husband. This is not even the essence of the problem. The essence of the problem is that the internet is a constant, continuous present. So even if my husband forgave me, even if my neighbors accepted my conversion, each person who types my name into the internet gets, maybe as the first piece of information, that I have just beaten my husband. And there is no sign of forgiveness.

So in a sense, I think the essence of the problem is that we can't be-- well, we can be, you know, but I will come back to that-- namely, in the majority of cases our misbehavior stays forever, but our expiation does not, because it is only valued in this small circle of people. So whatever happens after my misbehavior, it may stay unnoticed. And this is why the information about the misbehavior jumps out readily at whoever clicks. So I think that this is the issue, and this is the essence of the problem.

So then, if this is the essence of the problem, let me now go to the specific issues concerning what to do if you are a search engine. I would first like to start with the scope of the problem. Because my grandpa always repeated to me: Anna, whoever tells you there is a problem, ask yourself how big a problem it is and for whom, and then you will know what to do next. So basically I would say that 145,000 requests from all of Europe in three months is not huge. I understand that this is huge for one company, but taking into account the level of hate on the internet, this is not really that big. But I understand that, however small it may be from that point of view, this is a huge problem for Google to cater for.

Then I checked another thing: I checked our former Prime Minister Donald Tusk, and what jumped out was 16, almost 17, million links. Yeah? Which is quite a problem, and I wonder whether anybody could manage trying to see which link is to be deleted and which one can stay, yeah? But this is Donald Tusk.

Then I clicked Jan Kowalski, mentioned by Magdalena, which is a kind of John Smith in Polish, and what jumped out was close to 8 million links. Most probably this is not only because there are many different people with the same name, but also because we have the richest citizen of Town A, which means that there is also semantic identity under different names.

So basically the question is how to really be-- how to say-- [INAUDIBLE] about the group of people that we may really think of as being affected by too long a memory, or by the constant present time of the internet. So my perspective and my assumptions are as follows.

First of all, a search engine should never evaluate either the content or the person, because a search engine is not about making judgments. That's my first point: this is not something that belongs to a search engine. In this sense, the search engine should stay stupid. Stupid and without making judgments.

Which means that the principles should be really very simple, clear, and defensible on the grounds of law, and only facts should be taken into account. So my suggestions would be as follows. First of all, as to the subject-- and I will be referring to the questions that I've got-- I would not distinguish between private and public persons, based on the fact that each of us is both private and public. Yeah. OK.

So I would rather distinguish whether the fact that is referred to is a public issue, so regulated by law, or a private issue, which means regulated maybe by morality but not by law. I would never go into any subtleties like public interest or things like that. In short, everybody can request de-indexing, but only concerning the public issues she has been linked to.

And now, the criteria. The first one is whether it is still valid, namely if the--

DAVID DRUMMOND: We're going to have to actually-- and I'm sorry to interrupt, but we are running out of time. So if you could just finalize your remarks and we'll have a couple questions. Thanks.

ANNA GIZA-POLESZCZUK: So these are three very simple rules. First, whether the content is still valid: if it is not valid-- for example, the person was insolvent but is not anymore-- then it can be de-indexed. Second, whether it's full: if the information says that the person was accused or arrested, but there is no information about her being proved innocent, then it should be de-indexed. And third, whether it's relevant as public information: the rule should be whether it concerns the public part of this person or not. For example, information about a shoemaker having cosmetic surgery is irrelevant. But if the information is about using leather from protected exotic animals, then it's relevant and should not be de-indexed.

And of course, we should do more about morality, [INAUDIBLE], and this kind of stuff. Sorry for-- I really tried to--

DAVID DRUMMOND: No, no. Understood, understood. Thank you. Thank you so much for your presentation. We're coming up on a break, but if the panel has a couple of quick questions let's do those. Peggy. Quickly.

PEGGY VALCKE: Yes, I would like to come back to what you said at the very end. You said it is a matter of whether it's a public issue or not. Do you see this as something that might evolve over time?

ANNA GIZA-POLESZCZUK: Yes, definitely-- together with, as I said, this bringing of more and more things into the public light.

DAVID DRUMMOND: One more. Sylvie.

SYLVIE KAUFFMANN: A quick question on minors. You touched briefly on the issue of children and children's rights. What would be your position on information put online by minors, or concerning minors?

ANNA GIZA-POLESZCZUK: I believe that this is more the question of education than search engine. So we should be more careful about educating our children, instead of trying to cover for them later on.

DAVID DRUMMOND: Yes, Luciano. Last question.

LUCIANO FLORIDI: Quickly. You began by saying that private and public depend on the realm in which someone operates, which I really like. I'd like to add-- something that will probably resound in the future-- that everybody will be world public for 15 minutes. Not quite the whole world, but almost. So how do we decide, when something is about someone, how public or not public that figure is? When, for example, it's about a bankruptcy or repossession that was legally published?

ANNA GIZA-POLESZCZUK: Well, I would say that it's not about being visible to everybody, being public, but it's about entering into relational, interactional behavior that is a public issue and is regulated by general law. So for example, when I walk in the park, it's fine. Yeah? But when I attack somebody with a baseball bat, then I enter the realm of public issues that are regulated by general law.

So it's again about whether the behavior touches upon some legal principles or not.

DAVID DRUMMOND: OK. Well thank you very much, Ms. Giza-Poleszczuk and the other panelists this morning. We're going to take a break. Let's try to reconvene say about 3 o'clock or shortly thereafter. So the break will be a little shorter than we expected so we can stay on time. Also now is a good time to submit some questions if you haven't already done that. Thank you.

DAVID DRUMMOND: Well good afternoon, everyone. I thought we'd reconvene and keep going on our next four cases. The second session will begin with our fifth expert, which is "Jen-day" Niklas. Where are you? There you are. [LAUGHS] There you are.

SYLVIE KAUFFMANN: Jedrzej.

DAVID DRUMMOND: Jedrzej. Sorry. Jedrzej. There, Jedrzej. Thank you. [SNAPS FINGERS] I was going so well there for a while. Jedrzej Niklas is a lawyer, activist, and member of the Panopticon Foundation team. He's a PhD candidate at the faculty of law and administration of Warsaw University and an alumnus of the Warsaw School of Economics.

As a member of the Panopticon legal team, he's responsible for advocacy around EU institutions. And his work focuses on data protection, social consequences of state surveillance, and fundamental rights in the context of poverty. So 10 minutes-- floor is yours.

JEDRZEJ NIKLAS: Thank you very much. I will start by saying that I'm very close to what Edwin Bendyk said before. And since we mentioned St. Paul, I will also note that in our culture the statement "and forgive us our sins, for we also forgive everyone who is indebted to us" is also in force. But there's a question, of course. Can we forgive without forgetting?

Nowadays, when information is so important and information can bring stigma to a particular person, I think that we cannot forgive without forgetting. And in this context, the famous Canadian author Margaret Atwood has said, "Without the memory, there is no debt. Without the memory there is no sin." So in this context-- in the context of forgetting-- it is also important that to forgive is also to forget.

But moving to the right to be forgotten case: from my perspective it is more practical and correct, in terms of the existing legal framework of course, to refer to the existing and unquestionable right to data correction and erasure than to the right to be forgotten. The whole debate on the right to be forgotten should rather be seen as one concerning the implementation and interpretation of this existing right with regard to internet intermediaries.

It should also be kept separate from the discussion on article 17 of the draft data protection regulation, which offers yet another understanding of the right to be forgotten. I'd also like to stress-- I think it's important-- that we understand that the purpose of this public debate started by Google is not to question the decision of the recent judgment that search engines should be treated as data controllers. Nevertheless, it seems important to clarify the implications of that decision, all the more so because some actors in the debate seem to suggest that as a result of this decision, search engines become legally liable for all personal data processed and published online, which is not the case.

And on the other hand, of course, it is quite obvious that search engines are free to determine the process of indexing online content, the algorithms used for selecting it in response to search queries, and the way it is presented on their websites. Therefore, they should be treated as data controllers as long as these activities involve data processing. It is also important to stress that there is a difference between deletion of the content and modification of search results.

As I understand it, and as we discussed in Panopticon, search engines are not expected to delete content or remove pages from the index. They're only required to modify search results so that personal data are no longer processed-- if and only if such processing would infringe the existing law, and only in relation to searches based on the individual's name. Moving to more specific issues and answering the question of how to avoid, for example, arbitrary decisions and what procedural safeguards should be introduced: in the first place it should be noted that nothing in the recent judgment suggests that search engines should react automatically to data subject requests, and this is important.

On the contrary, data controllers should always verify whether the conditions for exercising the data subject's right to correct or erase personal data [INAUDIBLE]. In the context of online publication, of course, the scope of the so-called journalistic exemption will always come into play, and Dorota will probably talk about it more. In practice, this means that data controllers have to verify, on a case-by-case basis of course, whether the right to free expression or other rights of other individuals may prevent the data subject from exercising his or her right to erase personal data.

This correction and erasure obligation exists in the current data protection regime, and therefore there's nothing new about it. What adds more complexity in the case of Google and other search engines, however, is the fact that as a matter of rule search engines process personal data that were made public by different entities. Taking into account the nature of this relationship, we have two situations.

The primary data controller-- this is the first situation-- receives a request to erase data from the data subject and complies with it. So we have a simple situation, and search engines should always follow this decision. The other situation is more complex. The primary controller does not receive a data erasure request, or refuses to comply with it, but the secondary controller does receive the request.

This is also the case in the Costeja case. So in this situation, the secondary controller should always carry out its own independent assessment and apply data protection law accordingly. Such an independent assessment is needed because the purposes of data processing by the secondary controller are different. Therefore, it might be the case that the same exemptions as relied on by the primary controller will not apply to the processing by the secondary controller. In the Costeja case, the original article did not breach the law. However, the search results being generated by searches on his name were out of date and prejudicial. Because only the second scenario seems to be problematic, I'll focus more on that.

First, we can in this situation refer the case to the data protection authority. Data controllers can refer their disputes with data subjects to the data protection authority and seek advice or interpretation, or even hold their own actions pending a binding decision. In any case of interpretative doubt, this route should be followed. The second is, for example, using experience from notice and takedown procedures. To be more precise, Google and other search engines could use their experience from dealing with requests received under legally binding notice and takedown regimes. Such a regime still exists, for example, in US law, which requires search engines to completely de-index links to content that infringes copyright law.

In fact, that obligation can go considerably further than what is required in accordance with the recent judgment. I will also give you some more detailed recommendations, for example about what, as we imagine it, the procedure could look like-- and this is also a matter of good practice, not only of binding law. The secondary controller should always consult the primary data controller, the publisher, in order to determine whether there is a public interest in the further processing of personal data.

If the secondary controller, on the basis of its own assessment, decides not to modify search results-- and this is very important and crucial for us-- it should give the data subject detailed guidelines on how to refer the case to the relevant data protection authority. That crucial information should be given to the data subject, as I said. If the secondary controller, the search engine, on the basis of its own assessment, decides to modify its search results concerning data that are being further processed by the primary controller, the publisher, the primary controller should be notified and given the right to respond within a given time limit.

If the primary controller, the publisher, objects to the erasure of personal data, the secondary controller can of course take the objection into account, or can still modify its search results. If the primary controller does not agree with this decision, he should go to an independent judicial body and ask it to change the decision of the secondary controller. So, as Edwin said, this dialogue or mediation should be-- we think it should be done in independent courts.
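[The multi-step procedure Niklas outlines for the secondary controller (the search engine) can be summarized as a short decision sketch. All names here -- Decision, handle_request, the parameter names -- are invented for illustration; this is one reading of his proposal, not the ruling's text or any real Google process.]

```python
# Illustrative sketch of the two-controller workflow described above.
from enum import Enum

class Decision(Enum):
    DELIST = "modify search results"          # remove links for name-based queries
    REFUSE = "refer data subject to the DPA"  # with guidance on how to appeal
    DEFER = "ask the DPA for interpretation"  # route for doubtful cases

def handle_request(primary_already_erased: bool,
                   infringes_data_protection_law: bool,
                   public_interest_after_consulting_publisher: bool,
                   interpretative_doubt: bool = False) -> Decision:
    """Secondary controller's independent, case-by-case assessment."""
    # Situation 1: the publisher already erased the data; always follow that.
    if primary_already_erased:
        return Decision.DELIST
    # Doubtful cases are referred to the data protection authority first.
    if interpretative_doubt:
        return Decision.DEFER
    # Situation 2: independent assessment. Public interest (checked after
    # consulting the publisher) can block erasure even when the original
    # publication was lawful -- the Costeja pattern: lawful article, stale results.
    if infringes_data_protection_law and not public_interest_after_consulting_publisher:
        return Decision.DELIST   # then notify the publisher, who may object in court
    return Decision.REFUSE       # and tell the data subject how to reach the DPA
```

[A Costeja-like request would be `handle_request(False, True, False)`, yielding DELIST; the same request with a public interest in further processing yields REFUSE instead.]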

There's also the fact-- this is another thought-- that whether the data subject in question is a private or a public person is certainly relevant for determining the scope of the journalistic exemption, but should not be seen as the only relevant factor. Because, for example, public persons also have the right to erase their personal data if the processing of such data does not serve public interests.

Legitimate interests of society as such-- for example national security issues or access to scientific knowledge-- should also be taken into account in determining the scope of the journalistic exemption, as should other legitimate interests of third parties. Thank you very much.

DAVID DRUMMOND: Well thank you very much, Jedrzej. So do we have any questions from the panel? You can go first this time, Luciano.

LUCIANO FLORIDI: Thank you. This is a question to the expert. The formulation is simple, but I need to apologize for the complexity of the question. Could you give us an example of something that is not data processing? Any process, anything that you do with data, that is not data processing? Or, as you may guess, is anything you do with data, data processing?

JEDRZEJ NIKLAS: Which is not processing of personal data, I suppose?

LUCIANO FLORIDI: Yeah, any data. I made it as simple as possible for you.

JEDRZEJ NIKLAS: No, I don't think so. Right now I cannot imagine such a process.

LUCIANO FLORIDI: That is exactly my fear. So if I make a backup copy of data, it's data processing. If I read some data, it's data processing. If I list this data, it's data-- whatever I do with data is data processing. Now, it's a classic problem in logic that a key that opens all doors is not a magic key. The doors are broken. OK? There's no such thing as a definition of x which covers every x. It means you got it wrong.

In other words, if data processing is anything whatsoever that you do with data, it would be equivalent to saying: if you fish, that's food processing. If you sell that fish at the market, that's food processing. If you serve that fish with a lot of garnish-- that's food processing. And you know, someone sooner or later is going to say, that's just fishing. Food processing is something else. So don't we have a problem here about understanding better what data processing is-- that's my question. Should we be able to discriminate between different kinds of activities that involve data, and at some point realize, as the Advocate General did, that a search engine is not processing data?

JEDRZEJ NIKLAS: But the court said otherwise. Now everything that we do with data, with personal data, is processing data. That's the simple answer, and of course we can try to make distinctions-- for example between search engines and processing data only for my own purposes. Because I have, for example, your information here on these sheets, and I'm also engaged in some kind of processing of personal data, but there is still a distinction. So I don't have a simple answer to your question. It can be a matter of scale, for example.

DAVID DRUMMOND: Other questions? Yes, Peggy.

PEGGY VALCKE: Yes, you mentioned that the primary data controller has to be notified. To what extent should the response of that primary data controller, the publisher, be determinative for rejecting or accepting the request? Is it-- should the search engine provider then always follow what the publisher states, or thinks, or what's your view on that?

JEDRZEJ NIKLAS: I understand. I don't think that the secondary data controller should always follow what the primary data controller says. You know, it is, as I said, a matter of case by case, and the secondary data controller should assess, case by case, what it can or cannot take into account in deciding whether to erase data. So there is no simple and automatic way to assess those kinds of situations, and I'm really against this kind of automatization, because it leads us in the wrong direction, I think.

PEGGY VALCKE: And sorry to come back to what you just said. You said "erase data," but in your presentation you were very clear that it's indeed important to distinguish deleting links on the one hand and deleting information on the other hand. And what the court requires indeed is not removing content from the internet as such, but removing certain links. And I thought it was very important that you pointed that out. Thank you.

DAVID DRUMMOND: Other questions? I actually have one follow up. You sounded like you were proposing a sort of a forum for the appeal by the primary data processor, which we don't have under the ruling as I understand it. How would you propose that that be done? Is that a legislative fix, or what do you-- How do you--

JEDRZEJ NIKLAS: Maybe legislative. I probably don't have an exact answer to that, but I strongly believe that if the primary data controller is not happy with the decision of the secondary data controller, he should go to an independent court. Of course, probably everybody will say that going to the court will take time, will take resources, will take money. But really, you know, democracy takes time, and if we would like to have democratic values within our society, we need to give time to them also. And making some automatic decision would be a great problem for, I think, society as a whole.

DAVID DRUMMOND: OK. Sylvie?

SYLVIE KAUFFMANN: It's kind of follow up question also. Do you see any role for the data protection agencies in this regard?

JEDRZEJ NIKLAS: Yeah, of course. The data protection authority, yes, of course. This is the case when the data subject, the man or woman who is not happy with the decision of the search engine-- he or she should always have the right to go to the data protection authority. And in this case the data protection authority should decide whether it is OK to erase the data, or whatever should be done with it.

SYLVIE KAUFFMANN: But in-- sorry. How do you see this role compared to the role of the judges or the courts?

JEDRZEJ NIKLAS: There is a diff-- of course, you know, at least in the Polish [INAUDIBLE] there is a procedure whereby, when you are not happy with a decision of the data protection authority, you can always go to the courts anyway. But the important thing is for the primary data controller-- for the publisher, for the newspaper: if they do not like the decision of the search engine, they should always go to the court. It's as simple as that, I think.

PEGGY VALCKE: Yes, if I may I had another question. I was wondering whether you agree with what your first colleague said, Mr. Ostrovsky, about limiting the removal to the national version of the service, or whether you think that might undermine the coherence of, well, search in Europe? Thank you.

JEDRZEJ NIKLAS: I don't agree. I think it can undermine the coherence, and really it can cause harm to a person. And that is what we are thinking about-- not such abstract things as the internet as a whole or distinct jurisdictions. We think about harms and what this kind of situation can cause in people's lives. So this is a crucial part for me.

LUCIANO FLORIDI: Actually, that follows from that question and your reply. So are you suggesting that we should go global, beyond the EU domains?

JEDRZEJ NIKLAS: Now, after this decision of the court, we are going global in some way. So maybe yes. But I don't have-- I was not thinking about this problem, so I don't have a good answer for that.

LUCIANO FLORIDI: Let me grasp this point. You said we should not go local, but when asked whether we should go global you say you're not quite sure. You said we don't go local. You don't like that.

JEDRZEJ NIKLAS: I don't like that. I don't like that, no I don't like that.

LUCIANO FLORIDI: So do we go global?

JEDRZEJ NIKLAS: Probably yes, but I don't have solution how.

LUCIANO FLORIDI:  Thank you.

DAVID DRUMMOND: OK. Well, thank you, Mr. Niklas. We'll move on to our next expert, who is Dr. Jacek Szczytko. All right, close enough. Dr. Szczytko graduated from the faculty of Physics at the University of Warsaw, and in the late '80s he was a training manager and a member of the Warsaw University project Internet For Schools-- or late '90s, I think. He's the author of one of the first internet handbooks published in Poland, completed a three-year post-doctoral position in Lausanne in Switzerland, and is now on the faculty of Physics at the University of Warsaw.

Passionate about technology, especially-- and this is very interesting-- the use of quantum mechanics and nanotechnology in everyday life. I'd love to hear more about that. And in addition to his own scientific research, he's interested in the popularization of science and the promotion of new technologies. Dr. Szczytko.

JACEK SZCZYTKO: Thank you very much. Actually, on the subject of the right to be forgotten I am in a very privileged position, because nobody can spell and Google my name, Szczytko, outside Poland. So, thank you very much for inviting me, and I'm very happy and honored, of course, to be here today and take part in this wonderful discussion. Let me first introduce my background. As you said, I am a physicist, and usually I work on data, not on important and difficult-to-define ethical and philosophical problems and legal issues.

But the notion of the right to be forgotten, yes, is very romantic, as was said by Professor del Corral in Madrid. But I need to elaborate data first, so I tried to find the data: what is the scale of the problem? How many persons have such a need to be forgotten, et cetera? And I know you already mentioned it during the previous sessions that we have about 100,000-- OK.

As I requested, Bernard Girin from Paris presented data that are easy to manage. Most of them deal with personal data, [INAUDIBLE] the right to image and identity, [INAUDIBLE] the presumption of innocence, and so on. And the majority of problems are that the search results appear on the first page, as I understood it. So the proposed solution you discussed in Madrid, Rome, and Paris, and here in Warsaw, was de-indexing. Not the right to be forgotten, but the right to be de-indexed. But what if the cure is more dangerous than the disease?

Actually, I think we should not be looking for one and the same solution for applying the right to be forgotten in real life, because I'd like to know in what sense the situation of Mario Costeja Gonzalez versus Google Spain is the universal situation-- the most common one, the typical one. Did you perform research on this subject? I'm asking because I do not believe that there is one and only one right response to the [INAUDIBLE] of justice.

In fact, you need to, and you can, react differently. So privacy cannot exist within ultimate freedom-- everybody does what he wants: total anarchy, [INAUDIBLE] pornography, fraud, violence, ultimate freedom. But it also cannot exist in perfect security, where you do only what others allow you to do-- this is totalitarianism. [INAUDIBLE]. Sorry, in Polish it is totalitarism, not totalitarianism.

Personal privacy is just in between, and one has to keep an equal distance between the anarchy of "Mad Max" and the order of Orwell's "1984." I believe that this balance is different in different countries, because historical experiences and social awareness are different. For instance, Poles should remember what anarchy means from the 200-year period of the Golden Liberty, which in the 17th century brought anarchy, corruption, and the country's division. And we-- and personally, I-- also remember the police regime of so-called Communism and so-called socialist democracy.

I mean, socialist democracy means democracy with an objective, because it was an over-regulated system with many complex and sometimes conflicting regulations, constant planning, setting up future successful results in advance, reports, paperwork, bureaucracy, and newspeak terminology-- contrary to the European Union. Therefore, with all this historical balance as my background, I would like to appeal: Google, do not become Winston Smith from Orwell's novel "1984," who was a tool for the creation of the memory hole. I quote from Wikipedia: "A memory hole is any mechanism for the alteration or disappearance of inconvenient or embarrassing documents, photographs, transcripts, or other records, such as from a website or other archive, particularly as part of an attempt to give the impression that something never happened."

You do not change source documents, but the Court of Justice of the European Union forces you to implement a regulation-- a tool-- to de-index chosen documents. And in my maybe over-sensitive opinion, this step is going too far. I'm going to explain it in three steps. First, that data are in the majority fragile and become obsolete very fast. Second, is de-indexing the best solution? And then my proposition.

OK. There is a common belief that a thing put once onto the internet will stay there forever, and that data are very robust. But in my humble opinion, this statement is false. Data are fragile and elusive. I have been working with the internet since 1995 when, as was mentioned, [INAUDIBLE] started the program Internet For Schools to connect Polish schools to the internet, using the first permanent link from the faculty of Physics. It was the first IP address to the rest of the civilization. And I have prepared a lecture on new technologies and how they work, on quantum devices. The first lecture will be on Thursday, so I invite you to the lectures.

And part of the links I used two or three years ago are obsolete. Images disappear, people change their affiliation, companies merge or go bankrupt. Nobody archives-- maybe except the National Security Agency, but we have no access to that database. Even if you made a stupid picture, within two or three years you-- it means your picture, not you-- will be forgotten. There are billions of stupid photos uploaded each day. Only a few of them last for a long time. There is a decay process in human interest in something.

Only a few movies or photos survive. If every picture survived, being a celebrity would be such a tough job. I do not [INAUDIBLE] Milagros del Corral from Madrid, a historian: there will be nothing left from the 21st century. Most of my photos I keep in digital format, on disks, CDs, and USB memories, and those media are less and less robust. The same goes for the internet.

I repeat, data are not robust, and we should not be too hysterical about data availability. You do not need to de-index data forever, yes? In most cases they will become obsolete within a certain period of time, maybe three or four years. The internet is also real life. It's not a game with save points and several lives to live. If you said something stupid, made a porno movie, and you regret it, why would you think that you have a second chance, like in a game? Because it is digital?

I agree with Gianni Riotta from Rome and [INAUDIBLE] from Paris that de-indexing changes the attitude of young persons toward taking consequences seriously. De-indexing is not a tool for fixing problems. However, I agree that some data are robust. These are the data which are archived. They belong to newspapers, broadcasters, [INAUDIBLE], internet forums, or bloggers, or Wikipedia-- which belongs to nobody. Wikipedia!

And then Google has to be fair also with editors. And it was already addressed by Vincent [INAUDIBLE] in Rome that obsolete news should be updated and publishers should be informed. [INAUDIBLE] in Physics, [INAUDIBLE]-- I won't cite all of the persons. Otherwise, your practice of blocking links can be sent to the Court of Justice of the European Union, because it violates the Charter of Fundamental Rights of the European Union, the law of freedom of expression, as was said by Professor Oreste Pollicino during his great presentation in Rome.

So de-indexing is a form of censorship. Do you remember what happened when the planet Kamino disappeared from the Jedi archives in the galactic library? Kamino was there, but the index to Kamino was hidden. Therefore, hiding the content-- which you now call de-indexing-- is going to change one's actions. If I act according to my knowledge, and this knowledge is incomplete because somebody has hidden the content, my acting is incomplete.

This kind of hiding of information can influence social life, politics, and even science. And I would like to share my own example. I'm a physicist. Many articles are digitized now. And with my university, I have access to many papers. But not everything is digitized. Some papers are in the library.

I'm in a very lucky position, because the University of Warsaw has one of the biggest libraries in Europe. But nowadays, I have to teach my students that there is another world behind the computer screen, where people really use paper to print papers. But what happens if a new institution or laboratory is established somewhere in Europe and doesn't have this background? They rediscover things which were published in the '50s or '60s, because they have no access. So there are plenty of examples where we rediscover science because we don't have access through the digital media.

Google already changes the indexation, because this is a business. And the algorithm has two important parameters: the number of clicks and the value of advertising, as I understand. So Google now is like a librarian, but maybe a little corrupted. She shows me what I need, but the first page may link to the author who paid more. And I accept it. I know this is a business model. I can use your service for free, and you provide me with information, the personalized content, hoping that I will click your adverts. It's fair.

But if you hide something from my eyes on [INAUDIBLE] you are no longer a librarian, and I'm no longer in the open library, because now the librarian acts on somebody's will and does not want me to see all the work. It's like the librarian from the Umberto Eco book, "The Name of the Rose."

So the solution-- I'm going to finish. On the one hand, you have the right to your own privacy; on the other, transparency in society is the issue. And they are reciprocal-- it means inversely proportional. The solution has to respect the right of the individual and the impact on society. As Professor del Corral said, there is a right to be forgotten, but also a right to be remembered. And [INAUDIBLE] said, deleting, removing, or de-indexing information in an appeal to the right to be forgotten runs contrary to the right of citizens to access information, and contrary to transparency.

So now the solution that says, there are the data, let's block the data, is very radical. It's like a court martial, if not duty [INAUDIBLE], duty kills him. And so your interpretation of the judgment of the Court of Justice is like raising an almost infinite barrier between me and the information-- infinite when you remove data. Such a high barrier is not transparent for society, and a barrier in physics has several parameters: height, length, time, and so on.

So, the solution. The easy case: I don't want to be on the first page of Google search, which was mentioned in Paris. Then de-index me to a random place-- you choose another place. You can do it automatically. This option can be free, and as was discussed, it's enough for the majority of users. A harder case: if you don't want those data to be seen, we can hide them for one, two, or three years. This should be enough for your algorithm to forget this page. Eventually, it will not appear on the first page. If you want more time, you can apply for several years. But maybe you have to pay for it-- and there is an interesting issue, how to collect money from 100,000 requests each year.

But the third option-- you want to be removed forever-- then we tell them: go to the court and bring us the decision. And ask the editor to put in a note that the data are obsolete. So big Google basically should follow that and act like-- also examples from the forefront of the physics world--
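[The tiered remedy Szczytko proposes -- random demotion off the first page, then a time-limited hide that expires on its own -- can be illustrated with a short sketch. Everything here (function and parameter names, the page size) is an invented illustration of his idea, not how any real search engine handles such requests.]

```python
# Sketch of demotion plus a "temporary memory hole" with an expiry time.
import random
import time

PAGE_SIZE = 10  # results per page (assumed for illustration)

def apply_remedies(results, demoted, hidden_until, now=None, rng=None):
    """Return the ranked result list with remedies applied.

    results      : ranked list of URLs
    demoted      : set of URLs that must not appear on the first page
    hidden_until : {url: unix_time} -- URL hidden until the timestamp passes
    """
    now = time.time() if now is None else now
    rng = rng or random.Random()
    # Time-limited hide: after the deadline passes, the link simply reappears.
    visible = [u for u in results if hidden_until.get(u, 0) <= now]
    kept = [u for u in visible if u not in demoted]
    out = kept[:]
    for url in (u for u in visible if u in demoted):
        if len(out) >= PAGE_SIZE:
            # Demotion: reinsert at a random position beyond the first page.
            pos = rng.randrange(PAGE_SIZE, len(out) + 1)
        else:
            pos = len(out)  # too few results to have a second page: goes last
        out.insert(pos, url)
    return out
```

[Under this sketch a demoted link stays accessible to society, just not on the first page, while a hidden link returns once its period runs out -- a temporary rather than permanent memory hole.]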

DAVID DRUMMOND: [INAUDIBLE]

JACEK SZCZYTKO: --there are several-- OK, no examples.

DAVID DRUMMOND: You sure now? Maybe examples, but a closing statement would be great.

JACEK SZCZYTKO: Oh, the closing statement? One last thing: between freedom and security, do not create a mechanism for the implementation of a memory hole-- at least not a permanent memory hole. Maybe a temporary memory hole should be enough for society. Thank you very much--

DAVID DRUMMOND: Great.

JACEK SZCZYTKO: --for your attention. Thank you for the time.

DAVID DRUMMOND: Thank you very much. We have questions? Peggy.

PEGGY VALCKE: Thank you. You raised quite some provocative ideas, I think. But I have the impression that you see it quite black and white. So it's like removing all links, which will undermine the memory of society. Is there no in-between? Perhaps, with your technical experience, do you see a technical solution in that, perhaps, links are not removed but lowered? So they're not appearing on the first page anymore, which indeed is often where the problem lies. We Google a person's name because he will become a new neighbor or a new colleague and we want to know, and we just check the first page.

Do you have any thoughts on that? On intermediary solutions? Thank you.

JACEK SZCZYTKO: That's right. Actually, my first solution was that there is a form, and any user can apply for a certain link not to be put on the first page. And then Google will choose randomly on which page it will appear. So everybody's happy-- the person, because it's not on the first page, and society, because the data are accessible.

MALE SPEAKER 1: What about if it's legal information? What about the publisher? Won't the publisher be unhappy?

JACEK SZCZYTKO: The publisher probably will be unhappy. But then you have-- there are not so many-- OK, no, that's not true. I just wanted to say that there are not so many publishers, but there are a lot of publishers. Sorry.

But then, if you introduce your mechanism and you explain how it works, then you can easily inform the user that his link was moved somewhere, and inform the publisher that somebody asked for his link to be hidden. And if the information is obsolete, the publisher won't be against it, I believe.

DAVID DRUMMOND: Other questions from the panel? Once, twice--

LUCIANO FLORIDI: [INAUDIBLE]

DAVID DRUMMOND: Sorry?

LUCIANO FLORIDI: [INAUDIBLE]

DAVID DRUMMOND: Yes, please.

LUCIANO FLORIDI: Towards the end-- thank you, first of all-- manners. I beg your pardon. Thank you.

Towards the end, you seem to have made a recommendation about monetizing the process, as in making people pay for the removal of those links. Did I get that right? Was that just a way of provoking the debate? Or were you seriously suggesting that-- I mean, the panel will take your advice, and we'll discuss it. So are you really suggesting that we should charge? And also, if I got it right, charge in terms of how many years you want to have that removed, as a service?

JACEK SZCZYTKO: That's a very good idea. I agree. You can. Yes, you can.

From my experience-- let's say first that I propose not to create an infinite barrier, yes? So not de-indexing forever, because in a sense that is an infinite barrier to accessing the information. So you can now discuss what an acceptable barrier to accessing the information would be.

One option is that, for instance, you delay access to the information with time. Another is that you make a mess in the Google search results. And then, of course, you can propose: if you want to be delayed more-- but let's say the first year is for free-- then you can ask the editor to face the problem with a judgment. But then, for the next years, you have to pay. You have-- I don't know-- more clicks on your name because you're a politician, you pay $1 for each click you want removed, and so on and so forth. So the business is great.

But of course, I don't know, because it's like blackmail.net, yes? Then you can imagine that it would be good business for Google to put all the stupid photos of Jacek Szczytko first, in order to make him pay not to show them to his students.

But nevertheless, no. This is an example. Of course, it is for debate-- being serious here, it's for debate whether it should be paid or not, whether it's fair or not. But there is another barrier, yes? And I suppose we can invent many possible barriers with our group. We can decide what is doable, what can be done automatically, because it's not very difficult to apply. And this is like a sieve, yes? You sieve the hardest problems, which require a lot of human power, lawyers and so on, to the bottom, yes? So this is the proposition.

JIMMY WALES: I actually have just more of a comment in relation to this, rather than a question for you. Of all that I've heard just now, I think there's one really valuable kernel of an idea here, which is that for at least some of the types of requests that Google is getting-- particularly if the question is whether it's excessive relative to the purpose-- it actually is an interesting thought to say: rather than de-indexing completely, does it just need to be moved down? In other words, if the link is the number two link for a person, that might be quite astonishing if it's an obscure detail of their life. But if it's on the third page of results, it's more in proportion to their overall biography. So I just--

DAVID DRUMMOND: It's a quality issue.

JIMMY WALES: --think it's an interesting-- it's a search quality issue.

DAVID DRUMMOND: If it's no longer a quality result, it should get moved down, as an editor would do. That's interesting.

SYLVIE KAUFMANN: But then you have the problem of how-- you mentioned that Google could do it randomly, yeah? You said it would choose on which page of the search results it appears. But this is a problem. I mean, to--

JACEK SZCZYTKO: No, OK, if you--

SYLVIE KAUFMANN: How would you organize this?

JACEK SZCZYTKO: If your name is very rare, then it's a problem, because your name will be on the first page anyway. But if there are several pages, then you cannot state that this information will always be on the fifth or sixth page, because then everybody knows they have to check the first and the sixth page, yes? So you have to randomize it. And that's all-- this idea is simple and doable.
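The random-placement scheme Szczytko describes can be sketched in a few lines of Python. This is purely an illustration of his proposal, not anything Google implements; the function name, parameters, and paging scheme are invented for the sketch. The demoted link stays in the result list (so the data remain accessible) but is moved to a randomly chosen page after the first, so nobody knows which later page to check.

```python
import random

def place_demoted(results, demoted_url, page_size=10, seed=None):
    """Toy sketch: keep a demoted link indexed, but move it off the
    first page to a randomly chosen later page, so it stays findable
    without being prominent and its position is not predictable."""
    rng = random.Random(seed)
    # Remove the demoted link from its original position.
    rest = [r for r in results if r != demoted_url]
    # Number of pages the reordered list will span (at least 2, so
    # there is always a page other than the first to land on).
    n_pages = max(2, (len(rest) + page_size - 1) // page_size + 1)
    # Pick any page except page 0 (the first page).
    target_page = rng.randrange(1, n_pages)
    insert_at = min(len(rest), target_page * page_size)
    return rest[:insert_at] + [demoted_url] + rest[insert_at:]
```

With a different seed per query, repeated searches would place the link on different pages, which is the unpredictability the proposal relies on.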

DAVID DRUMMOND: Great. Any other questions from the panel? Well, thank you, Mr. Szczytko, for a very provocative presentation. Some very interesting things there. And maybe we'll have some more conversation with the audience. Let's move to our next expert, Dorota Glowacka. She's a lawyer at the Helsinki Foundation for Human Rights. She deals with freedom of expression and the right to privacy, among other things. She's a coordinator of the Helsinki Foundation's Observatory of Media Freedom in Poland project, which focuses on increasing protection related to freedom of expression in traditional media and on the internet. She's also a Ph.D. candidate in international law at the University of Lodz. So you have your ten minutes. Thank you.

DOROTA GLOWACKA: Thank you very much. Good afternoon, ladies and gentlemen. Thank you for providing me with this opportunity to present before the council. As Mr. Drummond mentioned, I represent the Helsinki Foundation for Human Rights, which is a Polish human rights NGO. And I work for a project that deals with freedom of expression.

So one of the main aims of our mission is to promote the standards developed by the European Court of Human Rights with regard to Article 10, freedom of expression, but also Article 8, the right to privacy. And we would like to embed those standards in the national ecosystem and our domestic judicial practice. So that's the perspective from which I'll talk to you today.

I realize that implementing the Court of Justice's decision is a very challenging task. It's very hard to do it in a balanced way. Still, I believe that the decision leaves some space for the search engine to strike this balance. I understand that right now we are actually not dealing with the question of whether the search engine should or should not assess requests addressed by users. We are right now actually trying to figure out how to do it in practice, because the court has already obliged Google to make this initial assessment anyway.

So I believe that the ECHR jurisprudence may be helpful with that. It might provide some hints on how to design this process. The ECHR jurisprudence is, I think, very relevant, at least with regard to some parts of this process, as many questions that you identified as crucial for the proper implementation of the CJEU decision have already been addressed by the ECHR in the past.

The big advantage of the ECHR jurisprudence is also that it provides a kind of common, uniform standard for all the EU member states. Not only is it binding for all the EU member states, but it also has primacy over national legal systems. That's why I think it's a good idea to look at those standards and think about which of them can actually be applied with regard to the right to be forgotten.

According to the Court of Justice, one of the crucial issues is the status of the requestor, namely whether he or she is a public figure or a private individual. This question is also central to the ECHR jurisprudence when it comes to resolving conflicts between Articles 10 and 8. The general rule developed by the ECHR is that public figures have to accept wider limits of criticism. They have to be more open to critique, and they also have to accept deeper interference with their private lives.

So who is a public figure according to the ECHR? I think that might be an interesting question to tackle. The ECHR doesn't make a very clear distinction between private figures and public figures. It often refers to the definition of a public figure which was developed by the Parliamentary Assembly of the Council of Europe in one of its resolutions. Let me quote it. "Public figures are persons holding public office and/or using public resources, and, more broadly speaking, all those who play a role in public life, whether in politics, the economy, the arts, the social sphere, sports, or in any other domain."

So obviously, this definition doesn't make a bright-line distinction. But it suggests, for sure, that the definition of public figure should not be limited to public officials in the constitutional or political sense, but should also extend to basically anyone who may be the subject of a legitimate interest of public opinion. Obviously, not every public figure is entitled to the same level of protection, according to the ECHR.

When you look at its jurisprudence, you can see that the ECHR actually creates a kind of hierarchy with different categories of public figures. These categories are, first of all, democratically elected politicians and candidates-- these are the most public of the public figures. Then there are other people in public positions, meaning other civil servants, policemen, and academics as well. The third category is people who play an important role in different aspects of public life, so artists and celebrities, but also advocates, journalists, and sportsmen.

And the fourth category is people whose conduct attracts the legitimate attention of public opinion. With regard to that category, the ECHR often refers to them as people who knowingly enter the public arena. This last category is the vaguest one, because it's not based on the professional or social status of the person but on his or her conduct. So that basically means that practically everyone can get engaged in a situation which will attract public opinion's interest and, therefore, become a public figure, even involuntarily.

So obviously, the general principle is: the higher the category, the lower the expectation of privacy. It's also tricky, because even people who belong to the highest category-- democratically elected politicians-- are not deprived of the right to privacy. There are situations in which these people's right to privacy can actually be invoked. That is why the ECHR came up with another set of criteria, which should be used in specific cases in order to strike the balance between freedom of expression and the right to privacy.

And these criteria are, first of all, whether there is a contribution to a debate on a matter of public interest. The second criterion that has to be taken into account is the sphere of privacy that was targeted-- and according to the ECHR, the most sensitive spheres are intimate and sexual life, medical data, and also a criminal past. The third criterion is the prior conduct of the person concerned. The fourth is the level of intrusiveness in obtaining the information-- for example, whether it was obtained through covert surveillance and whether that was legitimate. Then there is the veracity of the information and, lastly, the form of the information.

So sometimes it may be legitimate to spread, or to publish, certain information-- for example, regarding the intimate life of a politician-- but not legitimate to add, for example, photos to that information. In general, the ECHR also considers photos to be more intrusive than written information.

Why might all those criteria be important for search engines? Perhaps, when assessing a concrete request, the search engine could first consider which category the person in question falls into, and then also try to go through the other criteria to assess the request-- that's not easy, I'm aware of that. But if we are now trying to create transparent criteria that people may refer to, and that the search engine also refers to when, for example, it rejects a request, then I think these are the criteria that should be taken into consideration.

As a practical example from the ECHR jurisprudence, just to see how all these criteria work with each other, I'm going to tackle a subject that is probably very relevant to the discussion about the right to be forgotten, that is, disclosing information about spent convictions. There was a British case, M.M. versus the UK, in which the court found that the disclosure by a state institution to an employer of information about a person's spent conviction was a violation of Article 8 of the ECHR. Whereas in the case Schwabe versus Austria, the court found that disclosing information about the spent conviction of a politician was legitimate and did not interfere with his right to privacy. So that's my first point. I was going to--

DAVID DRUMMOND: Uh-oh.

DOROTA GLOWACKA: I was going to talk a little bit more about some other implications that come from the ECHR jurisprudence, with regard to its recent case law concerning the protection of internet archives. But I've just run out of time.

DAVID DRUMMOND: OK, well thank you for that.

DOROTA GLOWACKA: Thank you very much.

DAVID DRUMMOND: Perhaps we'll be able to consider that separately. So please send us, if you can, some other comments about that, because I think we'd be keenly interested in that. So questions from the panel? Sylvie.

SYLVIE KAUFMANN: You spoke about the criteria, but not, if I'm not mistaken, about who would judge or decide on those criteria. So, in your opinion, who would be best qualified to do this?

DOROTA GLOWACKA: I believe it was already decided by the court that it's the search engine that has to make the initial assessment. So I was trying to bring up those criteria so that perhaps search engine operators could take them into account when they create their code of conduct, because I guess that this is the most urgent challenge at the moment. If I can add something to that: one challenge is creating transparent criteria. But I also believe that another challenge is creating independent oversight of the activity of the search engine. And that is an extremely difficult question-- I've just provoked a question from you about how that should be done.

DAVID DRUMMOND: Indeed.

DOROTA GLOWACKA: Which is very difficult, but my opinion is that, first of all, both parties should have the right to appeal the decision of the search engine. So both the publisher and the data subject should have access to an effective remedy and should be able to question the decision of the search engine. This also implies, I believe, that publishers should be informed whenever a particular link to their content is removed.

I'm not really quite sure yet how this independent oversight should look. I know that the Costeja case was all about data protection, and it's based on the personal data protection regime. So the natural body of appeal is obviously the data protection authority. But I'm not quite sure that it's the best body to actually assess and resolve the conflict between the right to privacy and freedom of expression.

Also because, as Michael mentioned before, in Poland you can appeal the decision of the data protection authority to the court, but it's an administrative court. And the administrative courts are not really used to resolving that kind of conflict. In the Polish legal system, it is the civil courts that deal with the protection of personal goods-- that's how we call it, anyway. So they actually have the best expertise to deal with those kinds of questions.

So I guess that the procedure is something that is challenging. But fortunately, it's not so much your problem who's going to do the oversight of your decisions. I think that your task at the moment is to create the criteria that would make people understand why you remove certain links and why you do not remove others.

DAVID DRUMMOND: Thank you. Lidia?

LIDIA KOLUCKA-ZUK: Thank you very much. Actually, you just already answered my question. But first of all, thank you very much, and I really would like to read your paper-- if you could just send it to me, that would be great. But if you could comment-- I really would like to hear your comments on Igor's statement, local versus global. If you could somehow interpret your paper in the light of Igor's speech, that would be very useful. I would like to hear your comments on that.

DOROTA GLOWACKA: I'm going to say something that will not be very popular. But to me, restricting this procedure only to local versions kind of makes the whole protection [INAUDIBLE], because it's so easy to get around it. So I would be up for the global version.

LIDIA KOLUCKA-ZUK: I expected that would be the answer. But you know, again, we are back to the question of how to do it. But that's probably for a different discussion.

PEGGY VALCKE: May I immediately add to that? Do you consider the protection illusory today? Or do you think we should wait to see some results? So let's say, in the first phase, you limit removal to local versions, and if you see that the reaction is that people turn to other versions, then you can consider broadening the removal. Because I remember David said-- was it in Paris?-- that Google's experience with taking down content, for instance in the context of Nazi memorabilia, is that people do not go to other versions of the Google service but stick to, in this case, the German variant.

DAVID DRUMMOND: Yeah, defaults are very powerful. So typically people do stay on the local, default domain. But again, that's what our history tells us, at least.

PEGGY VALCKE: So the concrete question is: would it be OK to wait for empirical evidence that people turn to nonlocal versions, or do you consider that that would, from the start, render the protection illusory? Thank you.

DOROTA GLOWACKA: Maybe I shouldn't judge by myself. But after the CJEU decision, if I really wanted to verify something, or check somebody on the internet now, I would immediately use both versions. The default effect that you mentioned is probably working right now because people are used to the two versions showing the same kind of results. But the more we talk about the judgment, the more we are aware of its implications, and the more it becomes a subject of public debate, I believe people may change their behavior.

Although I also believe in empirical facts. And I believe that your experience so far can also be very useful in order to create certain criteria or to categorize cases. So you should definitely refer to your experience. But I'm a little bit worried that the more publicity the judgment gets, the more people may just turn to the global version, because they will be aware that it might actually show different results.

PEGGY VALCKE: And perhaps turning back to the internet archives. You're undoubtedly familiar with the Polish case in that context. Do you consider removing links to certain pages on the internet the less intrusive measure that the court referred to in that Polish case, compared to removing certain data from the online archives? So if you had to choose between removing links and removing certain personal data from the internet archives themselves, what would you choose?

DOROTA GLOWACKA: I think these two measures are kind of independent, if we look at the Polish case, [? Wajnowski ?] and [? Smoltevski ?] versus Poland.

PEGGY VALCKE: I didn't write about that.

DOROTA GLOWACKA: But also Times Newspapers, which is a pretty similar judgment that was passed before [? Wajnowski ?] and [? Smoltevski ?]. Then you can see that the court found that adding an explanatory note to a defamatory article is sufficient to protect the privacy of the defamed person. So deleting the article, or blocking it, would be too much, but adding an explanatory note is sufficient. So if the ECHR said that this is sufficient in terms of, well, the availability of the article, I think it's also sufficient in terms of its accessibility.

So if, for example, there is a request that concerns an article that was already rectified, according to what the ECHR said-- so you see that the link refers to content which already carries this explanatory note, for example-- in that situation, I don't think it's reasonable to block the link, to remove the link. Because, as the court said, adding an explanatory note is enough to protect the defamed person's privacy.

PEGGY VALCKE: And would you go as far as saying that, if newspapers had done their job better, we might not have seen this court ruling? That's probably very provocative towards the print media, the press-- and I'm a very big fan of the print media, so I don't want to be seen as the enemy. But it's an intriguing question, I think: if La Vanguardia had added such a note, then maybe, I don't know, what would have been the outcome of the court ruling? Do you have any ideas on that?

DOROTA GLOWACKA: I'm very much for good quality media. But I think that in the context of this case, this is not really relevant, because that was not journalistic content. It was a kind of public advert published at the request of a state institution. So I wouldn't consider that journalistic content. And I think this distinction is actually very, very important. Because the aim of journalistic content, at least by presumption, is that the press, as a public watchdog, publishes something because it's a matter of public interest.

So the aim of this information is to fulfill people's right to know. Whereas here we deal with that kind of public advertisement published on the La Vanguardia website-- or, for example, arrest warrants. That's another thing that's published at the request of the police; we have pretty interesting Polish case law on publishing arrest warrants in online newspapers. So whenever it's that kind of content, it has, in my opinion, a very, very specific aim. As I understood it, the aim of the advert in the Costeja case was just to inform people who would be interested in participating in the auction proceedings.

Or, in the case of arrest warrants, the aim is to help apprehend the suspect. So whenever this aim is fulfilled, whenever the purpose is fulfilled, I believe it's both reasonable to take down the content from the website and, independently, regardless of that, also reasonable to remove the link to it. And if I can make one more comment: I said before that, if the article was rectified, there is no need to remove the link. The only exception to that rule that I would suggest is when the content has already spread widely-- when in practice it's difficult to rectify it because it's all over the internet.

So in this case, even if it was rectified in its original source, I would nevertheless consider removing links to the content when it meets the criteria of [INAUDIBLE].

DAVID DRUMMOND: OK, do we have any other questions? Thanks very much for that presentation-- a good Q&A session as well. So we have one more expert to testify, Mr. Krzysztof Izdebski. That was the one I was told would be the easiest, and I still mangled it. My apologies. So Krzysztof is a lawyer and an expert of the Citizens Network Watchdog Poland.

He specializes in access to public information, administrative law, and legal aspects of measures countering corruption. He advises citizens on cases relating to access to public information, and he represents citizens in those proceedings before the courts. He has authored a number of publications on the legal aspects of social control and access to public information and is a member of the Civic Legislative Forum. So you have the floor for your 10 minutes. Thanks.

KRZYSZTOF IZDEBSKI: Good afternoon, counselors, and ladies and gentlemen of the audience. From this short introduction, you can imagine that I will defend freedom of information and access to public information, which I will do. But after listening to some of the debates that we had today and in the earlier session, I can see that the outcome of the court's ruling can also cause harm to people's privacy.

Because, for example, when someone wants to remove a link, sending a request to remove the link, we say that, OK, it should be decided case by case. OK, but how can we say, without profiling the person, what the context is? I mean, I will use a very silly example. Let's say that Barack Obama contacts Google and says, hey, please delete the link to the information that I'm corrupt.

OK, we have to check, I mean, whether he's a public figure or not. We have to check whether he was convicted, whether he was sentenced, whether he was accused of that. So it's the power that Google actually has now-- or it's actually not Google; it's John Smith, it's Jose Gonzales, whoever is dealing with the request of the person. A big responsibility, facing also the possible harm to people's privacy.

But coming back to freedom of information. Freedom of information, as I understand it, or from my perspective, is connected with the right to know. The right to know, meaning not just general knowledge, but the right to know about public officials and public institutions in general. Mr. Drummond said in the introduction that there are some-- actually, my clock doesn't work, so you have to rely on my feeling of the time. But I still believe I have 10 minutes.

Anyway, you said at the beginning, in the introduction, that there are some cases which are very obvious-- meaning, for example, a politician who wants to erase a link to a political scandal. Maybe that's easy. But you said there are a lot of gray areas. And I want to talk about these gray areas, and maybe I have some recommendations for you. Although we are all in such a privileged position that, in the situation we are facing after the court's ruling, there are no good and bad answers, I suppose. So I will try to make my recommendations on that.

So, as I said, for that reason I will focus on freedom of information, on the right to know, and on the relation, or rather the balance, between the right to know and the right to be forgotten. I know it could sound boring, especially after all the sessions that you've had-- I have some boring legal stuff. But I also have a lot of examples that I want to share with you to illustrate it.

First, I will focus on what I find a very wide but wise definition of public information, which we can find in the Polish constitution. It states that a citizen shall have the right to obtain information on the activities of [INAUDIBLE] of public authority, as well as persons discharging public functions. Such right shall also include receipt of information on the activities of self-governing economic or professional [INAUDIBLE] and other persons or organizational units, relating to the field in which they perform the duties of public authorities and manage communal assets or property of the state treasury.

As you can see, there is potentially a lot of data there, including personal data. With the development of new technologies, this information is made publicly available not only because of search engines like Google but, first of all, thanks to the government, which has special websites for publishing some of this information. I want to briefly focus on that according to the Polish legal system-- though this exists not only in Poland-- there are special official sites where some public information is presented. And there we also have some personal data: not only of public officials, in the sense the European Court of Human Rights or the Polish Constitutional Tribunal uses, but also of some people you could name who are third parties.

Or natural persons who, at some moment in time, had some contact with the government or with public funds. Like, for example, the declarations of assets that are submitted by politicians-- and for a large group of them, it's transparent. You don't only have the names of the politicians and information about them, but also, for example, the names of the third parties that lent money to the politicians. OK, so that's a gray area I will come back to.

But on the other hand-- and this is quite important to think about from the perspective of freedom of information and transparency-- in Warsaw, for example, thanks to a public register of contracts that includes the name of the person, the subject of the contract, and its value, it was possible to spot a case of nepotism and misuse of public funds. That can be the case for other towns as well. At the very least, the awareness that any financial connection between the public authorities and a private person will be publicly available is also a very significant thing-- a strong preventive factor in the battle against corruption.

It would not have been possible without putting those data online. So again, a lot of data, not necessarily about people who hold public posts. OK, some people can say, OK, nobody wants to erase the data from the internet; this is only about the right not to be Googled at some point. And I find this statement, or this idea, quite convincing. But we also have to remember that there is a certain responsibility on a company like Google, or any other search engine.

Because from time to time-- I can imagine, and it's not just my imagination, I mean, these are real cases-- for some reason, one of the public officials wants to erase news or information from the official website. Let's say the declaration of assets, or some other thing that he's not willing to share with the people. And if not for Google, people would not have found it before, so the information wouldn't have spread on the internet; it wouldn't be secured on other internet sites. So this is also a responsibility, in terms of freedom of information, that you actually have: sometimes to protect the freedom of information from the public authorities. So it's a crucial thing.

It's a crucial thing to act on, so that the people have the right to know. And it is Google, actually, at this point-- having 97% of the market in Poland, from what I know-- which also has to think about it. And coming back to the specific questions that you asked: do public figures ever enjoy a right to be forgotten, and if so, when? Is there a time at which information becomes irrelevant to any public interest? What is the best way to define the boundaries between public and private figures?

I think that you should actually not deliberate on that very much. I mean, this is information which is always important. I want to know who was a politician 20 years ago and, for example, what kind of assets he had at the time he was performing public tasks. Let's take another example. There is a candidate for the Green Party, and 20 years ago he was a candidate for a far-right Nazi party. He contacts you with a request to take this down. Why should we do that? I mean, this is also a question about the public responsibility, again, that you're sharing.

So, to finish, I just want to say that we shouldn't always retreat to the debate about balancing. Because balancing is a really important concept, and it can help. But when, again, there is a John Smith or a Jose Gonzales or anyone sitting at Google and trying to do the balancing, I mean, it's not very helpful. It's not A, B, C, right? You still have to have a debate similar to the one we're having.

So from the perspective of content which is in connection with, say, the authorities, the power, the government, self-government, I would suggest that any content, or rather any personal data, that is connected with public funds or public money and with performing some sort of public task, whether in the future, in the past, or at present, should not be deleted, or rather the link should not be deleted. Thank you very much.

DAVID DRUMMOND: Thank you very much. Questions from the panel? Sylvie, go ahead.

SYLVIE KAUFMANN: This is a more general question that I would like to ask maybe all members of this panel, since we are in Poland, and I think one of you earlier on alluded to this aspect of the debate. This is a country which has struggled recently with its past, I mean, with the memory of its recent past. And I wonder, I mean, there have been big controversies here about what use should be made, and what availability there should be, of the Communist past and of the actions of individuals in that respect, people who were maybe private and now are public, or the opposite.

And I would like to know, since we haven't had this experience, at least recently, in the other capitals where we were: how much has this weighed on your framing, your personal framing, of this debate, or on your reflection, whether you are a lawyer or a sociologist? Has this been something that has weighed on your judgement or on your thinking?

JACEK SZCZYTKO: OK, I can answer as a physicist. Yes, it does have an influence, because, I think, here we appreciate history, just to remember things. And then we debate, we dispute. But nevertheless, we think that the past is important. So this is my statement, OK?

IGOR OSTROWSKI: May I also comment. I think freedom of expression and freedom of information is a very important value in Polish society and, I would say, throughout central and eastern Europe. We see that also, as Mr. Izdebski mentioned, in our constitutional right to information and access to public information, where not only NGOs, not only watchdog institutions, but also individuals very often apply to governmental bodies for information.

We don't see that yet. We're not at a stage where that information can in any way be reused or, let's say, utilized for economic gains. So my personal opinion is, when we look at access to information, we think of transparency. We think of equality. We do not think of economic gains or a specific, let's say, struggle for priority of information. So maybe for that reason we listen to the European Court of Human Rights' emphasis on the issue of balance. And that, for me, is a very important issue.

ANNA GIZA-POLESZCZUK: I would like to add that the problem of the boundaries is not only between private and public, because, as you mentioned, this is a question of, how to say it, percentage. Every one of us is public from time to time. But there is also a question we are very sensitive to in Poland, about what information is. Because, according to the scientific definition, information is something that helps us to make a decision.

So if there is the fact that somebody got drunk during an official meeting, this is not information, because I do not need it to make my decision about important issues. So basically, in all of these discussions, we get lost all the time in the definition of what, indeed, we have in mind when saying private or right to information.

KRZYSZTOF IZDEBSKI: Yeah, just very briefly, I think it's also a question about the quality and credibility of data, basically. Because the Polish perspective is essentially based on the European perspective at some point. For example, the information on persons who cooperated with the special services in Communist times: sometimes it's part of the political fight, and sometimes it's information that is important for the people to decide.

So I think this is something that, I think, Mr. Bendyk said before, on this trust, that the more information [INAUDIBLE] we have. I think that, generally, we are hungry for information in Poland, because still, no single piece of information is credible for us. So that's why we need a lot of information, and that's why we need a lot of links, maybe. That's it.

PEGGY VALCKE: I have a general question to anyone, to all the experts, actually. I heard so many interesting things today, but it still leaves me with a big, big question mark. On the one hand, we heard several times that, in order to render the protection of privacy rights and reputation effective, as the court now requires from search engine providers, you have to think Europe-wide or even globally. I know not everybody agrees, but that's what we heard a couple of times.

At the same time, I also pick up that local sensitivities should be taken into account. How do you reconcile those two? I would love to hear you say that the European Court of Human Rights has developed Europe-wide standards. It's true, but mainly with regard to political speech, I would say, and not so much when it comes to hurting someone's religious feelings, or the protection of morals, or calling somebody names because he or she is bisexual.

So then the courts push the hot potato back to the member states: a margin of discretion. So, to what extent can you take into account local sensitivities but, at the same time, think Europe-wide, or even globally, when it comes to removing links from all versions of your search service? If anyone has thoughts on that, I would love to hear them. Thank you.

KRZYSZTOF IZDEBSKI: I think it's a question of the heart and the head, basically. Because sensitivity is something that we can feel, but it's not always important, especially, again, for the Google office that is responsible for deleting the links; it's maybe not as interesting at some points. But the head says that, in the global village we live in, if you really want to take action and fulfill the ruling of the court, or rather try to fulfill the philosophy that is behind the ruling of the court, you should do it globally, of course.

MAGDALENA PIECH: My first answer would be that Google, to comply with EU standards, should act at least at the EU level, maybe not at each member state level. But on the other hand, I think that what Google should do is make the right to erase a link, in this specific case, effective. And if you can prove that the data subject's rights will be respected and secured, let's say, by limiting the erasure of the link only to a national level, then maybe it should be considered.

Because, as was said before, maybe some kinds of requests, if they're blocked at the national level, will not go further. So that's a question of who's searching. I mean, if I'm very determined to find the answer, I will go to google.com, or I will even go to the 20th or 107th website. I mean, to consider it, we could ask the data subject who requests such erasure: what would you be satisfied with? I mean, do you think that just putting the link on the 10th website is enough? Because if your neighbor uses Google, sure, she will probably limit herself to the second website. I think that maybe we should also focus on the expectations of data subjects.

JACEK SZCZYTKO: Maybe provocative again, because I don't understand: why do you care about it? If you introduce something global in the European Union, then we have to survive with it; we can wake up the next day and have to have a different law. And I suppose that the new generation will just accept that, OK, Facebook works like that, let's accept Facebook; Google works like that, OK, let's let Google do its job. So I think the society is quite flexible, so don't worry about the feelings. You can just be the [INAUDIBLE], and say that these are the rules, we developed them from the highest court, [INAUDIBLE] judge, and then go on. You have to survive in this environment. I don't know if it's an important addition.

DAVID DRUMMOND: Great, so thank you. It looks like we do have some time for questions. I have been sent the questions from the audience, so I will go ahead and get going with them. The questions typically have names next to them, so I'll read out who's asking the question. Some of them have been asked anonymously, and they all have at least a preference for an individual, be it on the council or in the expert panel, as the person to answer the question.

So, the first question is from Marin Olander, and it's to me, so I guess I will start. And the question is: this discussion seems to focus on ways to implement the judgment. Has Google given up on the underlying question of being a controller of personal data? The Court of Justice judgments are not as final as US Supreme Court ones. That's somewhat news to me. My understanding is that it is pretty final. We don't really have an ability to appeal. So we respect the court's authority, and we are assuming, for these purposes, that we are a controller, at least for search, and we're implementing the decision accordingly. So that's question one.

Let's see, the next question is from Lukasz Kozlowski, and this is directed to either Mr. Ostrowski or to me. So let's see who answers better. First: how important are decisions in Europe for Google's board? I'm not sure how you would know that, but in any event, you may have a very interesting opinion. And secondarily: how will this ruling impact Google's global business? You might have a view on that.

So, why don't I take this to start. Look, decisions in Europe are critically important to us. Europe, of course, is a major, major market for us; you could argue, depending on how you slice things, our biggest market if you consider all of Europe. So whether it's privacy, or competition law, or whatever the regulatory topic or user preference issue is, it's really very, very important to us, and we take it extremely seriously. And this tour that we're doing is a reflection of how seriously we take it. We're spending a lot of time on this with the team. Our chairman has come out for some of the meetings, and will come out for more. So this is a very important thing for us.

In terms of the impact on the business, this is not an economic issue for us. There was some speculation about a very interesting business proposal over here, but we don't look at this as an economic issue. The cost of it is not likely to be material to us. Although there are people who are thinking: well, if you have a broad regulation like this, or a legal rule like this, that could impact smaller companies, say startup companies, for instance, and maybe you start getting concerned about the cost of compliance and so forth. But for us this is really an issue of trying to do this in the right way, and getting some guidance in order to comply with what the court said. Care to comment on our board's view of this, Mr. Ostrowski?

IGOR OSTROWSKI: No, but I would like to add a separate point. And that is: how is this going to affect business models going forward, outside of Google? I think it actually may have some influence, especially if we look at new business models for OTTs, Over The Top providers. I might well see that there will be, for example, US providers who will simply say: I do need marketing, I do need sales, why don't we just outsource that? Or: I'm just going to enter into some sort of a service agreement, but I'm not going to set foot in the European Union, because it's simply too dangerous. So I'm just going to be providing my activities from Silicon Valley, and I'll be very happy doing it there. I have my services there. I have absolutely no one on the ground.

And to go back to the issue of one net versus 160 little nets: maybe in the future we'll find territories that will prove to be safe havens for the free internet, or whatever is left of it. Maybe, for example, Iceland would be a country like that in the future. And if we keep on going that way, I think the repercussions can be quite severe for new, as yet unknown, services and the business models in which they will be rolled out.

DAVID DRUMMOND: OK, next question is from anonymous, and it's actually not directed to me, for a change. This is directed to Mr. Bendyk or Ms. Giza-Poleszczuk. OK: we all know that multinational corporations are very powerful, and most people think they should be more socially responsible. But how is it right that we give them even more power to control information? Why should a corporation decide what should stay and what should be removed?

ANNA GIZA-POLESZCZUK: I think that we, both Edwin and myself, were saying exactly that, bearing in mind that you have to comply with this ruling, you should do it in the most minimal way possible. So without involving any kind of judgment about who is asking, and what the content is, and what the context is. That's why I was proposing to stick to very formal things, and a very formal and very narrow definition. And we were both talking about the necessity, so to say, to get back to the ethical discussion about our society and people, and how not to throw our responsibilities onto the search engines, or companies, or providers, or whatever, because, basically, we ourselves are responsible for who we are and how we manage our lives.

DAVID DRUMMOND: All right. OK, other comments?

MAGDALENA PIECH: If I may. Of course, I'm sticking to what I said, that at some point we should avoid having the Google search engine take the decision, and a big part of it should be taken by the authorities. But on the other hand, what I was thinking while listening, especially to the Panoptykon concerns, is that to some extent it is naive to think that the data protection authority, which, my guess, in Poland has 10, maximum 20, people to handle those kinds of complaints (that's my guess, I don't know), would secure the interests of the data subject better and quicker than Google would. It's not very effective.

I think that in some cases it is Google that will take better care of the data subject's interests than the data protection authority. Sticking to what I've said before: let's not demonize Google. I mean, a corporation also has clients and markets, and they do care about clients. But let's not say that authorities can solve everything. We have to be very careful about what's left to the search engine, a private entity which has its economic interests, but let's not simply put everything onto the authorities and courts, because I think we know how slowly they work in general, without putting too much criticism on them. Because the issue at stake is extremely, extremely difficult.

DAVID DRUMMOND: Great. OK, so the next question, also from anonymous, is directed to Mr. Ostrowski. You're very popular today, although when you hear the question, I think others will want to weigh in on it. So, the question: why do you believe so much in the power and omnipotence of the internet? In the context of new problems, including legal ones (copyright, privacy, access to information), shouldn't the ECJ ruling be an incentive to rein it in further? Now is the time to regulate the internet. Mr. Ostrowski.

IGOR OSTROWSKI: I don't even know where to begin. The internet is unique; I think we all agree with that. Nothing of this sort was ever created, and the only comparison we probably have would be to the Library of Alexandria, as a sort of moment in time where suddenly knowledge was put together into one place of access. Now, of course, we could burn down the Library of Alexandria (it actually happened), and we know what the results were. So I would say, from a purely civilizational point of view, reining in the internet is, in my opinion, probably a bad thought. But there are certain issues, and I don't think there is an easy answer for all those fragments that we've discussed.

So we've talked about privacy quite a lot, but of course a whole other can of worms is the issue of copyright. Naturally, that is an issue we don't have time to deal with today. There are certain processes, I would say, put in place to try to deal with it, but it may well be that the ECJ will come back with rulings on copyright issues, and we will yet again meet here to discuss the same issue in copyright. So, just as a precaution, I would say that whatever is determined for privacy, as this is a bit of a precedent case, may then spill over into other parts of our legal systems. And if we start applying the privacy rules as set out in the ECJ case to copyright, I think this would be the end of prosumers, the end of alternative distribution of culture, the end of creativity on the internet as we know it. So it's a very dangerous path we're on.

JIMMY WALES: If I may speak to that. One of the core issues that I think we have today is that, all around the world, as legislators and courts are dealing with new issues, they're often woefully unprepared to do so. They don't understand technology. They don't understand the culture of the internet. This particular ECJ ruling is based on European law that's older than Google, and is being adapted to deal with modern problems in ways that, I think most people agree, are a bit clumsy at best, I would say.

It's a massive human rights violation, but I'm a bit extreme on these issues. And so I do think that one of the reasons not to crack down on the internet and rein it in now is that we have far too little understanding of the issues, because it's very easy to pass rules that either don't work, or that work a little bit too well. You can stop certain behaviors by stopping all behaviors, and it's just a very dangerous time for the internet, so we shouldn't move too quickly.

DAVID DRUMMOND: Other comments. Sure.

LUCIANO FLORIDI: Just a quick follow-on. I completely agree with Jim, and I think that the point of proportionality also needs to be taken into account. Given the benefits of what we have, to do any harm to it just because every now and then there's a glitch somewhere that is not working as perfectly as we would like seems to be an overreaction. So I'd rather see another half a million cases brought to Google, and have more meetings like this, than start seeing legislation break things down and make sure that the internet is no longer what it used to be. So we should be careful about what we wish for. Sometimes better is the enemy of good, and the internet is pretty good at the moment.

DAVID DRUMMOND: OK, we have one more question, which is also directed to me from Alexandra Souvala, I think I've got that right.

FEMALE SPEAKER: [INAUDIBLE]

DAVID DRUMMOND: Oh, you can't read it.

FEMALE SPEAKER: [INAUDIBLE]

DAVID DRUMMOND: OK, well--

FEMALE SPEAKER: [INAUDIBLE]

DAVID DRUMMOND: I have a question here.

FEMALE SPEAKER: [INAUDIBLE]

DAVID DRUMMOND: OK, well, why don't I read it, and if I have it wrong, then you can amend it and we can change it. So, the question is: as an American, what do you think of the focus on privacy in the EU? What's your opinion on the so-called Balkanization of the internet, and the second-order effects pointed out by Professor Zittrain? That, I assume, refers to Jonathan Zittrain, a law professor at Harvard who famously writes a lot about the internet. I think the second-order effects he's describing are: if we have good, well-intentioned internet regulation, especially from, say, Western countries and democracies, and especially when it has extraterritorial application, how do we stop other countries that aren't so well-intentioned, or perhaps don't have the same democratic traditions, from extraterritorially applying their laws in ways that we won't like very much? So, how do you handle all this?

I guess my opinion on the first part of the question is: privacy is a fundamental right, and it's recognized as a fundamental right in Europe. The treatment of it is somewhat different in the United States, although privacy is in our constitution as well. But it's an important fundamental right, and it has to be respected. We believe that we are building that into our products, and we always will. And so, when we have conversations like this, about how we protect that fundamental right while balancing it against others, while protecting innovation, we want to be part of that conversation constructively, and we expect that we will continue to be.

I remain concerned about both the Balkanization problems Mr. Ostrowski talked about, in terms of getting to local internets and regional internets, which I think will deprive the world, impoverish the world, of creativity and innovation, and we need a fix for that. But I'm also concerned about these so-called second-order effects. The problem is one of a least common denominator: the internet ends up with only the content allowed by the most restrictive country that has enough muscle to impose its will on internet providers; those countries will prevail. And so that's why, I think, we have always explored at Google, and some of those conversations came up today, ways to respect local laws and make sure that the people in those countries, who democratically elect their governments to make rules, have those rules applied and the protections that they expect, while not necessarily imposing those rules worldwide where we can avoid it. So it's a very difficult problem. And we're working on this in the context of privacy and the right to be forgotten, but in the context of a lot of other issues as well.

KRZYSZTOF IZDEBSKI: Can I ask a practical question?

DAVID DRUMMOND: Sure, please, we have two and a half minutes left, so--

KRZYSZTOF IZDEBSKI: I know. It's short. I hope it's not a trade secret, but how many employees of Google are dealing with the requests so far? I mean, how many people are working on the issue?

DAVID DRUMMOND: You know, that's a good question. I can't give you the exact number, but it's a good number. We have a removals team that deals with thousands and thousands of removal requests, for all kinds of reasons, from all over the globe, every day. So this has, sort of, increased their workload. But it's not 10, and I doubt it's 500; it's somewhere in between. It's a good number of people. Not a trade secret, but I don't want to be imprecise. It's a good number of people it takes to review all these things and go through it. Any other final comments? Jimmy.

JIMMY WALES: I've got a question for you. And again, obviously, you can't give away your proprietary legal strategy, but I'm just wondering if you could say anything about Google's approach. One of the problems we have with the current ruling is that it's quite broad and it's quite vague. And one of the ways that things could become clearer is if there's more litigation on different kinds of cases. Is Google planning to pursue that sort of thing, or are you trying to avoid that kind of thing, or somewhere in between?

DAVID DRUMMOND: Yeah, sure. It's a great question. I mean, where we are now is that we're getting guidance. We don't have court decisions, DPA rulings, et cetera, to provide that sort of body of law and set of rules. And of course, there's a complication here, in that we have the European court setting out some very broad standards, and yet everybody recognizes that when the rubber hits the road, it's at the national level, and that could vary. So we don't have any of that experience yet. We will come to some conclusions.

We've already come to some, in terms of how our current removal setup works, but we'll come to even firmer conclusions once we're through with this process and we get input. And to the extent we have cases that challenge our approach, I'm sure that we will litigate, and we will see more of this defined. But certainly, I think it's in everyone's interest to get some of these tough cases looked at very carefully, so that we can come up with a better outcome. Magdalena, please.

MAGDALENA PIECH: If I may, just a quick question to actually all the members. Can you see some specificities so far, based on the national events that you've had? Are there some concerns that came up in one country and didn't come up in another, or is it more or less the same approach?

JIMMY WALES: I guess I could say just a couple of things, but it's just one person's view. One: each group of experts has been different, in different ways, but I wouldn't call those necessarily national differences as much as just the assortment of the people who came. The one thing that I have seen, I won't say universal, but very broad, is support for the idea that where we are today in the law, where Google has to make these decisions with no appeals process for publishers and no judicial oversight, is problematic.

I don't think I've seen anybody say this is the best possible legal arrangement. Wherever you fall on the question of deletions and privacy, I think a lot of people were saying: we've gotten to this in a sort of historical way, and something about the law probably needs to be updated. People might fight to the death over how it should be updated, but that's the one thing that I've seen consistently from most people.

SYLVIE KAUFFMANN: Actually, I had a chat with Mr. Szczytko earlier, and he watched all the hearings, which I commend you for, in one weekend, I think. He noticed that the hearing in Paris was more suspicious of the intentions of Google, which, maybe because I'm French, didn't seem so obvious to me, but apparently it did to the other panelists, so you might see this difference, probably.

DAVID DRUMMOND: Great, well, we're ending just about on time. So this concludes our fourth meeting. I want, on behalf of the advisory council, to thank our experts for your lively and interesting presentations. Very useful, and they will definitely serve as background for our deliberations in the future. I'd also like to thank the audience, the folks who are here in person and also online. As I said, all the proceedings should be available to be seen later on the website, in the live stream. We will be holding the fifth meeting in Berlin on October 14, and that will also be available for Mr. Szczytko to check out, since he's our biggest fan.

JACEK SZCZYTKO: You'll have to ask my wife, OK?

DAVID DRUMMOND: We'll ask your wife. Anyway, thanks everyone. This has been great. I appreciate it. Good afternoon.