
Civil Society in Cyberspace - what it needs, and why it’s needed

Meeting note: Humanity in Cyberspace, 18 April 2024, NESTA, London

Chair - intro

The basic conception of this seminar series is that cyberspace has transformed, and is continuing to transform, our world, and humanity is half in and half out of it. And like any other space, it needs to be humanised, and humanisation comes out of a process of regulation, of emergent action, of what happens inside it, and so on. But all of this has to be done in a thoughtful way.

The motivation for it came from being in Whitehall, talking a lot about data, and finding that the conversation around the platforms, around data and technology, was extraordinarily narrow - it needed to be broadened out. And there needed to be a sense amongst politicians and decision makers that a broader conversation is happening. So that's the idea of bringing this group together: to give some air to broader thinking, and in particular to thinking that isn't dominated by the fearsome lobbying efforts of the tech platforms.

A very common experience was to sit in front of politicians or senior civil servants to explain something - to talk about market power, or just about what might be - and to get lots of nods. And then to find, a few weeks later, someone saying, "Oh yes, but wait, we've had this report, it's interesting." And that report turned out to be a piece of advice funded by Apple or Google or whatever, explaining all the reasons why this complicated stuff should go in another direction.

So the idea is to give a sense that there's a broader conversation going on. That's what I hope we can continue to do.

Speaker1 - kick-off

This is no more than some food for thought, just to get us going really. I'll draw on my experience at Citizens Advice, which is a particularly data-rich charity. Civil society data is one of the most obvious examples of how the current settlement falls short of what it could do.

I am going to talk about two perspectives. First, the perspective of policymakers, funders and decision makers: what could civil society data do for us in those terms? Second, the perspective of citizens: what does it feel like to interact with charities and civil society institutions?

From the first perspective, it's obvious that there is a huge amount of wasted insight and intelligence in civil society data, because it is so fragmented: it is hard to draw integrated, rich insights from data that sits in many different institutions across civil society, charities, and so on. Imagine being able to link together those datasets - the data at Citizens Advice, at charities like Shelter, Mind, etc - so that you get a rich and live picture of the kinds of challenges people are experiencing across the country. You can imagine the power of a more integrated data ecosystem: a kind of live early warning system for when problems are manifesting in civil society and society at large, informing where funders spend their money and where the government spends its money. This could become a valuable form of collective intelligence to inform everything from policymaking to funding, including the decisions those charities themselves make about where to put their own efforts and resources. And over time that would build up a very rich qualitative dataset, so you wouldn't just be talking about accounts and numbers; you'd be talking about extracting insight from text, from video, and so on. Many of those charities house incredibly rich datasets - Citizens Advice case notes, for example, which are text data - and at the moment that data is under-utilised.

It's an interesting example, I think, because there are many barriers to that kind of more integrated vision of civil society data. Partly it's capability: none of those charities individually has the capability, time or headspace to integrate their data with others', so that connective tissue ends up missing. Partly it's a question of things like technical standards: if all of the data is structured differently, has different architectures, and there's no consistent metadata across it, then it's currently just very difficult, technically, to unite those disparate datasets. There's a question there about whether there are missing institutions in the landscape that would play the role of setting and enforcing technical standards across civil society data.
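To make the "consistent metadata" point concrete, here is a minimal sketch of what a shared case-record schema across charities might look like. The schema, field names and taxonomy are entirely hypothetical - no such cross-charity standard currently exists.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical common schema for a civil-society case record.
# Every field name and category here is illustrative only.
@dataclass
class CaseRecord:
    source_org: str          # e.g. "Citizens Advice", "Shelter", "Mind"
    recorded_on: date
    issue_category: str      # from an agreed shared taxonomy, e.g. "housing/arrears"
    region: str              # coarse geography only, e.g. an ONS region code
    free_text: str = ""      # case notes, kept for later qualitative analysis
    tags: list[str] = field(default_factory=list)

def to_common_format(row: dict) -> CaseRecord:
    """Map one charity's internal export onto the shared schema."""
    return CaseRecord(
        source_org="Shelter",
        recorded_on=date.fromisoformat(row["created"]),
        issue_category=row["problem_type"],
        region=row["region_code"],
        free_text=row.get("notes", ""),
    )
```

The point of such a layer is that each charity only has to write its own mapping onto the shared schema once, after which the datasets can be pooled or queried together.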

But partly also, to be honest, the question is about incentives. Charities are very competitive with each other and have not done much collaborating on things like data, not least because they're competing for funding, which is, as we know, shrinking. So should there be more requirements on charities receiving public money - to say: if you're receiving public money, you have to open up your data to some degree, or it needs to conform to certain standards, so that it can be part of this kind of open ecosystem of civil society data?

So that's one sort of set of questions.

Looking at it now from the other end of the telescope: what does it feel like to be a citizen interacting with charities? Again, it often feels very fragmented. If you're interacting with multiple charities, you have to repeat yourself endlessly; you're not easily referred between them; they often don't know that you're interacting with other charities. Sometimes that's good, for privacy reasons. But sometimes you might actually not want to have to repeat yourself to all these different charities. And if you ask what we would like that to look like, the image in my mind is open banking, as an example of the kinds of connections that are possible when you get these systems working in more integrated ways.

Just the other day, I used my open banking app and added my mortgage to it. It took about five seconds - I was genuinely stunned at how quick it was. Can you imagine a world in which something similar happens in your experience of charities? Maybe you interact with one charity and say: I would be comfortable with you sharing my data with this other charity that could help me.

One of the really big questions, though, is the interface with government. As an example, we've been doing some work recently at NESTA on income maximisation - that is, under-claimed benefits. In this recent analysis, we found that as many as one in four people who are eligible for Universal Credit don't claim it. That was around 7 billion pounds of money unclaimed: 800,000 families losing out on as much as 800 pounds a month of benefits they are entitled to but aren't claiming. And actually, it turns out there has been lots of innovation in benefit calculators, so you can now go into a charity and they'll run a calculator to work out what benefits you're entitled to. But do you then have to re-enter all that information manually into the government systems and apply for the benefits? Of course, that friction is decisive in whether people bother. So: could the government hold itself to the kind of standards that we are holding banks to? Government systems like the benefit system should also be open, and you should be able to say: "I would be happy with my bank automatically applying for benefits for me. I'd be happy with this data being shared. I consent to that happening for me automatically, into those government systems." And you can imagine other systems, like the legal system, that you could interface with much more easily if you had those kinds of open standards.
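Here is a sketch of what that consent-then-apply flow could look like, modelled loosely on open banking's consent pattern. The endpoints, scopes and field names below are invented for illustration - no such government API exists today.

```python
import requests

# Hypothetical endpoints, modelled on open banking's consent flow.
CONSENT_URL = "https://api.example.gov.uk/benefits/consents"
APPLY_URL = "https://api.example.gov.uk/benefits/applications"

def apply_with_consent(bank_token: str, citizen_id: str) -> dict:
    """The citizen consents once; their bank then files the claim for them."""
    # 1. Record explicit consent to share income data for one narrow purpose.
    consent = requests.post(
        CONSENT_URL,
        json={
            "citizen": citizen_id,
            "scope": ["income:read"],
            "purpose": "universal-credit-application",
        },
        headers={"Authorization": f"Bearer {bank_token}"},
    ).json()

    # 2. Submit the application by reference to that consent,
    #    instead of re-entering everything manually.
    return requests.post(
        APPLY_URL,
        json={"consent_id": consent["id"], "benefit": "universal-credit"},
        headers={"Authorization": f"Bearer {bank_token}"},
    ).json()
```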

One other brief point, as a slight aside: when I was at Citizens Advice we thought a lot about power of attorney. It is a very antiquated aspect of the legal framework, and a power of attorney is a very big thing to give someone because it is so sweeping; it's a very arduous administrative process to go through, and for some good reasons. But should you be able to have much smaller versions of it, where you can say: I'm comfortable with the Citizens Advice adviser optimising my energy bill for me; I'm comfortable giving them that minimised control over some small aspect of my finances, because I trust them, and I'm confident they'll get me a good deal, and continually get me a better deal in the marketplace. That way I don't just end up feeling overwhelmed by all the choices.
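One way to picture that "much smaller version" is as a narrowly scoped, revocable, expiring delegation - closer to an OAuth scope than to today's sweeping legal instrument. A sketch, with entirely hypothetical scope names; in practice such a grant would need legal definition, not just technical enforcement:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# A "mini power of attorney": narrow, revocable and time-limited.
@dataclass
class DelegationGrant:
    grantor: str                # the citizen
    grantee: str                # e.g. a named adviser role
    scopes: tuple[str, ...]     # the only actions the grantee may take
    expires: datetime

    def permits(self, action: str) -> bool:
        return action in self.scopes and datetime.now(timezone.utc) < self.expires

grant = DelegationGrant(
    grantor="me",
    grantee="citizens-advice:energy-adviser",
    scopes=("energy:switch-tariff",),  # tariff optimisation and nothing else
    expires=datetime.now(timezone.utc) + timedelta(days=365),
)
assert grant.permits("energy:switch-tariff")
assert not grant.permits("bank:withdraw")  # anything broader is refused
```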

The legal framework does currently allow for that kind of delegation, but it is too big a deal, too broad-brush. So those are just some examples of how the current setup doesn't work: it could do much more for us in terms of policymaking and decision making, and also in terms of the citizen's experience of the system. One big lesson I take away from a lot of this is that most of civil society - most charities - still looks pre-internet, in the sense that they are mostly still vertical silos, not platforms. That is increasingly striking. In the private sector, of course, there has been a complete transformation in the last 10 to 20 years, and most of the largest companies in the world are essentially platforms - horizontal platforms. Whereas most of civil society still takes the form of a hierarchy or a silo: a housing charity, for example, or a mental health charity. So there's a big question about what the missing platforms in civil society are, and how you might help those platforms to emerge. One interesting example I'm increasingly obsessed by is Olio, the food waste app that helps you share food you don't need. I used it the other day: I was in a restaurant and ordered a pizza, gluten free; it turned up and it wasn't gluten free, so I couldn't eat it. I put it on Olio, and before I left the restaurant someone came to collect it, and I gave them a hot pizza. It's powerful because it's not just saving food that would otherwise go to waste; it's tapping into the power of communities. I had a live chat with that person, we talked about the pizza, and it was nice. And I'm now regularly putting food on Olio.

These platforms are starting to emerge. Could we do more to help them - to speed up the emergence of these immensely powerful platforms for connection and community engagement? I think we could, through funding, through regulation, and so on.

And the last point I'll make, very briefly, is about AI. In any debate like this, clearly, whole new things are becoming possible every week. With AI, I think the big question is what kind of digital assistants we'll end up with. What if people had a kind of AI system that was genuinely on their side, helping them navigate the choices we all face in interacting with the state, or with markets, or in taking your boss to court for not paying you the minimum wage, and so on? The big question I'd like to put into the room is: who will own those agents? Will they be designed and optimised for profit? Will they be recommending products to you because they're making money from recommending those products? Or will they be owned and optimised for other things, and be genuinely on your side?

One interesting point: we're incredibly tough on the regulation of advice and guidance - most people aren't allowed to give advice without a great deal of training. So can AI agents give you advice about products? Currently, you can certainly ask them. Should they be allowed to answer? I have no idea. So where do we land on that kind of question in this world of AI agents? Who raises the agents? On what basis are they incentivised and optimised?

Chair 

I love the vision, Speaker1, and thank you for the provocative questions. I often wonder, when thinking about the role of civil society in this kind of future, where the permission - the social licence - is going to come from. I have found the last Acemoglu and Robinson book, "The Narrow Corridor", very useful in thinking about that. It is a long riff on "power corrupts...", but about how you can sometimes increase collective power nevertheless, as long as civil society develops the powers to keep the state in check. Without that, you end up in some pretty nasty configurations - China, perhaps, is the place that comes to mind in the context of data and the modern surveillance society. If we think of the Industrial Revolution, the enormous trauma of its early stages, and the development of countervailing, non-state-aligned powers - mutual societies, trade unions, etc - they were essential in humanising that traumatic modernity. And it was really important that they were non-state institutions, in the sense that people can form kinds of attachment to them that they simply can't to the institutions that are, often in difficult ways, deploying these new powers.

Citizens Advice was in some ways part of that history: it was set up to humanise not the factory but the experience of the mass state, which had also become very complex, requiring particular behaviours from individuals. So does Citizens Advice today think of itself as having a role in helping us navigate the complexities of cyberspace?

Take, perhaps, the distressing example of trying to come up with a vulnerability register that works across government and utilities - why has that turned out to be so hard?

Speaker2

For those of you who don't know the history of Citizens Advice: it was founded at the dawn of World War Two - we have our 85th birthday in September - to provide advice to people who were suddenly dealing with huge challenges in their lives, and on what they could in turn expect from the state.

We're thinking a great deal about data and the sorts of issues Speaker1 has mentioned. We're a charity; we're subject to quite severe funding constraints at the moment; we're receiving more demand than we're able to fulfil across all of our channels. So we have to think of new ways to deliver advice more cost-effectively. That's where initiatives like AI come in. But at the same time we need to think very carefully, because we have a strong ethos and brand, and implementing AI in certain ways could destroy trust and give people - particularly the very vulnerable people who come to us for advice - a really poor experience. So it's something we're thinking about, but I would say we're probably at an early stage of considering how we're going to use AI.

Within our network, we do have an initiative being trialled at the moment called Caddy, which helps our advice supervisors check the quality of advice provided. We've received some support from the Cabinet Office during the build process, and it is being piloted across a number of local Citizens Advice branches; if it goes well, it will be rolled out more widely.

Turning to the question of vulnerability: that is something our advisers spend a lot of time on - and indeed they waste a lot of time having to sign clients up to the different vulnerability registers that exist across the various essential service markets. It's a huge waste of time. Over the years I have been involved in various initiatives trying to crunch datasets together, and it never works very well. Part of the challenge is that government tends to start small, lack ambition, and therefore under-deliver on this sort of project. Take the common vulnerability register for energy and water customers: bringing the databases together took over 10 years to deliver. Doing such a small thing so slowly means you're not delivering significant change for what appears to be a huge effort, so no one at the companies or the regulators treats it as a high priority, because it seems clear it won't deliver anything transformative. Lacking ambition means you never get the attention of all the institutions needed to make something like a common vulnerability register really work.

Speaker7

I want to reinforce Speaker1's and Speaker2's points about the difficulty - and the size of the prize - here, with a couple of examples. At the National Lottery, we tried to streamline grant applications for charities that apply to multiple funders by agreeing a single common application that could be shared. It was very hard to get funders happy with a data sharing model, let alone with what a common application might look like. But something like a data store for a grantee's data that could be commonly accessed, with clear access control mechanisms, would surely be valuable. Another example comes from a European Social Fund project targeting long-term economically inactive people. We ended up with a large database on this very important demographic - important both for research and for targeted programmes - but with really no way to re-use it in government or the third sector. This sort of data waste stands in the way of progress, and if we can find a way around the control and privacy questions, we'd unblock a great deal of value. That has to come from the development of trusted platforms for data sharing.

Speaker3 

On that word "trusted"... I have been thinking about the acceptability of technological change to society for a long time - at least since having my fingers burnt over GM crops. There is a lot of talk in the tech and commercialisation community about the need for "public trust". But the fact is that there is often far too much (blind) trust. What we need is trustworthy tech - tech which deserves trust - not simply trust in tech. That requires a set of well-understood further qualities: integrity, openness, competence, and so on. These, of course, are human and organisational qualities, not qualities that you can keep locked up in the data department or in protocols.

Speaker9 

I agree that trustworthiness needs to be demonstrated to earn trust, and that this matters greatly in this space. But if you look at international surveys, it is striking that the UK comes absolutely bottom, whether private or public sector institutions are the subject of the question (see, for example, the Edelman Trust Barometer). That presents a particular challenge here.

Speaker10

By the way, when I did a bit of research on trusted institutions a few years ago, the most trusted body in my list came back as the Post Office…

I think we need to get to the fundamentals of what data is in order to make progress in this conversation. A lot of people think of data as an asset that belongs somewhere. Actually, it's a potentially dangerous, non-rival asset that can be here, there and absolutely flippin' everywhere. But it could also be required to sit in my personal data store - and then we can imagine an entity acting on my behalf, my AI agent, representing me in cyberspace. Of course, the same data can also be used by the local authority, helping it understand where best to direct cash; it can also be used in some research function. I could be in control of permissions over that data; I could, for example, allow flows to a data trust, which has a wrapper and a governing structure that enables the data to be used for research into addiction.
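A rough sketch of that permissioning model, treating the personal data store as a default-deny policy over data flows rather than a vault. The recipients and purposes are just the examples from the paragraph above, named hypothetically:

```python
# Permissions over flows of a non-rival asset: (recipient, purpose) -> allowed?
PERMISSIONS = {
    ("my-ai-agent", "represent-me-in-cyberspace"): True,
    ("local-authority", "target-support-funding"): True,
    ("addiction-data-trust", "approved-research"): True,
    ("advertiser", "profiling"): False,
}

def may_flow(recipient: str, purpose: str) -> bool:
    """Default-deny: a flow happens only if I have explicitly permitted it."""
    return PERMISSIONS.get((recipient, purpose), False)

assert may_flow("addiction-data-trust", "approved-research")
assert not may_flow("advertiser", "profiling")
assert not may_flow("anyone-else", "anything")  # unlisted flows are refused
```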

Governance, in the end, will be the thing that allows us to develop a trusting relationship with the entity that enables this non-rival, risky asset to be used in these ways. When we first started Ctrl Shift, the digital identity programme was starting up - that was 15 years ago. Today we're in a better place with ID than we have been for a while: we've got 49 (at last check) certified digital identity providers complying with a particular trust framework within a governance structure. I see that as at least a starter for ten, a blueprint for how this might work: we have a governance structure and a trust framework; people are certified; there is assurance it will be usable. What we can see emerging here, with gov.id, is a framework through which banks, energy companies and citizens can all use the same verified data that identifies a person in the way needed for each use. The data itself, note, is not owned by me, or by the digital identity provider, or by the government, or by the energy companies - it's a non-rival asset whose controlled flow is enabled by these governance structures. This makes me optimistic that we are seeing how all of this can work. The data is something that can add value everywhere.

Speaker11

In terms of trust, we should really celebrate how well Open Banking has worked. At the time we were building it, there were lots of worrying scenarios presented - it would be the end of free-if-in-credit banking, the data would leak, security would be a nightmare, fraud would increase… The fact is that we implemented it competently and gradually, and none of those scenarios has come to pass. That is how you build legitimacy for these sorts of mechanisms.

Speaker12

It is worth emphasising this point - we were told there would be a catastrophic collapse of the banking system (said the banks). They offered to put all the data on an iPad and give that "free" to customers... they really wanted to keep hold of those accounts! But one of the nice things that happened: we were solving for competition in banking, and we had no idea that the remedy would actually deliver something helpful to some vulnerable customers - payday loans for single mums are a great example. Banking data allowed fintechs to come in and offer those loans at a lower margin and without the nasty business models we had seen before. Politicians wanted us to break up Lloyds - well, that would never have delivered this kind of totally unintended innovation.

Speaker5 

A quick comment on "say it once" convenience and power of attorney. As it happens, last week someone close to me died. I was able to "say it once" in state-related services, but not in the private sector, so I still had to come away with 12 paper copies of the death certificate. Why can't we digitise something like this? I also had their power of attorney, and some of the questions were ones I wanted to be involved in, and some not at all. So I agree we need to be able to restrict POAs in a granular way.

Chair 

I wonder how many of these problems could be made easier to solve with personal data portability. If your DWP data becomes personal and portable, if your NHS data is personal and portable, and so on - is there any chance that might slice through the difficulties? The flow would be that a Citizens Advice adviser could be authorised by the client to collect their personal data from stores kept by the energy company, the DWP, the NHS, HMRC, etc. Citizens Advice is given some limited authority to collect that data and to provide advice based on it. Would that cut through the difficulty of trying to keep it as a state/corporate/regulator initiative, with all the nervousness and legal difficulty of sharing data for operational purposes when it was not explicitly collected for those purposes?

Speaker13 

Like Estonia, you mean?

Speaker10 & Chair

Estonia may not be exactly the right model at the level of detail. It is very much dependent on a state-coordinated infrastructure for data sharing, and conceived as being primarily about state-citizen data rather than wider sharing across society (even though that is to some extent bolted on). The Estonian model also requires a degree of state legitimacy that (a) does not exist amongst all citizens in Estonia - especially not the ethnically Russian population - and (b) is not appropriate for countries whose citizens have a more distrustful - and not necessarily unhealthily so - attitude towards their state, perhaps the case in the UK and other older states. Look at the difficulty we have even with the ONS's production of national statistics.

Speaker8 

Yes, the ONS in some ways has very wide powers to collect datasets regardless of provenance, but only for use in the production of national statistics, not for any operational purpose. So it can enhance understanding, but you cannot use the same granular data to then act on that understanding. Of course, with such expansive powers there is a huge fear around cybersecurity, which is an important friction in wider data use. The public health urgency of Covid led us to be more ambitious, and we started bringing in real-time consumer financial transaction data, for example. It was hard work being transparent about the fact that we were doing that while also protecting the commercial sensitivity of the information. We published a statement saying that we were being provided with aggregated data - definitely nowhere near individual level. We had to get all the data providers to agree to that statement, and that took some quite senior negotiations - often with the chief security officers of the organisations providing information to us. The providers were worried they were going to lose customers or investors through a "big brother" backlash against them.

So we ended up with a statement on our website, put out quietly, that just said we were using this data for our pandemic response, but that it would not, unlike other ONS statistics, be published, because it was too sensitive. Since then, with trust between the ONS and providers growing, we have gradually worked more and more with those providers, and for some series we have even been able to publish outputs based on their data. Visa, for example, now allows us to publish some trends in underlying consumption behaviour. But I repeat: given the legislative powers that we have, all we can do with that data is produce statistics - we cannot use it for targeted delivery or operational programmes.

Speaker6 

I would like to inject a note of caution here. Take the vulnerability register the room seems so keen to create. The trouble is that the register is never likely to be a proper reflection of reality. You can be vulnerable one moment and not the next; your circumstances, the context, other things happening in your life - all of those change at a speed and with a complexity that can't be captured in a fixed database.

I would argue that the processes being proposed are actually the hurdle to achieving the humane outcomes you are aiming for, and thus the process will end up being a disservice to those you are trying to serve.

You have convened this seminar under the banner of "humanity in cyberspace", but surely the focus should really be "cyberspace within humanity": how do we make sure that whatever technical processes are put in place still serve the people they are meant to serve? Take many benefits processes today - it is hard to avoid the conclusion that they are designed to exclude people rather than to make things easy. The incentives of the state simply don't align with the vision being put forward here. And if that is accepted, shouldn't we be thinking more about the safeguards in designing these systems?

Nicola Hamilton

Yes - of course, from a system perspective, our intention is that lots of good things come from data use and that we minimise the bad effects. However, we should remember, especially when it comes to vulnerability, that the people using data, making decisions or creating frameworks for data use - people like us in this room - are often not the ones to whom the bad things will happen. Not having had those experiences might make us too ready to maximise data use, and that is why it's understandable that some people in society won't be as supportive.

Speaker14

There is a very good example of the challenge in some excellent work by local government in London that has been done recently to alleviate homelessness. When you see someone sleeping rough and try to find them a bed, you actually need a lot of information, some of it sensitive. For example, some people should not be brought together under the same roof - think of histories of abuse, etc. So any matching system needs to know the names and histories of the person and of the current occupants of the housing options. That is a lot of sensitive information. And when someone on the street is asked for all that information, it can easily feel as if they are being put into a system they have no trust in. That can stand in the way of doing good for them. Humanising that process is essential for the data to be put to good use.

Speaker15 

I would like to return to the point about missing institutions and make a somewhat contrarian point. I have been vocal in advocating for a digitalisation orchestrator in energy - an institution dedicated to coordinating digital investment across the sector. And I have been involved at a deep technical level in various schemes to make GDPR-compliant identity and authentication schemes work. One thing - not very trendy - that has struck me is that although we put a great deal of emphasis on being "people-centred", we should remember that the whole premise of digitalisation is that we're trying to get computers to do things for us. When you look at this from the engineering point of view, you are usually trying to take people out. To some extent, if we want computers to take tasks off our shoulders - and we do - then it is often we who have to adapt to them, and not the other way around. This goes against the grain of much of what has been said today, but it is a fundamental force at work in all digitisation processes. Of course, these systems need to work for us, but that might require us to change for them.

Speaker16 

We should remember, when considering the vulnerable, that many are particularly disadvantaged when it comes to exercising agency over consent. That means it should be particularly easy in such cases to get the social licence for institutions to exercise collective, delegated consent on their behalf.

Chair 

I guess much of this goes back to my question: isn't there a non-state interface that helps here? And, to the power of attorney point: is there a missing institution - or institutions, plural - just as you once trusted your union to do the negotiation with your employer? Some institution that you believe is on your side, and whose fiduciary responsibility is to you.

Speaker17 

Here is a good example of how non-state institutions can be vital as trusted data intermediaries. In Germany, the state does not - for pretty understandable reasons - collect data about citizens' ethnicity. But during the Black Lives Matter protests, it became clear that Black Germans found it hard to obtain the tools and analysis that would back up their subjective experience. An institution called AfroCensus has now emerged that collects ethnicity and identity data, does analysis on it, and provides it back to the community that wants it.

Speaker1 

I agree with the brilliant description of the nature of data as a non-rival, risky asset. But that underlines how important it is to think about the incentives in the system: is it profit maximisation? Or hitting a Home Office target on deportations? What I want my data used for as an individual might be absolutely at odds with some of those motivations. One of the reasons charities like Citizens Advice or Which? emerged in the 20th century was that it was necessary to fight for the consumer, or the citizen. When it comes to AI and AI agents, we again need them to be clearly looking out for our interests. Of course, there is a problem with the onus all this control and permissioning puts on the person, while at the same time I can justifiably be uncomfortable giving agents a great deal of autonomy over choices - we need to work out what limited power of attorney might mean in practice, and how it is implemented, enforced and assured. And these agents - whose are they? Who are they working for? How are they funded? We need to keep an eye on the incentives...

Speaker18 

Let me point out some of the bumps along the way in the future Speaker10 described. This notion of individual control is really important in the vision - it's what makes quite a lot of it acceptable. But I think it's only part of the solution. Something we're talking about at Connected by Data is the importance of collective and democratic governance over this, for a few reasons. First, it might not actually be our individual data that matters - it also contains lots of data about other people. Second, a lot of the value to companies or government comes from aggregation: my data might be used to make a decision that affects others and not me. For instance, apps that help drivers navigate the roads aren't using personal data, but the way that data is used is going to have an impact on me and my community. So control and consent are massively important, but we need to think about what they mean at a collective level. Finally, all this emphasis on individual control puts a huge burden on the individual, when relieving us of such burdens is exactly what we see as the role of other institutions, and even the state. The same point plays out in our ability to spot harms: it's very difficult for me, at an individual level, to see what bad things are being done with my data. Whereas civil society groups active in different spaces will be able to see harms and potential opportunities at a collective level that we simply can't see at an individual one.

Here’s my challenge, then: how do we think about control and consent at that higher level of aggregation of interests and assets?

Speaker5 

Here is a really pressing and important example of "who is the agent working for?": under the Data Bill, the DWP is asking for powers to access the open banking data of individuals and related parties in order to determine eligibility for Universal Credit. So, Speaker1, you talk about the under-payment of benefits... but here we are with this infrastructure being used to address the much smaller issue of benefits fraud instead. Should the DWP really be allowed to put AI signalling into the bank accounts of 14% of the UK population? How will that help build a climate of trust in data remedies amongst the vulnerable?

Speaker10 

Let me share something I see happening in the market at the moment around these personal assistants: a separation is emerging between the service that offers controlled access to your data and the apps - the assistants - that use that data. In some cases the two are bonded, but more and more they are separate and provided by separate companies. That is another safety mechanism in the market. An example is a data exchange in the Netherlands that takes in subjects' health data, standardises and structures it on behalf of the subject, and then permits access to it by other app providers - say, someone looking after my diabetes, or whatever. It sort of becomes a "platform of me".
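A rough sketch of that separation: the data-holding exchange and the assistant apps as distinct parties, with the assistant never holding the raw store. All interfaces and names here are hypothetical:

```python
from typing import Protocol

class DataHolder(Protocol):
    """The exchange: holds and standardises the subject's data."""
    def read(self, token: str, scope: str) -> dict: ...

class DiabetesAssistant:
    """A separate third-party app: it sees only what its token permits."""
    def __init__(self, holder: DataHolder, token: str):
        self.holder, self.token = holder, token

    def daily_advice(self) -> str:
        glucose = self.holder.read(self.token, "health:glucose")
        return "see a clinician" if glucose.get("trend") == "rising" else "carry on"
```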

Speaker19 

Do we think we’ll have one agent or many? And if it is many, then will we end up with the same sort of fragmented and confusing information environment we have today?

Chair 

That separation of data from assistance is very important, and goes to the suggestion of a "mini power of attorney". Take the point about the amount of regulation we currently have over providers of "advice". If you advertise, regulation is very light - though an advertisement is surely "advice that you should buy x". But if you advise, for example in financial services, then you are subject to the FCA's notorious 12,000 pages of conduct regulation guidance. It is worth thinking about how this played out in the past with company pensions: in the bad old days of self-regulation, you could easily get scandals like the Maxwell/Mirror pensions scandal. We learned from those and developed regulation of those acting on behalf of pension scheme members - real independence, oversight that decisions were actually in the fiduciary interest of members, and so on. I think that even with separation between the assistant and the data holder, these questions of fiduciary responsibility will not easily be solved, and we will need some kind of regulatory framework.

Speaker7

I keep thinking about where we can kick-start these institutions. The same sort of thoughts came out of the session on health. As there, much of the data we are talking about here is in state or civil society hands... Perhaps that means there is already some agency and democratic legitimacy to get started in these areas.

Speaker6 

I think it is useful to start drawing some distinctions between types of data. Sitting at the Council of Europe, working on AI and education, we're trying to look at why you regulate at all. Open banking is great - but what about my 90-year-old neighbour? I log in for her and do her banking, without that "mini POA" we all seem to want. Have we designed the tech with all of those types of use and data in mind? GDPR is different from Convention 108+; aggregated data is different from personal data. Drawing some of these distinctions when we think about the required data architecture is necessary and useful.

Speaker8 

I would like to emphasise the point about community data. Food banks provide important data about what is going on in an area; communities are paying for air quality sensors; and so on. There is data about mould in housing stock in a South London project - it is about houses, but not necessarily about the people living there. Of course, that could lead to analysis asking, for example, whether these issues particularly affect certain groups of people, which would have justice implications.

Establishing that, of course, means extending these datasets to include some links to personal information. The ability to cross-reference that data means that people see something useful for themselves or their communities coming out of data use; as was mentioned for open banking, the proof of the trust is in the pudding, as it were. It is useful to know that I am subject to mould, that others are in my position, and that together we can get recourse. That makes me more willing to share personal data. This is more psychological than logical - you have no idea whether the mould-data collector has the cybersecurity infrastructure to handle personal data - but it has proved useful, so you go ahead.

Speaker3 

We should not over-emphasise personal pay-off as the only driver of action, important as it often is. I did work on trust with the Irish NHS, and people are not just driven by personal motivation; they are often motivated by social good alone.

Speaker4

The question that keeps coming to mind for me in this conversation is power - how can this empower communities, and who holds power and how? It is not just about controlling power, it is about distributing it to the right places.

Speaker5 

I agree that power is the key here. Just recently we've been fighting over whether civil society organisations should be allowed to act on behalf of data subjects without their knowledge. The government does not wish to give that power to civil society. But we may not want to give our data to government institutions, for all sorts of other reasons - partly because they keep setting up ways to use that data against us. So I am horrified every time you say "we need more institutions", because I'm exhausted going around the institutions we already have. My fear is that building institutions actually drains power away from where it ought to be - and from where it has already been much drained: communities and collectives.

Speaker6 

There's a difference between institutions and infrastructure. Very often we have formal powers but no infrastructure to make them real. Take consent in education over using student data to improve a product: parents have that right, but schools have no mechanism, no infrastructure, to collect and process parental consent. For public datasets we often do not even collect consent, because consent is not the legal basis for the data's control. And often consent looks like coercion - what actual choice does a homeless person have? So my question, as we build these services, is: who do we empower? How do we build these services so that they actually redistribute power in ways that are equitable - and not just a redistribution of power, but one that reaches the group we are really wanting to empower?