The Penta Podcast Channel

Artificially Intelligent Conversations: AI-enhanced policymaking

February 29, 2024 Penta

This week on What's At Stake, tune into another episode of our Artificially Intelligent Conversations series. Each month, Penta Partners Andrea Christianson and Chris Mehigan dissect the latest AI news and developments from each side of the Atlantic, helping you understand how AI will affect stakeholder management, policy, and more. This week, they discuss the fascinating ways in which AI will influence policy and, simultaneously, how policy will influence AI.

The pair touches on the impact of the EU's AI Act on companies across the Atlantic, specifically the regulations high-risk AI systems will be subject to. They express cautious pessimism about the impact of AI on elections, before turning to a more optimistic subject: the power of AI to transform and craft policies. Specifically, they discuss the City Digital Twin Pilot Project's potential impact on urban planning and Destination Earth, a digital model of our planet that simulates and predicts the interaction between human activities and our environment, ultimately helping to fine-tune policymaking. Tune in!

Speaker 1:

Hi, I'm Andrea Christianson, a partner in Penta's DC office and head of our firm's AI task force.

Speaker 2:

And I'm Chris Mehigan, a partner in Penta's Brussels office.

Speaker 1:

Thanks for tuning in to Artificially Intelligent Conversations. This is a recurring series that's part of Penta's What's At Stake podcast.

Speaker 2:

We're facing a time of unprecedented technological innovation in the AI space. Each month, Andrea and I are going to dissect the latest AI news and developments from each side of the Atlantic, helping you understand how AI will affect stakeholder management, policy, and more.

Speaker 1:

Let's dive in, Chris. All right, so first: happy Leap Day. Do you celebrate?

Speaker 2:

Happy which day? Leap day?

Speaker 1:

Leap day.

Speaker 2:

Yeah, not really, although a friend of mine got hit a couple of years ago. There's a strange tradition in Ireland, I don't know if it exists elsewhere, where if you haven't asked your fiancée to marry you, the fiancée can use Leap Day to surprise you, and this did actually happen to a friend of mine.

Speaker 1:

Did it end well?

Speaker 2:

Well, it ended very well.

Speaker 1:

Yeah. Well, you know, there's a whole movie about that.

Speaker 2:

Really.

Speaker 1:

Yeah, it's called Leap Year.

Speaker 2:

I don't know if I've seen it.

Speaker 1:

I think it takes place in Ireland. It's a rom-com. You should tune in.

Speaker 2:

Maybe it's an Irish thing, I don't know.

Speaker 1:

Okay, well, you know, I found out today that Julius Caesar was actually the person who decided there should be a leap year every four years.

Speaker 2:

Oh yeah, the Julian calendar. Yeah, yeah, yeah.

Speaker 1:

That's a fun little fact for our listeners today, but what we're really here to talk about is AI, and I'm going to go to you first. Anything new, fresh? What do you want to share with people? What's happening?

Speaker 2:

Well, yeah, there's been quite a lot. We're getting down to the business end of things over here in Europe with the AI Act, which has been such a focus for so long. We're finally through the legislative process: the text is done, and it's moving into the implementation phase, including the establishment of the AI Office, an agency set out within the AI Act that will be very much about implementation.

Speaker 2:

So there's been a lot of focus on that, and I think we can speak about some key elements of it, but really the work now is going to be on realizing the ambition of the AI Act over in Europe. Of course, there's some other industry news over here as well. We're continuing to see considerable investment; probably the biggest news was Microsoft's investment in the French startup Mistral, which sprang onto the scene late last year and gave us a bit of excitement from a European perspective, in terms of developing a large model that is really the first one we would have from Europe. But we're also seeing some very interesting developments in the European space on more bespoke, more niche deployments of AI and machine learning, and some of those projects are really exciting.

Speaker 1:

Awesome. Well, over here, I just listened to an interview with the CEO of Google DeepMind, and right as I'm getting comfortable with gen AI and all of the cool things that'll be done with it, he's talking about artificial general intelligence and that we're probably a decade away. So I'm back in this "oh my God, things are changing really fast" phase and trying to get my head around it. Although, you know, he's obviously very positive and encourages a cautiously optimistic approach. But that's the newest thing over here: I'm re-grappling with the timeline and pace of change that we're probably looking at.

Speaker 2:

Absolutely. It's an interesting topic and one we will probably see more of over here, I think, as we get into the next evolution. But, like I said, a lot of focus here has been on the actual text now that we're finished with it: what exactly does it say, and so forth. And the big topic that I would be pointing people towards is understanding the risk-based approach, which has been very much at the core of the European approach, although, having said that, it went through very significant changes during the negotiating period for the AI Act.

Speaker 1:

It'd be great for listeners here in the US to understand that. As you mentioned, the biggest language models and developers are US-based: Google, Microsoft, OpenAI, Anthropic, all of this is here. How should they be thinking about the high-risk elements in the AI Act? What's in there that's going to affect them, and what practical things do US companies need to be thinking about?

Speaker 2:

You know, with the AI Act, you can't help but compare it to the General Data Protection Regulation in terms of scope, ambition, and size, and it's actually bigger when you look at the number of articles and annexes; it eclipses the GDPR, which is quite interesting. There are certain similarities, and certain approaches are consistent between the two, and a lot of it comes down, to a certain degree, to an element of self-regulation or self-assessment by companies. In terms of bringing a product to the market, they do need to make a determination of where they fit, and the risk category is really the key aspect of this. There's a sliding scale from unacceptable risk down to high risk, medium risk, and low risk. If you fall into the low-risk category, very little in the AI Act is going to concern you too much. Unacceptable risk is obviously a much bigger issue, but very few systems fall into that. The real question is the high-risk piece, which is defined to a certain degree within the third annex, Annex III. It sets out particular areas that are automatically deemed to be high risk, and this includes things like the use of biometric systems, systems required for critical infrastructure, systems for education or employment, access to essential products or essential services, whether public or private, law enforcement, immigration, and the administration of justice, and so forth. So, while that's a fairly comprehensive list, it is quite focused, as you can see, on certain areas.

Speaker 2:

Now, having said that, although the default categorization for anything in that space is going to be high risk, there are exceptions, and this is where you need to consider what the actual outputs are. There are a couple of categories you can get into here, but really what you're looking at is whether the AI system performs a very narrow task, or whether it's intended to improve the result of a previously completed human activity; that could be a derogation as well. Or whether it's intended to detect decision-making patterns, so it's not making a decision itself but rather honing or reinforcing a decision taken elsewhere. Or whether the AI system is intended to perform a preparatory task to an assessment, so again, not acting in isolation, but rather in a supportive role. If you fall into one of these derogations, then even though you're within one of those classifications under Annex III, you could still find that you're not deemed to be high risk.
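To make the classification flow described above concrete, here is a minimal sketch in Python. The area names and derogation labels paraphrase Annex III and the derogations discussed in the conversation; a real classification is a legal determination, so treat this as an illustration of the decision flow under those assumptions, not an implementation of the law.

```python
# Simplified sketch of the AI Act risk-classification flow discussed above.
# Area names and derogation labels are paraphrased for illustration.

ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",          # public or private
    "law_enforcement",
    "immigration",
    "administration_of_justice",
}

DEROGATIONS = {
    "narrow_task",                        # performs a very narrow task
    "improves_completed_human_activity",  # refines a finished human output
    "detects_decision_patterns_only",     # doesn't make the decision itself
    "preparatory_task_to_assessment",     # supportive, not acting alone
}

def classify(area: str, claimed_derogations: set[str]) -> str:
    """Return a rough risk label for an AI system (illustrative only)."""
    if area not in ANNEX_III_AREAS:
        return "not presumed high-risk under Annex III"
    if claimed_derogations & DEROGATIONS:
        return "Annex III area, but a derogation may apply: document it"
    return "high-risk: full obligations apply"

# Example: a CV-screening tool that only re-ranks applications a human
# recruiter has already reviewed.
print(classify("employment", {"improves_completed_human_activity"}))
```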

Speaker 2:

Beyond that, there's a second process to go through, and this is where companies have to conduct their own impact assessment. You're going to see the development of systems for this: companies coming forward with online forms, consultancy work, or even sometimes just a spreadsheet, which is what we see used an awful lot for privacy impact assessments under GDPR. But you'll go through the process, and you'll need to identify: what is the impact on health, safety, or fundamental rights? Are you compliant with relevant harmonized legislation?

Speaker 2:

That would be particularly important in areas such as medical devices, or whether you're compliant with GDPR, for example. What is the scope of the application? Have you done a conformity assessment, and so forth? You'll have to go through this process and identify where you sit within it. That means, for example, you could identify where you run into a problem that brings you up to that high-risk level, and perhaps you can tweak your system to get it to the point where you come back down to medium. So there's quite a lot of flexibility within the system, but you do have to engage with it quite practically.
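Here is a minimal sketch of the kind of self-assessment checklist described above, in the spirit of the spreadsheet-style privacy impact assessments used under GDPR. The questions paraphrase the conversation; the names and structure are invented for illustration and are not an official template.

```python
# Illustrative self-assessment checklist, in the spreadsheet spirit
# described above. Questions are paraphrased; names are hypothetical.

ASSESSMENT_QUESTIONS = [
    "What is the impact on health, safety, or fundamental rights?",
    "Are you compliant with relevant harmonized legislation "
    "(e.g. medical devices rules, GDPR)?",
    "What is the scope of the application?",
    "Has a conformity assessment been done?",
]

def open_items(answers: dict[str, str]) -> list[str]:
    """List the questions that still lack an answer (illustrative only)."""
    return [q for q in ASSESSMENT_QUESTIONS if not answers.get(q)]

# Example: a draft assessment with only the first question answered.
draft = {ASSESSMENT_QUESTIONS[0]: "No health or safety impact identified."}
for question in open_items(draft):
    print("Still open:", question)
```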

Speaker 1:

Yeah, and I think that everything that's happening, and all the conversations around the AI Act, bring to mind all the high-risk things and all the potential problems; they put you more into the concern category. Here in DC, we've been having a lot of conversations with people influencing policy, and also with a lot of young professionals who are thinking about implementing and using some of these tools in their work. And I'm really hearing two competing concerns. The first is concern about the bad AI could do (think elections, think catastrophic problems), paired with a recognition that individuals and other decision-makers in Washington lack a deep understanding of the technology. That's one view. The second is the idea that we're not even hearing any of the good; we aren't talking about the positives of AI that are already happening out there and what's to come.

Speaker 1:

I think about things like the fact that, just this week, there was a group at Princeton that got a nuclear fusion breakthrough, something to do with plasma instability. I mean, the idea that we could have boundless energy at some point because of AI is wild. And I mentioned the Google DeepMind CEO interview earlier that gave me pause on some of the AGI stuff, but at the same time he's saying that, with their AlphaFold breakthrough from a couple of years ago, we're just a couple of years away from AI-designed drugs being in clinical trials. That's really amazing when you think about it, and then you think about agriculture too. So there are all of these really positive stories out there, and I don't know that we're at the right balance. I mean, we should be concerned, but I don't know. What's your point of view? What do you think? Where do you land?

Speaker 2:

That story from Princeton blew my mind.

Speaker 2:

I was really impressed with that, and I remember, years ago, speaking at an event on energy demands and energy into the future, where the whole question of fusion came up, along with some of the projects and trials being conducted around the world and how it's such an important area for scientific research.

Speaker 2:

Even though there are massive barriers to achieving this, the potential benefits are so huge that people really keep pushing on it, although what we saw coming out of Princeton was not the definitive "this unlocks it for us" moment.

Speaker 2:

It did show that, using AI, they were able to solve a problem and get over the hump of a particular issue that was holding back the advancement of research in the space. And in many ways, what really struck me there was this.

Speaker 2:

It was a great example of how AI is so different and so valuable: we have a problem where we know what the problem is, but it's outside the capacity of a human brain to react with the speed necessary to deal with it. Because AI can handle such vast amounts of data and can move at blinding speeds, much faster than we can, it was able to be deployed to solve the problem successfully. That's a really good example of the difference between an AI being deployed on a particular issue as opposed to a human being: the capacity to process massive amounts of data so quickly is what really sets it apart. It shows that, from a scientific research perspective, there are huge opportunities here in the deployment of AI, and that's incredibly exciting as well.

Speaker 1:

Well, you said something earlier about an effort to create digital twins and how that could help city planning, but also climate change planning. Can you talk a little bit about that and what's going on there?

Speaker 2:

Oh yeah, this is one of the ones I'm going to end up geeking out about an awful lot. There was a project coming out of Bulgaria a couple of years ago that really caught my eye, about developing a digital twin city. They created a virtual version of the city of Sofia, and they set up sensors around the city to measure things like traffic, air flows, pollution, pollen movements, and so forth. It allows planners to go in and, through the digital version, through the virtual city, ask: well, what happens if we develop this part of the city and create buildings of these dimensions? What does that do to the airflow, the traffic flow, the pollution, and so forth? So, before they break ground, they get a good understanding of the potential implications for the city as a living space. I thought that was very interesting, a great way to use AI and a great tool for urban planners around the world.
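Here is a toy sketch of the "what if" loop a digital-twin platform like this enables. The model and coefficients below are invented purely for illustration; a real twin runs simulations calibrated on live sensor data (traffic, air flow, pollution, pollen, and so on) rather than made-up multipliers.

```python
# Toy "what if" loop for a digital-twin city. All numbers are invented.

from dataclasses import dataclass

@dataclass
class CityState:
    traffic_index: float    # relative congestion, 1.0 = today's baseline
    pollution_index: float  # relative air pollution, 1.0 = baseline

def simulate_development(base: CityState, new_floor_area_m2: float) -> CityState:
    """Crude stand-in for a real simulation: more built floor area is
    assumed to raise traffic and pollution by made-up coefficients."""
    growth = new_floor_area_m2 / 100_000
    return CityState(
        traffic_index=base.traffic_index * (1 + 0.08 * growth),
        pollution_index=base.pollution_index * (1 + 0.05 * growth),
    )

# Planners compare scenarios before anyone breaks ground.
today = CityState(traffic_index=1.0, pollution_index=1.0)
for floor_area in (50_000, 200_000):
    outcome = simulate_development(today, floor_area)
    print(f"{floor_area:>7} m2 -> traffic {outcome.traffic_index:.2f}, "
          f"pollution {outcome.pollution_index:.2f}")
```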

Speaker 2:

What I've since discovered is that this has been brought up to a much larger scale at a European level through a project called Destination Earth, which has got a really great hashtag, DestinE: D-E-S-T-I-N and then a capital E, which I love. They're taking the same concept of a virtual city, but as a digital twin for the entire Earth. They're looking at climate patterns, transport, pollution, and how you respond to meteorological disasters, and the idea is to allow policymakers to draw on this data and use these twins to model out scenarios so that they can make more informed decisions. So again, it comes back to much of what we talk about with all our clients at Penta: the importance of bringing data into discussions to inform policy decisions. We're seeing that here with this project. A fascinating project.

Speaker 2:

I'd encourage everybody to have a look at it. Like I said, it allows policymakers to draw on this data in their policymaking so that they can make more informed decisions. It will also help them map out the best scenarios for responding to emerging meteorological or climate issues, natural disasters, and so forth, and have better response times to save lives and be more efficient. So it's a really fascinating deployment of AI and machine learning in that space, and an incredibly ambitious project. It's only two years in at this stage, but they're moving through it quite quickly. They'll come to the end of project development and full delivery by 2030, but we should start to see some aspects coming online sooner than that; around 2026 is what they're anticipating, certainly for some of the meteorological aspects. So again, this is a really exciting use of AI that probably doesn't get as much coverage as you would hope it should.

Speaker 1:

Yeah, I agree. So, when we think about this and we have these conversations: how have you landed? Are you an AI optimist, or are you somewhere in the middle? Where are you?

Speaker 2:

A friend of mine said to me, it's not about the glass being half full or half empty; you can always refill the glass, right? So I'm definitely in that space where we go up and down. But I actually think, on the research and science side, some of the deployments of AI are extremely exciting and really encouraging. So I'm definitely on the optimist side, I think.

Speaker 1:

Yeah, okay, me too, but I have talked to a couple of people who are not on the AI optimist side, particularly when it comes to elections.

Speaker 1:

There's been all of this talk about the Indonesian elections and how one of the candidates had a more cuddly, AI-generated version of himself, and there was this discussion of whether that was okay or not okay. My view was that it was disclosed, it was AI-generated, and it was done by the campaign. How much of this is just a new version of using tools better, the way Barack Obama used Twitter and social media in really novel ways back in 2008? And then, how much of this is a little bit scary? In the New Hampshire primary, you had a robocall going out that purported to be Joe Biden, and that's a little bit scary. That's not okay. What if there was one that told everyone that election day had been moved? How do we think about that? Where do you come down on elections, as this is a very big election year across the globe?

Speaker 2:

It's such a huge one, but I think you're right. It's easy to be concerned, in many cases because it's difficult to really understand what the potential impacts would be, or whether we'd even be able to determine them. The buzzword in the AI Act has definitely been transparency. So, for example, if a candidate wants to use a filter, or whatever the case is, to represent themselves in a different light, and it's declared that this is generated by AI, well then, is that okay? Whereas if you have a situation where people are being willfully deceived through AI tools, that's a very different conversation. So it's definitely a big area of concern. I also think, rightly or wrongly, there's a sort of pessimism around this across all the elections taking place, an anticipation that this is going to happen, and the fact that there's that pessimistic element to begin with is in itself a bit concerning.

Speaker 1:

Yeah, and that does lead me to a question. There already is disinformation; there already is misinformation. Look at what happened in the Brazilian elections a few years ago: there was massive disinformation shared via WhatsApp by groups of people, and they ended up limiting the number of people you could share with. So we've already had these problems. I get that gen AI magnifies it, but is this more of a literacy question, about helping people understand the difference, or are we in a different category of threat that we need to be thinking about? I just don't know that I have a full view on that yet.

Speaker 2:

No, it's very difficult. You mentioned WhatsApp, and as I said, there's this huge difference with closed systems, closed groups like that. How do you monitor them? How do you manage them? How do you avoid the proliferation of false content through those private groups? It's almost impossible to do, so the content moderation side of things, the content control side of things, is going to be very difficult. It is somewhat encouraging that we're seeing some leadership coming from some of the big tech companies in particular, whether that's OpenAI or Meta, saying they are going to address this. They have made the point that they recognize the concern and that they are going to take steps, but how effective that can be is yet to be seen.

Speaker 1:

So it sounds like we are cautiously pessimistic about elections and what could come out of them, and we'll be watching that very closely throughout the year. So, more to come. But let's refill the glass quickly and go back to some of the positive elements. You mentioned something to me about common data spaces and how the EU is planning to use different data sets for planning. So why don't you talk a little bit about that?

Speaker 2:

Yeah, when we look at the AI space and look beyond just AI policy itself, there are so many other elements it depends upon, whether that's infrastructure, connectivity, and so forth. A key area is the data strategy from a European perspective, and there are a couple of aspects to this: there's a Data Governance Act, there's a Data Act, and so forth. But one concept they've introduced is called data spaces, and these are part of a strategy where they've brought together, initially, nine data spaces. These are areas where they're bringing in data from the public sector and businesses to create a common data space, a common pool around particular verticals, and that allows companies and organizations that want to provide a service into Europe to come in and access these data spaces. For example, there is one on energy, there's one on mobility, and so forth. So if you're a company coming in that wanted to work with self-driving cars, for example, and offer solutions for autonomous vehicles in Europe, you could access, through an intermediary, this pool of common data drawn from 27 member states about traffic patterns and so forth. It means you're not required to go to a third party, to another country or another company, to license that data; rather, you access it through these common spaces. So it's very much about enabling companies to provide these services into Europe.
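Here is a schematic sketch of the access pattern described above: a company requests pooled data from a common data space through an intermediary instead of licensing it from a third party. Every name and interface in this sketch is hypothetical; the real data spaces define their own interfaces, governance, and localization rules.

```python
# Schematic sketch of brokered access to a common data space.
# All names and interfaces here are hypothetical.

from dataclasses import dataclass

@dataclass
class DataSpaceRequest:
    vertical: str              # e.g. "mobility" or "energy"
    dataset: str               # e.g. "traffic_patterns"
    member_states: list[str]   # which of the 27 member states to cover

class MobilityIntermediary:
    """Hypothetical broker mediating access to the mobility data space."""

    AVAILABLE = {("mobility", "traffic_patterns")}

    def fetch(self, req: DataSpaceRequest) -> str:
        if (req.vertical, req.dataset) not in self.AVAILABLE:
            raise LookupError("dataset not offered in this data space")
        # Data localization: in this model, processing stays in Europe.
        return f"{req.dataset} covering {len(req.member_states)} member states"

# Example: an autonomous-vehicle company pulling pooled traffic data.
broker = MobilityIntermediary()
print(broker.fetch(DataSpaceRequest("mobility", "traffic_patterns",
                                    ["DE", "FR", "IE"])))
```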

Speaker 2:

Now, there's another aspect to this, which is probably worthy of another discussion another day, which is that there's also an issue of data localization. They don't want the data being taken outside Europe. That's a whole other discussion, about localization and making sure that services provided in Europe draw upon European data. It plays into continuing policy areas we've seen under the ePrivacy Directive and the ePrivacy Regulation being negotiated, under GDPR, under the Data Act, and so forth. There's a very clear message from Europe that the data has to stay in Europe. But again, they are looking to facilitate these common data spaces, and the ones they started off with are health, industrial and manufacturing, agriculture, finance, mobility, the Green Deal, energy, public administration, and skills. They've since added media and some others, so it's building out as we go. There's a lot of work being done, and just a few months ago they awarded a contract to a consortium of companies who will create the actual network and infrastructure through which companies will be able to access these.

Speaker 2:

But I think it's a very interesting approach, and a very European approach, as the data localization element is a key part of it. A lot of companies might say it's in some ways disadvantageous to US tech companies, who hold a lot of this data. But again, that's the European approach, and that's how they've taken it. It's a key part of the strategy, and it's there to enable technologies, such as those regulated under the AI Act, to be deployed at a European level. Personally, I think it's a very interesting initiative that opens up a lot of opportunities for further growth in Europe.

Speaker 1:

Yeah, well, I think we should end on the positive note of all of the investment, new ideas, and creative thinking about how we can use all this data we have, and allow an AI that can draw all these connections in a really powerful way to help guide us as we think about the best policies and the best way forward. Thank you again for such a great conversation, Chris, and to all of you listeners for tuning in to this month's episode of Artificially Intelligent Conversations. Remember to like and subscribe wherever you listen to your podcasts, and follow us on LinkedIn at Penta Group.

Speaker 2:

Thanks very much.

Speaker 1:

Thanks, Chris.

AI Regulation and Industry Updates
Digital Twin City and Data Spaces
Harnessing AI for Policy Development