The Penta Podcast Channel

Artificially Intelligent Conversations: Leaders focus on AI at Davos

January 17, 2024 Penta

This week on What's At Stake, tune into the inaugural episode of our Artificially Intelligent Conversations series. Each month, Penta Partners Andrea Christianson and Chris Mehigan will dissect the latest AI news and developments from each side of the Atlantic, helping you understand how AI will affect stakeholder management, policy, and more. This week, they discussed two major reports released leading up to the World Economic Forum in Davos: one from the International Monetary Fund on the impact AI will have on the economy and one from Deloitte on the impact of AI on jobs and companies.

With the EU's AI Act on the horizon, Andrea and Chris debate the role of companies and governments as regulatory arms. Further, they explore the alarming potential for AI to produce 90% of online content, and the potentially polarizing effects of deepfakes and rumor bombing on our political landscape. Don't miss the first episode in this series!

Speaker 1:

Hi, I'm Andrea Christianson, a partner at Penta Group in the DC office.

Speaker 2:

And hi, I'm Chris Mehigan, a partner at Penta Group in the Brussels office.

Speaker 1:

Thanks for tuning in to Artificially Intelligent Conversations. This is our recurring series that's part of Penta's What's At Stake podcast.

Speaker 2:

We're facing a time of unprecedented technological innovation in the AI space. Each month, Andrea and I are going to dissect the latest AI news and developments from each side of the Atlantic, helping you understand how AI will affect stakeholder management, policy, and more.

Speaker 1:

Let's dive in. So, Chris, there have been a ton of new reports out over the past week: AI readiness by country, readiness by company. What's going on in the reporting world?

Speaker 2:

I mean, there have been so many reports. People were very busy over December, and probably earlier than that, running these surveys to get this information, and we have been hit with a ton of information over the last two weeks. One report that really caught our eye this week was the IMF report, which dropped right before the World Economic Forum in Davos, and this is one which you and I have spoken about briefly. But why don't you touch on some of the key findings there?

Speaker 1:

Well, I mean, in short, 40% of jobs will be exposed to AI. This is going to be more significant, up to 60%, in developed countries, and less significant in developing countries. But that shouldn't necessarily be cause for celebration, because you could have an increasing gap in inequality between countries and within countries. There are all kinds of implications for this, and so I think there's a question of: are we ready, are we not ready? I mean, the view was that Denmark, the United States and a couple of other countries were more ready than others, which I guess is good news. What's your take?

Speaker 2:

Yeah. You know, you read different reports on this, but the IMF one was really stark, and it is something I think has come up a few times before in some of the conversations and concerns that were put forward about AI and its adoption: that it would create a sort of greater gulf between those countries that are able to harness the technologies effectively and those that perhaps lack the skills or the infrastructure to do so and to benefit to the same extent.

Speaker 2:

But now we actually have this feedback in the form of this report, and it is a really stark number. I suppose there are two aspects to this. Not only are those countries primed to be in a better position to benefit from AI and those opportunities, but they're also identified as the countries that could be most affected negatively, in that you could see job replacement, skills shortages and so forth, and the pressures that those would bring to bear as well. So it's really the whole question of how they might be primed to benefit more than other parts of the world, but of course there's work to be done in order for them to actually realize those benefits. And longer term, what we need to consider, of course, are the concerns around a huge shift in inequality, where certain countries really come to the fore in terms of being able to harness AI while other countries are just left behind.

Speaker 1:

Yeah, and I think it also raises this question of: what are governments doing now? What should governments be doing? What are companies doing now? What should companies be doing and thinking about? I mean, our whole premise at Penta is stakeholder engagement, understanding your audiences and engaging your audiences, and we're looking at employees, we're looking at political actors, we're looking at investors. So let's think first about what they're thinking about in the EU, and what maybe they should think about, based on some of these new reports.

Speaker 2:

Well, again, there are lots of things we could unpack there from the EU side. One of the things that jumps out at me here, of course, is that when we think about preparedness, it's about the environment. It's not just about whether or not the tools are there to be harnessed, but the environment in which they're deployed. So the EU is maybe a little bit on the front foot in that regard, because they are just about to pass the EU AI Act, which, of course, has been subject to so much reporting. But they're not the only ones moving forward with regulation and, if anything, what we are seeing is that business leaders are actually craving some of this regulation in many cases, but also looking for greater global cooperation so that there is a consistent approach. And there are certain mechanisms coming into line that could facilitate that, such as cooperation at the G7 level and so on.

Speaker 2:

These are opportunities as well where we can see greater collaboration from an international perspective. But it's about creating that environment, a stable environment, where we know how AI is going to operate, where we know what the rights are of the individuals affected by it, where we know what the boundaries are in terms of how it can be deployed, and not least, for example, the whole questions around intellectual property, transparency and privacy, all of these things which need to be explored. So when we have a regulatory system around that, and it is set and in place, that gives greater certainty about how services can be deployed. Of course, having said that, there are many other areas, and you mentioned stakeholders: a big question here will be understanding what those stakeholder concerns are, what their fears are and what their requirements are in terms of any AI that's going to be deployed, whether that's employees, consumers, business partners or whatever the case may be. So there's a lot of work for companies to do to really realize this.

Speaker 1:

Yeah, and I think there's a question that I have, and I think others have, about how flexible the AI Act framework is. Because you just look at the news every day and things are changing. I mean, this new Anthropic report basically said that there's a way to train AIs and LLMs to deceive. OpenAI just quietly changed its policy to enable it to do national security work, which can be interpreted as military work. And so I think there's a question of: can we really write a regulation that's going to be able to keep Pandora in the box? Is the AI Act flexible enough? Should it be a model for the rest of the world? I can get into what the US is talking about, but let's stay on the EU for a second.

Speaker 2:

Yeah, well, absolutely, there are all sorts of these questions. If you recall, you and I did a podcast recording some time back where we spoke about AI, and one of the points I made was that the real test for the AI Act is not about it being adopted and putting regulation in place, but how it responds to advancements within the AI industry. Is it flexible enough to really address the next evolution of AI that is coming down the tracks? That's a really big question. We won't know, of course, until it comes out, but that is something they're definitely going to be challenged with. And I don't think it's just an issue for the regulators either. I think it's something for all companies to be concerned with: how do you respond to some of these issues as they emerge?

Speaker 1:

Yeah, I mean, it's a great question, and it's something that we're thinking about all the time. When we get asked, hey, what should we be doing around AI, our answer tends to be: well, you should be engaging with AI, you should be thinking about it all the time. And it's funny.

Speaker 1:

I mean, at least in Washington, DC, it feels like we're a little bit behind a lot of the time in terms of how we're adopting or engaging with technology. From a regulatory standpoint, from a legislative standpoint, we're moving along, but I don't think there's a view that the US is going to move really fast on some kind of legislation. President Biden put out his EO, and I think there's going to be a lot of focus on how that's being implemented in 2024. But some of the people I'm talking to don't think there's going to be any meaningful legislation out of Washington this year. We'll see how that shakes out; I think it's possible there could be some. It'll also be interesting to see how this election shakes out and whether there are any issues related to AI that could come out of that. But I think the EU right now is far and away in the lead. I'm still a little skeptical, though, that we're going to be able to keep this in a box.

Speaker 2:

Certainly. But it's not just about the hard legislation, the primary legislation, and yes, that's very much the EU's way of doing business. What we are also seeing is policies being developed by companies who are being quite responsive to some of the concerns. Look, there's a really interesting dynamic coming through a lot in the various reports we're seeing, which is that there's a relationship between the level of expertise and knowledge an individual has around AI and the level of concern they have. Where we see greater knowledge and greater understanding of AI, how it operates and what it's capable of, we see a greater level of concern about its deployment in various areas, whether that's questions about misinformation or concerns about implications for jobs and so forth. That's a really interesting dynamic, and it's something I think business leaders really should be paying very close attention to: they need to be communicating with their stakeholders about these concerns and these areas.

Speaker 2:

But this brings me to my point, which is that it's not just about primary legislation. We are seeing companies being quite responsive to some of these concerns and bringing forward their own policies. Whether that's Meta, for example, coming forward with a newly introduced policy and an undertaking that there will be transparency around AI being used in any political campaigning, or whether it's OpenAI taking a similar position just recently, also declaring where its technology has been used in this regard. So on the question in particular around political campaigning and political communications in 2024, which we know is a crucial year from a political perspective, it's really important that we're seeing these policies being adopted by the companies themselves.

Speaker 1:

Yeah, and I guess when we think about companies and companies using this: you mentioned to me a little bit before we started this recording that there was a new Deloitte study out saying that companies aren't really ready. So give us some detail on that.

Speaker 2:

The Deloitte study just came out today and, similar to the IMF study, it came out in light of the World Economic Forum in Davos, where clearly there's so much discussion taking place on artificial intelligence. It was a survey of 2,800 director- to C-level executives, and it found that only one in five executives believe their organization is highly or very highly prepared to address AI skills in their company, while just one in four believe they're well prepared to address AI governance and risks. And again, this is about the policies they adopt internally. But 47% say they are sufficiently educating employees about AI. So what we're seeing is that they feel they are starting to tackle the issue, that they're working closely with their employees, that they're bringing forward the discussion, but there is definitely a lot of work still to be done.

Speaker 1:

Yeah, and you brought up Davos, so I want to move us over there. The World Economic Forum is in Davos this week. Many people are there, including some of our colleagues at Penta (a shameless plug): our president, Matt McDonald, is going to be on a panel there talking about some really interesting stuff on AI. We were talking about his prep notes and some ideas he was floating, things like: what if you could interview infinite potential employees by having people do an interview with a chatbot? It's fascinating, because you could find multiple needles in a haystack, which is really interesting versus just having someone manually do a resume review. Then there was something that was a little creepy to me but really cool: this idea of almost AI replicas of people. So if you're a fantastic manager and you can get a replica of that, you can manage way more people than you could in person. Now, I think there are drawbacks and pluses to all of that, but really interesting things to think about.

Speaker 1:

But to bring us back to Davos: the World Economic Forum put out its Global Risks Report for 2024. The most severe global risk in the near term, meaning over the next two years, was how people will leverage misinformation and disinformation to widen societal and political divides. Obviously, there are hundreds of elections happening this year; I think something like 3 billion people are going to vote over the next year or two. So what are misinformation and disinformation going to do, and how is AI going to fit into that? This is a big question. So, Chris, what are your thoughts?

Speaker 2:

I can't think of a bigger question this year than this one. It is going to be everywhere, because we have elections happening across the world and everyone's going to be focused on this. We've already seen this being deployed in Europe, and we've seen some of these concerns raised. In 2023, there were over 500,000 video and voice deepfakes shared on social media around the world. That's a huge number, but in Europe, a report released quite recently anticipated that 90% of online content may be generated synthetically by 2026. So we're on an upward trend; there's going to be more and more of this content coming out. And how do we determine what is real and what is fake? That is a really big concern, and it's something that is going to be taken seriously. But there's no doubt in my mind this is going to be the focus of so much discussion; every other week we'll be talking about this, and I'm not too sure how they're going to crack this egg.

Speaker 1:

Yeah, and I mean it's interesting. I think a part of it is education.

Speaker 1:

I mean, even last night my husband was on Twitter/X and there was a video of a container ship being hit by a missile and on fire, and his first reaction was: I need to go to a different news source, to the Financial Times and the New York Times, and make sure this is real, because I don't know that what I'm seeing is real. I was pleasantly surprised at how fast we are moving into this verification effort around the videos we're seeing, which I think could be valuable. But there's definitely a piece of this where we're going to have to educate people early, particularly if LLMs can be trained to deceive, which, of course, Anthropic says is very hard, and I imagine it is very hard and not something we should be overly concerned about. But what we need is almost better training on how to recognize when things are manipulated.

Speaker 2:

And I don't know if we're going to be able to do that. It's so important. You know, recent studies have shown that a majority of US adults are concerned about AI spreading false information in the 2024 presidential election, and a majority of EU citizens in member states are concerned about the threats of AI and deepfake technology. We've seen deepfakes already appearing, as I mentioned, in a couple of European elections, notably the Slovakian and Polish campaigns, where they were used. Deepfakes are probably the thing people most anticipate seeing, but there are much more subtle ways that misinformation and disinformation can be spread through AI technologies as well. There's such a thing as rumor bombing, for example, designed to deter voters from going to vote, to suppress turnout on voting day and so forth. These kinds of techniques can be used en masse and extremely well targeted, because AI can do extremely good audience segmentation and targeting, so really focused targeting on swing voters. So it's not just going to be deepfakes; there are other elements and other techniques that could also feature.

Speaker 2:

A big part that came out in that World Economic Forum report, which again was really interesting, points to a continuing trend we've seen in previous reports too: social polarization and the lack of trust in institutions such as the media and political figures. All of this is feeding into this and drawing off it. So engagement with voters is becoming very fragmented, and that's where there's a huge opportunity for misinformation and disinformation to really take hold. I suppose what I'd be very encouraged by is that your husband saw that image and went off to double-check another source. At the very least, that's a really important piece: being able to seek multiple sources of information to verify whether something is or is not true.

Speaker 2:

Another aspect of this, and this is where it comes back to the companies, is the whole question of content moderation and of trying to understand what content is real and what is not.

Speaker 2:

There are questions about using watermarks and digital watermarks, but again, studies have shown that they are not always the best way of doing things and that they don't always work. They can be faked, they can be tricked. And from a human perspective, we might think we're great at identifying these problems, but really we're not: only about a third of survey respondents are able to identify deepfakes in any given situation, so these are effective methods for tricking people. That's another thing that should be taken into consideration. And again, coming back to the companies and content moderation, the last thing I'd say is that an awful lot of companies are relying on technology themselves to identify deepfakes or fake content, and in fact that has proven to be really ineffective as well. So we definitely have a problem, in that we don't have an effective gatekeeper for all this information, and it's getting out into the populace.

Speaker 1:

So the good news, Chris, is that the EU is going to solve this with the AI Act. Tell us what is happening.

Speaker 2:

Yeah, well, the EU, I suppose, is again a little bit ahead of the game on some of these topics. It's not just the AI Act, though of course there are elements within that; it's also the legislation that came previously, the Digital Services Act and the Digital Markets Act, which is also trying to deal with disinformation online. So there are mechanisms in place and starting to come forward, and hopefully they will work. On the AI Act specifically, they were looking very much at this watermarking of content to identify whether something is or is not generated by AI. Ultimately, though, I think we're going to be looking at technology to help solve this problem that is created by technology, because we need to find a robust system that can clearly identify information generated by AI, and one that is trusted by individuals. That is a solution that's going to have to come to the fore at some point, but it's a really important piece of the puzzle.

Speaker 1:

Awesome, thank you. We will look forward to what the EU is going to bring to the world with the AI Act and complementary actions.

Speaker 2:

One thing I'll add, which came up over Christmas, I don't know if you saw this, there was this huge piece about a Spanish model who makes a fortune. Do you know what I mean? She's an Instagram model and influencer and all the rest, but the key thing is that she's not real. She's AI-generated, and she's got a massive following on Instagram, brand placements and so forth, and she's not a real person. But a lot of people are interacting with her, and that brought us down the rabbit hole as we looked into it a little bit and found this really interesting business model that this company in Spain has come up with. It also raised this other question about other non-real interactions, such as chatbot girlfriends, which is, of course, a thing, and there's a big, big one, I think, in the US, right?

Speaker 1:

Yeah, apparently it's one of the biggest things in the OpenAI GPT store; there are a lot of chatbot girlfriends in there.

Speaker 2:

Right. Not one I've had a lot of experience with myself, but, you know, I suppose it's meeting market demand, I guess.

Speaker 1:

Well, I mean, yes, it is, if there's the demand for it.

Speaker 1:

But you know, I think there are some broader questions that come out of something like that, when we think about AI and technological change generally, and about what we view traditionally as social and what we are viewing as social in today's world. I think there has to be a little bit of adaptation. We're kind of dealing with a loneliness crisis; you're seeing depression rates for young people growing across the globe, not just in the US. So there's a question of: is spending all your time talking to somebody who's not real, even if that's a meaningful conversation, going to do something to solve loneliness, or is it going to perpetuate it in ways that we don't understand yet? I'm obviously not a psychologist, so I don't want to pontificate too long on this, but I think these are really interesting questions that we should be thinking about, and I don't really know whether there's a governmental solution to some of that, or whether there even should be.

Speaker 2:

I know, but I think that's a really interesting question: what are the implications, whether benefits or negatives, of these types of interactions? Maybe that's something where we could find somebody to talk to us about it, because I think that would be a pretty interesting discussion. It does point, of course, to the fact that this technology is changing society. Beyond just data management or data manipulation, beyond things like doing tasks at work and coming up with new drafts of press releases, there are other applications of artificial intelligence that have societal impacts, and that's something society is going to have to come to grips with at some point. Probably the sooner the better.

Speaker 1:

Yes, and Chris, you bring up a good point. This is the inaugural conversation of Artificially Intelligent Conversations. Thank you, ChatGPT, for suggesting the name; I think we need to give credit where credit is due. We expect to have guests from time to time to talk about things like AI chatbot girlfriends, or watermarking, or healthcare innovations that can come from AI. Also, just as a side note, it is a snowy day in DC, so we're recording from my home in DC, and it's frigidly cold in Brussels, as I understand, Chris. So thank you for joining me.

Speaker 2:

Well, I'm actually in Dublin at the moment. But you know, when it snows over here, we call it "lá sneachta", which is Irish for, literally, "day of snow". So you're having a lá sneachta today over in DC.

Speaker 1:

I love that. I'm going to start using that. Well, anyway, thank you so much for listening to this month's episode of Artificially Intelligent Conversations, with myself, Andrea Christianson, and my partner, Chris Mehigan. Remember to like and subscribe wherever you listen to your podcasts, and follow us on LinkedIn at Penta Group.
