Davos Panel Hosted By IBM CEO Ginni Rometty Explores Precision Regulation of AI & Emerging Technology
Jan 23, 2020

Event formally launches IBM Policy Lab, a new forum to advance bold, actionable policy recommendations for a digital society and foster trust in innovation

Also follows IBM Policy Lab publication of industry-leading priorities to guide regulation of artificial intelligence based on accountability, transparency, fairness and security

 

At the World Economic Forum in Davos, IBM launched the IBM Policy Lab – a new global forum aimed at advancing bold, actionable policy recommendations for technology’s toughest challenges – at an event hosted by IBM Chairman, President and Chief Executive Officer Ginni Rometty that explored the intersection of regulation and trust in emerging technology (full transcript below).


The IBM Policy Lab, led by co-directors Ryan Hagemann and Jean-Marc Leclerc, two long-standing experts in technology and public policy, provides a global vision and actionable recommendations to help policymakers harness the benefits of innovation while building societal trust in a world reshaped by data and emerging technology. Its approach is grounded in the belief that technology can continue to disrupt and improve civil society while protecting individual privacy, and that responsible companies have an obligation to help policymakers address these complex questions.

Christopher Padilla, Vice President of Government & Regulatory Affairs, IBM, said:
“The IBM Policy Lab will help usher in and build a new era of trust in technology. IBM pushes the boundaries of technology every day, but we also recognize our responsibility on trust and transparency, and to address how technology is impacting society. I see an abundance of technology but a shortage of actionable policy ideas to ensure we protect people while allowing innovation to thrive. The IBM Policy Lab will set a new standard for how business can partner with governments and other stakeholders to help serve the interests of society.”


Ahead of the launch event, the IBM Policy Lab released landmark priorities for the precision regulation of artificial intelligence, as well as a new Morning Consult study on attitudes toward regulation of emerging technology. The perspective, Precision Regulation for Artificial Intelligence, lays out a regulatory framework for organizations involved in developing or using AI, based on accountability, transparency, fairness, and security. This builds upon IBM’s calls for a “precision regulation” approach to facial recognition and illegal online content – laws tailored to hold companies more accountable without becoming so broad that they hinder innovation or the larger digital economy. These approaches are reinforced by a Morning Consult survey, sponsored by IBM, which found that 62% of Americans and 7 in 10 Europeans surveyed prefer a precision regulation approach for technology, with less than 10% of respondents in either region supporting broad regulation of tech.

IBM’s policy paper on AI regulation outlines five policy imperatives for companies, whether they are providers or owners of AI systems, that can be reinforced with government action. They include:

  1. Designate a lead AI ethics official. To oversee compliance with these expectations, providers and owners should designate a person responsible for trustworthy AI, such as a lead AI ethics official.
  2. Different rules for different risks. All entities providing or owning an AI system should conduct an initial high-level assessment of the technology’s potential for harm. And regulation should treat different use cases differently based on the possible inherent risk.
  3. Don’t hide your AI. Transparency breeds trust, and the best way to promote transparency is through disclosure: making the purpose of an AI system clear to consumers and businesses. No one should be tricked into interacting with AI.
  4. Explain your AI. Any AI system on the market that is making determinations or recommendations with potentially significant implications for individuals should be able to explain and contextualize how and why it arrived at a particular conclusion.
  5. Test your AI for bias. All organizations in the AI developmental lifecycle have some level of shared responsibility in ensuring the AI systems they design and deploy are fair and secure. This requires testing for fairness, bias, robustness and security, and taking remedial actions as needed, both before sale or deployment and after it is operationalized. For higher-risk use cases, this should be reinforced through “co-regulation”, where companies implement testing and government conducts spot checks for compliance. (A minimal sketch of what such a bias test might look like follows this list.)
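
To make the fifth imperative concrete, here is a minimal sketch of the kind of pre-deployment fairness check it calls for. It is illustrative only: the binary approve/deny outcomes, the two demographic groups, and the 0.8 “four-fifths” disparate impact threshold are assumptions chosen for the example, not part of IBM’s published framework.

    # Hypothetical pre-deployment bias check (illustrative assumptions throughout):
    # compare a model's favorable-outcome rates across two groups and flag a gap.

    def selection_rate(outcomes):
        """Fraction of cases receiving the favorable outcome (1 = approved)."""
        return sum(outcomes) / len(outcomes)

    def disparate_impact(group_a, group_b):
        """Ratio of selection rates; values well below 1.0 suggest possible bias."""
        return selection_rate(group_a) / selection_rate(group_b)

    if __name__ == "__main__":
        # Toy model outputs for two demographic groups (1 = approved, 0 = denied).
        group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # selection rate 0.375
        group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75

        ratio = disparate_impact(group_a, group_b)
        print(f"Disparate impact ratio: {ratio:.2f}")
        # The 0.8 threshold is the common "four-fifths rule" of thumb, used here
        # purely as an example of a testable, value-based criterion.
        if ratio < 0.8:
            print("Potential bias detected: remediate before sale or deployment.")

In practice the same check would run against the values an organization has explicitly chosen, with remediation and re-testing repeated after deployment, as the imperative describes.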

These recommendations come as the new European Commission has indicated that it will legislate on AI within the first 100 days of 2020 and the White House has released new guidelines for regulation of AI.

The new Morning Consult study commissioned by the IBM Policy Lab also found that 85% of Europeans and 81% of Americans surveyed support consumer data protection in some form, and that 70% of Europeans and 60% of Americans responding support AI regulation. Moreover, 74% of American and 85% of EU respondents agree that artificial intelligence systems should be transparent and explainable, and strong pluralities in both regions believe that disclosure should be required for companies creating or distributing AI systems. Nearly 3 in 4 European and two-thirds of American respondents support regulations such as conducting risk assessments, pre-deployment testing for bias and fairness, and reporting to consumers and businesses that an AI system is being used in decision-making.

In addition to its new AI perspective, the IBM Policy Lab has released policy recommendations on regulating facial recognition, technological sovereignty, and climate change, as well as principles to guide a digital European future.

The IBM-hosted event in Davos, Walking the Tech Tightrope: How to Balance Trust with Innovation, also featured the President and CEO of Siemens AG Joe Kaeser and White House Deputy Chief of Staff for Policy Coordination Chris Liddell. CNN International Anchor and Correspondent Julia Chatterley moderated the discussion.

Transcript:

Chris Padilla:

Hello, my name is Chris Padilla. I’m Vice President of Government Affairs for IBM. This event today serves as the official launch of the IBM Policy Lab, which is meant to offer bold solutions to the public policy problems of today, particularly focused on emerging issues around technology policy, and skills, education and workforce policy.

It will be based in Washington and in Brussels; and we’ll try to offer the point of view of IBM for public policymakers and the public at large, stakeholders, if you will, on specific actionable recommendations on how policymakers can address tough issues, like facial recognition or the regulation of online content, or today, as we’re announcing in a new paper, ways to regulate artificial intelligence.

We’re thrilled to have a very distinguished panel with us today to help launch the IBM Policy Lab and discuss some of those issues. So I’m pleased to introduce Ginni Rometty, Chairman, President and CEO of IBM; Mr. Joe Kaeser, President and Chief Executive Officer of Siemens AG, and a founding partner of the Charter of Trust for Cybersecurity, of which IBM is a proud member; and Chris Liddell. Chris is Assistant to the President and Deputy Chief of Staff for Policy Coordination at the White House, and a strong voice on technology and innovation policy in the US government.

And to moderate and lead our discussion today, we’re really honored to have you, Julia Chatterley, from CNN International, who has met and spoken with many of you and many experts in this field. So without further ado, Julia, over to you. Thank you all for being here.

 

Julia Chatterley:

Thank you so much and welcome everybody. Thank you so much. [applause] Two vital themes that I’ve certainly been discussing, and that these leaders have been discussing. The first: tech for good. Let’s remember how beneficial tech can be. The second is trust in tech. Perhaps more specifically, trust in the use of technology. And the bottom line here is trust in technology is declining. People are afraid.

I’ve got a great stat, as I’m known for, and I’ll keep it to one, I promise. 47% of people in a trust-in-technology survey said, one, innovation is going too quickly; two, it’s leading to changes that are not good for society. We can talk about some of the recent stories: cybersecurity issues, privacy issues, data. Then we can talk artificial intelligence and the use of it. All of these things, automation, the impact on jobs. It’s no surprise that people are afraid.

The key is, how do you find the appropriate level of regulation; how do you react as a company; and, what these guys are going to be talking about, what we’re calling a tightrope here: finding the right balance. Not to suppress innovation, but also to protect society. We think we’ve got anger problems now, and we’ll talk about this, oh boy, when you hear some of the statistics that this panel will talk about.

Let’s get on with it and hear from the experts. Is it right to be calling it a tightrope? Is that how it feels? Ginni, kick us off.

 

Ginni Rometty:

Well, I think it is because of the word balance. A friend of mine, don’t ask me why, was trying to walk a tightrope this weekend and sent me a video of her trying to do that. And as I watched her – and thank goodness it was only this far off the ground, because she only made it two feet. And I thought – and it’s so funny you say, because I thought well, what is it? I’m like she has no balance.

And to me, this is a balance, because part of why we are, I believe, in a little bit of a fever – not a little, a fever – over this is, people are not looking at it with the right balance. And it is going to be a balance, to say it starkly, between security and innovation. And you cannot – this is not an either/or discussion. I’m passionate about this.

And I hope when – first off, I apologize for those of you standing, as the host, because we didn’t think there would be that many people interested. So I appreciate that you are, and please be patient, okay? Give them more food in the back or something. Appreciate that.

 

Julia:               We’ll keep it lively.

 

Ginni:

But this issue is about balancing security and innovation. And if I can just add one other comment about when Julia said this is about trust: I can’t tell you how strongly I feel this is a decade of trust. I told her, I remember coming to Davos four or five years ago, producing a paper where we said, we’re going to write down our principles for trust and transparency in this era. And everyone was like, ohh – they weren’t that interested.

And there was a reason we felt so strongly, and to this day – we won’t talk skills ‘til the end of this – but I believe at the heart of this is that people are not sure there’s a better future here, when they take all of this technology into consideration. So yes, there’s security issues, but I think what’s really at stake is: is this an inclusive era? Will this digital era be an inclusive era? And if not, the gap between haves and have-nots gets wider, societal divides get wider. People go to places you don’t want them to go. And so I think that’s why, yes for privacy, security, we’re going to go down there, but this to me is as much about an inclusive way forward.

 

Joe Kaeser:     Well, look, it depends on who you ask.

 

Julia:               Say people.

 

Joe:

Oh, it depends on who you ask. I mean if you ask the experts, the professionals, they will tell you we have a way of how to handle it: free flow of data, cybersecurity and what have you. If you ask people, the uneducated public, _____ like tech, no. They have fear because they don’t exactly know what their concern is. They have fear.

The uneducated public, again, the elder generation – like, I don’t know, me maybe – maybe has more fear than somebody who has grown up with man-machine, human-machine interfaces and intuitively deals with them. So it’s a very, very broad topic.

I think the issue we have is we are in the middle of the fourth industrial revolution, the Internet of Things, or whatever you call it. And the first three, those felt tangible. When somebody invented the steam engine, everybody who had been working really hard physically saw: oh man, this is really good. It makes my life better. I have a better life. The second one, electrification, automation: it was tangible. You could see it. So you said, okay, well, actually I can see the benefit.

Now comes the fourth industrial revolution, the Internet of Things, technology. It’s like this is about the cyber world, somewhere, somewhere, you don’t exactly know, is this the dark force or what does it do to me? There is this uneasiness about the intangibility, what the hell is that all about? And that’s the issue.

So what I would say is, there is fear in the uneducated public; or the younger ones are just dealing with stuff they will probably be sorry about 20 years later – they make their selfies when they are naked as a baby, and when they’re teenagers they probably like that one a lot.

And the middle thing – and this is what Ginni said – the middle thing, how to balance, how to be inclusive of experts versus the uneducated, is trust. That’s exactly what it is. The question is, how are we going to build trust? Through action, through self-containment, through building a sort of integrity of the intent; or does it come with the big thing, with the regulator? And I think that’s the environment we are in. There is no black-and-white story here. There is no binary story. We need to build it one step at a time.

Think about the Internet of Things. Well, free flow of data, connectivity, 5G, all super, everything is connected, Internet of Things. So great. However, now we have a regulator who’s been…

 

Julia:               Chris, they’re calling you a regulator.

 

Chris Liddell:  I know…[overlapping talk/laughing]

 

Joe:

He’s closest to me. But I was saying, take any government: the government is used to a territorial integrity. The law is about a territorial boundary. The Internet of Things couldn’t care less about how high you build your walls. It just flies over them, because that’s what connectivity does. The question is, how do you deal with extraterritorial lawmaking? There is no point in the EU doing some stuff, the United States doing some stuff, China doing some stuff, if we are in all the countries of the world and need to get our 500 factories working together.

So that’s the issue on the professional side, and we do need to have a regulation for that. And preferably in a global system like the WTO. Should there ever be [overlapping talk] renewed? You need to have that.

 

Julia:               Don’t worry, we’ll talk about that.

 

Joe:                  Yeah, I went a bit long, but that’s a fact of life, as it looks like in digitalization.

 

Julia:               Mr. Chris regulator.

 

Chris:

Yeah, sure. I totally endorse what Ginni says about balance, and I think that’s probably going to be one of the concepts we come back to. If we stick with your tightrope analogy, I think the only extension I’d make – I think tightrope is as good a metaphor as any – is that it’s really a series of tightropes.

So the regulations associated with autonomous vehicles or synthetic cancer drugs are going to be quite different to movie recommendation engines. It’s not one-size-fits-all. And we also get quite quickly into privacy, free speech and other issues. There’s a landscape of different issues, and some of them are more severe than others in terms of what happens if you fall off the tightrope.

I think the metaphor is great, and I think balance is a good way to think about it. And my life as a regulator – which I’m not, but I’m in charge of policy coordination at the White House – is as much about the how as the what. So there’s a natural tendency to leap to the conclusion, but the process by which you get to the conclusion is how you build trust, and hopefully build the best solution. And I’m happy to talk about that.

But I think the important thing in the first question is, this is not a one-size-fits-all concept. So yes, we’re on a tightrope for a lot of these issues; yes, the landscape is changing below us; but one-size-fits-all is very unlikely to be it. So you need a very thoughtful process about getting to different results.

 

Julia:

And arguably, big governments are on the highest tightropes of all, because you’re in the line of fire if you get it wrong, if people get angry, if they are mistrustful, if they get injured; but you also want to foster innovation and the economy. We’ll come back to that.

Safety nets. Nice when the floor is only a foot below. You still get a bit of a bump, but when I’m talking about tightropes, when I’m thinking about tightropes, I do think about the safety nets here. And I think that goes – and we’ll move on to the specifics here – but I think that goes, Ginni, to what you’ve announced at Davos with the Policy Lab. Looking for that safety net: how responsible are government and regulators, and what responsibility does the private sector take at the intersection of those two things? And we’ll start with AI, because that’s what you’re specifically focusing on; and then we’ll talk about all the different tightropes that we’ve got. It’s a crisscross.

 

Ginni:

So again, you’re here because, I assume, you’d like to hear just some different views of what we all think should be, could be some recommendations, right? And so one set of ours is around AI, having been some of the original mothers and fathers of AI, back decades, through its summers, its winters – it’s back. And we have a strong belief that it changes 100% of jobs. I didn’t say replaces; I said it changes them, in some way.

And we see this happening for the good, and I’ve always said, this will be the opportunity of our time; it’ll be the challenge of our time. And therefore, what would be pragmatic things to do? The speed that it’s moving means that government can’t do this alone by any stretch. It will be government and business together.

If I could divert one quick second. We were talking before we started, Joe used all the analogies of the industrial revolutions, and I said to Julia, if you really go back and look in time, when things started to happen, when people were no longer going to work on a farm and there were, as you saw, steam engines and everything else – we forget, there were times when it was not a law that you had to go to high school. The societal fabric always follows, it seems, behind the physical change. So we’re in no different a period, in some ways, than what happened before.

Back to pragmatic. It’s happening quicker though. That is why it’s going to be partnership now. This isn’t like waiting for the government to make some edict. Here would be some practical recommendations, we would say. And Chris knows this. He’s I believe a strong supporter – now maybe not everything I’m going to say – but a strong supporter of innovation and this balance. He just said it himself.

One is, I wouldn’t think of regulating the technology. We thought of the term precision regulation, like precision medicine. Precision medicine means if my leg hurts, the doctor does not chop my arm off. And therefore, with precision regulation, the first rule would be: regulate the use of the technology, not the technology itself.

Think about that for a minute. The use. So in other words – every one of you, well, not everyone, you have an Apple or a Samsung phone or whatever you happen to have. How many of you used your face to open it? That’s facial recognition. I’m going to turn it off – how do you feel about that? Now, on the other hand, if it was used to take away your civil liberties, you would feel very differently about it. So the use, how it’s being applied, would be my first – our first basic recommendation.

The second thing, and Chris did mention this, is risk. It’s one thing if I’m giving you a restaurant recommendation; it’s way different if I’m going to recommend the top three ways to treat your particular kind of cancer. That calls for a risk-balanced set of regulations.

And the third is the one that I think has been crystal clear from the beginning on AI: if you’re going to use it, be transparent and clear about bias. Transparent meaning, I know when I’m using it. I know who’s trained it. I mean, if I’m going to get medical advice, do I want it trained by the unnamed internet? Or do I care that maybe five good institutions that I trust trained it?

Bias, though – if I can say one last word on bias. We built software to tell you if we think there’s bias in something. I want to pause for a minute. Bias is active: you are applying some set of values to decide something’s good or bad. You decide everyone should get a home mortgage, should not be discriminated against for certain things – you’ve made a set of value judgments, whatever it is.

As an example, I think you should check for bias against the values you picked. We’ve built software that will look at your algorithm, built by anybody – Google, it doesn’t matter. It’ll just look for patterns. It’s not judgmental. But I think everyone’s got to be aware of what bias goes in.

I’ll just summarize. Regulate based on the use of the technology, not the technology itself. I am very afraid this could go overboard: people that don’t understand technologies could try to completely stifle the innovation, and some other country is going to go forward then. The second thing is to do it in a risk-based way, and the third is with transparency and with this keen eye around bias.

 

Julia:

My first observation there, and in the discussions I’ve had about AI: there are so many different strains; there are so many different use cases. And when we’re talking about AI – algorithms, the processes, fast data processing versus human-intelligence AI – that complicates it. But I just want to bring in one stat.

 

Ginni:

That’s right. You have to have an AI – we have an AI Ethics Officer. I am sure many companies nowadays are starting to do that. It’s not black and white, to Chris’ point.

 

Joe:

Yeah, I think it’s easier if you deal with B2B, because it’s experts versus experts. So you pretty much know what you’re apt to be doing in both of these _____ companies. I think the real issue is where data from consumers go someplace. And the question is, shouldn’t the consumer – or whoever gives data away to somebody else – have the right to decide whether or not that data goes someplace? Or secondly, whether it’s being used for something.

And this is what I always call the integrity of the intent. I think we should have a regulation where no matter who uses your data, you have a right to know what people do with your data. And you say yes or no or maybe. I think it’s really important. It’s integrity of the use of data. You need to know what happens with that thing. And if you say well, this is going to be taken to impact my behavior, well, I don’t care. Well, then fine, do it. But if I say well, no, I didn’t think that is a good idea, then stop it.

We need to have this, because otherwise you never know anymore where things travel. And the worst case is, you’ve got a society in the world which gets manipulated by much smarter people than in the last century, when manipulation led to catastrophes. This is really what concerns me.

 

Julia:

And what utility you’re getting for your data as well. I mean, Facebook is the obvious example. People are still adding data to Facebook because they see a utility in giving up their data for free.

 

Joe:

If you had asked me, I would have probably used that example. That’s exactly what it is. You get power over the people in a way that people don’t even know about; and once they figure it out, it’s too late. That’s in essence where we are at. And this can’t be just let go and say, well, may the better team win. It’s clear who the better team is. That’s decided.

 

Julia:               It’s on the tech side, not on the consumer side.

 

Joe:                  Of course, who else?

 

Chris:

So I think there’s a role for both the public and the private sector. The public sector sets the minimum standards, and then the private sector, through terms of service, can decide, if they want to, to experiment with more restrictive terms or different terms. And to Ginni’s point, the public sector is likely to be more comprehensive, but slower. The private sector can experiment and do things in advance of the permanent solution, which can actually help inform the solution as well.

But I think the key in my mind is how you get good regulation, and that’s a more generic question. In terms of my approach, I don’t come from the public sector in terms of background; I come from the private sector. But I use the same basic principles in the public sector as I do in the private. Which is: you start with a good values-based conceptual framework; you apply that framework; and you monitor and measure it and change it as time goes on.

And if you just do number one, that’s just empty words. And there’s plenty of that. So you actually need to take a conceptual framework and be able to apply it in a systematic fashion. If you do number one and number two without number three, you get permanent regulation, which is unlikely to actually survive the test of time. And most of us will agree that the internet is quite a different place now than it was say 20 years ago.

That’s exactly the approach, coincidentally, we took when we came out with our artificial intelligence regulation principles a couple of weeks ago.

 

Julia:               [inaudible]

 

Chris:

And so there are 10 principles, which bucket into sort of three basic themes. The first, which I think is the most important one, is engage the public. All of these issues are unlikely to be 100-to-zero issues where absolutely everyone agrees on what the regulation should be. Most issues in the political world fit into sort of 55/45 – 55% of people agree, 45 don’t – or 80/20. Let’s just say, generously, these are going to be 80/20 issues. That means 20% of the people aren’t going to agree with it and are going to feel disenfranchised by it. Having public comment and having that debate up front is incredibly important for something like artificial intelligence.

The second is stop regulatory overreach. This is a US-based approach: our general approach is to do the minimum amount from a regulatory point of view and do cost-benefit analysis. Different jurisdictions will take a different approach on where the balance is. Most of our approach in the US, and in this administration in particular, is going to be to do something, but to do what we consider the minimum that protects the greatest number of people.

And the third is promote trustworthy artificial intelligence. That’s going to pick up a lot of the things that Ginni was talking about: transparency, lack of bias. So clearly, in order for people to trust it, going back to theme one, you actually have to have protections.

I think the other important aspect, and this again picks up on something that was talked about before, is there’s a natural tendency to think we are going to regulate artificial intelligence. No: we are going to make regulations associated with activities which have artificial intelligence embedded in them. Because artificial intelligence is increasingly not a thing; it is something which is embedded into other activities. So it’s not just regulating artificial intelligence, it’s regulating every activity that may have an artificial intelligence application.

And if you take that basic approach that I talked about, which obviously is going to differ enormously, depending on the situation, that’s in our view at least how you’ll get the best regulation, whatever that might be.

 

Julia:               I look to – yeah, please.

 

Ginni:

Could I just jump in on something both Joe and Chris said – now, Joe obviously lives in Europe. I mean, not obviously, but Joe does, if you know Joe. And Europe’s got GDPR, and how many countries now have followed and modeled after GDPR? Chris, you know it, it’s in the 20s or 30s or something, right?

 

Joe:                  5 or 7 or so.

 

Ginni:

But Chris said this word – I mean, Joe teed up the privacy point on consumers and privacy; Chris teed up the point of engagement. I still think the US has yet to go through this phase. What Chris is describing, this engagement – this is not a technology issue, actually; whatever regulation we have on this topic is values-based regulation. So therefore, you have to engage the public – the wide public, I mean by that all different constituents – to have a discussion of where should the law, where should the line be drawn? And by whom?

And you can’t just let that be decided by anyone. And in our country, in the US, the Congress has to decide. Those are laws that get written. And therefore, there has to be engagement, dialog and then courts uphold them. Courts don’t decide those, that’s how we do these things.

So I think your point on engagement is a really critical point for the US right now on that topic, because it’s a values-based decision of what you want. Like in the old world: if there was wiretapping, you got a warrant, you could do a wiretap. In the United States, if you tell something to a priest, that information is protected. I mean, there’s things – in the past there has been a dialog and a decision made on those things. We now need to do that for this world.

 

Julia:

Can I streamline the question there a little bit? Where Europe’s led, do you think the United States will follow? Because even if I take AI as an example here, the first line of the Executive Order is “maintaining American leadership in artificial intelligence.” So if we’re looking at this as a relative competition – whether it’s data use, whether it’s innovation, whether it’s artificial intelligence – is a global solution required? Or is it about maintaining American leadership; and did extra regulation, as far as data privacy is concerned, in some way relatively hurt Europe?

 

Chris:

I think a global solution is what is needed. But when I look at the EU versus the US, I’d say there’s much more commonality than difference. I would say – and I’m jumping between AI and privacy, but let’s just stick with AI.

 

Julia:               But it’s an approach to regulation I’m talking about here, whether we’re talking about AI or data…

 

Chris:

I think most of the values that we would say were American-based values, like transparency and lack of bias, the Europeans would sign up for exactly. We might argue about the implementation and the amount of prescriptiveness in the regulation or legislation, but I don’t think the underlying values will be significantly different between most liberal democracies.

 

Julia:               “Maintaining American leadership” though in something is the first line of your guidelines, is pretty punchy.

 

Chris:

From an industrial point of view, obviously we want to promote US-based companies. But in terms of the actual regulations that sit behind it, as I say, and we are regulating the US, not the world, I don’t think you’ll see significant difference in the artificial intelligence area…

 

Joe:                  I mean if you think about maintaining the leadership, assuming it’s there, which I believe it is, you don’t need to look to Europe, you need to look to China.

 

Julia:               Well, I was going there next.

 

Joe:

But like it or not, and very simply, it’s because of scalability. You want to try something out; it’s good to have it scalable. And if you have 1.4 billion people and a government – right or wrong, everybody needs to decide on that one – a government which says tomorrow we go from here to there, then next morning 1.4 billion people go from here to there. That’s a difference.

If I say something in my company, tomorrow we do it like this, next morning people start debating whether that was a good idea or not. [laughter]

 

Julia:               Reflection is always useful.

 

Joe:

Well, that’s what big companies are all about. It’s corporate – I think you got the former _____ is a corporate _____ cast, is that what he said? Something like that. I don’t know, I don’t know, maybe that was only in Stuttgart, but doesn’t really matter. But anyway, what I’m trying to say is, if you have a systemic difference in governance, state governance, and the scalability to practice on 1.4 billion people, you better figure out what you have against that one. Right or wrong? Probably the regulation.

And then it gets iffy, because then you say, hey, the only way I can defend against that one is if I put regulatory borders on it – and then forget about IoT, free flow of data, the everything-is-connected story. Is that going to work? And this is the sort of catch-22 we are in the middle of.

 

Julia:               Absolutely, Ginni?

 

Ginni:

Well, I want Chris to – this to me is why, Chris, you’ve written this paper, and actually why Europe’s written these things too. Scale is not the only way you win in AI. And innovation in AI, or any of these technologies, comes from a number of different factors. It comes from having the right people with the right skillsets. It comes from creativity; it comes from the way you protect intellectual property. It comes from a whole ecosystem of things.

I for one am not conceding that one country, because of scale, is going to own all the innovation in this world. I don’t go for it. You don’t either, really.

 

Joe:

Well, I mean as much as I love the United States, it’s been my second home. I worked there for six years and that’s where I got to know the 49ers and they are going to the Super Bowl and they are going to win. But leave that important stuff, leave that important stuff aside.

What I’m trying to say is, don’t underestimate the capabilities of the Chinese. There is Huawei. [Ginni: Absolutely right] Why is there that whole buzz about Huawei? Well, because they are the leading – they are ahead on the 5G generation by far. We’ve tested all five of them, and believe me, whether we like it or not, they are at least probably a year and a half, if not two, ahead. Something like that.

The second topic is, I just visited Huawei, because sometimes it’s better to talk to each other rather than about each other. I went there and they showed me industrial automation, simulation, digital _____. And what they intend to do in automotive – let me tell you, hallelujah. If you are an automotive OEM provider, on any sort of electronics, you better watch out. The car of the future is nothing, nothing but a platform on wheels. That’s what it’s going to be.

 

Ginni:              But Joe…

 

Joe:                  And they have, again, a million, a billion cars. So again, it comes back to scalability. And artificial intelligence – if not autonomous driving, what else will get it started?

 

Chris:              A couple thoughts. I mean, just a word about scale.

 

Joe:                  Let me be provocative here.

 

Julia:               Yes, I like it.

 

Ginni:              That’s why you’re here Joe.

 

Julia:               Chris is going to say something.

 

Chris:

If you’re just worried about scale: depending on how you measure it, there are somewhere between 75 and 100 democracies in the world, which have something like 4 billion people. So if you actually have a liberal democracy-based set of AI principles, then scale’s not an issue. Sure, China’s one country and this is a series of countries, but to a large extent IBM operates in virtually all of those 75 countries, I would assume.

Secondly, the principles that you’re looking at were based on an OECD process that went through the whole of last year. So it’s not like they suddenly appeared from nowhere in the US only.

But the third, which I think is probably the most important, is: if you think about more of an ecosystem, what’s going to make the ecosystem win? It’s not just the regulatory side of it. Obviously it’s the innovation engine, and I would put the US/European innovation engine ahead of China anytime. [Joe: yeah] Trust, to come back to this core principle. You can’t force people to use technology. Adoption is going to be incredibly important, and we’re only going to get adoption of cutting-edge technologies if people trust them.

And if you don’t have citizens that trust their own government, you won’t have adoption. So I think trust is fundamental to making the ecosystem work properly as well.

On scale: I don’t think it’s a big disadvantage given the innovation system, which I think we’re ahead on – albeit China is a formidable competitor – together with the other planks associated with trust. And when you get into trust, you get into other issues which we’re probably not going to talk about a lot, but cybersecurity is one of them. IBM and other companies are putting enormous amounts into cybersecurity.

How you trust your data with private platforms is important. How you trust that you won’t be breached is a big cybersecurity issue. Making sure that we have citizens who trust, or companies who trust, their data with platforms like IBM is incredibly important as well. And then we get into issues like workforce development. People are only going to accept that we should be leaders in technology if they believe it’s going to be beneficial to them. And therefore, building trust is associated with that.

If you think about the ecosystem, I think in most cases the US and its allies are at least equal to and superior to other alternatives.

 

Julia:

I guess the only points I’d make there very quickly are that China cares less, perhaps, about their people’s interest in trust than developed market nations. And the other point is, and I think this is to your point, Ginni, actually: from my conversations in China, I don’t think they want the situation to get out of control either. The societal sweep that we’re seeing from this kind of digitization technology, artificial intelligence, is driving societal changes everywhere. And no one wants that out of control.

Can I ask about facial recognition technology? What do we think about this? Because, again, the difference in approaches here: this is something that China, and other surveillance nations like Singapore, for example – it’s not just about China – are using very differently. And then there was the article in The New York Times, if people read it this weekend, about Clearview AI scraping the internet for pictures and working with law enforcement in the United States to literally pull all sorts of information: social media profile, address. That in the wrong hands, in any country, never mind elsewhere – where does regulation lie, again, between private sector and public?

 

Ginni:

Well, this is where I would say strongly: you have to regulate the use and not the technology. Because those same technologies are what also protect you. When you are going into an airport, some of us go through the iris scan and go through the – these are things that protect you. And so you wouldn’t have – you don’t have an issue with most of those things. And therefore, the distinction is: is it used to restrict your civil liberties, and in some way do all the things most of us in this room would say are wrong, or is it for good? And not only for protection – for healthcare, for all different reasons.

And so that’s why I wish – everyone wishes it was black and white, and it’s not. It is between those two things. And that’s why I think, when people call for outright bans on things, that just solves absolutely nothing. You’ve got to actually – as Chris said – have that engagement, that dialog. You make your decisions based on values. And that will require precision regulation to get done. I don’t know, Chris – you’ve worked on this facial recognition point quite a bit, right?

 

Chris:

Yeah, I agree with you entirely. I don’t think outright bans make any sense. And I don’t think regulating the technology, stopping the technology, makes sense, especially when you are in an innovation race with other countries who don’t care so much about it.

Having said that, none of us signs up for a surveillance state. So clearly that’s an area that we need to look at. But there are plenty of positive aspects of facial recognition, not just the more trivial ones like using it on your phone: missing children, catching criminals, dangerous criminals or terrorists. It is a technology which is incredibly useful. Finding a balance – it’s an argument that we’ve been having for 240-odd years; this is just the latest incarnation of it. Finding a balance that makes sense is, I think, incredibly important.

 

Julia:               How urgent is it? You’re talking about the policy level. You said four years ago people didn’t care and some of your statistics…

 

Chris:              Facial recognition is pretty important. It’s _____ because it’s out there.

 

Julia:               Yes. Are we going to move quickly enough?

 

Chris:

Well, legislatively, unfortunately, in the US we’re probably not going to see a lot happen this year, just because we’re in an electoral cycle. But there are hearings on facial recognition going on as we talk – not as we talk, there’s nothing going on on the legislative side as we talk. [laughing] But let’s just say, other than the pause that’s happening at the moment, it’s an issue that’s being debated. So I would suspect, if not legislatively then certainly once we get free, we’ll see something relatively soon in the US. I can’t speak to the EU.

 

Julia:

Do you think political change shifts the environment dramatically in the United States? I’m not asking you to predict an election win or loss here, but we have had some pretty feisty rhetoric from Democratic candidates in the United States about breaking up big tech and more regulation. They’re playing into the fears, and the misperceptions and misconceptions, I think, about the bad side of the use of technology, perhaps. Does that make a difference, Ginni, in the way you’re planning?

 

Ginni:              Oh, I thought you were talking to the regulator…

 

Julia:               Well, no, no, I’ll come back to this as he was thinking about it too.

 

Chris:              [overlapping talk/inaudible] stay away.

 

Julia:               I’m not going to say anything.

 

Ginni:

So I did the opening panel, or the panel with Klaus, last night; and the topic was stakeholder capitalism, which I don’t think is just a technology discussion that you’re teeing up right now. And Joe and I happen to run companies that are – well, I think yours is older than mine. I think we talked about that once. I’m 108 and you are…

 

Joe:                  174, but a more interesting thing…

 

Ginni:              You look it, no, so…

 

Joe:                  Thank you. But is that the difference in what we look like? But look…

 

Ginni:

Let me finish, then I’ll turn it over to Joe. My point was, why have we existed for that period of time? My view is that society has given us both a license to operate in society. And that is because we have over time always balanced all those different stakeholders. And when you speak of it, it isn’t just about what technology did for the good or the bad; it isn’t just our customers; it was our employees; it was the communities we lived in; it was our shareholders to whom we returned value; and it was a virtuous circle between all of those.

And so when you say should things be broken up, I think it has more to do with how you look at your role. And your role is, in fact, to make it so that – I mean, I feel that way in all the countries we operate in – we make this a place where they want us there to be a citizen and they want to do business with us. And that’s because we do something good for them; and we provide jobs, good jobs. And we do it in the right way, according to a set of values.

I don’t think that’s motherhood. I believe our actions speak louder than our words, every one of us. And when people speak about a breakup or – whatever they want, it’s because they don’t like what’s happening today to them. And this to me circles back to people believing they have a better future in the digital era. And that will rely on: can I have the skills so that I can have a good-paying job and a chance for success, have a family, whatever my dreams are; and do I see a brighter future?

And this goes back to the skills topic, if you allow me one second. And that is why I believe all of our efforts have been very timely. Many of you in the room have worked with me on the skills topic. And many of us – Joe’s very dedicated, extremely dedicated to it. And the speed of this means you better have a different paradigm. Everybody can’t have a four-year college degree; this is going too fast. And so, therefore, skills matter more than a degree.

You better give them multiple pathways right now to get into your company, because you can’t wait that – we can’t wait long. And you better give them different models. So some of us have done apprenticeships, some of us have done six-year high schools with vocational schools. But they need on-ramps, more on-ramps.

And so that to me is the biggest thing we could do to make this – to address many of these issues, actually. Yes, precision regulation, but they’re in a broader context of a dislocation of an industry change and people seeing there’s a good future.

 

Julia:

This is the key for me, because this is where you move faster than anything else. Regulation is going to be behind the curve. I think we’ve agreed that. But if you can try and address some of the societal issues that take place, the job displacement, then you’re tackling some of the biggest risks. Joe.

 

Joe:                  Well, I mean, I couldn’t agree more. First of all, it’s not so much important how long we’ve been around. It’s more important how long we’re going to be around in the future.

 

Ginni:              Oh yeah, that’s true too.

 

Joe:

Because obviously, we are in the biggest transformation of all time. Let’s face it. And what technology will do is cut the value chain in half, quicker and faster than ever before, and leave hundreds of millions of people who need to change jobs or will lose their job. So we better have an answer for that one. Otherwise we are going against that fear type of stuff.

The American people will decide what’s going to happen; that’s not my business. But this cannot be a partisan issue. This is a global matter of responsible people on how to shape the world, and explain to the uneducated what’s in it for them going forward.

This is a leadership topic. Ginni and I – IBM and Siemens and a few others – have been putting this Charter of Trust in place. What has been done there is, we are companies that sign up for a certain set of rules and values, and everybody who signs up for that one will be sort of a preferred supplier and partner, because we know you do the same thing in terms of protecting data and our interests. They are guys who are closer to us. And then we had tons of people who wanted to join, and at the Munich Security Conference there was somebody – a state company from Russia – that said, hey, this is really good, can we join? And I said, hmm, maybe later. So I don’t know.

But this is the type of stuff the private sector can do. We can do that. How do we act together as partners in ecosystems? Then there is the topic of how the big nations will deal with each other, given that the Internet of Things is extraterritorial. So things need to come together and constantly need to be realigned, because we must not be…

 

Julia:               It doesn’t obey borders.

 

Joe:                  But training people, making them understand what it takes, making them fit for the next generation of know-how – that’s the biggest recipe where you can’t be wrong. It may not be enough, but you can’t be wrong.

 

Julia:

And Ginni, your statistic, and this for me has become one of the issues I talk about on a weekly basis on my show. 125 million people needing re-skilling, to some degree, over the next 3 years. At Davos this year, climate, sustainability, it’s incredibly important. But for me, when you’re talking about that kind of risk – and it isn’t that much incremental re-skilling, but it’s enough – that kind of risk over that time horizon, these are the real issues that we need to be discussing. Chris, how do you approach this?

 

Chris:

Well, coincidentally, we’re working with Ginni on this very topic. We have something called the American Workforce Council, which Ginni and a number of other top CEOs are on. And they are collectively making a commitment to retraining. So we have something called the Pledge to the American Worker, which Ivanka Trump and the White House and a number of CEOs have been working on. I think we’re up to 13 or 14 million?

 

Ginni:              I think 15.

 

Chris:

15 million. So a commitment to retrain 15 million people, collectively. So clearly, we as a government can do something, because we spend money on retraining. But this is very much a private sector-led initiative. So it’s been a focus, really, for the three years we’ve been in there. I agree with you, it’s one of the most fundamental issues we have to deal with.

And 15 million is a fantastic start, but it’s not everything. It’s a great start and you have companies doing real things. IBM is not just talking about this, it’s actually doing it. I think it’s a fabulous initiative or series of initiatives. And private sector led. Some great companies.

 

Joe:

It’s very cool. We kicked it off together in the White House with Ivanka, Ginni and I, and Mark _____; he wanted to train a billion people. So I’m not sure how far it’s got. But it’s a good start, and I think he’s planting trees or something. But on a serious…

 

Chris:              A billion(?) trees _____ people. Don’t get that mixed up.

 

Joe:                  No, no, what I’m trying to say…

 

Julia:               It’s a good number.

 

Joe:

What I think you also need to do – and that’s a company leadership topic as well as a societal one. I was on a business council and a panel. We talked about the same stuff, because this is all stuff you talk about. And then I said, hey, you know what? We are spending 600 million euros – about 750 million dollars – every year for training, educating and _____ our people. Pretty cool. Then someone from the audience had a question. I said okay, go ahead. Well, what do your shareholders tell you if you waste 600 million of their money?

I thought, hmm – of course, this was all a chief executive officer gathering – so I thought, well, the easy way out would have been to say, well, we took _____ and educate, and what you typically say. And I thought, come on. And I said, look, you know what? If any shareholder believes that this is a waste of money, then they should sell my shares. That is not the right shareholder. You could have heard a needle drop in a Wall Street-associated environment. I’m not saying it’s wrong or right. What I am saying is, we don’t have the guts to go out and say we are in here for a greater good of things, for the longer term, for society, without making compromises on performance.

If you have issues on performance, you better shut up and get the job done first, and then speak up. This is really important. That we get this one out and say, look, we know what we’re doing, and it is not about missing the next quarter by a penny. It’s hard, because you could lose your job. But I’ll tell you something about Germany. It’s a little tiny country compared to the United States and others. Chancellor Schröder, a long time ago – the whole wealth of Germany today is based on Chancellor Schröder’s Agenda 2010, where he said, you know what? We’re going to do a reform now, because this is enough.

He lost his job immediately, because obviously people wouldn’t vote for him. They said, oh no, what happened to nice and complacent – what are you about? So he lost the job. But Chancellors in Germany will not be remembered for how often they made it into the Chancellery. They will be remembered in the history books of the German economic system for what they did for their country. So there’s a lot of stuff you can do, which is called leadership.

 

Chris:

This is an issue where I’ve seen a lot of change in the three years I’ve been in the White House, in terms of societal change. One of the first questions I used to ask CEOs, the first year I was there, is how much they spend on workforce training. And I always found that interesting, because I used to be a CFO and I couldn’t have told you the answer to that question.

 

Joe:                  Well, CFO is complicated.

 

Chris:

And it’s complicated, yeah. They can tell you to a dollar how much they spend on healthcare, but not on training. And to the extent that people can articulate it, the answer, I think, is quite interesting. We spend – and these are US numbers, but it’ll translate reasonably well – roughly $10,000 a year educating people. So most people go through school or college; if they go through college, you’re spending roughly $10,000 a year on educating them.

We spend circa $500 on retraining them. So how can that make any sense in the society we’re living in, where we’re in a world of continuous training, where we spend less than 10% of that a year retraining people for what are going to be vastly different jobs as they go through?

Part of what we’ve been working on in the workforce is thinking about not only how we spend more money, but how we spend it more effectively from a retraining point of view. And this has changed dramatically in the two or three years we’ve been working on it.

 

Julia:               Would you say that if the answer is complicated, it’s not enough?

 

Chris:              Yeah, probably.

 

Julia:

Probably. We are coming towards the end of the panel, so I just – I want to get back to where Ginni began. She said we were looking back at Davos four years ago, and it was 2015 – trust and innovation. And actually, trust has declined since then, despite the fact that some people were talking about it and others were just yawning.

 

How do we prevent, in four years’ time, having this same panel and this same discussion, and me saying yep, trust has fallen again? We’ve talked about precision regulation; I’m not sure whether we can actually do something on a global basis to coordinate whatever the use of technology is. Individual companies will keep doing their own things. But how do we prevent…

 

Ginni:

I think every company has to stand up and act within their values. They have to take responsibility for these things, set what their principles are, and be willing to be audited against them. Many companies are the ones that have this data, collect this data, decide what to do with it, how to handle it. I think you need to declare your principles; you need to live by them; and put that in place with actions that you can be measured against.

And I think it is time for everyone to stand up and realize that you can’t be an innocent bystander on this. We want this to be a healthy era that is both prosperous for people and where people see a better future; and that is going to be the responsibility not just of government. So while we talked about precision regulation, I absolutely believe this is a co-partnership: that industry – and I’m using that in a really broad sense; industry, the private sector, however you like to look at it – has to take its responsibilities.

And that’s how I – and I hope we’re there, Julia. I hope that we are there by virtue of the fact that everyone is now talking. So now everyone needs to walk the talk on this.

 

Julia:               The word for it is stakeholder capitalism. If we actually believe that it means something, we’re at an inflection point where profits matter for business, but they don’t matter in absolute terms; other things matter too.

 

Ginni:              I think like Joe: you don’t exist for the long term if you don’t manage all constituents. It’s that simple, I think.

 

Julia:               Joe?

 

Joe:                  Absolutely agree. We as companies need to set examples of what we say. If I keep banging on governments, they will give up.

 

Julia:               And the truth is, you can’t rely on governments because they change too often.

 

Chris:

If you’re talking four-year cycle, that’s an electoral cycle in the US, and absolutely we should have some legislation across a number of different areas, and regulation we can do quicker than legislation. But we should have big planks of legislation associated with all of the different topics we’ve talked about and the implementation of some of the regulation associated with that. We need tangible action. And we need to find a mechanism, which unfortunately is difficult in the government, of speeding up the way in which we adopt regulatory change as well.

At the moment, unfortunately, we’re heading slightly in a different direction which is towards more inertia, but hopefully over a four-year period, in particular from a regulatory point of view, we can find a new muscle and move faster.

 

Julia:               Might be easier to stay here.

 

Joe:                  The economy is fascinatingly strong. This is the opportunity to do this now, because everybody is like…

 

Julia:               It’s only going to get more complicated if economic conditions…

 

Joe:

Well, if job losses are coming in, it’s not a good idea to say, oh, we’ve got to do something else here. So it’s perfect timing now to piggyback on that one. You need to state it. I talked to the President just last night; it was pretty cool what happens there in terms of how the economy and the jobs are being created.

 

Julia:               Time to act. Thank you guys. Good panel.

 

Ginni:              As your host, let me thank you for all sitting in a hot room, in a crowded room. I hope you took away something of value. So thank you.

END

 
