Leadership by Algorithm: An Interview with David De Cremer


We are thrilled to feature an interview with David De Cremer, Provost Chair and Professor of Management and Organizations at the National University of Singapore Business School. Dr. De Cremer is an expert on how organizations, and in particular leaders, can best adapt to the new technologies changing our world. He is the author of Leadership by Algorithm: Who Leads and Who Follows in the AI Era?, as well as the founder and director of the Centre on AI Technology for Humankind.

PTI: What are the questions that motivated you to write Leadership by Algorithm? Do you foresee a near-term future in which AI systems actually sit in management roles, or will the transition be more gradual than that?

DDC: I had several reasons to write this book. First of all, several senior leaders in my executive classes approached me from time to time asking why it was necessary to still learn soft skills like decision-making in our leadership classes. They were clearly afraid that machines would eventually take over tasks like that and therefore believed it would be better for them to learn to code and become much more tech-savvy. Second, in the last 10 to 15 years I've met several executives who are always pushing the limits to increase productivity and performance, and they believe technology will provide solutions to all the problems they are encountering. Putting these two points together, it became clear to me that a lot of senior leaders and executives are afraid of AI, they do not understand it entirely, and they confuse its abilities with their own leadership responsibilities. They see AI as literally taking over their decision-making powers and therefore wonder what is left for them. They fail to see, however, that there's a lot left for human leadership in the AI era. Hence, the reason for my book.

PTI: What is the difference between management and leadership, and how do you foresee AI filling (or failing to fill) each of these roles?

DDC: The difference between management and leadership is well known within the literature. Having consulted for quite a number of companies, I’ve always noticed that management is really about trying to maintain the status quo of an organization. That is, you use procedures to control and maintain stability to ensure you don't rock the boat too much. So, management provides order and a stable foundation for the company, but it’s not focused on helping facilitate change. Leadership is focused on dealing with change. When change is needed, leaders are required to point out the direction, explain why this would be a good direction, and engage in the transformation process to move from the present situation to a future situation, which they are able to communicate in a vision. Leaders require a proactive attitude where they make sense of the changes required and in doing so inspire and motivate people to engage in that change process.

As a result of its focus on stability, management has become very metric-driven. We have KPIs for everything, even to such an extent that people are experiencing KPI fatigue. The more KPIs we use, the more people become extrinsically motivated and focused on the short term. Obviously, when we want to change our organizations for good, then we require a longer-term perspective as well. When an organization is run by means of numbers (KPIs) in a kind of ticking-the-box mentality, a perfect situation is created for AI to take over. AI, after all, is consistent, rational, accurate, and works very fast, so it is the perfect candidate to tick all the boxes. For that reason, I do think that in the short term many or even most managerial jobs will become automated. In my book, I therefore say the new MBA has arrived: Management by Algorithm.

PTI: One of the pitfalls of AI systems that you highlight is that they tend to optimize within a particular narrowly defined framework. How might this become a problem in a leadership context, and how can humans mitigate this risk?

DDC: Much of modern artificial intelligence is essentially statistics. In machine learning, a system learns from data and infers a prediction model for a particular output based on that data. So, the first sense in which AI works within a very narrowly defined framework is that it works with numbers and is constrained to what those numbers can represent. It has no sense of intuition as humans understand it, no real feeling for the contextual, social, and emotional aspects of a decision that are important to humans. Machines are metric-driven, and that approach works best in what we call closed systems. Management as we know it today, since Frederick Winslow Taylor published The Principles of Scientific Management in 1911, has taken the shape of a more or less closed system. When we run our organizations, we manage what happens within them by means of established procedures to maintain control, which in turn depend on metric systems. In such closed systems, applying AI can be very effective because it will work quickly and will increase the accuracy of its predictions within that closed system. Leadership, however, deals with changes that require leaders to engage in transformation efforts, so they have to break open the closed internal system of organizations.

Leadership deals with open-ended systems, and in that context AI, because of its limited abilities to engage in critical, imaginative, and creative thinking, will be less useful. In my book, I therefore say that leadership will remain a human social activity. We will definitely use AI for managing the stability of our organizations, but we will need human creativity and proactive thinking abilities to guide our organizations through changes. In other words, humans will be the stewards of our organizations. All of this requires that we train our leaders to embody the values of their organizations, understand the identities of their organizations, and grasp how these values and identities relate to the value these organizations create for their stakeholders.
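To make concrete De Cremer's point that machine learning is essentially statistics operating within a closed, metric-driven system, the sketch below (not from the interview; the KPI-style features, the performance scores, and the use of scikit-learn are all illustrative assumptions) shows what such a system reduces to in practice: numeric inputs, a fitted prediction model, and numeric predictions, with no access to context or meaning beyond what the numbers encode.

    # Illustrative sketch only: machine learning as statistics on numbers.
    # The "KPI" features and scores below are hypothetical.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical historical data: [hours_worked, tickets_closed] per employee
    X = np.array([[40, 25], [38, 20], [45, 30], [35, 18]])
    y = np.array([0.82, 0.70, 0.95, 0.60])  # hypothetical past performance scores

    model = LinearRegression().fit(X, y)        # infer a prediction model from data
    print(model.predict(np.array([[42, 27]])))  # predict a score for a new, unseen case

Everything the model "knows" is contained in X and y; anything not captured by those numbers is invisible to it, which is the narrowly defined framework discussed above.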

PTI: Creativity is a large part of leadership. Are present AI systems capable of the kind of creative thinking that leading an organization requires?

DDC: AI can definitely play a role in producing creative solutions. Creativity can be defined as the invention of something that is both new and useful. I would also add to that definition that creative solutions are meaningful to people. When we focus on the first part of the definition, that creativity is something new, then I believe that, for that stage in the creative process, AI will be the main driver. AI is so much more efficient than humans at coming up with many new combinations from all the data out there. The bigger question, however, will be whether the new combinations AI comes up with will be experienced as useful in solving problems. And this is a function of how meaningful people perceive the new combination to be in their lives. In this second stage, AI at this moment lacks the ability to really assess whether a new solution is useful and meaningful to humans. First of all, AI doesn't know what a human is; it doesn't understand what it means to be human, so it becomes very difficult for AI to reason and learn whether something is useful to an entity whose real nature it does not understand. Of course, if we define problems very accurately, AI will be able to provide satisfactory solutions. However, AI cannot engage in reverse causal thinking, and this is how, for example, the Post-it note was invented. When an adhesive was developed that was not as strong as super glue but still enabled us to stick things to a wall or computer screen, someone came up with the idea that it could be used as a means to help people remember their ideas by sticking them wherever needed. It was thus a meaningful and helpful solution to a problem that had to be identified after the solution was designed. AI is not able to do this kind of reflection and reasoning.

PTI: Because humans are cognitively similar, we can to some degree make our reasoning transparent to one another and give accounts of our decision-making. AI systems, by contrast, are black boxes, and the more complex they become, the harder it is to understand their reasoning. How might this be a problem in an organizational context?

DDC: AI is indeed a different species than humans. One of the consequences of this distinction is that we do not entirely understand how algorithms come up with certain predictions and types of advice. Therefore, we speak of AI as a black box. Obviously, this invites problems when employees have to work together with AI, because one important quality that promotes collaboration is trust. So, if AI is perceived as a black box, it means we do not trust AI that much and therefore will make less use of it. Moreover, we will definitely not trust AI when it has to make decisions that impact our own interests. I think the trust issue will be less of a problem when it involves AI doing routine tasks that do not necessarily have a big impact on our own personal interests. Within organizations, it is therefore important that leaders make a strong business case for applying AI within the decision-making process. Employees need to understand why they are using AI, what kind of value it is bringing, and how it will work. This will be the first responsibility for leaders.

PTI: Much of the discussion around machines in the workplace centers on the fear that AI systems will replace people or put them out of work. Do you share this fear? Is the future one of replacement by machine, or human-machine collaboration?

DDC: In my view, our future will be one where machines and humans collaborate. This is what Thomas Malone calls “superminds”: human intelligence and machine intelligence working together. In theory, this all sounds very interesting, and it’s an outcome that I very much hope we can achieve. In reality, however, what I see happening today is a little bit more complicated. People indeed have fears because we are overly impressed by what AI is able to do and we quickly make the inference that AI will therefore be able to take over our jobs as a whole. But don't forget, AI systems tend to be good at single tasks, and this is what we call narrow intelligence. AI doesn't have general intelligence: the ability to do many different tasks well and to understand the context of each of those tasks. So, on one hand, I feel that people are somewhat overwhelmed by what the business and tech gurus are telling us, and as a result, fear is widespread. This is especially the case because the emphasis is so much on the fact that AI is modeled after the human brain, so (the thinking goes) it’s just a matter of time before it will replicate our minds. Knowing that with the industrial revolution technology was able to replicate our bodies, people fear that technology will now be able to replicate our minds. As a society, but also as educators and researchers, we have the responsibility to be more precise in communicating what it is that AI will change when it comes down to jobs. What I see happening today is that automation efforts are actually fragmenting jobs into a series of tasks. So, if a job is being fragmented and several of the tasks are being done by machine, companies will need to make investments to enrich that job. Tech and business gurus are always saying that AI will create more jobs because we will not have to do routine tasks anymore, but this can only happen if organizations help transform the job description for employees. It is only then that we can truly establish a collaborative relationship between machine and man and hence create more jobs than we lose to automation.

PTI: One of the major challenges we face as AI systems begin to permeate the economy is that we will need to retrain vast segments of the workforce. How do you think we can best accomplish this?

DDC: I don't really like the concept of retraining that much. It seems to suggest that everything you've been doing in your career up until now suddenly becomes useless, so we have to retrain you. I like to adopt a more constructive approach where all the experiences that you've gathered so far and your existing abilities can be used to enhance your unique human abilities like creativity, emotional intelligence, and critical thinking. At the same time, we also need to learn how to use these abilities alongside machines. All of this means that our workforce needs to become more tech-savvy, such as being data literate and understanding the basics of how AI works. But retraining does not mean that everyone has to become an engineer, scientist, or coder and forgo their unique interpersonal and cognitive skills. These abilities need to be trained in tandem. There's no point in training people to become algorithms themselves, because as humans we will lose the ability to compete with real algorithms. Instead, we need to use the existence of AI as an incentive to find out more about what our human identity and morals really are and invest in those unique skills.

PTI: What trends do you see in the attitudes held by employees and managers toward algorithms? Are these attitudes positive or negative? What do people tend to misunderstand about algorithms?

DDC: I see both positive and negative attitudes in the psychology literature. The phenomenon of algorithm aversion has been introduced to indicate that, on average, people are averse to the use of algorithms, especially when they make decisions on our behalf or impact our interests. When it involves more routine and repetitive tasks that we are very familiar with, however, most of us are very happy to automate those tasks and we don't worry too much about the possible consequences. A bigger psychological phenomenon that I'm really interested in is that, as humans, we have a strong tendency to very quickly attribute human qualities to AI. One of the reasons is that because we explicitly talk about intelligence in the notion of AI, we see it as a learning entity that therefore likely possesses human abilities. So, although AI is basically statistics and does not have a sense of awareness, we do seem to humanize machines very easily and therefore quickly consider them competitors that may even act independently of us humans. We quickly forget that AI cannot work without data, so if we do not feed it with data it will not function, highlighting that machines are dependent on us. This tendency is also reflected in what we see happening in organizations, namely that leadership invests a lot of money in the technology itself but not that much in data management. And this is unfortunate because AI cannot work without data, and thus this is yet another example of our tendency to look at technology as an independent being.

PTI: Your center, the Centre on AI Technology for Humankind, aims to inspire the development of AI that is human-centered, rather than purely technologically innovative. What does it mean for AI to be human-centered, and what are some examples of failures on this front?

DDC: Our center is an interdisciplinary platform where we bring together psychology, behavioral economics, management, philosophy, and computer science to understand better the role that AI can play in creating a more effective and humane world. We do not develop technology ourselves, but we advocate the idea that technology has to be developed in a way that will benefit humanity. This means that in all our efforts to create innovative technologies, the end-user should always be human and not machine. Why am I saying this so explicitly? History has revealed that once we are focused on technological innovation, a rather narrow mindset sets in among developers where the only goal becomes to make technology so perfect that at the end of the day, we're actually creating a world that is most fitting to the technology we developed. In other words, the risk is high that because of a narrow focus on optimizing technology, we create a world for machines. 

We have seen examples of this already with employees of Amazon working on assembly lines. Many of them leave their jobs because they feel they are being treated like robots. The reason for this is that their supervisors are algorithms that, in certain circumstances, can fire employees without any human intervention. 

Another example is what happened during the Cambridge Analytica case, where it was revealed that Facebook data was used to predict the political preferences of people. What was interesting in that case is that Mark Zuckerberg, the founder of Facebook, admitted in April 2018 during a radio interview that up until then he did not see it as his responsibility to take care of what exactly happened on the Facebook platform. He explained this by saying that, for most of his career, he was thinking like a typical engineer and as such only focused on developing the best technology and the best platform possible. He did not feel he had to do more, and definitely not take into account the social context, and thus humans, in his development plans. In his world, he only cared about the technology, and because of this narrow focus, a platform was created that ultimately served technology but not so much the well-being and interests of the human end-user. I call this phenomenon the “innovation-only bias,” where people get so involved in innovation that they see it as the end goal itself.

This interview has been edited for length and clarity.
