CEO or C-3PO?
A recent article in the venerable British publication The New Statesman (by their business editor, no less) pondered this question in its heading: “CEOs are hugely expensive – why not automate them?” The subtitle is “If a single role is as expensive as thousands of workers, it is surely the prime candidate for robot-induced redundancy.”
There is justifiable outrage at CEOs in some cases being paid thousands of times the median income of their employees, even under pay-for-performance schemes. Some of this has happened during the Covid pandemic, while these organizations were receiving government aid to help prop them up. Such corporate good fortune is certainly not a direct function of CEO performance, and rewards should not accrue as if it were.
More generally, a common mistake we make in assessing the performance of CEOs (and others) is the “fundamental attribution error”. Organizational outcomes are not necessarily the result of individual efforts. They are likely due to individual performance in conjunction with systemic and situational conditions beyond the CEO’s control (e.g., market, business-sector, and/or political conditions – a rising tide lifts all boats). How much success (or failure) can be attributed to the CEO is often unclear, so compensation tied directly to outcomes is always legitimately suspect to some extent.
With apologies for the gendered language: the 19th-century historian Thomas Carlyle claimed that all of history is but the biography of great men – individual attribution akin to the prevailing approach to executive compensation. His contemporary, the philosopher and early sociologist Herbert Spencer, argued that great men are the products of their societies, and that their actions and accomplishments would not be possible without the myriad social conditions shaped before their lives began. In many ways we are “corks on waves”, buffeted about by vast impersonal forces beyond our control.
So which is it? Likely both. Behaviours and outcomes are the product of individual dispositions, capabilities and efforts in conjunction with the systems and situations within which we find ourselves. The stronger the situation, the weaker the dispositional effect; the weaker the situation, the stronger the dispositional effect. For example, in the early years of WWII, U.S. President Franklin Roosevelt wanted to provide American support to the UK and Prime Minister Winston Churchill in the fight against Nazi Germany, but he was restricted by the isolationist mentality in the U.S., a strong situation to challenge, which diminished his dispositional efforts. After the bombing of Pearl Harbor in December 1941, however, the situation changed: the resistive isolationist forces dramatically weakened, and Roosevelt was able to bring the U.S. into the war. His dispositional efforts prevailed.
So executive compensation should somehow be tempered to reflect this interdependence between individual capability and forces beyond one’s control. And the longer a CEO’s tenure, the stronger the likely correlation (and arguably causation) between their efforts and organizational outcomes, successful or otherwise, and the more justifiably this can be reflected in compensation, especially long-term compensation.
But there is a significant difference between fixing CEO pay, getting it right so that it reflects the indubitable value of highly capable CEOs and their critical contribution to an organization’s success (versus poor CEOs and their lack of value, or worse), and using warped CEO pay to justify automating the role through a misguided assessment of its complexity and added value.
What does a CEO do? Scan the competitive horizon; understand environmental factors affecting the business and continuously respond to myriad emerging issues; anticipate major business situations and create opportunities; deal with myriad stakeholders – the Board, investors, regulators, and so on; oversee and engage in creating, continually refining and executing a competitive value proposition, value chain, and strategy; allocate capital and other resources; oversee and engage in creating, maintaining and refining the organization needed to execute, from structure, to behaviours and culture, to systems and processes, to talent acquisition, nurturing and development; and more. And use judgment and discretion to make complex decisions in doing all this, decisions that guide and set the course of the organization over the long term, in addition to making multitudes of decisions in the short and near term.
Doing all of this is well beyond the capability of current incarnations of artificial intelligence.
AI is not about exercising judgment and discretion or making decisions under uncertainty, which is the essence of human work, and certainly of work at the CEO level. It is largely about assimilating massive amounts of input data and somewhat mechanically “learning” from that data to recognize patterns. While it can be applied successfully in some domains, the higher-complexity human capabilities in decision making remain elusive and very difficult to emulate, and likely will be for the foreseeable future, if they can ever be replicated (which many very thoughtful researchers, as well as philosophers of mind, still reasonably doubt).
The French neuroscientist Yves Frégnac has observed that the current fashion of collecting massive amounts of data to make decisions and solve problems is sometimes misdirected: “Big data is not knowledge … we are drowning in a flood of information. Paradoxically, all sense of global understanding is in acute danger of getting washed away. Each overcoming of technological barriers opens a Pandora’s box by revealing hidden variables, mechanisms and nonlinearities, adding new levels of complexity.” There is speculation that current “deep-learning” AI technology might display emergent properties – things that cannot be predicted from an analysis of the components, but which emerge as the system functions, much as the human brain seems to work. But AI replicating any significant level of human thought, let alone the level at which a CEO needs to operate, is a major stretch at this point. The brain does not appear to work the way computers and current AI do.
So it’s a bit premature to advocate a transition from CEO to C-3PO.
Just because housing prices are being driven by dysfunctional market forces doesn’t mean we don’t need actual houses.
And just because CEO compensation is being driven by dysfunctional market forces doesn’t mean we don’t need human CEOs.
More muddled management misthinking.