By Adam Fisher, Investment Analyst, Mazars, and George Lagarias, Chief Economist, Mazars
Board wars have enough elements of human drama to rivet even the most demanding viewer. You start with a multi-billion-dollar corporation. There’s always a villain and a hero. There is betrayal. And there’s the inevitable twist – who is the real victim? And who’s the real traitor?
HBO’s hit series “Succession”, which focused on heated boardroom battles, won a slew of Golden Globes. The recipe, in and of itself, is flawless. Add in the hottest issue of our time, Artificial Intelligence (AI), and you have one of the most interesting stories of the year, and surely the makings of the next blockbuster.
It wouldn’t be too much of a stretch to say that ChatGPT, the AI chatbot launched by startup company OpenAI, has revolutionised the way that workers and businesses interact with generative AI. However, it hasn’t all been plain sailing for its creator, despite the influx of new interest and the mainstream success of its flagship product.
In the last few weeks, this straight-out-of-Hollywood scenario played out not on streaming services but in the financial news. The board of OpenAI, the company behind ChatGPT, rebelled, ousting two of three founding members, including Sam Altman, the company’s CEO and frontman. In a dramatic twist, after the company saw two new CEOs in the space of a few days, over 90% of employees threatened to resign, and a key board member who had initially voted for Altman’s removal reversed his position. In the end, Altman was reinstated, and it was the board members who handed in their resignations.
What are the takeaways here?
One, that the board structure must evolve with the company. The key difference between the infighting at OpenAI and classic instances of boardroom drama is how OpenAI is structured. It was founded as a non-profit organisation in 2015, aiming to build artificial intelligence that is both safe and beneficial, but in 2019 the company restructured to open a for-profit subsidiary, allowing it to raise capital more quickly. The clash between the ‘non-profit’ board of directors and the ‘for-profit’ shareholders highlights the fragility of corporate governance when it lacks financial leverage. The concerns of an independent governance board were, in this instance, secondary to the company’s commercial prospects and the will of the shareholders – a classic example of the principal-agent problem, where those who act on behalf of an organisation pursue interests that diverge from those of the people the organisation was meant to serve. The events at OpenAI are sure to become a case study for corporate finance students in years to come.
Two, that certain strategic industries need increased oversight. AI has the potential to disrupt not only industries but geopolitics itself. AI, today, can plausibly be used to commit white-collar crimes and, some researchers have warned, could in extreme scenarios even harm its own operator. Part of the issue, it emerged, was Altman’s plan to aggressively develop AI capabilities, leaving the board with grave safety concerns. Advanced artificial intelligence could threaten cryptography, which underpins the security of almost everything that has ever been online, including military secrets. From corporate treasuries, banks and stock markets to nuclear arsenals, everything is linked and protected by passwords and encryption. It would, of course, be naïve to believe that authorities aren’t keenly aware of the threat posed by AI. Even so, greater official and transparent oversight is needed. This technological advancement isn’t like the development of the PC. Rather, it is like the development of the nuclear device. Should it be allowed to be developed by the private sector, independently of supervision? And if supervision does exist, shouldn’t it become more formal?
Three, supervision is not the same as regulation. The fact that technology is advancing faster than our ability to comprehend it has been evident for years. Social media changed the face of human relationships, human knowledge and even democracy itself. Yet we still stand on the cusp of the fourth industrial revolution, with little understanding of what lies ahead. Are we staring at James Cameron’s “Terminator”, Isaac Asimov’s “I, Robot” or a much more benign use of revolutionary technology? We simply don’t know. We cannot fathom it, any more than we could fathom how the world would change when Larry Page and Sergey Brin founded Google in 1998 or when Steve Jobs introduced us to the iPhone in 2007. We simply cannot regulate what we do not know. Regulation can only attempt to slow down the inevitable. Humanity’s impetus to explore, discover and invent is the one true constant in its history, ever since primates first experimented with tools. “Because it’s there”, George Mallory famously replied when asked why he wanted to climb Everest.
Only proper oversight will bring about the right regulation when it is most needed. And this board war, if anything, clearly demonstrated that, in the case of Artificial Intelligence, boards in their current format are unable to provide sufficient oversight. A more robust structure is needed.