2023: THE YEAR OF OPENAI

In this week’s 52INSIGHTS, Donagh Humphreys, Head of Social and Digital Innovation at THINKHOUSE, breaks down some of the big news items, the ongoing debates, and a whole load of other handy need-to-knows that will make you the go-to person for all things OpenAI at your Christmas work party.

CHEAT SHEET

Before we get into the meaty stuff, here’s a handy cheat sheet for some of the terminology and jargon you might have heard bandied about in conversations at the office:

THE ALTMAN SAGA

Here’s what happened:

Altman’s Firing from OpenAI: Sam Altman was unexpectedly dismissed as CEO of OpenAI by the company’s board, which did not publicly disclose its specific reasons. Following the firing, hundreds of OpenAI staff threatened to resign and join Microsoft; by the Monday, nearly all of OpenAI’s 700-plus employees had threatened to leave unless the board stepped down and reinstated Altman.

Altman Joins Microsoft: In the wake of his dismissal, Sam Altman, along with OpenAI co-founder Greg Brockman and several other former OpenAI members, joined Microsoft to lead a new advanced AI research team. Microsoft had previously invested over $10 billion in OpenAI, reportedly taking a 49% stake in its for-profit arm, so the move brought the leadership of a major AI startup directly under the umbrella of its biggest investor. It was seen as a strategic play by Microsoft to bolster its competitive edge in AI: Altman had been a key figure at OpenAI, and his leadership was considered crucial to the company’s progress.

Rehiring of Altman at OpenAI: After intense discussions and a tumultuous weekend, Altman was rehired as CEO of OpenAI, and Greg Brockman resumed his role as President. As part of these developments, Microsoft gained a non-voting observer seat on OpenAI’s board.

Read Altman’s own take here.

These events are of real significance: Altman’s dismissal and rehiring is widely read not just as a literal victory for Altman, but as a symbolic one for his side in the ongoing philosophical debate raging around AI.

IDEOLOGICAL SPLITS AND SUPERINTELLIGENT AI

Rapid advancements in artificial intelligence are nudging us closer to artificial general intelligence (AGI): essentially a superintelligence that, if a century of sci-fi writers and some of the greatest minds in the world are to be believed, would be the end of the human race as we know it.

The split between Altman and the board seemed, at least partly, to fall along broadly ideological lines, with Altman and Brockman in the camp known as “accelerationists” (people who believe AI should be developed and deployed as quickly as possible) and parts of the board leaning towards the “decelerationists” (people who believe it should be developed more slowly and with stronger guardrails).

While Accels vs Decels is a useful frame to understand, it might be an overly simplistic explanation of some of the more radical thinking in the world of AI. Two of the more popular, not to mention alarming, viewpoints are articulated here:

AI Accelerationists (Part of the "e/acc" Movement): This group views capitalism as a form of intelligence superior to human intelligence, one that has been steering our evolution since human intelligence plateaued. They believe in an impending techno-capital singularity driven by advanced AI, which they see as the next evolutionary step in intelligence. Intriguingly, they don't necessarily see humans as essential to this future, suggesting we might not be part of the longer-term path of intelligence development.

AI Alignment Advocates: This camp is concerned with aligning AGI with human values, using methods like reinforcement learning with human feedback (RLHF). They view AGI as a potential existential risk to humanity and aim to prevent a scenario where AGI could cause harm. They note the challenges in AI alignment, given the diversity and often conflicting nature of human values and beliefs.
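For the curious, here is a minimal Python sketch of the preference-learning idea at the heart of RLHF’s reward-modelling step: a model is taught to score the response human labellers preferred above the one they rejected. The scalar scores and the preference_loss helper below are illustrative assumptions, not OpenAI’s actual implementation.

```python
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry style loss used in reward modelling:
    small when the human-preferred response scores higher,
    large when the model prefers the rejected response."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Labellers preferred response A (scored 2.0 by the reward model)
# over response B (scored 0.5): low loss, the model agrees with humans.
print(round(preference_loss(2.0, 0.5), 2))  # 0.2

# Reversed scores: high loss, so training would push the scores apart
# until the model's rankings match the human preferences.
print(round(preference_loss(0.5, 2.0), 2))  # 1.7
```

The reward model trained this way is then used to steer the language model itself, which is why the diversity of human values makes alignment such a hard problem: the loss can only encode whichever preferences the labellers happen to hold.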

OpenAI's recent AGI roadmap attempts a balanced approach, advocating for incremental AI development with regulatory oversight, but this middle path may not fully satisfy either group. With Altman’s return, it seems the Accelerationists have stolen a march.

WHY IS THIS IMPORTANT?

Whether you’re in the Accel or Decel camp, if your business operates in the EU, legislation is fast approaching:

“The AI Act is the first comprehensive law on AI by a major regulator anywhere. The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.” - The EU Artificial Intelligence Act
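To make those three tiers concrete, here is a toy Python sketch of how a business might triage its own AI use cases against the Act’s logic. The category sets are illustrative assumptions paraphrased from the quote above, not the Act’s legal definitions, so treat this as a thinking aid rather than compliance advice.

```python
# Illustrative only: these sets paraphrase the examples quoted above,
# not the Act's actual legal definitions.
UNACCEPTABLE_RISK = {"government social scoring"}
HIGH_RISK = {"cv-scanning recruitment tool"}

def risk_tier(use_case: str) -> str:
    """Map an AI use case to the Act's three broad risk tiers."""
    use_case = use_case.lower()
    if use_case in UNACCEPTABLE_RISK:
        return "banned"
    if use_case in HIGH_RISK:
        return "high-risk: specific legal requirements apply"
    return "largely unregulated"

print(risk_tier("CV-scanning recruitment tool"))
# -> high-risk: specific legal requirements apply
print(risk_tier("AI-assisted ad copywriting"))
# -> largely unregulated
```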

The regulation will create organisational compliance overhead, and huge fines if companies fail to prepare accordingly. Gartner expects that by 2026, organisations that operationalize AI transparency, trust, and security will see their AI models achieve a 50% improvement in terms of adoption, business goals, and user acceptance.

BRAND TAKEOUTS

If you are using generative AI, then in the short term it’s a good idea to have a governance policy in place; in the medium term it’s likely to be mandatory. AI governance enables organisations to unleash the full potential of AI while mitigating risks like bias, discrimination, and privacy violations. The regulatory landscape will evolve quickly, reflecting the supersonic speed of change in the technology itself.

**ChatGPT was used in the summarising and drafting of this piece.