Exactly a year ago, I published an AI Beat column that detailed 5 AI stories I was waiting to cover in 2023.
Thanks to what I like to call a constant ‘tsunami’ of AI news in the wake of OpenAI’s November 2022 release of ChatGPT, I didn’t have to wait long: From GPT-4 and the EU AI Act to the battle for AI search, open vs. closed AI, and the hunger for training data and computing power, I wrote about all of those topics multiple times.
Now, once again, with New Year’s only a few days away, I’m looking ahead to the AI narratives I think will be central to trends in 2024:
1. OpenAI vs. Anthropic
At the beginning of 2023, OpenAI was riding the wave of ChatGPT, but Anthropic, a startup founded in 2021 by siblings Daniela and Dario Amodei, both former senior members of OpenAI, was already hot on its heels. Anthropic released its ChatGPT rival, Claude, on March 14, 2023, the same day OpenAI released GPT-4 in a surprise announcement.
With both companies seeking massive new funding rounds — Anthropic is reportedly in discussions for $750 million from Menlo Ventures, while OpenAI is reported to be in talks for a fresh round of funding at a $100 billion valuation — the LLM rivalry is sure to heat up in 2024.
2. Open source AI plays GPT-4 catch-up
After French startup Mistral AI dropped an early-December surprise — a new open source LLM, Mixtral 8x7B — with nothing but a torrent link, it became crystal clear that the open source AI community has no intention of slowing down its efforts to catch up to its proprietary counterparts. In fact, Mistral CEO Arthur Mensch declared on French national radio that Mistral would release an open source GPT-4-level model in 2024.
And at the center of open source AI predictions is Meta, which is rumored to be preparing a Llama 3 release, speculated to be on par with GPT-4, in the first half of 2024. This past fall, Meta FAIR researcher Angela Fan told me that Meta looks for feedback from its developer community, as well as from the ecosystem of startups using Llama for a variety of different applications. “We want to know, what do people think about Llama 2? What should we put into Llama 3?” she said.
3. The impact of AI on 2024 elections
The Hill reported yesterday that fears are growing over AI’s impact on the 2024 presidential election. It quoted Ethan Bueno de Mesquita, interim dean at the University of Chicago Harris School of Public Policy. “2024 will be an AI election, much the way that 2016 or 2020 was a social media election,” he said. “We will all be learning as a society about the ways in which this is changing our politics.”
And Nathan Lambert, a machine learning researcher at the Allen Institute for AI, recently told me that generative AI will make the 2024 US elections a ‘hot mess,’ whether through chatbots or deepfakes, while politics, in turn, will slow down AI regulation efforts. “I think the US election will be the biggest determining factor in the narrative to see what positions different candidates take and how people misuse AI products, and how that attribution is given and how that’s handled by the media,” he said.
4. Training data at the heart of AI
I covered issues related to the data that trains today’s AI models extensively in 2023. But I think that coverage will only ramp up in 2024, because data is really at the heart of what we know, and don’t know, about LLMs and diffusion models. Issues around copyright, bias, deepfakes, disinformation, labor, open vs. closed models, and risk all circle back to the data that trains these models — whether that means datasets created years ago or new ones being developed now.
News around AI training datasets has ramped up in just the last couple of weeks, including LAION’s removal of its dataset and OpenAI’s negotiations with organizations to build new datasets. There is so much more to dig into — and I believe it will become more and more important to cover in 2024.
5. Effective Altruism vs. Effective Accelerationism
As the future of AI remains uncertain, belief systems around AI risks and opportunities — some of which approach religious status — continue to grow in influence. I’ve just (belatedly) begun to fully understand the intricate, expensive web that the effective altruism (EA) movement has woven to build its influence in AI policy circles, particularly in areas such as AI security. I’ve also witnessed the rise of effective accelerationism (e/acc), which VC Marc Andreessen has latched onto and which I’ve called the ‘other side of the AI doomer coin.’
While I want to amplify AI pragmatists, who I think will be more helpful to the short- and long-term evolution of AI, I think it is essential to keep covering the EA and e/acc belief systems — their billionaire funders and founders will continue to have outsized influence on how AI policy and investment play out for the rest of us.