A new report from CCS Insight predicts that excitement about generative artificial intelligence (AI) will dwindle in the coming year. The report cites several reasons: waning interest in the technology, the rising cost of operating it, and growing calls for regulation. Together, these pressures suggest the technology's progress may slow.
In its annual predictions for the technology industry in 2024 and beyond, CCS Insight anticipates that generative AI will face a reality check in 2024, as the initial enthusiasm surrounding the technology gives way to a more practical understanding of its high costs, risks, and complexity.
“The bottom line is, right now, everyone’s talking generative AI, Google, Amazon, Qualcomm, Meta,” Ben Wood, chief analyst at CCS Insight, told CNBC on a call ahead of the predictions report’s release.
“We are big advocates for AI, we think that it’s going to have a huge impact on the economy, we think it’s going to have big impacts on society at large, we think it’s great for productivity,” Wood said.
“But the hype around generative AI in 2023 has just been so immense, that we think it’s overhyped, and there’s lots of obstacles that need to get through to bring it to market.”
Generative AI models like ChatGPT, Google Bard, Claude, and Synthesia rely on extensive computing power to function. Companies need powerful chips, typically advanced graphics processing units (GPUs) from Nvidia, to run AI applications; big companies such as Amazon, Google, Alibaba, Meta, and OpenAI have even begun designing their own AI chips. The cost of deploying and maintaining generative AI remains extremely high, putting it out of reach for many organizations and developers.
“Just the cost of deploying and sustaining generative AI is immense,” Wood told CNBC.
“And it’s all very well for these massive companies to be doing it. But for many organizations, many developers, it’s just going to become too expensive.”
CCS Insight also predicts challenges in AI regulation in the European Union (EU). While the EU is expected to introduce specific AI regulations first, these rules may need multiple revisions due to the rapid progress of AI. The final legislation may not be in place until late 2024, leaving the industry to implement self-regulation measures.
Generative AI has generated a lot of excitement due to its ability to create content in response to text-based prompts. It has been used for various applications, from generating song lyrics to writing essays. However, this advanced technology has raised concerns about job displacement and is facing calls for regulation from several governments.
In the EU, the AI Act is under development; it would introduce a risk-based approach to AI and could ban technologies such as live facial recognition. The Act would also require generative AI tools built on large language models to undergo independent review before release, a requirement that has proved contentious within the AI community.
Different companies have varying views on AI regulation. OpenAI’s CEO Sam Altman has called for government oversight, while Google prefers a multi-stakeholder approach to AI governance.
CCS Insight also predicts that search engines will add content warnings to inform users when the material they’re viewing is AI-generated, similar to how social media platforms introduced information labels for COVID-19-related posts to combat misinformation.
Lastly, the report forecasts that arrests will be made in 2024 for AI-based identity fraud: police are expected to apprehend individuals who use techniques such as voice synthesis and deepfakes to impersonate others. Because these AI tools can create realistic impersonations from publicly available data, they threaten personal and professional relationships and enable fraud in banking, insurance, and benefits.