Unchained AI: Paris Defies Red Tape

Noureen Akhtar

Amid the growing global conversation around artificial intelligence (AI), one event has stood out: the AI Action Summit in Paris, which sought to chart a holistic and responsible path for how AI will be developed and used in the future. The summit was co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, and its aim was chiefly to weigh the technology's enormous promise against the equally sizeable risks it poses to society.

Even as the release of artificial intelligence into the world has become unavoidable, global leaders gathered under the shadow of climate concerns and geopolitical nervousness to discuss how the technology will transform their industries and the conduct of war. In his opening remarks, Macron acknowledged that AI exacts a heavy environmental toll, both in the energy it consumes and in the resources its widespread deployment requires.

The Paris AI Summit has become a battleground between deregulation and overregulation in the global AI race.

It was a portent of the central theme of the summit: how to expand the frontiers of AI in a manner that is good for the environment and also respects ethical obligations. In a self-assured tone, Macron reaffirmed France's commitment to fostering AI innovation while upholding the highest regulatory standards. He drew a parallel to the historic restoration of Notre-Dame Cathedral after the fire, completed on time despite bureaucratic obstacles, and proposed a 'Notre-Dame approach' for AI. By cutting regulations, this strategy would let AI ventures in France flourish in an environment built for innovation and accelerate their operations.

Just as the rebuilding of Notre-Dame did, will AI reveal its potential as a gleaming monument, and who will bear the blame if it does not? Yet even as the summit looked forward, its focus kept tightening around one question: regulation. As the world struggles with the growth of big data and with fragmentation, emerging at the confluence of geopolitical interests and the lavish power of private corporations, it is worth asking whether extreme deregulation will produce real innovation or undermine important safety nets. The approaches taken by China and the US stand in contrast to Europe's agreement on the AI Act in 2023, the world's first comprehensive set of rules governing AI.

Interestingly, while China has used its mighty state-backed conglomerates to take the lead in AI, Europe has reckoned with these entities from the beginning and set rules to govern AI in the interest of privacy and human rights. Going in the opposite direction, the US has opted for a hands-off approach that risks inadvertently empowering the titans of Silicon Valley, who are increasingly monopolistic and, as the recent developments around OpenAI suggest, may end up in charge of AI's future.

No case makes this looming concentration of power more obvious than the news that Elon Musk has reportedly bid $97.4 billion to buy OpenAI. Musk, it seems, is wide-eyed (pun intended) for a hand at steering the future of AI, alongside his investments in electric vehicles, space exploration and social media.

Despite that, OpenAI CEO Sam Altman shot down the acquisition, suggesting that a war over AI's course is underway. The tidings are disquieting: as AI performs ever more impressive feats, its direction could be dictated by a handful of corporate moguls, free to decide who bends to their wishes and who grows rich. Musk's latest bid thus raises an important question: who will govern AI's architects, and to what end? While the summit marked some forward movement in international dialogue on AI, the delicate balance between unruly innovation and decent supervision remains elusive. Left unchecked, AI could become an explosive force: a weapon for global war and social manipulation, or a means of suppressing people through algorithmic superiority.

AI regulation must strike a balance between innovation, ethics, and environmental sustainability to prevent monopolization and misuse.

As more powerful states fold AI into their national defense and capital becomes more concentrated, regulating AI may soon be considered a question of national, or indeed global, security. When all is said and done, however, one thing about the Paris summit remains clear: the development of AI will either accelerate economic progress and social good, or it will become an unchecked race toward digital dominance and inequality on a mass scale.

Will governments be able to regulate the technology without crippling innovation, and without consigning us to an inequitably wealthy future? Or will they let corporate interests and the ambitions of autocrats decide the future of AI? The answers forged now may well determine whether AI succeeds or fails, and whether human progress extends to an entire generation.

In a burst of international ambition, the Paris AI Summit has emerged as both a hotbed for AI's future and a counterweight to creeping red-tapism. It was opened by French President Emmanuel Macron alongside Indian Prime Minister Narendra Modi, with both making a clarion call to cut the bureaucracy that could stifle innovation. Macron's invocation of a 'Notre-Dame approach', referencing the swift and lightly regulated rebuilding of the iconic cathedral after it was ravaged by fire, underscored his view that fast, relaxed processes can spur progress in a competitive global market.

The environmental cost of AI has also taken center stage at the summit. Leaders acknowledged the sheer amount of energy and resources needed to develop AI, and recent studies have suggested that training a single large AI model can emit as much carbon as several cars produce over their lifetimes. Meanwhile, the European Union has been leading the way as the first region worldwide to closely regulate the technology, agreeing in 2023 on its comprehensive AI Act, first proposed in 2021, a world first in tackling this pioneering field comprehensively. But as the EU ratchets up its scrutiny, the question is whether excessive supervisory regulation may prove counterproductive to the very innovation it seeks to preserve.

On AI regulation, stark differences across the Atlantic and beyond are becoming ever more apparent. Europe is preparing to regulate while pouring money into its tech ecosystem, China pushes ahead through state-backed tech giants, and the US still prefers a hands-off stance that leaves market dynamics room to develop. Meanwhile, Google's chief executive, Sundar Pichai, has told the European Parliament that Europe's future productivity is closely tied to its embrace of new technologies, a sentiment echoed by many who see France as a breeding ground for artificial intelligence innovation.

Elon Musk’s rumored bid for OpenAI underscores the growing concentration of tech power and its implications for AI governance.

Unverified reports have surfaced that, a few days ago, Elon Musk tabled an eye-watering offer of $97.4 billion to buy OpenAI, an act that, if true, would be utterly explosive, implying a concentration of tech power the world has not seen before. The bid was quickly dismissed by OpenAI CEO Sam Altman on the social media platform X, fanning the flames of debate among those who fear a consolidation of influence in a business already marked by fierce competition. The figures have yet to be corroborated by mainstream media, and the global AI game is still very much on, but the mere circulation of such numbers reminds the world just how high the stakes are.

The summit has become much more than a policymaking discussion; it has become a battleground between the ideologies of deregulation and overregulation. European AI is, in effect, struggling to keep pace with the United States, and critics warn that excessive red-tapism in Europe could impose paralyzing constraints on disruptive start-ups and major tech players alike, consigning the continent's AI hopes to the sidelines of a race increasingly led by less encumbered innovators.

The global AI market is predicted to reach $190 billion by 2025, making it at once a promising investment and a source of peril. In that respect, Macron's call for a 'Notre-Dame approach' is perhaps a plan for balancing haste in rolling out the technology with responsible governance.

The greater challenge, however, goes beyond economic competitiveness. As the International Energy Agency has pointed out, without significant improvements in energy efficiency, the growing energy demand attributable to AI deployment could intensify global carbon emissions, an unwelcome counterpoint to the abundant enthusiasm that permeates tech discussions.

The energy consumption of AI threatens sustainability, raising urgent questions about responsible technological progress.

As global leaders discuss the future of artificial intelligence, the specter of tech red-tapism hangs over the debate: a regulatory quagmire that could become either an opportunity for sustainable innovation or an irredeemable stranglehold on progress. At this crucial juncture, will the world have the circumspection to strike a fair balance between promoting innovation and upholding ethical, environmental and social standards, or will bureaucratic inertia run its course and let artificial intelligence's transformational potential spin out of control?

The author is a PhD scholar (SPIR-QAU) and has worked on various public policy issues as a Policy Consultant in the National Security Division (NSD), Prime Minister's Office (PMO). Currently, she is the communication head and editor at Stratheia and works for the Islamabad Policy Research Institute (IPRI) as a Non-Resident Policy Research Consultant. Her work has been published in local and international publications. She can be reached at https://www.linkedin.com/in/noureen-akhtar-188502253/ and akhtarnoureen26@gmail.com. She tweets @NoureenAkhtar16.
