The debate on how to legally frame DeepFakes is now at the centre of attention for governments, institutions and businesses. This essay explores the possibility that these technologies cannot be confined solely within restrictive regulations, but are instead becoming a constitutive part of a new digital reality. It will analyse the legal initiatives, reputational risks and possible future scenarios that characterise a rapidly expanding phenomenon, with the aim of highlighting the limits of a purely regulatory approach and the opportunities, albeit controversial, associated with the adoption of a more flexible framework.

Reformulating digital identity: the urgency of a global debate
Our perception of ‘reality’ has been disrupted by the advancement of Artificial Intelligence, in particular by the phenomenon of DeepFakes, i.e. audiovisual content artificially generated or manipulated by neural networks and deep learning techniques. Tragically well-known cases include manipulated photos and videos depicting, for instance, public figures in compromising situations, or pornographic content in which the faces of completely unsuspecting real people are superimposed on other bodies. Some of this content, referred to as deepnude or deepfake porn, has caused serious violations of personal dignity, damaging the image of the people affected and, in several cases, enabling fraud and extortion.
It is also increasingly common for DeepFakes to be exploited for political disinformation, damaging institutional figures and influencing public opinion and voting behaviour. In Italy, for instance, reports of forged voice recordings attributed to Defence Minister Guido Crosetto, used for fraud and economic blackmail, caused a particular uproar. This confirms fears that the consequences go far beyond personal offence: the very pillars of democracy and institutional credibility are at stake.
The exponential growth of this content raises unprecedented questions about the right to identity and the dangers of devious manipulation, such as the creation of fake child sexual abuse material or of convincing video and audio impersonations used in sophisticated social engineering attacks. Suffice it to say that, according to the 2025 Identity Fraud Report, a DeepFake scam is attempted every five minutes. And, according to World Economic Forum projections, by 2026 more than 90 per cent of online content will be artificially generated. The stakes are high: either we embrace the challenge and try to govern it, or we risk a collapse of trust in the information ecosystem.
From the recent approval of the European AI Act to the first attempts to regulate DeepFakes in various legal systems, the debate is pervaded by contradictory pressures: on the one hand, there are forces calling for ever harsher penal provisions to punish those who abuse the technology; on the other, a current of thought that recognises the unstoppable dimension of innovation and calls for strategies of coexistence and adaptation. In what way? By placing identifying marks on AI-generated productions and imposing transparency obligations on manipulated content. But is a constraining approach sufficient, or is the phenomenon already irreversibly entrenched?
Regulatory framework and prevailing view
In the Italian context, the draft law approved by the government on 23 April 2024 intends to introduce the offence of DeepFake, establishing a legal framework in Article 612-quater of the Penal Code and providing for prison sentences ranging from one to five years. This provision, while representing a step forward, seeks to curb the most serious consequences of digital manipulation by defining special cases of ex officio prosecution where the victim is a minor or incapacitated, or where public figures are involved. The initiative is part of a broader line of action, also inspired by the Garante della Privacy, which has long promoted information and transparency to curb the abuse of AI.
At the European level, the AI Act has the merit of introducing risk assessment criteria for the various uses of artificial intelligence systems, also regulating the use of automatic content generation techniques. It is framework legislation that steers Member States towards principles of consumer protection and of safeguarding individual reputation and dignity, while remaining sensitive to market trends and technological development.
The prevailing interpretation, supported by numerous jurists and policy makers, sees restrictive regulation as the main response to the dangers of generative AI. The severity of penalties for spreading false and defamatory content is justified by the intention to curb the psychological, economic and reputational damage suffered by victims. News reports repeatedly show how DeepFakes can destroy careers and reputations, not only of celebrities but also of ordinary citizens: blackmail and so-called digital ‘revenge porn’ are increasing exponentially, with serious social repercussions.
However, the succession of legislative interventions, and the attempt to impose on platforms obligations to embed watermarks or other identifying marks in AI-generated content, are not always easy to implement on a technical level. Indeed, there is ample evidence that cybercriminals continue to refine machine learning tools and models capable of circumventing watermarks. This makes the concrete effectiveness of purely repressive measures uncertain, prompting some scholars to raise a more radical question: if surveillance and harsher penalties are not enough, does an ethical, social, and even cultural model become necessary, one that treats the ‘new reality’ as a given and builds resilience strategies and new forms of authenticity verification?
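To make the fragility of naive marking concrete, the following is a minimal, purely illustrative Python sketch (the marker string and functions are hypothetical, not any real standard): a label merely appended to a file can be removed by an attacker in a single line. This is precisely why robust schemes rely on statistical watermarks woven into the content itself, or on cryptographically signed provenance manifests, and why even those remain locked in the adversarial dynamic described above.

```python
# Minimal sketch (illustrative only): a naive, metadata-style AI label
# and why it is trivially stripped. The marker below is hypothetical,
# not a real standard such as C2PA.

MARKER = b"AI-GENERATED"

def tag(content: bytes) -> bytes:
    """Append a plain provenance marker to the payload."""
    return content + MARKER

def strip(content: bytes) -> bytes:
    """An attacker removes the marker with a single call."""
    return content.removesuffix(MARKER)

media = b"...synthetic image bytes..."
tagged = tag(media)
assert MARKER in tagged
assert strip(tagged) == media  # the provenance claim vanishes without trace
```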
Towards a paradigm shift: when fiction becomes reality
Some currents of thought suggest placing DeepFakes in the broader horizon of a digital ecosystem in which manipulation and counterfeiting are integral and, to a certain extent, ‘natural’ parts. Just as photomontage and post-production long ago changed our relationship with static images, so AI-generated voice and video synthesis could establish itself as a new form of expression or narration, sometimes even an artistic one.
Privacy and security implications certainly remain crucial: witness the reports of social engineering attacks in which artificial voices of friends or colleagues induce victims to make payments or disclose confidential company information. But while technical controls are being strengthened, several behavioural science experts suggest that countermeasures may also lie in teaching people to be wary of content that seems ‘too real to be true’.
Consider, for example, the work of companies specialised in cyber-intelligence that do not limit themselves to detecting and reporting potential forgeries, but also train and educate users of digital platforms, fostering a critical and aware approach. It is a right, and at the same time a duty, to learn to decode messages, to check sources and to look critically at the images and videos we encounter online.
The ‘new narrative’ also has aesthetic and anthropological implications. The question is not only how to stop DeepFakes, but also how to interpret them: if we accept that AI can be a creative and performative tool, artificially generated content assumes a crucial role in the development of cultural and artistic products. It is no coincidence that exceptions are already emerging in draft laws so that blatantly fictitious, satirical or artistic works are exempted from mandatory marking. This openness reveals a willingness to make the rules permeable to experimentation, as long as it does not directly and irreparably harm the reputation or image of individuals.
Finally, one cannot overlook the debate on the increasingly blurred boundary between reality and post-truth: DeepFakes expand the availability of tools to manipulate collective opinions and sentiments. But it is precisely the proliferation of such productions, paradoxically, that could push society to become more shrewd in its assessment of information. At some point, marking AI-generated content might not be enough: the distinction between true and false might become an exercise in critical analysis, an evolutionary step in the way we use the infosphere.
The outlook: between corporate reputation and adaptation to a changing digital identity
Looking at the corporate world, the increasing prevalence of DeepFakes is not only a matter of cybersecurity in the strict sense, but part of the broader issue of reputation management and the definition of defensive and proactive strategies. In a competitive environment where digital innovation shapes the image of companies and their leaders, DeepFakes can become a powerful weapon for unfair competition or smear campaigns, with potentially disastrous effects on the trust of customers, investors and stakeholders.
In a scenario where the boundary between reality and manipulation becomes porous, it is crucial to act upstream, developing interdisciplinary skills that combine technological knowledge with psycho-social sensitivity. Companies and managers must equip themselves with continuous reputational monitoring tools and intelligence services capable of reacting in real time to the dissemination of false or manipulated information. Timely crisis management also requires staff training and the preparation of response plans that include, for example, fact-checking and official corrections.
Corporate reputation is today exposed to exponential risks, as the creativity of attackers knows no limits. Therefore, in addition to the defensive component, many organisations are beginning to consider an approach of ‘coexistence’ with the profoundly disruptive innovation of artificial content generation. Rather than aiming for the total elimination of DeepFakes, they envisage a future in which companies integrate authentication mechanisms, digital signatures or watermarking, as well as disclosure practices that make customers and the public aware of the characteristics of official material.
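As a concrete illustration of the signing side of such a strategy, the sketch below shows how a company might sign the exact bytes of an official video or press release so that anyone holding its published public key can verify authenticity. It is a minimal sketch, assuming the third-party Python `cryptography` package; key distribution, provenance manifests and revocation are deliberately out of scope.

```python
# Minimal sketch: signing official media with Ed25519 so third parties
# can verify provenance. Assumes the third-party `cryptography` package;
# key management and distribution are out of scope here.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generated once by the company; the public key is published openly.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_release(media: bytes) -> bytes:
    """Sign the exact bytes of an official video or audio file."""
    return private_key.sign(media)

def verify_release(media: bytes, signature: bytes) -> bool:
    """Anyone holding the public key can check authenticity."""
    try:
        public_key.verify(signature, media)
        return True
    except InvalidSignature:
        return False

official = b"...official press video bytes..."
signature = sign_release(official)
assert verify_release(official, signature)
assert not verify_release(official + b"tampered", signature)  # any edit fails
```

The design choice matters: unlike a watermark, which an attacker tries to remove from fake content, a signature is an affirmative claim attached to genuine content, so a forgery simply fails verification.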
An intriguing example of a proactive response is the promotion of a corporate culture of transparency and training. Employees in communications and marketing departments can be trained to produce, detect or expose manipulated content, anticipating and mitigating dynamics of reputational panic. At the same time, crisis plans must contemplate the possible use of DeepFakes for blackmail or defamation, putting in place preventive and technological solutions that become an integral part of organisational processes.
Some companies specialised in reputation monitoring and protection are already considering how to exploit the positive aspects of generative AI, using crisis simulations to train managers and top management to recognise critical situations. In practice, experimental laboratories are being created in which DeepFakes become training tools for dealing with discredit campaigns and sophisticated phishing or social engineering attacks. In this sense, paradoxically, the integration of AI becomes a resilience factor, shifting the emphasis from simply countering a seemingly uncontrollable phenomenon to a strategy of adaptation.
Ultimately, as legislators refine new regulations and society questions how to certify truth, the corporate world has already realised that it is on the front line. Digital identity, a central component of a brand, is undergoing a radical metamorphosis that no longer leaves room for clear boundaries between fiction and authenticity. If DeepFakes are a symptom of an epochal transformation, one intent on redefining roles and responsibilities, then the response, at least for companies, will have to go beyond mere repression. It will have to embrace the ‘new reality’ as an ever-changing environment, developing technological, communicative and cultural antibodies capable of protecting reputations and, at the same time, seizing the creative potential of the digital future.