Addressing the Rise of AI Content Similarities

The frequent occurrence of AI-generated content resembling real individuals raises concerns about copyright and ethical standards in creative industries.

The Frequent Occurrence of AI Content Similarities

As generative AI is applied more deeply to artistic creation, the phenomenon of content resembling real individuals has drawn attention. Characters in AI-generated short dramas often bear striking resemblances to well-known actors in appearance, voice, and even mannerisms. Some AI actors blend features from multiple celebrities, and audiences can recognize the sources at a glance. Ordinary internet users are not exempt either: some have unexpectedly discovered themselves cast as villains in AI short dramas, their likenesses and outfits copied directly from images they shared online. While such AI short dramas are cheap to produce and spread rapidly, they have sparked ongoing debate over copyright infringement, the erosion of order, and creative ethics.

On the surface, this phenomenon differs from the previously debated practice of AI "remixing." Remixing typically reprocesses segments or plots from existing films, so the central question is the legality of using others' works, a copyright issue. By contrast, "look-alike" content uses another person's face and voice to generate new material, making that person a character in an AI work, and thus touches on portrait and voice rights. Under China's Civil Code, these rights are legally protected; they relate to personal identity, reputation, and social standing, and others may not use them without permission. Moreover, for public figures such as actors, likeness and voice carry commercial value, and unauthorized use could affect their commercial partnerships and earnings.

From a creative perspective, these two types of usage, whatever their differing legal status, point to the same trend: classic films and personal features such as faces and voices alike have become editable, callable, and recombinable "creative materials." From the typified writing of online literature to the formulaic production of short videos, modular and combinatorial production has long been a distinctive feature of online arts. Artificial intelligence amplifies this trait further, pulling even personal markers like portraits and voices into a reusable "material library." The questions of whether and how such materials may be used have become more prominent and more complex.

It is important to note that the misuse of such technology impacts not only individual rights but also broader societal issues. Highly realistic “face-swapping” and “voice imitation” content could be used for identity theft, online fraud, and other illegal activities. When seeing is no longer believing, the public will struggle to assess the authenticity of online content, leading to a crisis of trust and ethical disorder, which could disrupt the normal flow of information.

Thus, establishing a new order for the use of "creative materials" has become a necessary step in the development of new public arts. Related action has already begun. Earlier this year, the National Radio and Television Administration launched a special campaign that removed thousands of AI-altered videos violating regulations and dealt with the accounts behind them. Recently, the Actors Committee of the China Federation of Radio and Television Associations issued a stern statement on infringement through AI "face-swapping" and "voice imitation," clarifying the legal boundaries of such behavior and urging platforms to strengthen their review and authorization mechanisms. The construction of this order is gradually becoming institutionalized and standardized.

However, in reality, governance of these issues is not straightforward. Artificial intelligence not only changes creative methods but also reconstructs the underlying logic of artistic production, necessitating a balance between creative vitality and rights protection.

On one hand, looking at future trends, using others’ images and voices for AI recreation is not merely “theft.” With legal authorization, it represents a new form of creativity: a person’s appearance, voice, and even expressive style can be transformed into reusable creative resources. Compared to the traditional model that relies on actors to perform in specific times and spaces, AI creation allows for a more flexible combination of valuable personal elements across different works, thereby breaking temporal and spatial limitations, enhancing creative efficiency, and expanding the possibilities of artistic expression. Therefore, solving this dilemma does not lie in simple prohibition or indulgence, but in regulating usage boundaries.

On the other hand, balancing and accommodating these interests is not easy. First, determining infringement is more complex. In reality, different individuals may have similar appearances and voices, and one cannot simply conclude infringement based on “some resemblance.” However, if generated content resembles a specific individual to a significant degree, enough for the public to clearly associate it with that person and perceive it as using their likeness, it may constitute infringement. Drawing the line between “unintentional resemblance” and “malicious infringement” is a challenge in practical judgment. Secondly, the responsibility structure is more dispersed. In the process of AI-generated content, multiple stages are involved, from data collection and model training to content generation and platform dissemination, making responsibility attribution no longer correspond to a single entity, further complicating governance. For instance, some producers may use “AI generation” as a pretext to cover their motives of deliberately exploiting others’ images for traffic.

Moreover, it is worth noting that the gap between the cost of compliant usage and the efficiency of AI creation is widening. Artificial intelligence has made it easy to “use others’ appearances or voices to create new content,” but creators often face real obstacles such as information asymmetry and complex processes when trying to obtain legal authorization. When the cost of compliance is significantly higher than production costs, creators are often forced to choose between “borderline infringement” and “abandoning creativity.”

Thus, the key is to make "compliant usage" itself feasible. This means establishing more convenient authorization mechanisms so that creators can use materials legally, and leveraging AI to improve matching efficiency, moving from the traditional one-to-one authorization model toward more intensive, intelligent authorization systems that stimulate diverse expression and creativity. It also means strengthening ethical awareness in creation, so that creators consciously respect others' images and reputations and, even when authorized, avoid distorting or harming them, achieving a unity of "usable" and "properly used."

For new public arts, the real challenge does not lie in the technology itself, but in whether clear rules and consensus can be formed in new creative practices, achieving a dual focus on protecting creativity and respecting individuals. Only in a regulated and orderly environment can the empowerment of artificial intelligence in artistic production transform into robust and lasting creative vitality.
