How Much Do You Need to Disclose? Media Art and Korea's AI Act
by LED.ART EDITORIAL
Generative AI has become an indispensable tool in media art production. Artists use it to quickly visualize concepts, generate textures and patterns, or refine segments of video work. The artist's idea remains the starting point — AI assists somewhere along the way. But when Korea's AI Basic Act came into effect in January 2026, this familiar creative process suddenly faced a new question: "If AI was involved in making this work, does it need to be disclosed?"
AI Is Everywhere in Creative Work — But Public Perception Hasn't Caught Up
The use of AI tools has already spread across the creative industry. According to a 2024 survey by the Korea Creative Content Agency (KOCCA), 46.8% of writers and 41.4% of planning professionals in content production are already using AI tools. And that's the 2024 figure. Given the pace at which AI has proliferated since, the actual numbers in 2026 are almost certainly much higher. Media art is no exception.
Yet public perception remains caught in a contradiction. Attitudes are shifting — 54% of new art collectors now view AI art positively, and 67% of art enthusiasts expect the AI art market to grow (Hiscox, 2024). Artists who have hands-on experience with AI tools are increasingly likely to see them as instruments rather than threats. Familiarity, it seems, breeds acceptance.
But the "source attribution effect" has not disappeared. Studies show that when people view AI-generated works without knowing their origin, they often prefer or rate them equally to human-made works. The moment a label reading "made by AI" appears, however, perceived creativity drops and emotional engagement cools (Frontiers in Psychology, 2025). The eye cannot tell the difference, but the mind still judges differently. Perception is changing — but it hasn't fully arrived yet.
The Law Exists, But the Lines Are Still Blurry
In January 2026, Korea became the first country in the world to enforce a comprehensive national AI law. The EU passed its AI Act earlier, but enforcement of its high-risk provisions is not scheduled until August 2026, so Korea is the first to bring such legislation into real-world application. That makes Korea's AI Basic Act more than a domestic policy story. It may well set the tone for how global standards around AI disclosure take shape.
Article 31 of the Act requires that AI-generated content be labeled as such. But how far this obligation extends in practice is more nuanced than it first appears.
The primary duty falls on AI service providers — the companies behind tools like Midjourney or Adobe Firefly — which are required to embed watermarks or metadata into generated content. For outputs that are difficult to distinguish from reality, such as deepfakes, end users are also required to disclose AI involvement. However, the law includes a notable exception. The original text reads:
"In cases where the output constitutes an artistic or creative expression, or forms part of one, it may be disclosed or labeled in a manner that does not impede the viewing or appreciation of the work."
This means there is no legal obligation to display an AI label directly on a media artwork playing in a physical space. The exception exists precisely to protect the integrity of the viewing experience. Under the enforcement decree, disclosure via a website or on-site signage is recognized as sufficient compliance.
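For context, the provider-side duty mentioned above usually takes the form of machine-readable provenance data embedded in the file itself. Here is a minimal sketch in Python, using the Pillow library, of what such embedding could look like. The field names are our own invention for illustration; the Act does not prescribe any particular schema:

```python
# Hypothetical illustration: embedding AI-provenance metadata in a PNG.
# Field names ("ai_involvement", "ai_tools") are invented for this sketch;
# the AI Basic Act does not mandate a specific format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (512, 512))  # stand-in for a generated still or frame

meta = PngInfo()
meta.add_text("ai_involvement", "AI-collaborative")  # degree of AI involvement
meta.add_text("ai_tools", "Midjourney")              # tool(s) used
meta.add_text("artist", "Example Artist")

image.save("artwork_labeled.png", pnginfo=meta)

# The metadata travels with the file and can be read back later:
with Image.open("artwork_labeled.png") as img:
    print(img.text)  # {'ai_involvement': 'AI-collaborative', ...}
```

In production, providers tend to rely on richer provenance standards such as C2PA, but the principle is the same: the disclosure lives in the file, not on the screen.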
That said, the gray areas remain significant. What if AI was only used to generate early concept imagery? What if just one segment of a video was processed with AI? What if the final work was made entirely by the artist, but AI was used during the ideation phase? The law does not yet provide clear answers to these questions. The government has acknowledged this, announcing a grace period of at least one year before penalties are enforced.
The law's origins offer useful context. It was driven by real social harms — deepfake election videos, AI-generated sexual exploitation content. This is not a law designed to regulate media art. That distinction matters.
So What Should Media Artists Actually Do?
Before formal guidelines are established, the most practical step is to define your own standards first. There is no obligation to label works on screen, but proactive disclosure through your website or artwork description pages both satisfies the legal requirement and builds trust with your audience.
We propose the following framework based on the degree of AI involvement; a sketch of how these labels might be recorded in practice follows the list:
AI-assisted — AI was used only during the concept or ideation stage; the final work was created directly by the artist.
AI-collaborative — AI generated a significant portion of the work, with the artist providing direction, curation, and editorial judgment to bring it to completion.
AI-generated — The work was produced entirely by AI.
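To make the framework concrete, here is a minimal sketch, assuming Python, of how these labels could be recorded as structured data behind a website or artwork description page. The DisclosureRecord class and its fields are our own illustration, not anything prescribed by the Act:

```python
# Hypothetical illustration of the three-tier disclosure framework as data.
from dataclasses import dataclass, field
from enum import Enum

class AIInvolvement(Enum):
    ASSISTED = "AI-assisted"            # concept/ideation only; final work made by the artist
    COLLABORATIVE = "AI-collaborative"  # AI generated significant portions under artist direction
    GENERATED = "AI-generated"          # work produced entirely by AI

@dataclass
class DisclosureRecord:
    title: str
    artist: str
    involvement: AIInvolvement
    tools: list[str] = field(default_factory=list)

    def label(self) -> str:
        """Render a one-line disclosure for a website or artwork description page."""
        tools = ", ".join(self.tools) if self.tools else "tools unspecified"
        return f"{self.title} by {self.artist}: {self.involvement.value} ({tools})"

record = DisclosureRecord(
    title="Untitled Study No. 3",
    artist="Example Artist",
    involvement=AIInvolvement.COLLABORATIVE,
    tools=["Midjourney"],
)
print(record.label())
# Untitled Study No. 3 by Example Artist: AI-collaborative (Midjourney)
```

Making the category an explicit field, rather than free text, keeps the disclosure consistent across works and easy to surface wherever a piece is shown.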
These three distinctions are more than compliance labels. They are a language for articulating the artist's creative role — and ultimately, a way for media art to define itself in the age of AI.
According to Edelman's 2025 survey, 68% of respondents said they trusted a brand more when it proactively disclosed its use of AI. Transparency does not diminish value. It builds the foundation for trust.
Korea's AI Basic Act has posed an uncomfortable question to the media art world — but it has also created an opportunity. Before the legal standards are fully settled, there is space for artists and platforms alike to shape the language and define the norms. LED.ART is committed to being part of that conversation, building a standard of curation that is both transparent and meaningful.