Institute for Risk and Strategic Studies (Salceda Research) Calls for Mandatory AI Content Disclosure in Multimedia

August 10th, 2025

Manila, Philippines — Joey Sarte Salceda, Chair of the Institute for Risk and Strategic Studies (Salceda Research), today urged the government to require institutions and content producers using generative artificial intelligence in multimedia content such as images, video, and audio to clearly declare that AI was used in creating the material. The goal is to ensure that audiences do not confuse AI-generated material with authentic human-made content.

Salceda emphasized that “the ability of AI tools to produce highly realistic images, videos, and voice recordings presents both opportunities and risks. Without clear disclosure, the public can be misled, whether intentionally or unintentionally, into believing that synthetic content is real.”

Under Salceda’s proposal, all institutions, platforms, and production houses that use generative AI in producing multimedia material for public consumption would be required to include a visible and legible declaration, such as “This content contains AI-generated elements,” at the beginning or alongside the content. This disclosure rule would apply to newsrooms, educational institutions, advertising agencies, government agencies, and other organizations that publish public-facing materials.

Salceda noted that “this disclosure approach is already a best practice in reputable institutions such as the University of the Philippines Los Baños, where certain courses allow the use of AI for schoolwork but require students to disclose when AI tools have been used. This is a model for responsible adoption that balances innovation with transparency.”

Salceda also stressed that “online platforms should be required to flag when something is AI-generated or contains AI-generated components. Some platforms already attempt to do this, but it is neither comprehensive nor consistently effective. The public needs a reliable, standardized system for such labeling.”

“This is about transparency, not about banning AI,” Salceda said. “We want to promote responsible use. AI is a powerful tool for creativity and productivity, but when it comes to content that shapes public perception, especially political, historical, and news-related materials, the public has the right to know when what they are seeing or hearing was created by a machine.”

Salceda warned that “AI-generated deepfakes, voice clones, and photorealistic forgeries could be used for disinformation, reputational harm, or market manipulation if left without regulation.” He noted that “similar disclosure frameworks are already being adopted in the European Union and parts of the United States,” adding that the Philippines should move in the same direction.

“This is not censorship. It is the equivalent of a food label,” Salceda added. “People can still consume the content, but they deserve to know what it is made of.”

The Institute for Risk and Strategic Studies will submit its policy paper, which outlines the legal and technical framework for an AI content declaration requirement, including recommended penalties for non-compliance, to both houses of Congress and relevant regulatory agencies within the month.
