YouTube has recently added rules to combat online violence and harassment, strictly prohibiting “realistic simulation” content that involves minors or narrates victims’ violent experiences.
Social media platforms are actively applying new rules to protect users from online violence and other harmful content.
YouTube’s new rules target a concerning trend: content depicting violent crimes that uses artificial intelligence (AI) to create realistic scenarios, describing in detail the abuse or violence that victims, including children, have endured.
Some videos use children’s voices to narrate the violence committed in well-known criminal cases. Families of the victims portrayed in these videos have vehemently opposed such content, calling it truly “horrifying.”
YouTube’s new regulations aim to keep harmful content off public channels and protect the user experience on its platform. Enforcement measures escalate with repeat violations by channel owners.
For a first policy violation, YouTube will suspend the user’s ability to upload videos for one week. If the violation is repeated within 90 days, penalties escalate and the channel may be permanently removed.
Social media platforms, including YouTube, have introduced AI-supported content creation tools in recent months, accompanied by new rules to reduce the risk of users being misled by AI-generated content.
TikTok currently requires content creators to label AI-generated content. Meanwhile, YouTube has announced strict rules on AI-generated replicas of musicians’ voices, along with a set of general rules covering other related issues.
@Vietnamnet