TikTok is the latest social media company to announce its plan for mitigating misleading and violent content in the wake of the Israel-Hamas war. A point-by-point blog post details recent steps, such as the creation of a command center "that brings together key members of our 40,000-strong global team of safety professionals, representing a range of expertise and regional perspectives, so that we remain agile in how we take action to respond to this fast-evolving crisis." The company's statement follows similar ones from Meta and X — both companies had received letters from the European Union's internal market commissioner, Thierry Breton, detailing misinformation concerns.
Additional steps outlined by TikTok include hiring "more" moderators who speak Arabic or Hebrew and regularly updating its automatic detection systems to identify graphic or violent content so that neither users nor moderators are exposed to it. To that end, TikTok has also expanded the well-being care available to frontline moderators. Notably, a moderator sued TikTok in 2021 over mental trauma, alleging that she would view between three and ten videos at once that featured horrific events like school shootings and cannibalism.
Users should also now see opt-in screens over graphic imagery that is being kept on the platform for "public interest" reasons, and TikTok has further restricted Live eligibility in an attempt to limit misinformation. On the subject of falsehoods circulating online, TikTok reiterated that it removes spliced content that users have edited to be misleading.
TikTok reportedly took down about 500,000 videos and ended 8,000 livestreams in Israel and Gaza between the initial attacks on October 7 and the statement's release on October 15. Moving forward, the company plans to roll out misinformation warnings in English, Hebrew and Arabic when certain terms are searched.