A bug in Facebook’s software accidentally put content moderators at risk after exposing their names and identities to suspected terrorists, leading at least one to go into hiding for a period of time. According to The Guardian, “The security lapse affected more than 1,000 workers across 22 departments at Facebook who used the company’s moderation software to review and remove inappropriate content from the platform, including sexual material, hate speech and terrorist propaganda.”
The bug exposed a moderator’s personal details in the activity log of any Facebook group whose admin the moderator had removed from the platform: once that admin was banned, the remaining group admins could see who had taken the action. Six people who worked in a counterterrorism unit in Dublin were deemed “high priority” victims of the bug. One went into hiding after discovering that members of a suspected terrorist group he had banned from the platform had viewed his personal profile.
The moderator also told The Guardian that Facebook’s moderation system required workers to log in with their personal Facebook profiles, an incredibly shortsighted bit of dog-fooding for people tasked with countering terrorism and extremism. Facebook said it is now testing a moderation system that does not require the use of personal profiles.