SAN FRANCISCO – Facebook has acknowledged that a security hole in its software may have exposed the identities of its online monitors to suspected terrorist groups and others whose pages were removed for inappropriate content.
A Guardian report says more than 1,000 people, whom Facebook calls moderators, were affected. Facebook says it fixed the error, which occurred last year, and has no evidence that those affected received any threats.
The Guardian says six people likely had their profiles viewed by suspected terrorists. It spoke to one of them, whom it did not name, who went into hiding after the incident, worried that sympathizers of the Islamic State group may have viewed his profile.
Facebook says it didn’t find that any profiles were viewed by suspected IS members.
Speaking with The Guardian, one of the six, an Iraqi-born Irish citizen who asked to remain anonymous, said that seven people linked to an Egyptian terrorist group sympathetic to Hamas and ISIS had viewed his profile.
“The only reason we’re in Ireland was to escape terrorism and threats,” he said, revealing how numerous members of his family had been beaten and executed in Iraq.
The moderator, who worked as a contractor for Facebook through Cpl Recruitment, fled the country shortly afterward over fears of retaliation.
Although Facebook initially “offered to install a home alarm monitoring system and provide transport to and from work” to the high-priority moderators, the Iraqi-born man felt he had become too vulnerable.
“When you come from a war zone and you have people like that knowing your family name you know that people get butchered for that,” he added. “The punishment from ISIS for working in counter-terrorism is beheading. All they’d need to do is tell someone who is radical here.”
After five months in eastern Europe, the moderator returned to Ireland in May after running out of money.
“I don’t have a job, I have anxiety and I’m on antidepressants,” he said. “I can’t walk anywhere without looking back.”
The moderator has now filed a legal claim against both Facebook and Cpl Recruitment, seeking compensation for the psychological harm he has suffered since the security breach.
The Guardian report also revealed how content monitors, who, according to the moderator, “come in every morning and just look at beheadings, people getting butchered, stoned, executed,” were seemingly required to use their own personal profiles while doing their work.
“They should have let us use fake profiles,” he said. “They never warned us that something like this could happen.”
In a statement confirming the incident, Facebook said it had taken technical steps to prevent such an issue from occurring in the future.
“We care deeply about keeping everyone who works for Facebook safe,” a spokesman said. “As soon as we learned about the issue, we fixed it and began a thorough investigation to learn as much as possible about what happened.”
In total, the software bug was active for up to a month and retroactively exposed the profiles of moderators who had flagged terrorist content as far back as August 2016.