Victims’ associations and rights advocates say Facebook’s algorithms intensified the violence by amplifying extremist content that encourages harmful disinformation and hate speech.
“Many Rohingya tried to report anti-Rohingya content via Facebook’s ‘report’ function” but to no avail, “allowing these hateful narratives to proliferate and reach unprecedented audiences in Myanmar,” Amnesty said in its report.
It noted the revelations from the whistle-blower “Facebook Papers” divulged in October 2021, indicating that company executives knew the site fuelled the spread of toxic content against ethnic minorities and other groups.
Three legal complaints have been lodged against Facebook by Rohingya representatives: lawsuits in the US and Britain, and a complaint with the OECD group of developed economies under its guidelines for responsible business conduct.
“Meta’s refusal to compensate Rohingya victims to date — even where the community’s modest requests represent crumbs from the table of the company’s enormous profits — simply adds to the perception that this is a company wholly detached from the reality of its human rights impacts,” Amnesty said.
The NGO urged Facebook to undertake “proactive human rights due diligence” across its platforms, but also called on national authorities to step up their oversight.
“It is imperative that states fulfil their obligation to protect human rights by introducing and enforcing effective legislation to rein in surveillance-based business models across the technology sector,” it said.
Facebook has vowed to revamp its corporate values and operations in response to pressure to clamp down on false information, particularly with regard to politics and elections.
The company has forged fact-checking partnerships with several media organisations, including AFP, intended to verify online posts and remove those that are untrue.