Google filter for antisemitism searches ‘not fit for purpose’

Report by Community Security Trust and Antisemitism Policy Trust found the SafeSearch function had no impact on the amount of racist images found


Google’s SafeSearch function for filtering offensive images is “not fit for purpose”, a new report has claimed, after researchers found that antisemitic images and “Jew jokes” remain online.

A report produced jointly by the Community Security Trust (CST) and the Antisemitism Policy Trust found that the SafeSearch function had no impact on the number of antisemitic images returned in Google image searches.

The function is promoted as a tool parents can use to block inappropriate content from children.

Based on research by data scientists at Cambridge University’s Woolf Institute, the report’s key findings were that searches for “Jewish jokes” and “Jew jokes” return a high proportion of antisemitic images regardless of whether Google’s SafeSearch function is switched on or off.

It also emerged that Google has software that enables developers to identify explicit or offensive content – sometimes categorised as “spoof” – but it is not yet capable of accurately identifying antisemitic images.

The report also revealed Google has no available function that enables users to filter out content that is likely to be antisemitic.

The new study added to research by Big Data expert Seth Stephens-Davidowitz, which found that the most common antisemitic Google searches in the United Kingdom are for jokes mocking Jews.

There is a direct correlation between these searches, the report found, and those mocking other minorities: someone who searches for “Jew jokes” is 100 times more likely to also search for “n****r jokes”.

Tuesday’s Queen’s Speech is expected to introduce legislation to tackle online harm

Danny Stone MBE, Chief Executive of the Antisemitism Policy Trust, said: “We expect the Online Safety Bill to be brought before parliament imminently.

“This report proves why we urgently require that bill to place a Duty of Care on companies like Google. Safety by Design, risk assessments, and efforts to counter hate should be at the very centre of platform thinking from the outset, not an afterthought.”

Dr. Dave Rich, Director of Policy at the Community Security Trust, added: “Google is the world’s most popular search engine, yet our research shows that their own so-called ‘SafeSearch’ function is incapable of accurately identifying and blocking antisemitic images.

“This is yet another example of Internet companies simply not doing enough to proactively stop the spread of hateful material online. There is an urgent need for government regulation to force these companies to properly filter hateful content and protect users from abuse.”

A Google spokesperson said: “In Google Search, we strive to provide open access to information while also not exposing people to potentially shocking or offensive content if they have not explicitly searched for it. While SafeSearch can be used to block explicit content from search results, it is not designed to block offensive or hateful results.

“When people are looking for images online, search engines largely rely on matching the words in the query to the words that appear next to images on the web page. In some cases, these searches match content on web pages that contain offensive and hateful images. We’ve made considerable improvements to address low quality content, and will continue to improve our systems.”
