FACEBOOK DOES NOT GUARANTEE FREEDOM OF SPEECH OR ARTISTIC EXPRESSION
Art fans will be familiar with the steady drip of stories about the social media giant taking works offline for obscenity. It’s become such a common story that we decided to investigate the questions raised. What’s behind this continuous stream of problems? How has the company evolved in relation to complaints from the art community? And what is Facebook — which otherwise seems to be bent on gathering as much data about its users as possible — doing to better identify artists and art institutions, to avoid such embarrassing incidents?
To get at the root of the problem, it helps to know how Facebook goes about identifying and removing obscene content in the first place. When an image depicting “sexually explicit” content gets reported by a Facebook user, it heads to the “abusive content” department, one of four teams that work around the world and around the clock to monitor time-sensitive material (the process is detailed by a chart on the website NakedSecurity). The team then measures the photo against Facebook’s community standards, which define what types of content are prohibited, including violence and threats, self-harm, bullying and harassment, and “graphic content,” a category that among other things covers nudity and pornography.
If the image is found to have violated a standard, the team issues a warning; a second offence causes the account to be disabled. There is no algorithm or auto-delete that searches for offensive content, save for a piece of software called PhotoDNA, which polices the platform for child pornography.
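The two-strike process described above can be sketched in code. This is purely an illustrative model, assuming a per-account strike counter and the prohibited categories listed in the community standards; the names and structure are hypothetical, not Facebook's actual system.

```python
# Hypothetical sketch of the two-strike review flow described in the article.
# PROHIBITED_CATEGORIES and review_report are illustrative names, not Facebook's.

PROHIBITED_CATEGORIES = {
    "violence and threats",
    "self-harm",
    "bullying and harassment",
    "graphic content",  # per the article, includes nudity and pornography
}

def review_report(account, reported_category):
    """Return the action taken on a reported image under a two-strike policy."""
    if reported_category not in PROHIBITED_CATEGORIES:
        return "no action"
    # Each upheld report counts as a strike against the account.
    account["strikes"] = account.get("strikes", 0) + 1
    if account["strikes"] == 1:
        return "warning issued"
    return "account disabled"
```

Note that in this model the decision turns entirely on the category a human reviewer assigns; there is no automated scan of the image itself, matching the article's point that PhotoDNA is the only automated filter.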
Read the rest of the article at loveartnotpeople.org