Making Hate Count For Something
By Brett McKenzie on Feb 20, 2018
One Club Members POSSIBLE battle hate on Twitter
While social media has been a fantastic tool for connecting people and viewpoints from all across the globe, there is also an unfortunate and potentially dangerous underbelly. Platforms such as Twitter and Facebook have the power to amplify voices, and sometimes those voices are set on demeaning, insulting and even threatening individuals and groups due to their race, religion, sexual orientation and more.
One Club for Creativity Corporate Members POSSIBLE are looking to tackle this serious issue by tapping into their social media ingenuity to create #WeCounterHate, a tool that converts objectionable tweets into donations for Life After Hate, a charity that assists people in leaving hate groups and gangs.
We had a chance to speak with the team at POSSIBLE about how the campaign came to be, what they learned about combatting online extremism, and what's in store for the future.
The past few years seem to have brought a darker tone to Twitter and other social media channels. What were some of the conversations you were having amongst yourselves like, the ones that led to this project?
Matt Gilmore, Creative Director: The thing with Twitter, and social media in general, is that it’s very much a double-edged sword. It can be an amazing force for good and positivity in the world, but then there’s the darker flip side to that. Those that seek to spread hate can use it just as effectively. In fact, social media has given those that seek to spread hate a megaphone more powerful than any in human history. Those terrible messages can reach the minds of millions of people in seconds.
Shawn Herron, Creative Director: We had seen examples of things people around the world were doing in an attempt to blunt the power of hateful acts by making them result in a positive outcome. There’s a neo-Nazi march that’s held in a small town in Germany every year. And, while the townspeople can’t stop the marchers from exercising their right to free speech, they can treat the march as a walkathon that raises money for a charity that helps people exit hate groups. It’s sort of an involuntary walkathon.
So, the question became, what can we do at scale that could empower everyday people in a similar way? How can anyone, anywhere, participate in a simple but profound way? The answer was WeCounterHate.
Matt: The magic happens when someone looking to retweet hate speech sees our reply stating that if they do it’ll result in a donation being committed to a non-profit fighting for equality, diversity and inclusion. That’s how we give pause and hopefully have them reconsider retweeting and ultimately slow the spread of hate.
How did the idea to track and tag tweets come about?
Jason Carmel, Chief Data Officer: The team arrived fairly early on at the conclusion that we would need some sort of natural language processing technology to parse the insane volume of conversation and help us identify those messages that met the criteria for hate speech. That being said, this specific application of machine learning is not something that we ever thought we’d be building at our agency. Defining a conceptual framework for hateful language and turning that into a usable set of classifiers was very much a learning process. It may seem counterintuitive, but our lack of background in this specific application made it less painful to fail and forced our approach to be iterative. We also knew that while we hoped the machine would do a lot of the heavy lifting, we wanted final moderation and decision making from an actual human being. Having a person at the end of the journey to give final approval gave us a bit more freedom to experiment with the AI.
Projects such as these often have unexpected challenges, things you never thought about until you were deep in the mix. What were yours, and how did you overcome them?
Jason: We had a fairly naïve and narrow definition of hate speech at the beginning, and that evolution has been a fascinating challenge. None of us on this team is particularly hateful, so we were only able to draw on a general understanding of hate speech when we trained the machine initially. It’s been truly eye-opening to work with Life After Hate, our partners in the @WeCounterHate campaign, because of the direct and personal knowledge they have of the space. We were able to work with former white supremacists to help us make our machine smarter at identifying hate speech in ways that we would have never considered.
Ray Page, EVP, Executive Creative Director: We definitely have a deeper understanding of the manifestation of hate speech now, and of the path by which an extremist comes to build an identity around hate. It was definitely an illuminating experience to work with former far-right extremists. Trying to understand their backgrounds, experiences and perspectives has been jarring and uncomfortable at times. But, I firmly believe when you’re out of your comfort zone something special will happen. And, it did. Working on this project has been a life-altering experience and a career highlight by far.
"Trying to understand their backgrounds, experiences and perspectives has been jarring and uncomfortable at times. But, I firmly believe when you’re out of your comfort zone something special will happen."
Your algorithms scour for hate, and then people decide which tweets should be tagged. Tell us more about this human element, since this has become a factor with policing social media.
Ray: When we first jumped in to start identifying hate speech online it was quickly apparent there was a long road ahead of us. Not all hate speech is created equal. Sure, there are forms of hate speech that are undeniably an attack against a person or group based on religion, race, gender, disability or sexual orientation. The challenge is in the sea of grey area. Context is everything.
Our machine learning tool can flag hate speech with upwards of 80% accuracy. But you still want a human to validate, because hatred can be extremely subtle and subjective.
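That flag-then-verify pattern — a model surfaces likely hate speech above a confidence threshold, and only a human reviewer's decision triggers action — can be sketched in a few lines. This is a minimal illustration, not POSSIBLE's actual system: the keyword scorer, class names and 0.8 threshold are all assumptions standing in for their trained NLP model.

```python
# Sketch of a machine-flags / human-validates moderation pipeline.
# The scorer below is a toy stand-in for a trained classifier.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ModerationQueue:
    """Tweets the model flags wait here; only human-approved
    items are ever acted on (countered with a reply)."""
    threshold: float = 0.8                      # model confidence needed to flag
    pending: List[str] = field(default_factory=list)
    confirmed: List[str] = field(default_factory=list)

    def triage(self, tweet: str, score: Callable[[str], float]) -> None:
        # The machine does the heavy lifting: flag high-confidence hits.
        if score(tweet) >= self.threshold:
            self.pending.append(tweet)

    def review(self, tweet: str, is_hate: bool) -> None:
        # A human makes the final call on every flagged tweet.
        self.pending.remove(tweet)
        if is_hate:
            self.confirmed.append(tweet)

def toy_score(tweet: str) -> float:
    # Stand-in for a real NLP model's hate-speech probability.
    return 0.9 if "hate" in tweet.lower() else 0.1

queue = ModerationQueue()
queue.triage("I hate group X", toy_score)       # flagged for human review
queue.triage("lovely weather today", toy_score) # below threshold, ignored
queue.review("I hate group X", is_hate=True)    # human confirms
```

The key design point the team describes is that the model never acts on its own: an 80%-accurate classifier is good enough to narrow the stream, but the final moderation decision stays with a person.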
How successful has this project been so far?
Jason: It’s still early days, but we’ve been really pleased at the results thus far. We’ve seen retweets of hate reduced by about 40%. Around 20% of the posts we respond to delete the hate speech altogether. For us, that’s the best outcome.
Tell us more about Life After Hate. What was it like partnering with them?
Matt: Early on we had a long list of possible partners that we thought might be a good fit to work with and benefit from what we were building. Through many conversations and late-night debates the list got shorter. Eventually, Life After Hate was the only organization left on the list. They just so clearly align with what we’re doing. Knowing that every dollar committed when someone retweets hate is going to an organization that’s fighting hate with compassion is extremely powerful. It closes the loop in such a perfect way.
Shawn: As we’ve built a relationship with them, they’ve done an amazing job of getting into the weeds with us. As “formers,” the deep knowledge they have has proven to be incredibly valuable. We’ve had working sessions with them and have started using what they know to help inform and refine our AI capabilities. So now we can more reliably find the hate speech whose spread we’re trying to slow.
They also helped open our eyes to opportunities throughout the platform to give people living a life of hate an off-ramp. We can present them with ways to exit the path they’re on with readily available resources. So we’re not just an anti-hate platform, we’re also a helping hand.
"Knowing that every dollar committed when someone retweets hate is going to an organization that’s fighting hate with compassion is extremely powerful. It closes the loop in such a perfect way."
What's next, both for this project and for yourselves?
Jason: From a technical perspective, we’re hoping to further explore image recognition, since a lot of hate speech is conveyed in meme format. We’re also collecting a lot of data about hate speech itself and are considering ways to use that to respond to hate speech, to understand how it evolves linguistically, and to track how it travels among social groups.
Ray: We’re also leaning into the organic awareness the platform has received in recent days. Frankly, we didn’t anticipate the interest and hand-raising from other organizations who are offering help to evolve the platform. It’s a true testament to how humanity wins every time.
Oh, and we’re also launching an influencer campaign in the coming weeks and building out more campaign elements to help deepen our support base.
One Club for Creativity Members get featured here on the One Club website and across our social media channels. Have a new project you'd love to share? An upcoming exhibition and you'd like us to help spread the word? Drop us a line at firstname.lastname@example.org. Not yet a Member? Join today!