Triggr is a third-party reporting tool for social media sites. It scrapes a user’s social media, flags the messages containing hate speech, and lets the user generate an aggregated report of the abusive messages to submit to either law enforcement or the platform’s customer service team.
Currently, on any social media platform (e.g. Facebook), if a user receives multiple abusive messages, each message must be reported individually, and each time the user has to complete a short questionnaire explaining the abuse. The benefit of our project is that users can report multiple messages in a single submission, which we hoped would build a stronger case.
This project was a team effort between three other students at Cornell Tech and me. I took on the following roles during this process:
We began by conducting user research in the form of surveys, qualitative interviews and contextual inquiries.
The survey was intended to give us a clearer picture of how often and where cyberbullying occurred, and to whom and by whom. One of the most interesting takeaways was that 61.7% of respondents said that they knew who was cyberbullying them.

Following this, I conducted three qualitative interviews with people who had been cyberbullied in different ways: mass trolling, cyber stalking, and cyber harassment.
"I wish I could just give my phone away for a week to have someone scrub it clean."
"While we shift blame on to platforms to fix it, this is actually a human glitch."
After affinity mapping the responses from our interviews and collating our design requirements, we found that a solution that could help address cyberbullying was an app that detected toxic messages on users' social media accounts and allowed them to report those messages. This brought us to the first iteration of the product: a paper prototype.
How might we create tools to support likely and actual targets of cyber hate and cyber harassment?
"I wish I could just give my phone away for a week to have someone scrub it clean."
"While we shift blame to platforms to fix it, this is a human glitch."
Our final product was an app that scrapes the user's social media profile for abusive messages using the Perspective API and allows the user to report all of them at once.
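As a rough sketch of how a single scraped message could be scored, the example below calls the Perspective API's comments:analyze endpoint and reads back the TOXICITY summary score. The score_toxicity helper, the placeholder API key, and the English-only language hint are illustrative assumptions, not our production code.

```python
import requests

# Public Perspective API endpoint for analyzing a comment's toxicity.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # placeholder; supply your own Perspective API key


def score_toxicity(message: str) -> float:
    """Return the TOXICITY summary score (between 0 and 1) for one message."""
    body = {
        "comment": {"text": message},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(
        PERSPECTIVE_URL, params={"key": API_KEY}, json=body, timeout=10
    )
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

Messages that score highly would then be surfaced to the user and collected into the aggregated report.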
Questions from user testing
This was our first high-fidelity prototype, created using Figma. The goal of this home screen was to allow users to see which messages had been flagged as positive, negative, or 'unsure'.
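To make the three buckets concrete, here is a small sketch of how scored messages might be sorted into the positive, negative, and 'unsure' groups shown on the home screen. The 0.3 and 0.7 thresholds and the bucket_messages helper are illustrative assumptions rather than the exact values we used.

```python
# Illustrative thresholds: high scores are treated as negative (abusive),
# low scores as positive, and everything in between as 'unsure'.
NEGATIVE_THRESHOLD = 0.7
POSITIVE_THRESHOLD = 0.3


def bucket_messages(scored_messages):
    """Group (message, toxicity_score) pairs into the three home-screen buckets."""
    buckets = {"positive": [], "negative": [], "unsure": []}
    for message, score in scored_messages:
        if score >= NEGATIVE_THRESHOLD:
            buckets["negative"].append(message)
        elif score <= POSITIVE_THRESHOLD:
            buckets["positive"].append(message)
        else:
            buckets["unsure"].append(message)
    return buckets


# Example with made-up scores:
print(bucket_messages([("great post!", 0.05), ("you idiot", 0.92), ("hmm, ok", 0.55)]))
```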
What we learned from user testing here was that users tended to spend more time reading through the negative messages, which caused them to feel re-traumatized. This led us to improve the design further so that it initially shows only the flagged accounts, not the messages themselves.
This led us to our final iteration. We also modified the color scheme: we felt the original design had a daunting feel to it, whereas the goal of our product was to make users feel comfortable through a friendly, comforting design. The final design was built using HTML, CSS, and Bootstrap.