Triggr

MOBILE
How might we create tools to support likely and actual targets of cyber hate and cyber harassment?

Product

Triggr is a third-party reporting tool for social media sites. It scrapes a user’s social media accounts, filters out messages containing hate speech, and lets the user generate an aggregated report of all the abusive messages to submit to law enforcement or the platform’s customer service team.

Motivation

Currently, on most social media platforms (e.g. Facebook), a user who receives multiple abusive messages must report each one individually, completing a short questionnaire about the abuse every time. Triggr lets users report many comments at once, which we hoped would build a stronger case.
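The mass-report idea can be sketched as a small aggregation step. This is an illustrative sketch only: the fields (author, text, toxicity score) and report layout are assumptions for the example, not the product’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class FlaggedComment:
    author: str
    text: str
    toxicity: float  # assumed scale: 0.0 (benign) to 1.0 (abusive)

def build_report(comments, platform="Facebook"):
    """Collect every flagged comment into one plain-text abuse report."""
    lines = [f"Abuse report for {platform} ({len(comments)} comments)"]
    # Most toxic first, so a reviewer sees the worst abuse immediately.
    ranked = sorted(comments, key=lambda c: c.toxicity, reverse=True)
    for i, c in enumerate(ranked, 1):
        lines.append(f"{i}. @{c.author} (toxicity {c.toxicity:.2f}): {c.text}")
    return "\n".join(lines)
```

A single report like this replaces one platform questionnaire per comment with one submission covering them all.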

Role

This project was a team effort with three other students at Cornell Tech. I took on the following roles:

  • Product Manager
  • User Researcher
  • UX Designer
  • Prototyper

Research

We began by conducting user research in the form of surveys, qualitative interviews and contextual inquiries.

The survey was designed to quantify how often, where, and between whom cyberbullying occurred. One of the most interesting takeaways was that 61.7% of respondents said they knew who was cyberbullying them.

Following this, I conducted three qualitative interviews with people who had been cyberbullied in different ways: mass trolling, cyber stalking, and cyber harassment.
"I wish I could just give my phone away for a week to have someone scrub it clean."
"While we shift blame on to platforms to fix it, this is actually a human glitch."
Our user research was summarized into the following two personas:

Design

Paper Prototype

After affinity mapping the interview responses and collating our design requirements, we converged on a solution: an app that detects toxic messages on a user’s social media accounts (using the Perspective API for toxicity scoring) and lets them report those messages in bulk. This brought us to the first iteration of the product, a paper prototype.
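The toxicity-detection step can be sketched against Google’s Perspective API, which the final product used. This is a minimal sketch following the public API documentation; `PERSPECTIVE_API_KEY` is a placeholder, and error handling and rate limiting are omitted.

```python
import json
import urllib.request

# Perspective API endpoint; the key is a placeholder, not a real credential.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=PERSPECTIVE_API_KEY")

def toxicity_payload(text):
    """Build a request body asking only for the TOXICITY attribute."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,  # ask the API not to retain sensitive messages
    }

def score_comment(text):
    """POST one comment and return its 0-1 toxicity summary score."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(toxicity_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

Setting `doNotStore` matters here because the messages being scored are, by definition, sensitive.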

We ran four user-testing sessions with this paper prototype; here are some of the questions we received:
  • What does 75%, 25% on Screen 3 mean? Is it a confidence interval of accuracy? Or is it a percentage of hate speech within the comment?
  • Can I add to report without selecting anything first?
  • Why are comments labeled as “flagged” if percentages are associated with all comments?

Iteration 2

This feedback led to our next design iteration, shown on the right, which was created in Figma.

The goal of this home screen was to allow users to see which messages had been flagged as positive, negative, or 'unsure'.

User testing here showed that users tended to spend more time reading through the negative messages, causing them to feel re-traumatized. This led us to refine the design further: the app would initially show only the flagged users, not the messages themselves.

Final Design and Takeaways

This led us to our final iteration. To make using the product a positive emotional experience, we reworked the positive-messages section into 'Positive Reflections'. The new design accounts for situations where the user has no new positive messages to read; the page instead updates with a stream of videos and memes based on the user's likes.

We also recognized that machine-learning sentiment analysis has limits and can produce false positives and false negatives, so we included an additional section where the user can review messages whose negativity score falls below a certain threshold. These corrections would let the algorithm improve and adapt to the user's needs over time.
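The routing described above (confidently negative messages go straight to the report, ambiguous ones go to the review section) can be sketched with two thresholds. The 0.8 / 0.5 cutoffs are made-up values for the sketch, not ones the team actually tuned.

```python
# Illustrative thresholds on a 0-1 negativity score.
NEGATIVE_CUTOFF = 0.8  # confidently toxic: goes into the report queue
UNSURE_CUTOFF = 0.5    # ambiguous: surfaced in the review section

def route_message(score):
    """Map a message's negativity score to a section of the app."""
    if score >= NEGATIVE_CUTOFF:
        return "flagged"
    if score >= UNSURE_CUTOFF:
        return "review"    # user confirms or dismisses the flag
    return "positive"
```

The user's confirm/dismiss decisions on the "review" bucket are exactly the labeled examples a model would need to adapt over time.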

Beyond the changes that came out of testing Iteration 2, we also modified the color scheme. The original palette felt daunting, and our goal was to make users feel comfortable with a friendly, comforting design.

This final design was created using HTML, CSS and Bootstrap.

Final design screens: Home Screen, Flagged Users, Review Report, Positive Reflections, Review Messages, Wanna Chat, Call Somebody.