Glossophobia

VIRTUAL REALITY
A virtual reality application to help users overcome their fear of public speaking.

Product

Our product aims to help people with glossophobia, the fear of public speaking, by using virtual reality to offer a form of exposure therapy. The participant is exposed to an auditorium where they have control over different parameters of the environment, such as the audience size, script difficulty, audience reactions, and timers. We have even incorporated elements of anxiety reduction therapy using breathing exercises and vibrations from the Oculus Quest controllers.

This tool sits in the health tech space: a virtual platform where people with glossophobia can work on their fear and prepare for real-world scenarios. The scope also extends to people who simply want more practice while preparing for speeches. The idea draws on the fact that stationary virtual reality experiences are immersive and realistic, and can offer a personal space for conquering a fear of public speaking.

Role

This project was a team effort between myself and three other students at Cornell Tech. I took on the following roles during this process:

Product Manager
UX/UI Designer
Environment Designer
Unity Developer

Research

I began by researching phobias in general to better understand how people cope with them and what treatment options are available. Below are some key takeaways gathered through online research.

  1. Exposure therapy is one of the most common treatment methods.
  2. The therapy is conducted by exposing the patient to progressively stronger stimuli.
  3. When people experience their triggers, they practice what’s called an “escape response”. The goal of exposure therapy is to remove/reduce that escape response.

The above research raised a few questions and requirements that we needed to consider before building our product.

  1. We need 'levels' in order to expose users to progressively stronger stimuli.
  2. If we want to reduce escape responses, do we still want to offer the option to 'remove' triggers in the middle of an experience?

Design

MVP

This research brought us to the first iteration of the product. Our MVP brought users into an auditorium modeled on the Bloomberg Auditorium at Cornell Tech. The primary trigger we addressed was having an audience: by default the auditorium was empty, and users could increase or decrease the number of people. They also had the option to turn on subtitles from a pre-programmed script to read from.

With this version, we experimented with using head tracking within the Oculus Quest to see if it was more intuitive to the user as they were scanning the room while speaking.

We found that keeping the head-tracking cursor in the immediate line of sight added too much cognitive load while reading the script. User testing also surfaced questions and comments that shaped our next set of features, such as:
  • "How do I know I'm getting better each time?"
  • "I pay too much attention to people's reactions when I'm speaking and it distracts me."
  • "Can I add my own script?"

Iteration 2

These questions drove our next iteration, prototyped in Figma: we sketched a way for users to track their progress across rounds and an option to turn audience reactions on or off.

Final Design and Takeaways

The final product was made using Unity and the Oculus Quest. Our first key step was making the environment far more realistic with textures and 3D character assets in the audience. We also gave users full control over the environment, including a settings screen where they can adjust the audience size, audience reactions, script difficulty, and whether to show an active timer for each round.

We also addressed the question of seeing improvement over time. Since this is currently a single-player experience, we introduced a self-rating scale: after each round, the user is asked to score themselves out of 5, and the score is saved to a Practice History screen where they can track not only their time for each round but also the confidence they are building.

Finally, we realized that while we let users control potential anxiety triggers for public speaking, we had no feature to help them calm down as the difficulty increased. So we added built-in anxiety reduction techniques, such as breathing exercises.





Start Menu

The user begins their experience at a Start Menu, where they can opt to change the settings. We've intentionally kept this menu translucent so that, wherever possible, the user can see changes taking place within the environment. Once they customize the settings, they click Ready to Play to begin the experience.

Customization

Using this screen, the user can control the following settings:

  • Increase or decrease audience size from empty, low occupancy, medium occupancy, to full.
  • Turn the timer on or off.
  • Turn the audience reactions on or off.
  • Change the difficulty level of the practice script.

Speech Practice

Once the user starts the game, they can click through lines of the script as they practice. If enabled, the timer runs and audience reactions appear at random; as the image on the right shows, reactions appear as emojis next to audience members. The microphone on the podium can also be picked up for a more authentic experience.

Self Evaluation

Once the user reaches the end of the script, they are prompted to rate their performance for the round on a scale of 1-5, so they can track their progress as they continue practicing.

Practice History

With this table, the user can see the details of every practice round they have done, including all of the customizations implemented for each round. This will help with isolating which triggers have the most impact on their performance.
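On the Unity side, the per-round record behind this table could be sketched as a simple serializable structure. This is a hypothetical sketch, not our actual implementation; the class and field names are illustrative:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: one entry per practice round, pairing the
// round's customizations with the user's 1-5 self-rating.
[Serializable]
public class PracticeRound
{
    public DateTime date;
    public float durationSeconds;   // shown as the round's time
    public string audienceSize;     // "Empty", "Low", "Medium", or "Full"
    public bool reactionsOn;
    public bool timerOn;
    public string scriptDifficulty;
    public int selfRating;          // 1-5, entered after the round
}

// The Practice History screen can render this list in order.
public static class PracticeHistory
{
    static readonly List<PracticeRound> rounds = new List<PracticeRound>();

    public static void Record(PracticeRound round) => rounds.Add(round);
    public static IReadOnlyList<PracticeRound> All => rounds;
}
```

Storing every customization alongside the rating is what makes it possible to isolate which triggers affect performance most.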

Anxiety Reduction Exercises

Finally, we've implemented an aspect of EMDR therapy to help the user with any anxiety they may have around public speaking. This form of therapy involves rhythmic right-left haptic feedback while the user focuses on thoughts that could reduce anxiety around specific situations. We delivered the haptic feedback through the Oculus controllers; it's hard to see in the video, but if you look closely the hands vibrate slightly. The exercise begins with a few rounds of deep breathing to help the user clear their head and focus on the situation at hand.
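In Unity, the alternating pulses could be sketched as a coroutine that swaps vibration between the two Touch controllers. This is a hypothetical sketch assuming the Oculus Integration's `OVRInput` API; the timings and strengths are illustrative, not our production values:

```csharp
using System.Collections;
using UnityEngine;

public class BilateralHaptics : MonoBehaviour
{
    [SerializeField] float pulseSeconds = 0.5f;  // length of each pulse
    [SerializeField] float amplitude = 0.6f;     // vibration strength, 0 to 1

    // Alternate left/right pulses for EMDR-style bilateral stimulation.
    public IEnumerator PulseRoutine(int rounds)
    {
        for (int i = 0; i < rounds; i++)
        {
            yield return Pulse(OVRInput.Controller.LTouch);
            yield return Pulse(OVRInput.Controller.RTouch);
        }
        // Make sure both controllers end silent.
        OVRInput.SetControllerVibration(0f, 0f, OVRInput.Controller.Touch);
    }

    IEnumerator Pulse(OVRInput.Controller hand)
    {
        OVRInput.SetControllerVibration(1f, amplitude, hand); // start buzzing
        yield return new WaitForSeconds(pulseSeconds);
        OVRInput.SetControllerVibration(0f, 0f, hand);        // stop
        yield return new WaitForSeconds(pulseSeconds);        // brief rest
    }
}
```

Note that `OVRInput.SetControllerVibration` stops on its own after a short timeout, so keeping each pulse brief, as above, avoids having to re-trigger it mid-pulse.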