Verizon Connect | Personality Quiz

Objective

Over all 10 weeks: To create a more personalized app experience

Over the last 8 weeks: To balance two counterpart concepts that together form an entirely new app experience

Over the last 3 weeks: To incorporate artificial intelligence

MY ROLE: UX Researcher & UX Designer

TIMEFRAME: June 4, 2018 - Aug 10, 2018

COMPANY: Verizon Connect

TEAM STYLE: Agile Scrum; 2 designers & 2 developers

TEAMMATES: Vi Nguyen (designer), James Lu (developer), Maanasa Ghantasala (developer)

TOOLS USED: RealTimeBoard, UX Lean Canvas, AEIOU, Illustrator, Photoshop, Sketch, InVision, Optimal Workshop, UserTesting, Google Forms, Survey Monkey, and PowerPoint

BIGGEST TAKEAWAY: Move fast, make mistakes, iterate again, and remember that everyone has a critical role in making it happen all over again!

 
 

Our Given Problem

During the first week, the team and I were quickly introduced to the current problem held by one of Verizon Connect’s products. For confidentiality, our team renamed the application to our project nickname, Animal Dashing, and the product will be referenced as such, or as “the App” on screenshots.

As most applications do at some point, Animal Dashing was seeing its user engagement and retention rates decline. Our team was tasked with increasing engagement through personalization and improving the overall application experience.

In order to really understand the problem, every intern on the team spent time with the current version of the product.

 
 

Initial Research & Ideation

Icon made by Smalllike from thenounproject


Each team member individually looked at the current application with these questions:

  1. What exactly is this product?

  2. What does the product offer?

  3. Who uses this product?*

  4. What do we (as users) think of this product?*

  5. Where can the product improve?

*This information also came from research decks given by the company, along with our individual research

 
A teammate adding one of the many ideas among the categorized cluster.


Along with research, everyone was tasked with coming up with ideas - including the developers! In total, the team generated about 30 different ideas.

I led the team through a clustering method, similar to affinity mapping, to help categorize the ideas and determine which had the most merit.

This was the first time I was the teammate with the most experience with the design process and had the opportunity to involve and teach others. It was especially rewarding to learn that my teammates had fun along the way!

 

Narrowing Down & Validation

After presenting our concepts to our professional team leads, we weighed their suggestions, as well as time constraints, skill limitations, and personal passion, to narrow down our concepts. In the end, we selected two ideas that would be counterparts to each other and together create one elaborate and elegant redesign of the current application.

To validate the two ideas, Vi and I walked through a Lean UX Canvas exercise. This also created basic scaffolding for our two concepts and would help guide us through development.

Example of UX Lean Canvas


 

Solution Overview

Screenshot of our proposal

As mentioned already, the team decided on two different concepts that would ultimately create one solution. One idea dealt with onboarding, as we discovered users were having difficulty getting started with the app and were quickly losing interest due to the lack of customization; this idea was named Personality Quiz. The other idea changed the current app’s structure from a static display to an interactive, dynamic feed; this idea was named Dynamic Dashboard.

I even created an ASCII logo for the code. I dedicated a whole morning (after my work was done, of course) to complete it. Now our fun project name will live on with the product forever!


This is how the name of the solution came to be. The Personality Quiz was heavily inspired by Nintendo’s Animal Crossing game, specifically the questions the player is asked at the beginning of the game to determine what their character looks like. The Dynamic Dashboard was inspired by social media feeds and their constant stream of updated information. Combine the inspirations and names, and you get: Animal Dashing. Even naming the project was a fun bonding moment for the team, and we felt a very personal tie to the project throughout the rest of the internship.

Now let’s get into how Personality Quiz came to be. To read more about Dynamic Dashboard, click this link.

 

Personality Quiz

User Empathy Research

Illustration from dlrtoolkit.com/aeiou/


To better understand our users and to incorporate previous research from Verizon, I created an AEIOU analysis of the users. An AEIOU analysis delves into the mind of the users at 5 different levels: Activities, Environment, Interactions, Objects, and User(s). To really gain insight, this method was applied to each user type individually. The other designer, Vi, participated in the analysis and helped complete a well-balanced amount of information for each user. This activity helped Vi and me gain more empathy for each user than before. Together, we discovered there were about 5 main user groups, along with additional, minor outlier groups. We used RealTimeBoard to work on the analysis cooperatively and to easily send the results to the in-house UX research team.



 

Branch Logic & Low-Fidelity Wireframes

Vi’s sketches combined with the logic from the AEIOU results

After understanding the users more, Vi and I created a quiz for users to walk through that would produce a “customized” end result. This “customized” result would be 1 of 5 possible options, each based on a user type discovered from the AEIOU. I continued to work on the logic and iterate on the language of the quiz while Vi started drafting screens.
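To make the branching idea concrete, here is a minimal sketch of how such a quiz could be structured as a small decision tree, where each answer either leads to another question or lands on one of the result types. The question wording, answer labels, and result names below are hypothetical placeholders, not the actual content of our quiz.

```python
# A minimal, hypothetical sketch of the quiz's branching structure: a decision
# tree where each answer either leads to another question or ends at a result.
QUIZ_TREE = {
    "question": "What is your main goal when you open the app?",
    "answers": {
        "Track vehicles in real time": {"result": "Type A"},
        "Review driver behavior": {"result": "Type B"},
        "Dig into reports": {
            "question": "How often do you export data?",
            "answers": {
                "Daily": {"result": "Type C"},
                "Occasionally": {"result": "Type D"},
                "Never": {"result": "Type E"},
            },
        },
    },
}

def run_quiz(node, pick_answer):
    """Walk the tree, calling pick_answer(question, options) until a result is reached."""
    while "result" not in node:
        choice = pick_answer(node["question"], list(node["answers"]))
        node = node["answers"][choice]
    return node["result"]

# Example: always picking the first option ends at "Type A".
print(run_quiz(QUIZ_TREE, lambda question, options: options[0]))
```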

At this point, we actually paused this part of the project to focus on the other half, but we knew our next steps. From here, we needed to validate our quiz results. However, in order to validate with accuracy, we needed more analytical skills than either Vi or I had. So we called in help from a fellow intern, Adrian, on another project, who could do just that.

 

Calling in for Help & User Testing

With binary questions, there was no “elbow” result in the number of clusters. That dip at #8 doesn’t count and we had to fix the survey fast.


Before we started validation, we asked our professional team leads for suggestions on how to target the testing. They recommended we validate our user groups first and determine whether the groups actually existed and were accurate. We found we could examine the groups through their preferred feature set from the app; users had different priorities, so their feature sets would theoretically differ. At the same time, we were able to simplify the feature list and add our newly generated features to the survey (further explained on the Dynamic Dashboard page). This way, the results could be applied to the current version of the app (by removing the new features’ results) and to Dynamic Dashboard, which we were working towards in parallel.

3 clusters exist! Very exciting to see, and it told us we needed to focus on 3 frequent user groups.


This turned into a huge learning lesson. Initially, we sent out a survey with binary questions, asking participants which features they considered a priority. The results were very cluttered and produced no distinguishable clusters of users. Luckily, we quickly converted the survey into a Likert-scale format, asking participants their opinion of every feature. Very little time was wasted, thanks to the determined work of our helpful extra intern and the team, and the new survey provided more accurate and relatable results.
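For the curious, here is a rough sketch of the kind of clustering check described above, assuming k-means with the elbow method over Likert-scale responses. The library choice (scikit-learn) and the randomly generated data are my assumptions for illustration, not a record of the actual analysis.

```python
# Sketch of clustering Likert-scale survey responses and looking for an "elbow".
# The data here is randomly generated for illustration, not our survey results.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 120 participants x 12 features, each rated 1-5 on a Likert scale (fake data).
responses = rng.integers(1, 6, size=(120, 12))

inertias = []
for k in range(1, 10):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(responses)
    inertias.append(model.inertia_)

# With binary (0/1) answers the inertia curve stayed nearly flat, giving no clear
# elbow; with Likert data the drop in inertia levels off around the true k.
for k, inertia in enumerate(inertias, start=1):
    print(f"k={k}: inertia={inertia:.1f}")
```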

 

Qualitative & Quantitative Face-off

Though we knew there were 3 definitive user groups, we had two small issues. According to another research intern, Katie, and her work, there actually existed another user group, henceforth called “group E,” that we were not able to detect. We knew this based on the demographics of our survey participants: there were simply not enough participants from group E to make a difference in our team’s results. There was also the question of what happens when a user doesn’t fall into any of the cluster-based user groups.

Vi and I dedicated set time to solving this riddle. We decided the quiz could incorporate both group E, since it was already validated to exist through other research, and the doesn’t-fit-anywhere user. For the latter, we used the average results from the clustering analysis to create a “generic” result that would serve as a one-size-fits-all answer. To best communicate the layout of questions, we created a visual of the branching logic and how each question led to specific user groups.

So much mental effort was put into such a seemingly simple quiz. It was more like Pandora’s box!

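As a small illustration of the “generic” fallback, here is a sketch that averages per-cluster feature priorities (the cluster centroids) into a one-size-fits-all ordering. The feature names and numbers are hypothetical placeholders, not our actual cluster results.

```python
# Sketch of deriving a "generic" one-size-fits-all result by averaging the
# per-cluster feature priorities (centroids). All values here are placeholders.
import numpy as np

features = ["live_map", "alerts", "reports", "messaging", "maintenance"]

# Each row is one cluster's centroid: the mean rating its members gave each feature.
centroids = np.array([
    [4.6, 3.1, 2.2, 3.8, 1.9],   # cluster 1
    [2.4, 4.7, 3.9, 2.1, 2.8],   # cluster 2
    [3.0, 2.5, 4.8, 2.9, 4.1],   # cluster 3
])

generic = centroids.mean(axis=0)
ranked = sorted(zip(features, generic), key=lambda pair: pair[1], reverse=True)

print("Generic layout (highest priority first):")
for name, score in ranked:
    print(f"  {name}: {score:.2f}")
```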

Learning from this experience: with more time and resources, we should have sent another survey that reached more diverse demographics and a larger number of participants. More diverse results would have given us more confidence in our findings, as well as confirmed how prominent group E really was.

 

Final Look: Personality Quiz

Intro screen. Users would see this when they launch the app for the first time.

One ideated level of customization was choosing a favorite color, which would in turn change the app’s accent color.

Confirmation page so users know the quiz has been completed and that their answers will create a change in their app.

 

Next Steps: Validation

For Animal Dashing’s Personality Quiz, the concept needed user testing validation. A study should be performed to rate how satisfied users were with their results, and to determine whether the project met its objective of increasing user retention and engagement. Select users would receive the beta version and experience it for the duration of the study; at the end, they could choose to keep the beta version or revert back. This study could be done in two parts:

PART ONE

  • Users: Current

    • Current users will already have experience with the app and will notice the change created by the Personality Quiz results

    • New users are excluded, as they would not have enough experience with the app to give immediate (and accurate) satisfaction ratings

  • Survey: target questions on the satisfaction and appeal of the Personality Quiz

    • “From 1 - 5 … How satisfied were you with your results?”

    • About 5 - 7 questions total, keeping the survey low effort for users to complete

  • Length: 2 weeks for response time

    • Possibly add a promotion for a discount or some other monetary incentive

PART TWO

  • Users: New & Current

    • New users will be introduced to the idea of being in control of settings from the beginning and will be more likely to tinker with them

    • Current users will most likely be more familiar with the app’s features than new users, possibly affecting their tendency (or lack thereof) to change the settings; they are also most likely to give honest feedback, based on previous survey interactions

  • Observations: record in-app interactions

    • How often users return to the app (sketched below)

      • Targets the main objective

    • How long users keep Quiz-generated results

    • How many times users change settings

    • Which features are being reordered

    • How many users kept the beta version vs. reverted back

  • Length: 3 months

    • Enough time for both daily activities and anomalies to be recorded
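To make the Part Two observations concrete, here is a hypothetical sketch of how “how often users return to the app” could be summarized from simple app-open logs. The event format, sample data, and study window are assumptions for illustration only, not real telemetry.

```python
# Hypothetical sketch: summarize return frequency from timestamped app-open events.
from collections import defaultdict
from datetime import date

# (user_id, date the app was opened) — fake sample log for illustration.
app_opens = [
    ("u1", date(2018, 8, 1)), ("u1", date(2018, 8, 2)), ("u1", date(2018, 8, 9)),
    ("u2", date(2018, 8, 1)), ("u2", date(2018, 8, 20)),
    ("u3", date(2018, 8, 3)),
]

opens_per_user = defaultdict(set)
for user, day in app_opens:
    opens_per_user[user].add(day)

study_days = 31  # length of the observation window, in days

for user, days in sorted(opens_per_user.items()):
    active_days = len(days)
    print(f"{user}: active on {active_days}/{study_days} days "
          f"({active_days / study_days:.0%})")
```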


Thank you for reading all the way through! If you have any questions or need clarifications due to the confidentiality, please email me at courtney.a.storm@gmail.com