Verizon Connect | Dynamic Dashboard

Objective

Over all 10 weeks: To create a more personalized app experience

Over the last 8 weeks: To balance two counterpart concepts into an entirely new app experience

Over the last 3 weeks: To incorporate artificial intelligence

MY ROLE: UX Researcher & UX Designer

TIMEFRAME: June 4, 2018 - Aug 10, 2018

COMPANY: Verizon

TEAM STYLE: Agile Scrum; 2 designers & 2 developers

TEAMMATES: Vi Nguyen (designer), James Lu (developer), Maanasa Ghantasala (developer)

TOOLS USED: RealTimeBoard, UX Lean Canvas, AEIOU, Illustrator, Photoshop, Sketch, InVision, Optimal Workshop, UserTesting, Google Forms, Survey Monkey, and PowerPoint

BIGGEST TAKEAWAY: Move fast, make mistakes, and iterate again; everyone has a critical role in making it happen all over again!

 
 

Our Given Problem

During the first week, the team and I were quickly introduced to the problem facing one of Verizon Connect’s products. For confidentiality, our team renamed the application Animal Dashing, and the product will be referenced as such.

As most products do at some point, Animal Dashing was seeing falling user engagement and retention rates. Our team was tasked with increasing engagement through personalization and improving the application’s experience.

In order to really understand the problem, every intern on the team spent time with the current version of the product.

 
 

Initial Research & Ideation

Icon made by Smalllike from thenounproject

Individually, the team looked at the current application with these questions:

  1. What exactly is this product?

  2. What does the product offer?

  3. Who uses this product?*

  4. What do we (as users) think of this product?*

  5. Where can the product improve?

*This information also came from research decks given by the company, along with our individual research

 
A teammate adding one of the many ideas among the categorized clusters.

Along with research, everyone was tasked with coming up with ideas - including the developers! In total, the team generated about 30 different ideas.

I led the team in a clustering method, similar to affinity mapping, to help categorize the ideas and determine which had the most merit.

This was the first time I was the teammate with the most experience with the design process, and I had the opportunity to involve and teach others. It was especially rewarding to learn that my teammates had fun along the way!

 

Narrowing Down & Validation

After consulting with our professional team leads, we weighed their suggestions as well as time constraints, skill limitations, and personal passion in order to narrow down our concepts. In the end, we selected two ideas that would act as counterparts to each other, together creating one elaborate and elegant redesign of the current application.

In order to validate the two ideas, the team’s designers, Vi and I, walked through a Lean UX canvas. This also created basic scaffolding for our two concepts and would help guide us through development.

Example of UX Lean Canvas

 

Solution Overview

Screenshot of our proposal

As mentioned already, the team decided on two different concepts that would ultimately create one solution. One idea changed the current app’s structure from a static display to an interactive, dynamic feed; this idea was named Dynamic Dashboard. The other dealt with onboarding, as we discovered users were having difficulty with the app and losing interest due to the lack of customization; this idea was named Personality Quiz.

I even created the ASCII logo for the code. I dedicated a whole morning (after my work was done, of course) to complete it. Now our fun project name will live on with the product forever!

This is how the name of the solution came to be. The Personality Quiz was heavily inspired by Nintendo’s Animal Crossing game, specifically the questions the player is asked at the beginning of the game to determine what their character looks like. The Dynamic Dashboard was inspired by social media feeds and their ever-constant update of information. Combine the inspirations and names, and you come out with: Animal Dashing. Even naming the project was a fun bonding moment for the team, and we felt a very personal tie to the project throughout the rest of the internship.

Now let’s break down the Dynamic Dashboard. To read more about the Personality Quiz, click this link.

 

Dynamic Dashboard

Reimagining the Application

We knew we wanted to change the structure of the application into something more dynamic, something constantly updated, and something that called users back to interact with it. In order to properly restructure the app, we started the process by investigating the users and the organization of the current app’s features.

 

User Empathy Research

Illustration from dlrtoolkit.com/aeiou/

In order to better understand our users and to incorporate previous research from Verizon, I created an AEIOU analysis of the users. An AEIOU analysis delves into the minds of users through five lenses: Activities, Environment, Interactions, Objects, and User(s). To really gain insight, we applied the method to each user type individually. The other designer, Vi, participated in the analysis and helped build a well-balanced set of information for each user, gaining empathy for each user along the way. Together, we discovered there were about five main user groups, with additional, minor outlier user groups. We utilized RealTimeBoard to work on the analysis cooperatively and to easily send the results to the in-house UX research team.
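To make the method concrete, here is a minimal sketch of how one user group's notes could be organized under the five AEIOU lenses. Every entry below is invented for illustration, since the actual analysis lived on a RealTimeBoard and the research data is confidential.

    # Hypothetical AEIOU notes for one invented user group.
    # None of these entries reflect actual Verizon research data.
    aeiou_notes = {
        "Activities":   ["Reviews daily trip reports", "Assigns vehicles to drivers"],
        "Environment":  ["Back office with a desktop monitor", "Occasionally on the road"],
        "Interactions": ["Phone calls with drivers", "Morning check of the dashboard"],
        "Objects":      ["Company vehicles", "Fuel cards", "Route maps"],
        "User(s)":      ["Fleet manager (one of ~5 main user groups)"],
    }

    for lens, notes in aeiou_notes.items():
        print(f"{lens}:")
        for note in notes:
            print(f"  - {note}")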



 

Organizing Content - Card Sort

Similarity matrix results of the card sort (feature names removed intentionally). While groupings were present, there was some bleeding over of certain features between groupings that would later be tested.

During our personal research into the current experience and the subsequent brainstorming, the team realized there were minor additions that a more dynamic version of the app could accommodate. Thinking ahead, we included these easy-to-add features in our next step: card sorting.

Altogether, we had about 30 features for our card sort. Optimal Workshop was utilized to send out an online, easy-to-fill form to participants; UserTesting was used to record their thoughts during the process. From the results, it was clear that users expected flexibility in navigating to content.
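For context on the similarity matrix shown above: Optimal Workshop computes it automatically, but the underlying idea is simple. Here is a minimal sketch, with invented feature names and sorts, of how such a matrix is derived from raw card-sort data.

    from itertools import combinations
    from collections import defaultdict

    # Hypothetical card-sort results: each participant's sort is a list of
    # groups, and each group is a list of feature (card) names.
    participant_sorts = [
        [["Trip History", "Fuel Usage"], ["Alerts", "Messages"]],
        [["Trip History", "Fuel Usage", "Alerts"], ["Messages"]],
        [["Trip History"], ["Fuel Usage", "Alerts", "Messages"]],
    ]

    # Count how often each pair of cards landed in the same group.
    pair_counts = defaultdict(int)
    for sort in participant_sorts:
        for group in sort:
            for a, b in combinations(sorted(group), 2):
                pair_counts[(a, b)] += 1

    # Similarity = share of participants who grouped the pair together.
    n = len(participant_sorts)
    for (a, b), count in sorted(pair_counts.items()):
        print(f"{a} / {b}: {count / n:.0%}")

Pairs that most participants grouped together form the dark clusters along the matrix diagonal; the "bleeding over" we saw corresponds to pairs with middling percentages.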

This was also an exciting time for me and the professional UX team. It was my first time using Optimal Workshop, while the UX team was trying to prove that a full version of Optimal Workshop would be worth the investment (note: the team already had full access to UserTesting, which was very convenient to work with). So, in a way, I was able to experiment with a really helpful resource while also improving the UX team’s resources for the long run. As far as I know, they moved forward with a full version of Optimal Workshop.

 

Confirming Layout - Treejack & Chalkmark

Result example of Chalkmark testing. This result showed that while most people were drawn to the middle icon on the navigation bar, the second icon was contributing to some confusion.

An example of a Treejack result. Original results not posted for confidentiality.

Taking full advantage of Optimal Workshop, we also confirmed our new layout and initial iconography through Treejack and Chalkmark tests.

The Treejack results gave us insight into how to tweak the content layout derived from the card sort, while the Chalkmark results proved that both titles and icons were required to best help users navigate the application.
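For readers unfamiliar with first-click (Chalkmark) testing, the analysis boils down to checking whether each participant's first click lands inside the intended target region of a screenshot. Here is a minimal sketch of that idea; the coordinates and region below are invented, and Optimal Workshop handles all of this for you.

    from typing import NamedTuple

    class Region(NamedTuple):
        x: int
        y: int
        w: int
        h: int

        def contains(self, px: int, py: int) -> bool:
            return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

    # Invented example: the middle icon on the navigation bar as the target,
    # plus each participant's first click as an (x, y) pixel coordinate.
    target = Region(x=120, y=640, w=60, h=60)
    first_clicks = [(140, 660), (55, 655), (150, 670), (300, 650)]

    hits = sum(target.contains(x, y) for x, y in first_clicks)
    print(f"First-click success: {hits}/{len(first_clicks)} ({hits / len(first_clicks):.0%})")

A low success rate, like the one the second icon produced in our test, is a strong signal that the icon alone is not communicating its destination.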

Bringing these results in front of other designers and researchers showcased how visual data can bring about more fruitful conversations than numeric data alone.

 

New Challenge - Add a Smart Bot

With 3 weeks remaining in the project, our manager tasked the team with adding artificial intelligence to the app. The aim was to make the app easier, smarter, and more reliable for users. Adding AI would also make the app more competitive on the market and create the basis for voice assistance. So, to best suit users’ needs, we brainstormed as a team where a smart bot could help most in the app.

We deliberated whether the bot should work behind the scenes (perhaps subtly structuring the application to the user’s preferences, or characterizing their driving patterns into more suitable features) or be directly accessible to the user for any question needing an immediate answer. The stakeholders chose the latter as the best option. As such, the team generated a brand-new in-app smart messaging system, which we demonstrated live in the final demo. The bot could answer questions about the user’s in-app history, the user’s vehicle, and the in-app services.
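The real implementation is confidential, but any bot like this ultimately has to route each question to the right answer domain before it can respond. Here is a hypothetical, deliberately simple keyword-matching sketch of that routing step; every intent, keyword, and question below is invented.

    # Keyword sets per answer domain. All of this is illustrative only.
    INTENT_KEYWORDS = {
        "user_history": {"trip", "history", "last week", "drove"},
        "vehicle": {"fuel", "battery", "tire", "maintenance"},
        "services": {"alert", "subscription", "feature", "dashboard"},
    }

    def route_question(question: str) -> str:
        """Return the answer domain whose keywords best match the question."""
        text = question.lower()
        scores = {
            intent: sum(keyword in text for keyword in keywords)
            for intent, keywords in INTENT_KEYWORDS.items()
        }
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "fallback"

    print(route_question("How far did I drive last week?"))    # -> user_history
    print(route_question("When is my next maintenance due?"))  # -> vehicle

A production bot would use far more robust natural-language understanding, but the separation into answer domains mirrors the three areas our bot covered.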

 

Consistency Matters

As we were wrapping up the project and the developers were working on the final bugs and implementations, I took the opportunity to go through all of the designs we had created and make them consistent. By consistent, I mean all of the designs followed a coherent design standard: every piece of text had the correct header font and size, every space between postings and blocks had a reason, and every image was the expected size.
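As a hypothetical illustration of what such a design standard can capture, here is a small token spec of the kind I checked the designs against. All names and values are invented placeholders; the product’s actual standard is confidential.

    # Invented design tokens: a single source of truth for type, spacing,
    # and image sizes, so every screen can be audited against one spec.
    DESIGN_TOKENS = {
        "type": {
            "h1":   {"font": "BrandSans", "size": 24, "weight": "bold"},
            "h2":   {"font": "BrandSans", "size": 20, "weight": "medium"},
            "body": {"font": "BrandSans", "size": 16, "weight": "regular"},
        },
        "spacing": {"card_gap": 16, "block_padding": 24},  # px between postings/blocks
        "image":   {"card_thumbnail": (343, 160)},         # expected (width, height) in px
    }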

This last contribution created a moment to bulletproof the designs. I had conversations about why we chose to have certain elements look the way they did. I caught bugs in the implemented version. I found broken page flows.

Through consistency, I brought the team onto the same page about all of our design choices, from the interactions to the visuals to the infrastructure.

 

Final Result: Dynamic Dashboard

While I can’t reveal the full final result, I can say that when we presented the designs to the higher executives in the final demo, they demonstrated that keeping relevant information at the forefront and allowing for personalization were critical for the next generation of the product.

Had we had a longer timeframe with this project, we would have loved to send a beta version out in the field to see what users thought of the new dynamism, especially compared to the older version.


Thank you for reading all the way through! If you have any questions or need clarification due to the confidentiality, please email me at courtney.a.storm@gmail.com