Project Melo

Smart and Personalized Assistant

Project Melo is a personal assistant that learns about you through your conversations. The more you share with your assistant, the more it can do for you.


Smart assistants, although technically usable, are not utilized to their full extent due to users' lack of sentimental attachment and motivation, and the limited personalization the assistants offer. Many products are currently on the market, but they feel mechanical and unnatural, leaving users unmotivated.


Current voice assistants feel like branded products and feel the same no matter who uses them. Unnatural ways of communicating also hinder users from adopting them in everyday life, and the information they provide is quite distant from what users can actually use.


We designed a personal assistant that users can grow attached to, one that provides the custom information each user needs and uses. Project Melo learns and adapts to what users need and want in order to give them greater value.

Timeline : Nov 2018 - Dec 2018
Team : Bo Kim, Stephanie Chun, Haewan Kim
My Role : Product Designer
  • Led Research Process
  • Translated research findings into insights
  • Created low-fi and high-fi wireframes

Product Overview

How does Project Melo Work?
Please turn on sound

Meet your assistant the way
you meet new people

First impressions matter. Introducing your personal assistant should feel as organic as getting to know a new person. The name of the assistant and its look and feel are initialized through your words. Now your assistant exists not only vocally but also visually.

Your assistant reflects your preferences, interests, and needs

Current voice assistants feel like branded products. With Project Melo, every assistant looks, talks, and behaves differently, reflecting how the user interacts with it. If you are a fun, enthusiastic, and witty person, your assistant will learn and adapt to your characteristics over time and interact with you in a similar manner.

Get personalized, custom recommendations and suggestions

Instead of asking what the weather is and receiving a series of digits, what if your assistant could tell you more? For instance, Melo assistants can analyze your photo album to show what you wore in similar weather, providing a more valuable and contextualized recommendation.

Your assistant understands your mood and condition

Your assistant can understand your sentiments, mood, and condition by conversing with you like anyone else you know. It can detect changes in your tone of voice, talking speed, and other unusual cues, and will try to relieve your concerns like a best friend.

Audience of Project Melo


First-time user

Sayid is not motivated to use the assistant because typing with his hands is much faster than accomplishing the same tasks with a voice assistant.

Why voice assistant like Project Melo?

Opportunity for a better relationship between user and assistant

Many companies have invested to create a ‘smart’ assistant, but their products currently do not deliver satisfying experiences for users.

Also, enhancing the user experience of a CUI (conversational user interface) and gathering extensive amounts of unique, personalized data from individual users can bridge the gap toward creating better AI and a new platform for interacting with machines.

Our Process

Understanding the voice assistant

In order to understand how current voice assistants are used, we individually used voice assistants (Google Assistant and Siri) for a week. From this experience, we wanted to examine when we were motivated to use the current voice assistant.

We discovered that hands-free, task-oriented interactions are the main motivators for using these assistants. Users were thus not conversing with the assistant but using it as an alternative keyboard. There were also limited opportunities for personalization, and the initialization process felt very product-like.

Pain points of voice assistants

Users' current motivations to use voice assistants are too weak.

We currently use voice assistants when we need hands-free interaction, but most of those tasks can be done just as easily without the technology.

Smart assistants are not utilized to their full extent due to a lack of sentimental attachment

It is hard for users to feel attached to smart assistants when they feel too much like branded products and their voices sound cold and unemotional.

Recommendations and suggestions are not personalized enough to deliver value

When users ask for information, the voice assistant either directs them to other apps or brings up general information they could find themselves in a couple of clicks and taps.

Users don't find the interaction with the voice assistant natural

While interacting with the voice assistant, it is hard to know when the conversation ends. And after each interaction, users have to call the voice assistant again to start another one, which people rarely do.

Identifying Design Goals

Make the conversation natural

Non-Verbal Cue

Make the assistant
Smart and Personalized

Mapping insights to product features

How might we make the interaction more natural?

Mimic how people interact with each other, such as having a clear indicator for the end of the conversation and seamless transitions in taking turns.

How might we increase motivation and sentimental attachment?

Have a character that users can talk to and that communicates non-verbal cues.
“I loved how it has a face. There is something I can talk to now.”
Create a unique, friendlier on-boarding process.
“Calling the name several times… felt like bringing this thing into existence.”
“Being able to name the assistant is definitely a personal touch.”

How might we increase usage of voice assistants?

Connect with personal data so that the assistant can deliver better recommendations and suggestions.
“I like that it suggested what I should wear. So ideally it could do much more, right?”


Understanding how we start a conversation
We conducted three rounds of conversations with strangers to investigate how people talk to each other when they meet for the first time. We then mapped out the aspects that made each interaction awkward or comfortable and natural, and created a scenario for the on-boarding process.

Then, through post-conversation interviews, we gained insights on when people felt comfortable or awkward, paying particular attention to the colloquial techniques we often overlook but that current voice assistants lack.

We discovered that a comfortable conversation with a new person requires seamless transitions in taking turns, a clear indicator for the end of the conversation, and the discovery of a common topic.
How should we converse with an assistant?
Using our drafted script, we conducted a role play over the phone, with one person acting as the smart assistant and the other as a user trying the smart assistant for the first time.

Our initial scenario attempts to build rapport by asking the user about their music taste. Then, by letting the user change the assistant's voice to that of a favorite singer, it communicates the assistant's personalization capabilities.

From testing, we learned that we should refine our scenario by setting clearer expectations for the user about the on-boarding process. We also discovered that altering the assistant's voice didn't feel like a natural introduction to the assistant's personalization capabilities, and decided to instead add a visual character to better communicate gestures and feedback.
Prototyping the on-boarding process
Iteration 1 on creating an avatar. The idea is that a basic avatar comes out of the box.
Iteration 2 on creating an avatar. Now, the background forms into an avatar every time its name is called.


Using the Wizard of Oz technique with our high fidelity prototype, we conducted three rounds of testing to evaluate the success of our design.

The goal of testing the initialization process was to analyze whether our solution increases interest and motivation for first-time users, and whether it potentially influences the way they talk to the assistant.

Design Decisions for Project Melo

Introduce an avatar

When we talk to each other, we gather lots of metadata from facial expressions, tone, and voice. Having an avatar lets users grasp the facial expressions that current voice assistants cannot convey.

“I loved how it has a face. There is something I can talk to now.”

Call the name to initiate

Saying the name a couple of times is currently a required step for most assistants to learn the user's voice and tone. However, we wanted to give the impression that the user is contributing to the birth of the assistant.

“Being able to name the assistant is definitely a personal touch.”
“Calling the name several times… felt like bringing this thing into existence.”

Integrate personal data and make it useful

When you want to know the temperature, what do you really need? You want to know how the temperature will affect you, not the actual number. So Melo provides you with what you need instead of just numbers.

Two buttons to make seamless conversation

In order to work around the potential technical limitations of voice assistants, we added certain details to help the user have a comfortable conversation. We added an edit button for the parts of the conversation where accurate recognition is essential. We also added an "I don't want to talk about this" button so the user can make a comfortable escape whenever they feel uncomfortable discussing or providing certain information.

Check out my other projects!


Key Echo