Google Look: turning the offline world into searchable spaces.

OVERVIEW

Look is an AI-powered search tool, pairing an earphone with an AR app, that lets you search for objects just by looking at them and saves your search history. You are essentially googling your surroundings with a smart assistant friend.

TIMELINE

Sep 2018 - Dec 2018 (Reworked)

CLIENT
MY ROLE

UX Design Lead

TEAM

3D architect, Content writer

FINAL DESIGN

Video Prototype

01. Look and ask

Look assists you as you explore your environment: searching and answering your questions in real time.

02. Smart Comparison

Users can simply ask their friend "Look" for personalized opinions based on their preferences and behaviors, instead of sorting through complex data from piles of search results themselves.

03. Search to create

By talking with Look, users can create their own search results based on what they see and hold. For example: "What recipes can I make with the ingredients I have right now?"

How to interact with Look?

You put on the I/O voice earphone piece. Say you are looking at two pots: the earpiece answers your questions as you shop, just as if a smart assistant were shopping alongside you.

You can ask the voice assistant questions while it learns your behaviors at the same time. Look helps you make decisions intelligently, going back through your previous search history to analyze it with you.

Understanding the problem: envisioning new tools for creators

The Google Daydream team was interested in creating seamless transitions between the real and digital worlds by uncovering new hybrid tools for creators, looking ahead to 2023. I decided to dive deeper into emerging technology, exploring and recording its potential beyond current use cases.

PROBLEM

Creating seamless transitions between the real and digital worlds with emerging technology

Who are Creators? Interviewing a group of people curious about making things

By interviewing experts in diverse fields who craft things, I came to understand how their backgrounds shape what they create with their hands. Interestingly, they all have one thing in common: before making a decision about researching, designing, or shopping, they want to see, feel, and touch things in real life. Searching online is not enough.

Searching 2D images is not enough; you want to see, feel, and touch things in reality before making a decision.


Immersive information gathering

"Researcher is like a detective, we search from details of your your everyday life. We want to meet the user to get guided through in physical world instead of reading papers."
-ADP Researcher


Searching 3D objects efficiently

"As a product designer, I wish I can search 3D objects without just going online. I want to see it and feel it before designing. "
-Lego Industrial Designer

Users struggle to get quick, meaningful responses

Summarizing the needs and goals of everyday creators who want contextual search in the 3D world, I identified the key touchpoints from the research: people are unable to search in real time around their needs. I decided to break these down into three use cases:

1. I want to search and compare offline and online shopping quickly.
2. I want to discover how to make new things by searching online.
3. I want to collect and share my search results more efficiently.

Ideation & Brainstorming

What if online search could be as natural as asking a close friend by your side?

Public Space

Imagine you are shopping in the market with Look. You find two different BBQ sauces, and at a glance they both look great. You want to know which one is better and more recommended. Look finds that the left one has better ratings in other people's reviews.

1. I shop freely with LOOK navigating.

2. LOOK compares the two BBQ sauces in my hand.

3. LOOK recommends the left one by pointing at the bottle.

Private Space

Next, imagine you are back home after grocery shopping. You look at the ingredients in the fridge and wonder what to make tonight. Look recognizes the context in front of you and guides you through the cooking journey.

1. I want to make a dinner by asking LOOK.

2. LOOK sorts out recipes based on my daily preferences.

3. I decide to cook scrambled sausages tonight, and LOOK guides me step by step.

Shareable information

Imagine you see something you are interested in, but it belongs to someone else, so you cannot search for it yourself. Since Look only recognizes objects it has access to, it asks the other person for permission, and they can quickly share the information to your bookmarks.

1. I like my friend's clothes and wonder if she can share a link with me.

2. She allows LOOK to share her clothes' information with me.

3. I receive the information and save it to my phone.

Voice Flow: taking it to the next level

I also explored how the AI can move naturally, express emotions to users in a natural way, and route its voice responses. Paired with the smart camera, the AI becomes capable and takes on a unique personality.

Like how you talk to your friend.

A friend who will help you choose wisely.

A friend who will teach you new skills.

A friend who will share information together.

The Look app helps you save your results.

Remember the pots we were looking at? The app can also act as an input. You can turn your phone into an album that saves your search results, since people prefer visual data over voice data. The app transcribes all of your voice data into text and visual hints.
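
To make this concrete, here is a minimal sketch, in Python, of how one saved search result might be modeled: a voice query stored as transcribed text alongside its visual hints. The names (SearchEntry, its fields, and the example values) are my own illustrative assumptions, not the actual app's data model.

    from dataclasses import dataclass, field
    from datetime import datetime

    # Hypothetical model of one saved search result in the Look app's album.
    @dataclass
    class SearchEntry:
        transcript: str        # the voice query, transcribed to text
        object_label: str      # what the camera recognized, e.g. "cast-iron pot"
        image_path: str        # snapshot kept as the visual hint
        links: list = field(default_factory=list)   # online links found for the object
        created_at: datetime = field(default_factory=datetime.now)

    # Example: saving the pot comparison from earlier in the flow.
    entry = SearchEntry(
        transcript="Which of these two pots is better for stews?",
        object_label="cast-iron pot",
        image_path="photos/pots.jpg",
        links=["https://example.com/pot-review"],
    )
    print(entry.transcript, "->", entry.object_label)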

Scan & Buy
By simply scanning the objects in front of them, users can quickly grab online links based on what they see, save them, and re-access them later.

Smart Assistant
The Look app assists your search journey seamlessly, without you having to pause to look things up online. Look curates your search results.

Save and Compare
Look helps you compare your search results when you ask the AI assistant in your Look earphone.

Results & Testing with Unity

New technology is always hard to test, so I worked through many online tutorials, learning IBM Watson and Vuforia in Unity to prototype image recognition driven by voice. I then brought the device to friends who had just had a baby at home, and we tested it in a real environment, acting out the scenario and observing everyone's responses.
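
Since the Unity/Vuforia side of the prototype lives in C#, the voice leg can be sketched on its own. Below is a minimal, illustrative Python snippet of how a spoken question could be transcribed with IBM Watson Speech to Text and paired with the object label Vuforia recognized; the API key, service URL, question.wav, and the hard-coded "pot" label are placeholders of mine, not the actual prototype code.

    from ibm_watson import SpeechToTextV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    # Authenticate against a Watson Speech to Text instance (placeholder credentials).
    stt = SpeechToTextV1(authenticator=IAMAuthenticator("YOUR_API_KEY"))
    stt.set_service_url("https://api.us-south.speech-to-text.watson.cloud.ibm.com")  # region URL; instance-specific in practice

    # Transcribe the user's spoken question (stand-in audio file).
    with open("question.wav", "rb") as audio:
        result = stt.recognize(audio=audio, content_type="audio/wav").get_result()
    transcript = result["results"][0]["alternatives"][0]["transcript"]

    # In the prototype this label would come from Vuforia image recognition in Unity.
    recognized_object = "pot"

    # Naive pairing of the question with the recognized object for the acted-out scenario.
    print(f"User asked {transcript!r} while looking at a {recognized_object}")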

Google Look successfully responded to her questions naturally, with fitting answers.

Object Recognition

Object + Environment

Object + Environment

Reflection on process

01

Don't be feature-driven; be user-driven

Voice user interfaces allow the user to interact flexibly with a system through voice commands and multi-modal gestures that are not limited to buttons.

02

Be proactive and collaborative in understanding technical capabilities with engineers

I kept trying to make things pixel-perfect before presenting to engineers. But the most important thing is to get the needs across first, then refine over multiple rounds.

Thanks for scrolling!  💛

Here is my contact information