Intro to the Google Assistant: Build Your First Action (Google I/O’19)

If you're new to developing for the Google Assistant, you've come to the right talk, and if you're an experienced Assistant developer, don't worry, we're going to tell you what's new. Our mission for the Google Assistant is to be the best way to get things done. That's an ambitious mission; it rhymes with "organize the world's information and make it useful and accessible," and just like Google Search's mission from 20 years ago, our mission for the Assistant requires a large and vibrant ecosystem of developers, and that's where all of you come in. Whether you're joining us here in the amphitheater at Shoreline, watching on the livestream, or catching this on YouTube later, this talk will tell you how you can make your experiences shine on the Google Assistant. If you're a content owner, this talk covers schema.org markup and templates and how you can make your content look great on the Assistant. If you're an Android developer (I hear there are a few Android developers here today), this talk covers how you can use App Actions to voice-enable your Android apps. If you're an innovator in this new generation of conversational computing, this talk covers Interactive Canvas and conversation actions: how you can use HTML, CSS, and JavaScript to build rich, immersive actions for the Assistant. And if you're among the few, the proud, the hardware developers at I/O, this talk will tell you about the new innovations in the smart home SDK. But before we do any of that, Naomi is going to tell us why now is the right time for you to invest in the Google Assistant.

We're going to start by going back in time. Think about when you first used a computer. For some of you it was the '80s: maybe you played a game, stored a recipe, or, if it was the mid-'80s, even used a word processor. For others it may have been the '90s: games were a little better, and you navigated with a mouse instead of the command line. About ten years after computers entered our homes, cell phones entered many of our hands, and by the mid-to-late '90s communication was more portable than it had ever been. It was still very early, though; remember when text messaging was a game of back and forth between the inbox and the sent folder? We've come a long way. Another ten years later, in about 2007, the first smartphones entered the market, and mobile computing exploded. You may notice a trend here: about every ten years there's a fundamental shift in the way we compute, from desktop to mobile and now to conversation. So what do these shifts mean for all of you as developers? You have a lot to think about, a lot to build on, and a whole lot to build with, because each new wave is additive to the one that came before. We're still clicking, we're still typing, we're still tapping, and yes, now we're also talking. We're looking for more assistance in our daily lives, and we're turning to the devices around us to get things done. Google's approach to this era of conversational computing is the Google Assistant: a conversation between the user and Google that helps them get things done. In addition to the Google Assistant, there are also Assistant-enabled devices like Google Home, Android phones, and more. And finally there's Actions on Google, our third-party platform, which lets developers
build their own experiences on the Assistant. It's an entirely new way to engage with your users as they use conversation to get things done, and it was announced on this very stage just three years ago. In just a few short years we've seen an incredible evolution in how users talk to the Assistant, and this presents opportunities for developers across the globe. Think back to those first use cases on conversational platforms: they were very simple and straightforward, limited to things like "turn on the music," "turn on the lights," "turn off the music," "turn off the lights"; simple commands that fulfilled users' very low and limited expectations. But there have been three big shifts in querying over the last couple of years. First, users are having longer conversations; query strings on the Assistant are about 20% longer than similar queries on Search. Second, they're more conversational: 200 times more conversational than Search, so queries are going from "weather 94043" to something like "do I need an umbrella today?", the way you might ask a friend, a family member, or a real-life assistant. And third, queries are action-oriented: 40 times more action-oriented than Search. Users aren't just searching for information; they're actually looking to get things done. They're finding that restaurant for Friday night, and they're booking the dinner reservation.

This evolution of the query is due to a couple of things happening simultaneously. First, the technology is improving: natural language processing and understanding improvements have decreased the word error rate, a key metric for speech recognition, to the point where it's now better than what humans can achieve. At the same time, the number of Assistant-ready devices has soared, which has turned this new way of computing into an ambient one: it's always there when we need it, no matter what environment we're in or what device we're on. It's magical, but it poses a new challenge for all of us: how do we reach the right user with the right experience in the right moment, all at the same time? So think through a pretty typical day and some of the touchpoints where the Assistant might be helpful. First, you wake up. Good start. If you're anything like me, you'd love to keep your eyes shut for an extra 20 seconds, but you also need to kick-start your day and find out where you need to be and when; the Assistant can help with that. Now you're waiting for the subway in a crowded, loud station, with a couple of moments of idle time before the train comes, maybe to pre-order your cup of coffee or buy your friend that birthday gift you've been meaning to send; the Assistant on your mobile or your watch can help in those moments as well. And finally you're sitting on the couch at the end of a long day. Your laptop or your phone is probably not too far away, but neither is your Google Home, and it's there to help you. Across these moments and more, Google is handling the orchestration required to deliver that personalized experience for the user with context-appropriate content, so you as the developer don't have to worry about which experience, which device, which manner; you can leave that to us. So what does this mean for developers? You have more ways than ever to be there for your users. You can reach users across the globe in over 19 languages,
across 80 countries, on over 1 billion devices, with over 1 million actions available today. But more than that, it's easier than it's ever been, and this is something we're really excited about. We're all balancing far too many projects for the number of hours in a day, so today we're going to talk about how you can get started whether you have an hour, a week, or even an entire quarter to build for the Assistant. We'll talk about how to use existing ecosystems as well as how to build something new for the Assistant, and we'll focus on four major pillars. First, we'll talk about how to use your existing content; this leverages what you're already doing in Search, so web developers, we'll be looking at you for this one. Second, we'll talk about how to extend your existing Android investments and leverage the continued investment you're making in your mobile apps; app developers, you'll want to pay attention to that section. Third, we'll talk about how to build something new for the Assistant; if you're an innovator in the conversational space, we'll share how to get started. And finally, hardware developers: if you're looking to control your existing device cloud, our smart home section will appeal to you. Within each section we'll talk about what it is and how to get started, but before we do, Dan is going to tee up a very sweet example that we'll use throughout the rest of the presentation.

So, a single unifying example that shows all the different ways you can use the Google Assistant. That gets me thinking about two things: one, I love s'mores, I have a sweet tooth; and two, I'm an engineer here in Silicon Valley, home of tech startups of all kinds. How can I combine my love of s'mores and my love of technology? Talking it over with my colleagues Brad and Naomi, we came up with a fictional example company that we can use to show all the different ways the Assistant can help you and your company: building a global brand through the Google Assistant, increasing your global sales, customer growth and acquisition, and even user re-engagement, like that very important metric of daily active users.

The first pillar is how you can leverage your existing content with the Google Assistant. Like many of your companies, S'more S'mores has a lot of existing content that's ready for the Assistant: they have a website, they have a podcast, and of course they have recipes, so we can all make that perfect s'more at our next bonfire. And just like you, they spend a great deal of time optimizing their site for Search, so we're going to talk about how they, and of course you, can extend those existing efforts and optimizations to the Google Assistant. Google has provided ways to optimize your content for Search since the '90s. We work hard to understand the content of a page, but we also take explicit cues from developers who share details about their site via structured data. Structured data is a standardized format for providing information about a webpage and classifying that page's content. For example, on a recipe page you can disambiguate the ingredients from the cooking time, the temperature, the calories, and so on. Because of this markup, we can provide users with richer content on the search results page, answers to questions, and a whole lot more, taking Google Search beyond just ten blue links. And over the last year we've been hard at work to
enhance the Google Search experience and enable developers to extend their content from Search to other Google properties, like the Assistant. For sites with content in popular areas like news, podcasts, and recipes, we have structured data markup to make your content available in richer ways on Search, and the same optimizations you make for Search will also help your content be discoverable and accessible on the Assistant. In some cases it's just standard RSS you've seen before, and that's the approach we've always taken: using industry standards and ensuring those optimizations are ready for Search and the Assistant too. I'm excited to announce two brand-new programs we're adding: How-to guides and FAQs. The additional optimizations you make for Search will soon yield a richer experience and an automatic extension to the Assistant. Let's dive into each.

How-to guides enable developers to mark up their how-to content and make it discoverable to users on both Search and the Assistant. What displays is a step-by-step guide to the user on anything from how to change a flat tire to, of course, how to make that perfect s'more. On the left here you can see a nice image-based preview of the how-to content on the S'more S'mores site; it lets the user engage with your brand further upstream in their journey, and it differentiates your result on the search results page. If you don't have images, don't worry, we have a text-based version of this feature as well. On the right you can see the full guided experience on a Home Hub device, again all powered by the same markup on the S'more S'mores site. The best thing about creating a step-by-step guide is that you don't actually have to be technical to do so. I know we're at I/O, and I/O is the developers' conference, but if you have one hour to dip your toes in the Assistant pool and you don't have a developer who can devote the time to adding the markup, don't worry: we have ways for you to get your content onto the Assistant as simply as using a spreadsheet. You can now combine your existing YouTube how-to videos, a simple spreadsheet, and the Actions console to get a guided how-to experience across many Assistant-enabled devices. So smoresmores.com now has two ways to get their step-by-step guide for the perfect s'more onto the Assistant: if they have a developer with some extra time, they can add the markup, or they can use a simple spreadsheet to extend their existing YouTube content.

Now let's switch gears a little. Think about how many times you turn to Search for answers to questions; maybe some of you are even doing it right now, and that's okay. Maybe you're trying to find the return policy of your favorite store, or whether there's a delivery fee for your favorite restaurant, or blackout dates for travel; the list goes on and on. Our FAQ markup enables a rich answer experience on Search, giving users answers directly from your customer service page, and the same optimization will then enable queries on the Assistant to be answered by the markup you've already added. It's easy to implement: when a user asks something like "what's S'more S'mores' delivery fee?" on the Assistant, Google will soon be able to render the answer from that same markup on your customer service page. Here are some developers that have already gotten started with FAQs and How-to guides, and we'd love to have you join us tomorrow at 10:30 AM to learn more about how to enhance your Search and Assistant presence with structured data.
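To make that concrete, here is a minimal illustration of what FAQ structured data on a page can look like. The page, question, and answer text are invented for the s'mores example, and the exact required fields are whatever the current structured data guidelines specify; treat this as a sketch rather than a complete, validated snippet.

```html
<!-- Hypothetical customer-service page on smoresmores.com -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is the S'more S'mores delivery fee?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Delivery is free on orders over $10; otherwise a $2 fee applies."
    }
  }]
}
</script>
```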
Of course, that talk will also be livestreamed, or you can catch it later on YouTube. So, as you've seen, there are three main ways that smoresmores.com, and you, can leverage existing content: first, you can ensure the hygiene of your structured data markup for podcast, news, or recipe content; second, you can add the new FAQ markup to your customer service site; or third, you can use a new template to bring your content to the Google Assistant. We're excited about the ways we're making it even easier for developers to extend existing content from Search to the Assistant, but we're also making it easier for companies to engage their existing ecosystems, like Android. So let's talk about App Actions.

All right, thank you. How about that sun, huh? Are you enjoying the sun at Shoreline? I can't see anything without these, so I'm going to go with this. Where are my Android developers again? Yes. Just like many of you, the S'more S'mores company has a popular Android app. They want to make it as easy to order s'mores as it is to enjoy them, but just like many of you, they face the high cost of driving app installs, coupled with the reality that users are using fewer and fewer apps each year. The sea of icons found on many users' phones might be a contributing factor: it's hard for users to remember your icon, much less find it. What we need is a new way for users to find and launch your Android apps, one that's focused more on what users are trying to do rather than what icon to click. Last year at I/O we gave a sneak peek at App Actions, a simple way for Android developers to connect their apps with the helpfulness of the Assistant, and with the Google Assistant on nearly 1 billion Android phones, App Actions is a great way for you to reconnect with your users. App Actions uses Google's natural language understanding technology, so it's easy for users to naturally invoke your application, and App Actions doesn't just launch your app: it launches deeply into your app, fast-forwarding users right to the good parts.

To help you understand this concept, let's walk through an example, of course using our S'more S'mores app. First, the traditional way. I find the S'more S'mores app in the sea of icons (does anybody see it?), then I select the cracker, then I choose a marshmallow, then I pick the chocolate, and of course a toast level, then I say how many I want (that's important too), and finally I review my order and confirm. That's a long intent-to-fulfillment chain: it's a long way from when I first had the desire for a warm, delicious s'more to when I got it successfully ordered, and that means there are opportunities for drop-off all along the way. Now let's look at what this looks like once S'more S'mores has enabled App Actions. First, notice I get to naturally invoke the app with just my voice: I can say "order one milk chocolate s'more from S'more S'mores," and immediately we jump right to the good part of the application, confirming that order. Notice we got all the parameters correct, and then we just confirm and we're done; it's a short path from intent to order. Of course, we didn't build App Actions just for S'more S'mores; we have a few partners that have already started looking at App Actions. For example, I can say to the Assistant,
"order a maple glazed donut from Dunkin' Donuts." Of course, I might need to work that off, so I can say "start a run on Nike Run Club," and I might want to settle that bet from last night by saying "send $50 to Naomi on PayPal."

So what enables this technology; what's going on under the covers? Foundationally, at Google we connect users who express some intent with third parties that can fulfill it, and App Actions is the mechanism you as app developers can use to indicate what your Android app can do. Each built-in intent represents an atomic thing a user could want to do, including all possible arguments, so you just implement that built-in intent and handle the arguments we pass to you. The cool thing about built-in intents is that they model all the different ways users might express an intent; for example, these are the ways users could say "start an exercise." Notice that as an app developer you don't need to handle all of that grammar complexity; we handle it for you, and you just implement the built-in intent. Speaking of which, let's look at what it takes for you as a developer to implement one. The first thing you'll do is open Android Studio and add an actions.xml file. On the second line there is ORDER_MENU_ITEM, the name of the built-in intent we're implementing for the S'more S'mores app. In the fulfillment line you'll notice a custom-scheme URL (you could use an HTTP URL as well); this just tells us where in the application we should fast-forward into. Then we map the arguments: menuItem.name is the argument name from the built-in intent, and our URL is expecting itemName. Finally, at the bottom, we provide some inventory: the kinds of things users might say for this application (for brevity, I put "the usual"). Now we just need to handle that in our onCreate function: very simply, we parse the itemName parameter out of the URL, check whether it's that identifier, and if so we pre-populate the UI with exactly what we want. It's a very simple integration that you can get done very quickly on the Assistant.

The good news is that you can build and test with App Actions starting today. We're releasing built-in intents in four categories: finance, food ordering, ride sharing, and fitness. If your app is in one of those categories, you can build and test right now, and the team is already working on the next set of intents, so if you have ideas or thoughts on what intents would be great, we'd love your feedback at this URL. But there's one more thing: App Actions and built-in intents also enable Slices. Slices is a way for you as an Android developer to create a declarative version of part of your application and embed it into Google surfaces like the Google Assistant. In this case we're implementing the TRACK_ORDER built-in intent, and you can see our Android Slice showing up right inline in the Google Assistant, making it quick and easy for users to get that information and then launch into your app if they need more advanced functionality. So what did we see? You can enable users to invoke your app with just their voice with App Actions; there's a simple integration model, where all you need to do is map the intents in your app to a common set of built-in intents; and the good news is that you can build and test starting today, with more intents coming soon.
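To make the onCreate handling just described concrete, here is a minimal sketch. The URL scheme, parameter name, and item identifier (myapp://order, itemName, ITEM_MILK_CHOCOLATE_SMORE) are made-up stand-ins for whatever you declare in your own actions.xml.

```kotlin
import android.net.Uri
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class OrderActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_order)

        // Fulfillment URL declared in actions.xml, e.g. myapp://order{?itemName}
        val data: Uri? = intent?.data
        val itemName = data?.getQueryParameter("itemName")

        when (itemName) {
            // Identifier from the inline inventory in actions.xml (hypothetical value)
            "ITEM_MILK_CHOCOLATE_SMORE" -> prefillOrder(itemName)
            null -> showMenu()                 // Launched normally, no deep link
            else -> showMenu(searchFor = itemName)
        }
    }

    private fun prefillOrder(itemId: String) { /* pre-populate the order UI */ }
    private fun showMenu(searchFor: String? = null) { /* default experience */ }
}
```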
So we've seen how you can leverage your existing content with the Google Assistant, and we've seen how you can integrate your Android applications with the Google Assistant. Now I want to talk about conversation, specifically conversation actions: how you can build net-new custom experiences for the Google Assistant. Why are conversation actions so important? For one, they're a way you can natively control the device's capabilities: if the device has a screen, show an image; if the device supports touch, show a suggestion chip they can tap. They're a way you can increase your brand awareness through things like custom dialogue and agent personas, grow your user re-engagement through well-crafted conversation design and features like action links, and drive habits and interactions with features like routines, daily updates, and push notifications.

Now, what exactly are conversation actions? It's the idea that you as a developer have full control over the dialogue, the back and forth of what's said. This is distinctly different from App Actions, content actions, or even smart home actions, where you have some type of fixed markup, or you're using a built-in intent, something that already defines the material you're trying to access; the Google Assistant ingests that fixed markup, applies its own natural language understanding, and matches what the user says automatically to that material. With conversation actions, that's flipped: you as the developer define custom intents, you define the types of phrases a user might say to match each custom intent, and you even define the information you want extracted out of what they say. So with conversation actions we need a tool to help us do this, and that is Dialogflow. Out of the box it provides two key concepts: user intent, what the user actually wants, and entity abstraction, the way you glean information out of what they say.

Let's dive in with a small example. Take "I would like a large s'more with dark chocolate and I want it to go." Dialogflow can take this phrase as a whole and match it to the user intent of purchasing a snack of some kind. You see a few words highlighted in the sentence: "large," so Dialogflow understands they want a large snack; "s'more," the type of snack; "dark chocolate," the topping; and they want it to go, rather than for delivery or to have it there. When we expand this to a sequence of dialogue, the user might say something like "Hey Google, talk to S'more S'mores." S'more S'mores, in this case, is the invocation name of your action. The Google Assistant takes that audio, transcribes it into text, applies its natural language understanding, and invokes your conversation action. From that point forward, it's you as the developer, together with Dialogflow, controlling the responses back to the user.

So let's take a look at a live demo. Here I have Dialogflow and a few intents I've already defined: I have a small shop where you can order a snack, order whatever you last ordered, or order a gift card. Let's take a deeper look at ordering a snack. I have my contexts and the training phrases that I've already supplied; these are the phrases that I, as the developer, think the user
might say that match the intent of wanting to purchase a snack of some kind. If I scroll down, I can see the parameters and the entities related to those parameters, specifically things like delivery or pickup, the type of snack, the size of the snack, toppings, and so on. If I scroll down further, you'll see the responses I've created, which reference the entities I've defined, and further still you'll see the fulfillment: if I want custom fulfillment, I can enable a standard webhook call for this intent. Now let's try an example. If I type "one large s'more with milk chocolate," you'll notice that instantly, without additional input from me, Dialogflow has highlighted several key elements within this phrase: "large," the size of the snack I want; "s'more," the type of snack; and the chocolate type, milk. That's pretty powerful stuff. Now let's see it in the context of a full conversation. If I say "I would like to order a large s'more with dark chocolate," it instantly gets the information, it has the various contexts, it has matched it to the intent of ordering a snack, and if we scroll down it still has the various entities and parameters it extracted. The default response here comes from the fact that I've defined a required parameter of delivery or pickup, so it asks me, "Will that be for delivery or pickup?" I respond "delivery," and there you go: it understands you've ordered a large dark chocolate s'more and you want it delivered. This is powerful stuff for the Google Assistant.
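The webhook behind an intent like this can be written in any stack. As a rough sketch, here is what a handler for the snack-ordering intent might look like, assuming the Actions on Google Java/Kotlin fulfillment library is used for the webhook; the intent and parameter names ("order.snack", "size", "snack-type", "topping") are invented for this example and would have to match what is defined in the Dialogflow agent, and the exact class and method names should be checked against that library's documentation.

```kotlin
import com.google.actions.api.ActionRequest
import com.google.actions.api.ActionResponse
import com.google.actions.api.DialogflowApp
import com.google.actions.api.ForIntent

// Hypothetical webhook for the S'more S'mores agent.
class SmoreSmoresFulfillment : DialogflowApp() {

    @ForIntent("order.snack")
    fun orderSnack(request: ActionRequest): ActionResponse {
        // Entities extracted by Dialogflow arrive as intent parameters.
        val size = request.getParameter("size") as? String ?: "regular"
        val snack = request.getParameter("snack-type") as? String ?: "s'more"
        val topping = request.getParameter("topping") as? String

        val summary = listOfNotNull(size, topping, snack).joinToString(" ")
        return getResponseBuilder(request)
            .add("One $summary coming up. Will that be for delivery or pickup?")
            .build()
    }
}
```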
Now let's go back to the slides. What we have here is a way to build conversation. The Google Assistant supports a huge array of devices and surfaces, and we want to be able to scale across them all. Different devices support different capabilities: a smart speaker is voice-only; a car is voice-forward, it has a screen but you still want voice control; then there's intermodal, like your cell phone; and your smartwatch, which is essentially screen-only. We need a feature that can fully utilize the screen, and in the case of S'more S'mores, they want to build a rich game: something voice-first, with custom full-screen visuals, a turn-based battle system that spans multiple surfaces and supports all these different kinds of devices. So today I'm happy to announce a brand-new API called Interactive Canvas. This is an API that enables pixel-level control of rendering any HTML, any CSS, and any JavaScript. Let me reiterate: that is a full webview running on the device, and if the device supports full-screen visuals you get animations and even video playback.

Now, it wouldn't be Google I/O without another live demo. What I have here is a Home Hub, and for this demo I'm going to need some audience interaction: I'm going to play a trivia game, it's going to be a really hard question, and I need you all to scream out the answer as loud as you can. "Hey Google, play HQ University." "HQ University? Yes, that's me, the host, Khalifa Host Malone, the Trap Trebek, and I'm here to welcome you to HQniversity. You've been accepted into this elite program to smarten up and sharpen up your trivia skills to help you win HQ. The rules are very simple: my assistant Alfredo is going to ask you a series of questions that, just like HQ, start easy and get harder. You respond with the answer you think is correct, and HQniversity will reward you. Let's get down to the nitty-gritty, let's get this show on the road, baby. Alfredo, hit us with question numero uno. Question 1: if you put on a hoodie, what type of clothing are you wearing? Your choices are cloak, sweatshirt, or cape." So what's the answer? "Sweatshirt!" "Yeah baby, you did it, we did it." Awesome. So that is Interactive Canvas, a way you can build full-screen visuals and custom animations.

How does something like this work? First off, we start with a regular conversation action, where the user says something to the device; that goes to the Actions on Google platform, in turn to Dialogflow, and finally to your custom fulfillment. Your fulfillment supplies a certain type of response; in this case it needs to supply a rich response, and when we add the web application, the canvas, it supplies an immersive response, which tells the Google Home to load your web app. Your web application in turn includes a specific JavaScript library, the Interactive Canvas API. Let's look at some code. This is the custom fulfillment: you can see I have the Default Welcome Intent supplying a new immersive response, which provides the URL of the web application and the initial state of that web application. What does it look like on the web application side? There are two main elements you need to include: the CSS stylesheet, which supplies the specific padding for the header on these devices and things like that, and the actual JavaScript library, which manages the state of your web application against the state of the conversation so you can control both in unison. Some key takeaways around conversation: one, Dialogflow is the tool for developers to build custom conversations, where you control the back and forth of your dialogue; and two, we've announced the Interactive Canvas API, pixel-level control over the display for games, where you can run any HTML, any CSS, and any JavaScript.

Now I want to switch it up a little and talk about smart home: the ability for you to control any hardware with your voice. Traditionally, smart home has been all about cloud-to-cloud communication. When I turn on my Philips Hue light bulbs, what's actually happening is that the Google Assistant takes in the audio, transcribes it into text, applies natural language understanding, and sends a specific response to Philips Hue's servers; Philips Hue in turn controls the light bulb. Today I'm glad to announce a new API, the Local Home SDK. This provides local control over your devices, with post-Assistant latency of under 200 milliseconds. It supports local discovery protocols like UDP broadcast, mDNS, and UPnP, and for actually controlling devices, UDP, TCP, and HTTPS. With smart home there are device types and device traits, and the Local Home SDK supports all of them out of the box, with the exception of two-factor authentication. My favorite part is that it's come-as-you-are, meaning you don't need to change the embedded code on your device to handle the messages; instead, you develop a JavaScript application that runs on the Home device. That's pretty awesome. Let's take a look at how this works. When the user says something like "Hey Google, turn on the lights," that audio is sent up to the Google Assistant, which transcribes it into text and applies natural language understanding. The Assistant creates an execution intent and sends that structured JSON intent down to the Google Home, where the Home is running your JavaScript application. Your JavaScript application in turn understands the intent to turn the device on, along with the exact device ID and so on, and constructs the specific message format your device supports, in turn turning on the light.
I also want to show more traits and more types. We're always adding more device types that we support with the smart home APIs, and today we're releasing even more, with traits like open-close, start-stop with zones, and lock-unlock, and even device types like doors, boilers, and garage doors. Again, we're adding these all the time, and today we're releasing a huge number more.

Now I want to show how S'more S'mores can use smart home. What I have here is a toaster oven; some of you might have already seen it up here and wondered what it's for. Inside are some s'mores that I want to eat, and I want to toast them perfectly. I also have a Google AIY Vision Kit, which has a Raspberry Pi Zero inside, and I'm using it to control the power to this toaster oven. Let's take a look. "Hey Google, turn on my s'mores toaster." "OK, turning s'mores toaster on." Awesome. There you have it: a smart home device controlled by voice with the Google Assistant. So let's recap the key takeaways: one, we announced the Local Home SDK, where you can control real-world devices over local Wi-Fi with your voice; and two, we've announced a huge number of new traits and device types, available today, that you can use with your own devices.

So, what to do next? We've covered a lot today, but we're so excited to share all the ways you can get started building your first action. To quickly recap: you can use your web content and leverage what you're already doing in Search; you can use App Actions and leverage the Android ecosystem you're already participating in; you can build a custom experience for the Assistant by building a conversation action; or you can extend your hardware and build smart home actions to control the devices around your home. And this was just the beginning: there are 12 more sessions this week that dive into topics we only had a chance to introduce in our time together today. These additional talks are geared toward Android app developers, web developers, hardware developers, or anyone who wants to learn how to build with conversation or get some insights around this new ecosystem, so please check them out live, watch them on the livestream, or of course tune in later on YouTube. For those of you here with us this week, we have a sandbox out back, office hours, and a codelab. Now, I think I heard our toaster is ready, so it's time for us to go enjoy some s'mores. Visit our developer site, talk to us on Twitter, and we can't wait to see the rest of I/O. Thank you so much for joining us today. [Applause] [Music]


How to get one-meter location-accuracy from Android devices (Google I/O ’18)

[Music] We're going to show you how recent changes in hardware and standards make one-meter location accuracy possible, in some cases as soon as this year. I'll give you a short overview now, then Roy will introduce Wi-Fi round-trip time technology and standards and show a live demo, then Wei will explain the Wi-Fi APIs, and then I'll return and talk about new GPS technology and APIs. At the end they'll be loading up for the next session, so we'll take questions right outside that door, and we'll be available at office hours at 1:30 PM today.

It's a great time for location applications, because technology, hardware, standards, and Android APIs are all evolving simultaneously to enable accuracy that has not been possible previously in phones. Eventually this means high accuracy for everyone, but today we want to take you under the hood of location, because we want to give you the opportunity to get a head start on the future. We also want to highlight the need to protect and respect the user: the more people who use location, the more careful we, and you, have to be. We'll highlight where you must get user permissions, and we'll close with some guidelines for making great location apps.

So where are we today with indoor location accuracy? If you think your phone seems more accurate inside shopping malls and office blocks than it was a few years ago, you're not imagining it: with each release of the Fused Location Provider we've had steady improvement in the Android algorithms and machine learning for Wi-Fi locations. There continues to be improvement, and you'll see indoor accuracy of better than 10 meters, but round-trip time is the technology that will take us to the 1-meter level. Meanwhile, what about GPS? In terms of GPS accuracy in the open sky, there hasn't been much change in the last few years: out in the open sky, GPS accuracy from your phone is about 5 meters, and that's been constant. But with raw measurements, raw GNSS measurements from the phones, you can now improve on this, and with changes in satellite and receiver hardware the improvements can become dramatic. Everyone's familiar with the blue dot, but to get the blue dot you need the location provider, and to get location you need measurements, specifically range measurements, from Wi-Fi access points or from GPS satellites. Today we'll show you how one-meter measurement accuracy is available in phones. The key technologies are Wi-Fi round-trip time, GPS dual-frequency, and carrier-phase measurements, and we'll show you how to use accurate measurements to create accurate location. Now, if you just want to wait a year or two, this will find its way into the worldwide ecosystem and the Fused Location Provider, but we want to give you a chance for a one-to-two-year lead by taking accurate measurements and turning them into accurate location; we want to work with you to accelerate the future, to bring it closer to the present.

You might wonder, why do I need better location accuracy anyway? Let's look at two instances where existing apps could use much better location accuracy. For indoor routing or navigation, of the kind you're used to in your cars, you need much better accuracy than you have outdoors; you need one-meter accuracy, because indoor features like the distance between cubicles or aisles are only a few meters. And even for the most loved outdoor applications, such as directions, and especially directions in traffic, we could use higher accuracy than we have now.
For example, when you came here this morning in a car, you probably had the time estimated from the average traffic speed. What you really want is the traffic speed in the lane you're in, so you could ask how fast it would be if you could take the carpool lane. There are of course many other use cases, and I'll mention a few before we finish, but the important thing is that we're sure you will have many more ideas than we have, and that's the beauty of the open Android ecosystem. So now here's Roy to tell you about Wi-Fi round-trip time.

Thanks, Frank. I'm very excited to be here today to tell you about a new positioning technology in Android P that we call Wi-Fi round-trip time, or RTT (you'll hear me say that acronym a lot), which is basically measuring the time of flight of RF signals. It has the potential to estimate your indoor position to an accuracy of one to two meters. We're going to hit the ground running: before I get into the details of RTT, we're going to show you a video of indoor navigation powered by RTT. I want to emphasize that this is not a product but an internal prototype, built to explore the power of the technology and how it can also be used to support other context-aware applications; this prototype also showcases some of the magic that Google can offer its employees today. We're going to roll the video, and what you should keep in mind is that this is a bit like car GPS, except we're working indoors. In a moment you'll see a mapping application, and we're searching for a conference room. We've found that conference room, it's plotted the shortest route, and now we're off, following the route. As I make progress you can see the route turning gray; my position from RTT is the big red dot. I'm deliberately making an error here, so the system is rerouting, and it's rerouting again; if I get about 20 feet away it starts the rerouting process. I'm following the route, you can see the corridors flying by, and I've arrived at my destination, conference room Tefo. So that is the power of RTT. The thing to think about here is that if you didn't have one-to-two-meter accuracy, then when that system rerouted it would jump, potentially between the aisles surrounding me, and it would be a terrible user experience; that's why it's so important to have this kind of accuracy.

Before I get into the details of Wi-Fi RTT, I want to tell you how we calculate location indoors today. We use Wi-Fi RSSI, which stands for received signal strength indication, and basically we calculate distance as a function of signal strength. In the figure on the left-hand side, the access point in the center has a heat map of signal strength: green is the strongest and red, at the edges, is the weakest. I've placed two phones on this diagram at the transition between the weak and the strong, and notice that the phone on the right is farther away from the access point than the phone on the left: same signal strength, different distance. It's this variability in distance for a given signal strength that unfortunately makes it very hard to get accurate range readings from RSSI on a regular basis. There are lots of algorithms and tricks we can pull to improve this, but it can only be improved so far.
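For context, the usual way to turn a received signal strength into a distance estimate is a log-distance path-loss model along these lines; this is a generic propagation model shown for illustration, not a description of the exact algorithm Android uses:

```latex
% P(d): received power (dBm) at distance d
% P(d_0): received power at a reference distance d_0 (e.g. 1 m)
% n: path-loss exponent (about 2 in free space, roughly 2.7-4 indoors)
P(d) = P(d_0) - 10\,n\,\log_{10}\!\frac{d}{d_0}
\quad\Longrightarrow\quad
\hat{d} = d_0 \cdot 10^{\frac{P(d_0) - P(d)}{10\,n}}
```

The indoor variability of the exponent $n$ (walls, bodies, furniture) is exactly why the same RSSI can correspond to very different distances.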
And that's where Wi-Fi RTT comes in. Wi-Fi RTT uses time of flight instead of signal strength: it measures the time it takes to send a Wi-Fi RF packet from an access point to a phone and back again, and because radio signals travel at the speed of light, if we multiply the total time by the speed of light and divide by two, we get distance: the range from the phone to the access point. That's the basic principle. Now, if you want to calculate position, you have to use a process called multilateration (more on that in a minute), but the key thing to remember is that the more ranges you have, the more constraints you get and the more accurate a position you can achieve; if you can use at least four ranges, we think you can typically get an accuracy of one to two meters in most buildings.

So why am I telling you about Wi-Fi RTT today, and not last year or before? What I want you to take away is that 2018 is the year of Wi-Fi RTT in Android. We are releasing a public API in Android P based on the IEEE 802.11mc protocol, and furthermore we're also integrating aspects of this protocol into the Fused Location Provider, the main location API people use to put location on a map; any time there are RTT access points in the vicinity, the accuracy of that position will be greater. A little bit of history: the 802.11mc standard was ratified at the end of 2016, and in early 2017 the Wi-Fi Alliance started doing interop between silicon vendors to make sure the chips followed the protocol; that's when we started doing a lot of work to validate how it could be integrated into Android, and by the fall of this year, of course, we will release the API so that all of you have access and can build your own applications around the technology.

Now, diving into the principles of how Wi-Fi RTT works. The ranging process starts with a standard Wi-Fi scan: the phone discovers the access points that are around, and based on certain bits set inside the beacons and probe responses, it can figure out which of those access points are RTT-capable. The phone chooses which of those to range to, and starts by making a request to the access point; as a result, the access point starts a ping-pong protocol back. The "ping" sent to the phone is an FTM, or fine timing measurement, packet, and the "pong" sent back to the access point is an acknowledgment of that packet. Timestamps are recorded at each end, by each device, but for the phone to calculate the total round-trip time it needs all of those timestamps, so the access point sends one more packet, a third message, which contains the missing timestamps. The phone then simply calculates the round-trip time by subtracting the access point's timestamps and its own turnaround time from the total, which leaves the time of flight; we multiply by the speed of light to get distance, divide by two, and we get the range we care about. It turns out that if you do this process multiple times you get more accuracy, so the protocol allows for a burst; in Android we're typically doing a burst of about eight of these exchanges, and as a consequence the system can calculate statistics, the mean and the variance, which lets us plot a position on the map more accurately and estimate the path more accurately as well.
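Writing the timestamp exchange just described as a formula, with $t_1$ the time the access point sends the FTM frame, $t_2$ the time the phone receives it, $t_3$ the time the phone sends the acknowledgment, and $t_4$ the time the access point receives it:

```latex
\mathrm{RTT} = (t_4 - t_1) - (t_3 - t_2), \qquad
d = \frac{c \cdot \mathrm{RTT}}{2}
```

Since $c \approx 3\times10^{8}\,\mathrm{m/s}$, every nanosecond of timing error translates to roughly 15 cm of range error after the divide-by-two, which is why the burst of measurements and the resulting statistics matter.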
So now you have ranges; how do you get a position? I just want to give you a feel for one way to go about this. There are lots of different mathematical approaches, and I'm picking one because it's relatively easy to explain, but this is where the power of developers comes in, for you to figure out your own way to do it. If you know a phone is at a certain range from an access point, that tells you it can be anywhere on the circumference of a circle of that radius r; I've written the circle equation for that circumference on the right-hand side, with center (x1, y1). If you want to find a position, you've got to constrain it, so take four ranges to four separate access points, as shown in the diagram on the right: if those ranges were accurate, those circles would intersect at a single point. How do you find that point programmatically? If you write those four equations out, they're nasty circle equations and it may look difficult, but it's actually very straightforward: you pick one of them and subtract it from all the others, and you end up with a set of line equations; the squared terms disappear. Those lines are drawn on the diagram as well, and it's very easy to find where two lines intersect. Now, there's one problem with what I've just told you: we're assuming the measurements are perfect. In reality no measurement is perfect, everything has error, and there will be no exact solution to those equations. So let me give you a more realistic example. Here we have several access points we've ranged to, and I've exaggerated the problem; you can see some of those circles don't intersect. How do you solve that? You do the same thing as before: you subtract the circles to get the lines, but this time they don't intersect at a point, they intersect in a polygon, in this case a triangle, and your phone most likely lies somewhere in the middle of that triangle. Then we can apply some college math, a least-squares solution, to get a maximum-likelihood estimate; you can find standard packages on the net that do this. You can refine this position further by repeating the process, particularly as the phone moves, and then you can calculate a trajectory and use filtering techniques like Kalman filters and other things. So those are the basic principles.
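Here is a small sketch of that subtract-the-circles, least-squares idea in code. It is plain 2D geometry with no Android dependencies and hypothetical anchor coordinates; a real implementation would weight each range by its reported variance and feed the result into a filter.

```kotlin
data class Point(val x: Double, val y: Double)

/**
 * Linearized least-squares multilateration.
 * anchors: known access-point positions; ranges: measured distances (same units).
 * Subtracting the first circle equation from the others gives a linear system
 * A * [x, y]^T = b, solved here via the 2x2 normal equations.
 */
fun multilaterate(anchors: List<Point>, ranges: List<Double>): Point {
    require(anchors.size >= 3 && anchors.size == ranges.size)
    val (x1, y1) = anchors[0]
    val r1 = ranges[0]
    var s11 = 0.0; var s12 = 0.0; var s22 = 0.0   // A^T A
    var t1 = 0.0; var t2 = 0.0                     // A^T b
    for (i in 1 until anchors.size) {
        val (xi, yi) = anchors[i]
        val a1 = 2 * (x1 - xi)
        val a2 = 2 * (y1 - yi)
        val b = ranges[i] * ranges[i] - r1 * r1 +
                x1 * x1 - xi * xi + y1 * y1 - yi * yi
        s11 += a1 * a1; s12 += a1 * a2; s22 += a2 * a2
        t1 += a1 * b; t2 += a2 * b
    }
    val det = s11 * s22 - s12 * s12
    require(kotlin.math.abs(det) > 1e-9) { "Access points are collinear" }
    return Point((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
}

fun main() {
    // Hypothetical AP positions (meters) and measured ranges with a little noise.
    val aps = listOf(Point(0.0, 0.0), Point(10.0, 0.0), Point(0.0, 10.0), Point(10.0, 10.0))
    val ranges = listOf(5.1, 5.0, 4.9, 5.05)
    println(multilaterate(aps, ranges))  // roughly the middle of the room
}
```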
Now, like any new technology, there are challenges, and we've experienced some of these early on. The little robot you can see on the right-hand side is used by us to measure the range from the phone it's carrying to an access point, and it validates that range against marks on the floor that provide ground truth. What we find is that sometimes there's a constant range-calibration offset, maybe as much as half a meter, and sometimes you also see multipath effects, where a non-line-of-sight path from the access point to the phone is received rather than the line-of-sight path. That one can be addressed by the vendor using something called antenna diversity, but all of these things are algorithms the vendors are improving, and basically we need to go through a teething process of getting rid of these bugs. Google can help in this process by providing reference platforms and reference applications, so the vendors can calibrate their own platforms before you even get to use them, which would be the ideal situation.

Now, I've assumed that you want to start as an early adopter and use this API directly, but as we move into the relatively near future we expect you to just use the Fused Location Provider, because we're going to put the RTT capability into it. At the moment the Fused Location Provider uses GPS when it's available, cell signal strength, and Wi-Fi RSSI, and it also fuses with the onboard sensors, inertial navigation from the accelerometer and gyro. We're adding Wi-Fi RTT into that mix, and it will increase the accuracy whenever RTT-capable access points are available. One other thing to remember: when you were doing it yourself, you had to know the positions of the access points; in the Fused Location Provider we will know those positions automatically, because we'll crowdsource them, so you won't have to worry about that, which will make it a lot easier for you to write applications.

Okay, so now we're going to take it up a notch and give you a live RTT demo, in collaboration with some of our colleagues in Geo. What I have over here on the podium is a phone running an RTT system in combination with Google Maps for mobile, and we're using a number of access points around the room; you saw the blue boxes on the slide. These were provided by one of our partners, and you can see them around the room, toward the back, on the sides, and a couple in the center over here. The thing to bear in mind is that this phone, because we're just in a tent, would normally receive GPS signals, so we've disabled GPS; we're only using RTT with this phone. What I'm going to do is walk around the aisles, and you can see on this card that I've already got a plot of where I'm going to go. I'm starting to move now, toward the corner of the stage, and you should see the blue dot with my little man inside following me. We expect an accuracy of one to two meters; the aisle here is about two meters across, thereabouts, and you can see it's following very nicely within that accuracy. The demo environment has been very good to us so far. We're going along, with a little bit of lag, going around the back here, approaching a turnaround point where I'm going to walk up the aisle, and it's rerouting as I come back to make my path a little shorter. You can see we're going very nicely, still well within the one to two meters, and if you showed GPS here as well, typically you'd expect to see five meters, but that's accuracy outdoors; indoors, in a typical building, you're only going to have indoor location technology such as this. Now I'm approaching the corner of the stage, and at this point I'll hand back over to Wei. Thank you very much.

Thanks, Roy. What a great demo. Now you must be eager to try round-trip-time ranging yourself, so let me walk you through the RTT APIs in P to see how you can add RTT to your own application. As I mentioned, RTT measures the round-trip time between two Wi-Fi devices, so both your mobile phone and your access points need to support the 802.11mc protocol. And since RTT can give you very fine location, down to one-meter accuracy, your application needs to declare the ACCESS_FINE_LOCATION permission, and of course location and Wi-Fi scanning need to be enabled on the mobile device.
So how do you know whether your mobile phone supports RTT? In P we added a new system feature called FEATURE_WIFI_RTT, so you can simply check whether it's reported as true on the device; our Pixel phones running P DP2 and above support RTT. And how do you know whether your access points support RTT? As normal, you do a Wi-Fi scan and get a list of scan results, and for each scan result you check whether the method is80211mcResponder() returns true; this tells you whether that access point supports RTT. After you have a list of RTT-capable access points, you simply add them to a RangingRequest.Builder to build a ranging request. Ranging is done by the WifiRttManager, which you get by requesting the Wi-Fi RTT ranging system service. Now we're ready to start ranging, by sending the ranging request to the RTT manager along with a RangingResultCallback. Ranging takes no more than hundreds of milliseconds, and when it finishes you get back a list of results, including the status, the MAC address of the access point you just ranged to, and, most importantly, the distance between the mobile phone and the access point. Here's the information you can get from an RTT ranging result: you get the distance, and you also get the distance standard deviation, which is an estimate derived from the multiple ranges in the multiple FTMs; you can also get the number of attempted FTM measurements and the number of successful measurements, and the ratio of successful to attempted measurements gives you an idea of how good the Wi-Fi environment is for ranging.
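Putting that walkthrough together, an end-to-end ranging call can look roughly like this; it assumes ACCESS_FINE_LOCATION has already been granted at runtime and that scanResults came from an ordinary WifiManager scan.

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import android.net.wifi.ScanResult
import android.net.wifi.rtt.RangingRequest
import android.net.wifi.rtt.RangingResult
import android.net.wifi.rtt.RangingResultCallback
import android.net.wifi.rtt.WifiRttManager
import android.util.Log

// Minimal Wi-Fi RTT ranging sketch (Android P and above).
fun rangeToRttAps(context: Context, scanResults: List<ScanResult>) {
    // 1. Is the phone RTT-capable?
    if (!context.packageManager.hasSystemFeature(PackageManager.FEATURE_WIFI_RTT)) return

    // 2. Keep only access points that advertise 802.11mc support.
    val rttAps = scanResults.filter { it.is80211mcResponder() }
        .take(RangingRequest.getMaxPeers())
    if (rttAps.isEmpty()) return

    // 3. Build the request and send it to the WifiRttManager.
    val request = RangingRequest.Builder().addAccessPoints(rttAps).build()
    val rttManager =
        context.getSystemService(Context.WIFI_RTT_RANGING_SERVICE) as WifiRttManager

    rttManager.startRanging(request, context.mainExecutor, object : RangingResultCallback() {
        override fun onRangingResults(results: List<RangingResult>) {
            for (r in results.filter { it.status == RangingResult.STATUS_SUCCESS }) {
                // Distance is reported in millimetres, with a std-dev from the FTM burst.
                Log.d("Rtt", "${r.macAddress}: ${r.distanceMm / 1000.0} m " +
                        "(std dev ${r.distanceStdDevMm / 1000.0} m, " +
                        "${r.numSuccessfulMeasurements}/${r.numAttemptedMeasurements} FTMs)")
            }
        }
        override fun onRangingFailure(code: Int) {
            Log.w("Rtt", "Ranging failed: $code")
        }
    })
}
```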
I mentioned that our Pixel devices support RTT; what about access points? We're beginning to see access points supporting the 802.11mc protocol in production, and we're very excited to let you know that Google Wifi will soon support the 802.11mc protocol. By the end of this year, Google Wifi will have RTT enabled by default, and worldwide we're also beginning to see the deployment of RTT access points; South Korea is actually leading that deployment. Of course, this is just the beginning of a long journey, and we're very eager to see a larger penetration of RTT access points in the upcoming years. With that, I'll hand over to Frank to talk about one-meter GPS.

Thank you. Okay, let's move to the great outdoors and talk about GPS. I'm going to show you some basics of GPS, just enough to explain what's new in the satellites and what's new in the phones, and how you can exploit these changes to get better location accuracy from GPS when you're outdoors under open sky. GPS works like this: the satellite sends a code, and the code encodes the time at the satellite. It travels to you through space and arrives at you, and your GPS receiver compares that time with the time in its own clock; the difference between the two tells you how far away you are. It's kind of like a tape measure where one end is at the satellite and you're holding the spool: at any moment you can look down and read a number, which is the difference between those two times. If you move farther away, you read a bigger number; move a little closer, you read a smaller number. But the actual GPS tape measure is kind of special. First of all, it's really long. Second, the tick marks occur only every 300 meters, because the bits of the code occur at a rate of about one per microsecond, and one microsecond times the speed of light is about 300 meters. So this is like a tape measure where, instead of having all the inch markings, you only have a mark every 300 meters, and your GPS receiver essentially interpolates between those marks; there's your five-meter accuracy.

But there's more to it than that, because how does that code get through space in the first place? It's carried on a carrier wave, a radio wave, which for GPS has a wavelength of less than twenty centimeters. Your GPS receiver can measure where you are on this wave, and as long as it keeps tracking, it can measure relative motion with great precision, because the receiver measures the phase, and as you move, that phase changes. But what about getting your absolute location? The trouble with the carrier-phase ruler, if you like, is that it's a ruler with very precise markings but no numbers at all, because one wavelength looks just like the next. Your receiver can tell you the phase of the wave you're on, but it doesn't know whether you're at the green dot or the red dot. How do you solve that problem? For that you need to introduce a new concept: GPS reference stations. These are GPS receivers at fixed sites, measuring the same things at the same time. They communicate that data to you; with well-known algorithms you can combine this data, and over some period of time you can work out where you are relative to the reference station with that carrier-phase precision. You know where the reference station is, so now you know where you are with great precision. This concept is not new; it has been in commercial GPS receivers since the 1980s for surveying, hence our little surveyor there holding the GPS antenna on the stick. What is new is the availability of these carrier-phase measurements from phones, and of dual-frequency measurements in phones. Right now, all smartphones everywhere have GPS or GNSS on one frequency band only, known as L1, but there's a new frequency in town called L5, and it's supported by all of these GNSS systems: GPS, Galileo, BeiDou, QZSS, and IRNSS. The availability of a second frequency means you get much faster convergence to carrier-phase accuracy when you're doing this kind of procedure. Why? We just went through the ambiguity you have on a single wave; now look what happens when you introduce a second wave at a different frequency. Immediately you can disambiguate, because you could not have the same phase on that second wave at both of those candidate positions: you could not be at the red dot if you're at the peak of the red wave. So you can disambiguate and converge much faster to the very high accuracy you want. All right, so what about hardware? In the last few months, several companies that produce consumer GPS chips have announced the availability of dual-frequency L1/L5 chips, both for the automobile market and for the phone market, and these chips are now being designed into cars and phones.
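To attach numbers to those two rulers (these are standard GNSS signal parameters, not figures from the talk's slides):

```latex
% C/A code chip length: chips arrive at roughly one per microsecond
d_{\text{chip}} \approx c \cdot 1\,\mu\mathrm{s}
  = 3\times10^{8}\,\tfrac{\mathrm{m}}{\mathrm{s}} \times 10^{-6}\,\mathrm{s}
  \approx 300\,\mathrm{m}

% Carrier wavelengths for the two civil bands
\lambda_{L1} = \frac{c}{1575.42\,\mathrm{MHz}} \approx 19\,\mathrm{cm},
\qquad
\lambda_{L5} = \frac{c}{1176.45\,\mathrm{MHz}} \approx 25.5\,\mathrm{cm}
```

The 300 m "tick marks" are why code-only positioning interpolates to a few meters, while the roughly 19 cm and 25.5 cm carrier rulers are what make centimeter-to-decimeter precision possible once the whole-wavelength ambiguity is resolved.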
Programmatically, you do this as follows: there's a method, onStatusChanged, and it returns an integer that tells you the capability of the phone, either that the phone does not support the measurements at all, or that they are supported but location is off, or that they are supported and location is on. In that last case you're good to go.

So now let's get into some details of the APIs. The most relevant methods for what we're talking about here are the following three. There's getConstellationType, which tells you which of the different GNSS constellations a particular satellite belongs to. There's getCarrierFrequencyHz, which tells you whether you're on the L1 or the L5 band for a particular signal. And then, most importantly, there's getAccumulatedDeltaRangeMeters, which is how far along that wave the receiver has tracked you since it began tracking the signal.

Then there's something else I need to explain, which is duty cycling. Right now, when you're navigating with your phone and you see the blue dot moving along, for example when you navigated here this morning, you might think that the GPS is on continuously, and it's actually not. What's happening in the phone is that GPS will, by default, be on for a fraction of a second, then off for the remaining fraction of a second, and then repeat, and this is to save battery. You perceive that the GPS is on all the time because the blue dot moves along continually, but actually it's duty cycling internally. Now, for this carrier-phase processing you have to continually track the carrier wave, because remember, the carrier wave is the ruler with no numbers on it. So if the GPS was on and your receiver measured your phase and you got the data from the reference station, you'd start processing; if the GPS then goes off for a fraction of a second, now you've lost where you were. It starts again, you reacquire, you're at a different phase on reacquisition, you start again, and you never solve the problem. You need the tape measure to stay out and you need to keep processing, and to do that you need to disable duty cycling. You can do that in Android P with a developer option, which I'll talk about some more in a minute.

Okay, so now some details of the API. What I've shown here on the right is a screenshot of an application that we've put out called GnssLogger, which enables you to log the raw measurements in the phone. The nice thing about this app is that it's a reference app: the code is open source and available to you on GitHub, so when you build your app, please make use of our code. When you do build an app that needs raw measurements, you will need the Android LocationManager API with the method registerGnssMeasurementsCallback. This method requires you to pass it a GnssMeasurementsEvent callback, shown here. You construct this callback and override the method onStatusChanged, which gives you the integer status that we discussed, to tell you if measurements are supported. If they are, you then override the method onGnssMeasurementsReceived, which allows you to receive a GNSS measurement event every epoch, for example every second, and this event gives you the values we've been talking about: constellation type, carrier frequency, and accumulated delta range.
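Here's a minimal Kotlin sketch of that registration, again assuming the fine-location permission has already been granted; the field reads mirror the three methods just discussed.

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.location.GnssMeasurementsEvent
import android.location.LocationManager

// Sketch: register for raw GNSS measurements and read the fields discussed above.
@SuppressLint("MissingPermission") // assumes ACCESS_FINE_LOCATION is already granted
fun listenForGnssMeasurements(context: Context) {
    val locationManager =
        context.getSystemService(Context.LOCATION_SERVICE) as LocationManager

    val callback = object : GnssMeasurementsEvent.Callback() {
        override fun onStatusChanged(status: Int) {
            when (status) {
                GnssMeasurementsEvent.Callback.STATUS_NOT_SUPPORTED -> {
                    // This device does not report raw measurements.
                }
                GnssMeasurementsEvent.Callback.STATUS_LOCATION_DISABLED -> {
                    // Supported, but location is currently off.
                }
                GnssMeasurementsEvent.Callback.STATUS_READY -> {
                    // Good to go.
                }
            }
        }

        override fun onGnssMeasurementsReceived(event: GnssMeasurementsEvent) {
            // Delivered roughly once per epoch, e.g. every second.
            for (m in event.measurements) {
                val constellation = m.constellationType        // which GNSS system the satellite belongs to
                val carrierHz = m.carrierFrequencyHz           // ~1575.42e6 for L1, ~1176.45e6 for L5
                val adrMeters = m.accumulatedDeltaRangeMeters  // carrier-phase distance tracked so far
            }
        }
    }

    locationManager.registerGnssMeasurementsCallback(callback)
}
```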
Now, duty cycling: that's a developer option, so you access it through the developer options page on your phone, as you see there, on P, and it allows you to disable the duty cycling. Keep in mind this introduces a trade-off between getting the continuous measurements and battery life; there will be an impact on battery life. How much? Well, even when GPS is on continually, it uses less than 20% of the power that having the screen on uses, so that gives you a feel for the magnitude. This is a developer option precisely because it's a trade-off in battery life, and we're very concerned about maximizing battery life, but if you and we together can prove that there's value in this option and people want it, then it will be upgraded to a fully supported API in the future.

So here's a block diagram that shows the basic architecture we expect if you implement an app for high accuracy. At the bottom of the block diagram on the left you've got the GPS/GNSS chip; the GNSS measurements come up through the APIs we've just described, and your app lives at the top, in the application layer. You're going to need access to a reference network to get the data that the reference stations are tracking. There are publicly available reference networks; I've listed one down the bottom, the International GNSS Service, igs.org, and you can get data from them for free. Then you need to process that data in some kind of positioning library that does all this carrier-phase processing, and that too is available as open-source code; as another example down there, RTKLIB is an open-source package for precise positioning. And then you're good to go.

Now, I mentioned that dual frequency gives you much faster convergence to the high accuracy, but you don't have to wait until the dual-frequency phones come out; you can start doing this with single-frequency phones, and here's an example of someone who has already done that. This is an app created by the French space agency, and they're doing exactly what we show in the block diagram on the left, achieving sub-meter accuracy after a few minutes of convergence. Here's some more external analysis done in a similar way. This is from a paper called "Positioning with Android GNSS", using one of those chips I showed you, the dual-frequency chip that goes in cell phones, and what's shown here are the cumulative results over many different starts of the GPS. What you see is that most of the time the accuracy is better than a meter; you see that on the vertical axis, which is 0 to 1 meters. The accuracy gets to better than a meter in less than one minute and then continues to converge, as long as the phone continues to track that carrier phase continuously. Here's another, similar but different, paper. This one uses one of the chips meant for cars, so it was tested in a car driving around the track there, and the plot shows the accuracy after the initial convergence, while the car was driving. You see that with GNSS alone the accuracy is 1 to 2 meters, and with this carrier-phase processing it's at a couple of decimeters.

So for you to build this, what are you going to need? Well, of course you need the device location to be enabled, and your app has to have location permissions, so that's going to come from the user. You need the basic GNSS measurements; those have been available since Android N. You also need this continuous carrier phase I've been talking about, and that's available in P with the developer option. It would also be nice to have dual frequency for fast convergence, and that's coming soon.
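One practical way to check that nothing, duty cycling included, has interrupted the carrier-phase tracking is to look at the accumulated-delta-range state flags on each measurement. A rough sketch:

```kotlin
import android.location.GnssMeasurement

// Sketch: has the carrier phase been tracked continuously for this measurement?
// Carrier-phase processing depends on the ADR being valid, with no resets or cycle slips.
fun hasContinuousCarrierPhase(m: GnssMeasurement): Boolean {
    val state = m.accumulatedDeltaRangeState
    val valid = (state and GnssMeasurement.ADR_STATE_VALID) != 0
    val reset = (state and GnssMeasurement.ADR_STATE_RESET) != 0
    val cycleSlip = (state and GnssMeasurement.ADR_STATE_CYCLE_SLIP) != 0
    return valid && !reset && !cycleSlip
}
```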
You need a reference network, such as the one I already mentioned; there are also commercial reference networks out there, and commercially available software that does the same thing, but I recommend you start with the free stuff and go from there. And then, finally, there's the app, which comes from you.

So, in summary, everything we've been showing you here: we have indoor and outdoor technology that has been evolving kind of in parallel, and in each case we have a new technology and Android P gives you access to it. Let's talk about indoors again. The new technology is Wi-Fi round-trip time and round-trip-time-enabled access points. We give you a public API to access these measurements, but you need access point infrastructure. This is where some of you can do this this year: if you have a customer who owns or controls a venue, they can upgrade their access points, sometimes with just a firmware upgrade, and then you have the infrastructure. Android P comes out later this year, and you can implement something like the indoor navigation that was demoed earlier, or many other apps; for example, someone goes into a store: where's the milk? You can make the world a better place for all of us by saving us from the tyranny of having to ask directions from strangers. And if you're not one of the people who has access to this now, in a few years the infrastructure will naturally evolve as access points upgrade to round-trip time, and this will be available from the fused location provider, as Roy said.

Now, outdoors. For this carrier-phase processing it's not just outdoors but outdoors with open sky, and what do you need? Dual frequency and continuous carrier phase, and we give you the API and the developer option to make use of that. You will need reference station access, as I mentioned, and then the application. Well, what can you do outdoors with open sky? We already mentioned the traffic example, and there are many other examples that readily come to mind where existing GPS accuracy doesn't cut it. For example, geocaching, where people go look for treasures: it would be nice to have one-meter accuracy. Precision sports monitoring: imagine a snowboarder who wants to measure her tracks very precisely after the fact; five meters is not good enough, one meter would be great. And speaking of sports, there are more and more drone apps with a follow-me mode, where the drone flies along and videos you; it would be nice if it videos you and not the person next to you. And so on. I'm sure there are hundreds of apps, and you're probably thinking of some right now, and that's the whole point: we want you to do that, and you and us together can bend the arc of technology history closer to the present. I'm really looking forward to next year, to seeing some of you back here and seeing what you've created.

So finally, I want to leave you with a couple of pointers. When you build location apps, please build great location apps. You must have user trust, so please provide the user with transparency and control; you're going to have to ask for location permissions for this, so explain to them what you're doing and how it benefits them. When things go wrong, make your app recover gracefully: if these measurements are unavailable for a moment, or something goes wrong, you can fall back to the fused location provider's location, so think about that. And finally, respect the battery-life trade-offs we've discussed. I must remind you to please fill out your surveys at that site, and as I mentioned, we'll be available outside the door here for any questions. So, from all three of us, thank you.

Intro to the Google Assistant: Build Your First Action (Google I/O’19)

With Actions on Google, developers can build their own experiences on the Assistant. It's an entirely new way to engage with your users as they use conversation to get things done, and it was announced on this very stage just three years ago. In just a few short years we've seen an incredible evolution in how users talk to the Assistant, and this presents opportunities for developers across the globe. Think back to those first use cases on conversational platforms: they were very simple and straightforward, limited to things like turn on the music, turn on the lights, turn off the music, turn off the lights; simple, straightforward commands that fulfilled users' very low and limited expectations. But there have been three incredible shifts in querying over the last couple of years. First, we're seeing that users are having longer conversations; in fact, query strings on the Assistant are about 20% longer than similar queries on Search. Second, they're more conversational, 200 times more conversational than Search, so queries are going from "weather 94043" to something like "do I need an umbrella today", the way you might ask a friend, a family member, or even a real-life assistant. And third, queries are action-oriented, 40 times more action-oriented than Search, so users aren't just searching for information; they're actually looking to get things done. They're finding that restaurant for Friday night and they're booking that dinner reservation. The evolution of the query is due to a couple of things happening simultaneously. First, the technology is improving: natural language processing and understanding improvements have actually decreased the word error rate, a key metric for speech recognition, to the point where it's now better than what humans can achieve. Simultaneously, the number of Assistant-ready devices has soared, which has turned this new way of computing into an ambient one: it's always there when we need it, no matter what environment we're in or what device we're on. It's magical, but it poses a new challenge for all of us: how do we reach the right user, with the right experience, in the right moment, all at the same time?

So let's think through a pretty typical day and talk about some of the touch points where the Assistant might be helpful. First, you wake up. Good start. Now, if you're anything like me, you would really love to keep your eyes shut for that extra 20 seconds, but you also need to kick-start your day and find out where you need to be and when; the Assistant can help with that. Now you're waiting for the subway. You're in a crowded, loud station, and you have a couple of moments of idle time before the train comes, maybe to pre-order your cup of coffee or buy your friend that birthday gift you've been meaning to send; the Assistant on your mobile or your watch can help in those moments as well. And finally, you're sitting on the couch at the end of a long day; your laptop or your mobile phone are probably not too far away, but neither is your Google Home, and it's there to help you. Across these moments and more, Google is handling the orchestration required to deliver that personalized experience for the user with the context-appropriate content, so you as the developer don't have to worry about which experience, which device, which moment; you can just leave that to us. So what does this mean for the developer? Well, you have more ways than ever to be there for your user: you can reach users across the globe in over 19 languages,
across 80 countries, on over 1 billion devices, with over 1 million actions available today. But more than that, it's actually easier than it's ever been, and this is something we're all really excited about; I know we're all balancing far too many projects for the number of hours in a day. So today we're going to talk about how you can get started whether you have an hour, a week, or even an entire quarter to build for the Assistant. We'll talk about how to use existing ecosystems as well as how to build net new for the Assistant, and we'll focus on four major pillars. First, we'll talk about how to use your existing content; this leverages what you're already doing in Search, so web developers, we're going to be looking at you for this one. Second, we'll talk about how to extend your existing Android investments and leverage the continued investments you're making in your mobile apps; app developers, I heard you before, so you're going to want to pay attention to that section. Third, we'll talk about how to build net new for the Assistant; if you're an innovator in the conversational space, we'll share how to get started. And finally, hardware developers, I saw a few hands go up before: if you're looking to control your existing device cloud, our smart home section will appeal to you. Within each section we'll talk about what it is and how to get started, but before we do, Dan is going to tee up a very sweet example that we will use throughout the rest of our presentation.

So, a single unifying example that shows all the different ways you can use Google Assistant. This gets me thinking about two things: one, I love s'mores, I have a sweet tooth; and two, I'm also an engineer here in Silicon Valley, home of tech startups of all kinds and the tech hub of the world. So how can I combine my love of s'mores and my love of technology? Talking it over with my colleagues Brad and Naomi, we came up with the idea of using a fictional example company to show all the different ways the Assistant can help you and your company with things like building a global brand, increasing your global sales, customer growth and acquisition, and even user re-engagement, like that very important metric of daily active users.

So the first pillar is how you can leverage your existing content with Google Assistant. Like many of your companies, S'more S'mores has a lot of existing content that's ready for the Assistant: they have a website, they have a podcast, and of course they have recipes, so we can all understand how to make that perfect s'more at our next bonfire. And just like you, they spend a great deal of time optimizing their site for Search, so we're going to talk about how they, and of course how you, can extend existing efforts and optimizations to the Google Assistant. Google has offered ways to optimize your content for Search since the 90s. We work hard to understand the content of a page, but we also take explicit cues from developers who share details about their site via structured data. Structured data is a standardized format for providing information about a web page and classifying that page's content; for example, on a recipe page you can disambiguate the ingredients from the cooking time, the temperature, the calories, and so on. Because of this markup, we can provide users with richer content on the search results page, answers to questions, and a whole lot more, and this brings Google Search beyond just ten blue links. Over the last year we've been hard at work to
enhance the Google Search experience and enable developers to extend their content from Search to other Google properties like the Assistant. For sites with content in popular areas like news, podcasts, and recipes, we have structured data markup to make your content available in richer ways on Search, and the same optimizations you make for Search will also help your content be more discoverable and accessible on the Assistant. And it's just standard markup, like RSS; you've seen this before, and that's the approach we've always taken: using industry standards and ensuring those optimizations are ready for both Search and the Assistant. I'm so excited now to announce two brand-new programs that we're adding: How-to guides and FAQs. The additional optimizations you make for Search will soon yield a richer experience and an automatic extension to the Assistant. Let's dive into each. How-to guides enable developers to mark up their how-to content and make it discoverable to users on both Search and the Assistant. What's displayed is a step-by-step guide for the user, on anything from how to change a flat tire to, of course, how to make that perfect s'more. On the left here you can see a nice image-based preview of the how-to content on the S'more S'mores site; it allows the user to engage with your brand further upstream in their journey and it differentiates your results on the search results page, and if you don't have images, don't worry, we have a text-based version of this feature as well. On the right you can see the full guided experience on a Home Hub device, again all powered by the same markup on the S'more S'mores site. Now, the best thing about creating a step-by-step guide is that you actually don't have to be technical to do so. I know we're at I/O, and I/O is the developers' conference, but if you have one hour to dip your toes in the Assistant pool and you don't have a developer who can devote the time to adding the markup, don't worry, we have ways for you to get your content onto the Assistant as simply as using a spreadsheet. You can combine your existing YouTube how-to videos, a simple spreadsheet, and the Actions console to get a guided how-to experience across many Assistant-enabled devices. So the S'more S'mores site now has two ways to get their step-by-step guides for making that perfect s'more onto the Assistant: if they have a developer with some extra time, they can add the markup, or they can use a simple spreadsheet to extend their existing YouTube content.

Now we're going to switch gears a little. Think about how many times you turn to Search for answers to questions; maybe some of you are even doing it right now, and that's okay. Maybe you're trying to find out the return policy of your favorite store, or whether there's a delivery fee for your favorite restaurant, or blackout dates for travel; the list truly goes on and on. Our FAQ markup enables a rich answer experience on Search, giving users answers directly from your customer service page, and the same optimization will then enable queries on the Assistant to be answered by the markup you've already done. It's so easy to implement: when a user asks something like "what's S'more S'mores' delivery fee" on the Assistant, Google will soon be able to render the answer from that same markup on your customer service page. Here are some developers that have already gotten started with FAQs and How-to guides, and we'd love to have you join us tomorrow at 10:30 in the morning to learn more about how to
enhance your Search and Assistant presence with structured data; of course the talk will also be livestreamed, or you can catch it later on YouTube. So, as you've seen, there are three main ways that S'more S'mores, and you, can leverage existing content: first, ensure the hygiene of your structured data markup for podcast, news, or recipe content; second, add the new FAQ markup to your customer service site; or third, use a new template to bring your content to the Google Assistant. We're so excited about the ways we're making it even easier for developers to extend existing content from Search to the Assistant, but we're also making it even easier for companies to engage their existing ecosystems, like Android. So let's talk more about App Actions.

All right, thank you. How about that sun, huh? Are you enjoying the sun in Shoreline? I can't see anything without these, so I'm going to go with this. So, where are my Android developers? Android developers? Yes. Just like many of you, the S'more S'mores company has a popular Android app; they want to make it as easy to order s'mores as it is to enjoy them. But just like many of you, they face the high cost of driving app installs, coupled with the reality that users are using fewer and fewer apps each year. The sea of icons found on many users' phones might be a contributing factor: it's hard for users to remember your icon, much less find it. What we need is a new way for users to find and launch your Android apps, one that's focused more on what users are trying to do rather than on which icon to click. Last year at I/O we gave a sneak peek at App Actions, a simple way for Android developers to connect their apps with the helpfulness of the Assistant, and with the Google Assistant on nearly 1 billion Android phones, App Actions is a great way for you to reconnect with your users. App Actions uses Google's natural language understanding technology, so it's easy for users to naturally invoke your application. And finally, App Actions doesn't just launch your app, it launches deeply into your app, so we fast-forward users right to the good parts.

To help you understand this concept, let's walk through an example, of course using our S'more S'mores app. Let's first take a look at how this works the traditional way. First, of course, I find the S'more S'mores app in the sea of icons; does anybody see it? Next I select the cracker; okay, that makes sense. Then I have to choose a marshmallow, all right. Then I get to pick the chocolate, and of course a toast level. Okay, I've got to say how many I want, that's important too. And then finally I can review my order and confirm. That's a long intent-to-fulfillment chain; it's a long way from when I first had the desire for a warm, delicious s'more to when I got it successfully ordered, and that means there are opportunities for drop-off all along the way. Now let's take a look at what this looks like once S'more S'mores has enabled App Actions. First, you'll notice I get to naturally invoke the app with just my voice: I can say "order one milk chocolate from S'more S'mores", and immediately we jump right to the good part of this application, confirming that order. Notice we got all the parameters correct, and then we just confirm and we're done; it's ordered. It's a short path from when I had the intent for that warm, delicious s'more to when I got it ordered. But of course we didn't build App Actions just for S'more S'mores; we have a few partners that have already started to look at App Actions. So, for example, I can say to the Assistant,
"order a maple glazed donut from Dunkin' Donuts". Of course, I might need to work that off, so I can say "start a run on Nike Run Club", and I might want to settle that bet from last night by saying "send $50 to Naomi on PayPal". So let's take a look at what enables this technology, what's going on under the covers here. Foundationally, at Google we connect users who express some intent with third parties that can fulfill it, and App Actions is the mechanism you as app developers can use to indicate what your Android app can do. Each app action is a built-in intent, and it represents an atomic thing a user could want to do, including all possible arguments for it, so you just need to implement that built-in intent and handle the arguments we pass to you. The cool thing about built-in intents is that they model all the different ways users might express an intent; for example, these are the ways users could say "start an exercise". Notice that as an app developer you don't need to handle all of this complexity, all these different kinds of grammar; we handle that for you, you just implement the built-in intent.

So, speaking of which, let's take a look at what it looks like for you as a developer to implement that. The first thing you'll do is open up Android Studio and add an actions.xml file. You'll notice on the second line there it says order menu item; that is the name of the built-in intent we have implemented for our S'more S'mores app. Then in the fulfillment line you'll notice a custom scheme URL; you could of course use an HTTP URL as well; this just tells us where in the application we should fast-forward into. Then you'll notice we map the arguments there: the menu item name is the argument name from the built-in intent, and notice our URL is expecting an item name. And finally, at the bottom, we're providing some inventory, the kinds of things users might say for this application; just for brevity I put the usual. Now we just need to handle that in our onCreate function: very simply, we parse that item name parameter out of the URL, we check to see whether it's that identifier, and if so we just pre-populate the UI with exactly what we want. This is a very simple integration that you can get done very quickly on the Assistant.

The good news is that you can build and test with App Actions starting today. We're releasing built-in intents in four categories: finance, food ordering, ride sharing, and fitness. If your app is in one of those categories you can build and test right now, and of course the team is already working on the next set of intents, so if you have ideas or thoughts on which intents would be great, we'd love it if you gave us feedback at this URL. But there's one more thing: App Actions and built-in intents also enable Slices. Slices is a way that you as an Android developer can create a declarative version of part of your application and embed it into Google surfaces like the Google Assistant. In this case we're implementing the track order built-in intent, and you can see inline there, that's our Android Slice showing up right inline in the Google Assistant, making it quick and easy for users to get that information and then launch into your app if they need more advanced functionality. So, what did we see here? You can enable users to invoke your app with just their voice with App Actions; there's a simple integration model where all you need to do is map the intents in your app to a common set of built-in intents; and the good news is that you can build and test starting today, with more intents coming soon.
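To make the fulfillment side of that concrete, here is a small Kotlin sketch of the kind of onCreate handling described above. The smores:// scheme, the itemName parameter, and the identifier are hypothetical stand-ins for whatever your actions.xml fulfillment URL actually declares, and the matching deep-link intent filter in the manifest is assumed.

```kotlin
import android.net.Uri
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

// Hypothetical deep link: smores://order?itemName=milk_chocolate_smore
class OrderActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // ... inflate the normal ordering UI here ...

        // Parse the item name parameter out of the fulfillment URL.
        val data: Uri? = intent?.data
        val itemName = data?.getQueryParameter("itemName")

        // If it's an identifier we recognize, fast-forward the user:
        // pre-populate the order so they only have to confirm.
        if (itemName == "milk_chocolate_smore") {
            prefillOrder(itemName)
        }
    }

    private fun prefillOrder(itemName: String) {
        // Set the cracker, chocolate, toast level, quantity, etc. for this item.
    }
}
```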
So we've seen how you can leverage your existing content with Google Assistant, and we've seen how you can integrate your Android applications with Google Assistant. Now I want to talk about conversation, specifically Conversation Actions: how you can build new, custom experiences for the Google Assistant. Why are Conversation Actions so important? Well, for one, they let you natively control the device's capabilities: if the device has a screen, show an image; if the device supports touch, show a suggestion chip that the user can tap. They let you increase your brand awareness through things like custom dialogue and agent personas. You can grow your user re-engagement through well-crafted conversation design and things like action links. And furthermore, you can drive habits and interactions with features like routines, daily updates, and push notifications. Now, what exactly are Conversation Actions? The idea is that you as a developer have full control over the dialogue, the back-and-forth of what's said. This is distinctly different from App Actions, content actions, or even smart home actions, where you have some type of fixed markup, or maybe you're using a built-in intent, something that already defines the material you're trying to access; Google Assistant takes that fixed markup, ingests it, applies its own natural language understanding, and matches what the user says automatically to that material. With Conversation Actions, that's flipped: you as a developer define custom intents, you define the types of phrases a user might say to match each custom intent, and you even define the information you want extracted out of what they say for those same intents.

So with Conversation Actions we need some type of tool that can help us do this, and that is Dialogflow. Out of the box it provides two key elements: the concept of user intent, or what the user actually wants, and the concept of entity abstraction, the way you glean information out of what they say. Let's dive in a little with a small example. If we take "S'more S'mores, I would like a large s'more with dark chocolate, and I want it to go", Dialogflow can take this phrase as a whole and match it to the user intent of wanting to purchase a snack of some kind. You see a few words highlighted in the sentence: "large", which Dialogflow understands means they want a large snack; "s'more", the type of snack; "dark chocolate", the topping on that snack; and they want it "to go" rather than for delivery or for here. When we take a look at this as a sequence of dialogue and expand it a little more, the user might say something like "hey Google, talk to S'more S'mores". S'more S'mores in this case is the invocation name of your action; Google Assistant takes that audio, transcribes it into text, applies its natural language understanding, and invokes your conversation action. From that point forward, it's you as the developer and Dialogflow controlling the responses back to the user.

So let's take a look at a live demo. Here I have Dialogflow and a few intents I've already defined; I have a small shop where you can order a snack, order whatever you last ordered, or order a gift card. Let's take a deeper look into ordering a snack. When I look at this I have a few things: I have my contexts, and I have the training phrases I've already supplied; these are the phrases that I think, as a developer, the user
might say, and that should match the intent of wanting to purchase a snack of some kind. If I scroll down, I can see the types of parameters and the related entities for those parameters, specifically things like delivery or pickup, the type of snack, the size of the snack, toppings, and so on. If I scroll down further, you'll see the responses I've created as well, which reference the entities I've defined. And if I scroll down even further, you'll see the fulfillment: if I wanted a custom fulfillment, I can have a standard webhook call for this type of intent. Now let's look at an example. If I say "one large s'more with milk chocolate", you'll notice that instantly, without additional input from me, Dialogflow has highlighted several key elements within this phrase: "large", it knows that's the size of the snack I want; "s'more", the type of snack; and the chocolate type of "milk". So there you go, that's pretty powerful stuff. Now let's take a look at it in the frame of a full conversation. If I say "I would like to order a large s'more with dark chocolate", instantly it gets the information: it has the various contexts, it has matched it to the intent of ordering a snack, and if we scroll down it still has the various entities and parameters it has extracted. Now, the default response here is that I've defined a required parameter of delivery or pickup, so it's asking me, "will it be for delivery or pickup?" I respond "delivery", and there you go: it understands you've ordered a large dark chocolate s'more and you want it for delivery. So this is powerful stuff for Google Assistant.

Now let's go back to the slides. What we have is a way to build conversation. Google Assistant supports a huge array of devices and surfaces, and we want to be able to scale across them all. Different devices support different capabilities: a smart speaker is voice-only; a car is voice-forward, it has a screen but you still want voice control; then there's intermodal, like your cell phone; and maybe your smartwatch, which is screen-only. We need some type of feature that can fully utilize the screen, and in the case of S'more S'mores, they want to build a rich game, something that's voice-first with custom full-screen visuals; they want to build a turn-based battle system that spans multiple surfaces and supports all these different kinds of devices. So today I'm happy to announce a brand-new API called Interactive Canvas. This is an API that enables pixel-level control of rendering any HTML, any CSS, and any JavaScript. Let me reiterate: that is a full webview running on the device, and if the device supports it, full-screen visuals, animations, even video playback.

Now, it wouldn't be Google I/O without another live demo, so let's take a look. What I have here is a Hub, and for this demo I'm going to need some audience interaction: I'm going to be playing a trivia game, and it's going to be a really hard question, so I need you all to scream out the answer as loud as you can. "Hey Google, play HQ University." "Here's HQ University." "Yes, that's me, the host: Quiz Khalifa, Host Malone, the Trap Trebek, and I'm here to welcome you to HQniversity. You've been accepted into this elite program to help smarten up and sharpen up your trivia skills to help you win HQ. The rules are very simple: my assistant Alfredo is going to ask you a series of questions that, just like HQ, start easy and get harder. You will respond with the answer you think is correct, and the HQniverse will reward you. Let's get down to the nitty-gritty, let's get this show on the road, baby. Alfredo, begin with
question numero uno. Ready? Question one: if you put on a hoodie, what type of clothing are you wearing? Your choices are cloak, sweatshirt, or cape." So, what's the answer? Sweatshirt! "Yeah baby, you did it." We did it, awesome.

So that is Interactive Canvas, a way you can build full-screen visuals and custom animations. How does something like this work? Well, first off, we start with a regular conversation action: the user says something to the device, that in turn goes to the Actions on Google platform, in turn to Dialogflow, and finally to your custom fulfillment. Your fulfillment supplies a certain type of response; in this case, when we add the web application, the canvas, it supplies an immersive response, which tells the Google Home to load the web app. Your web application in turn includes a specific JavaScript library called the Interactive Canvas API. Let's look at some code for this. What we're looking at here is the custom fulfillment: you can see I have the default welcome intent, and it supplies a new immersive response with the URL of the web application and the initial state of that web application. What does it look like on the web application side? There are two main elements you need to include: the CSS stylesheet, which supplies things like the specific padding for the header on these devices, and the actual JavaScript library. This library manages the state of your web application together with the state of the conversation, so you can control both in unison. So, some key takeaways around conversation: one, Dialogflow is the tool for developers to build custom conversations, where you control the back-and-forth of your dialogue; and two, we've announced the Interactive Canvas API, pixel-level control over the display for games, where you can run any HTML, any CSS, and any JavaScript.

Now I want to switch it up a little bit and talk about smart home: the ability to control any hardware with your voice. Traditionally, smart home has been all about cloud-to-cloud communication. When I turn on my Philips Hue light bulbs, what actually happens is that Google Assistant takes in the audio, transcribes it into text, applies natural language understanding, and sends a specific response to Philips Hue's servers; Philips Hue in turn controls the light bulb. So now I'm glad to announce a new API, the Local Home SDK. This provides local control over your devices, with latency of under 200 milliseconds. It supports local discovery protocols like UDP broadcast, mDNS, and UPnP, and for actually controlling devices, UDP, TCP, and HTTPS. Now, with smart home there are device types and device traits; it supports all of them out of the box, with the exception of two-factor authentication. My favorite part is that it's come-as-you-are, meaning you don't need to change the embedded code on your device for these messages; what actually happens is that you develop a JavaScript application that runs on the Home device, and that's pretty awesome. So let's take a look at how this works. When the user says something like "hey Google, turn on the lights", again that audio is sent up to Google Assistant, which transcribes it into text and applies natural language understanding; Google Assistant then creates an EXECUTE intent and sends this intent, this structured JSON, down to the Google Home, where the Home is running your JavaScript application. Your JavaScript application in turn understands the intent to turn on the device with that exact device ID, and constructs the specific message your device supports, in turn turning on the light.
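The local fulfillment app itself is written in JavaScript or TypeScript against the Local Home SDK, but just to make the shape of that structured EXECUTE intent concrete, here is a rough, illustrative Kotlin model of a subset of its fields (kotlinx.serialization assumed as a dependency). This is a sketch of the JSON shape only, not the SDK's own types.

```kotlin
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json
import kotlinx.serialization.json.JsonObject

// Rough model of a subset of the smart home EXECUTE intent, e.g. for "turn on the lights":
// command "action.devices.commands.OnOff" with params {"on": true}.
@Serializable data class ExecuteRequest(val requestId: String, val inputs: List<ExecuteInput>)
@Serializable data class ExecuteInput(val intent: String, val payload: ExecutePayload)
@Serializable data class ExecutePayload(val commands: List<Command>)
@Serializable data class Command(val devices: List<Device>, val execution: List<Execution>)
@Serializable data class Device(val id: String)
@Serializable data class Execution(val command: String, val params: JsonObject = JsonObject(emptyMap()))

// Parse the JSON the platform sends down, ignoring any fields this sketch doesn't model.
fun parseExecute(json: String): ExecuteRequest =
    Json { ignoreUnknownKeys = true }.decodeFromString(ExecuteRequest.serializer(), json)
```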
I also want to show more traits and more types today. We're always adding more device types that we support with the smart home APIs, and today we're releasing even more, with traits like open/close, start/stop with zones, and lock/unlock, and even device types like door, boiler, and garage door. Again, we're adding these all the time, and today we're releasing a huge number more. So now I want to show how S'more S'mores can use smart home. What I have here is a toaster oven; some of you might have already seen this up here and wondered what it's for. I have a toaster oven with some s'mores inside that I want to eat, and I want to toast them perfectly. I also have a Google AIY Vision Kit, which has a Raspberry Pi Zero inside, and I'm using it to control the power to this toaster oven. So let's take a look. "Hey Google, turn on my s'mores toaster." "Okay, turning s'mores toaster on." Awesome. So there you have it: a smart home device being controlled by voice with Google Assistant. Let's recap some key takeaways: one, we announced the Local Home SDK, where you can control real-world devices over local Wi-Fi with your voice; and two, we've announced a huge number of new traits and device types, available today, that you can use with your own devices.

So, what to do next? We have covered a lot today, but we are so excited to share with you all the ways you can get started building your first action. To quickly recap: you can use your web content and leverage what you're already doing in Search; you can use App Actions and leverage the Android ecosystem you're already participating in; you can build a custom experience for the Assistant by building a Conversation Action; or you can extend your hardware and build smart home actions to control the devices around your home. But this is just the beginning. There are 12 more sessions this week that dive into topics we only had a chance to introduce in our time together today. These additional talks are geared toward Android app developers, web developers, hardware developers, or anyone who just wants to learn how to build with conversation or wants some insights into this new ecosystem, so please check them out live, watch them on the livestream, or of course tune in later on YouTube. For those of you here with us this week, we have a sandbox out back, office hours, and a codelab. Now, I think I heard our toaster is ready, so it's time for us to go enjoy some s'mores. But visit our developer site, talk to us on Twitter, and we can't wait to see the rest of I/O. Thank you so much for joining us today. [Applause]
