WWDC 2020 Special Event Keynote — Apple

["Daydreamer" by Aurora playing] Good morning. And welcome to WWDC. WWDC is an incredibly important event
to Apple, our developers and our users. It's here that we bring
some of our biggest innovations to life. And we have not stopped innovating, doing the work that will enrich
people's lives for years to come. Because we're all looking forward
to a more hopeful tomorrow. That's why we believe it's so important
to have WWDC this year. And while it cannot possibly
feel the same in here without you, I can assure you
that we have a great show ahead of us. This year we're delivering the conference
in a whole new way to all of you around the world,
directly to your home. And we want to welcome you
to our home here at Apple Park.

I'd like to first talk to you
about two big things that are happening in the world right now. To start, I want to address the topic
of racism, inequality and injustice and to recognize the pain being felt
throughout our nation, especially in
our Black and Brown communities, after the senseless killing
of George Floyd. And while the events of this past month
are sadly not new, they have caused us to face long-standing institutional inequalities
and social injustices. This country was founded on the principles
of freedom and equality for all. For too many people and for too long,
we haven't lived up to those ideals. We're inspired and moved by the passionate people around our nation
and around the world who have stood up to demand change. We must all aim far higher to build
a future that lives up to our ideals.

This means taking action. Two weeks ago, we announced Apple's
Racial Equity and Justice Initiative with a commitment
of one hundred million dollars. Starting in the United States,
and expanding over time, this initiative will challenge
systemic barriers that limit opportunity
for communities of color in the critical areas of education,
economic equality and criminal justice. We also announced something important
for this community, the new Developer Entrepreneur Camp
for Black developers. We want to do everything we can to foster
the brightest lights and best ideas. At Apple, our mission has always been
to make the world a better place, and we're committed
to being a force for change. Right now our world is
also battling a virus that is affecting
the daily lives of billions of people.

We want to thank the dedicated people
everywhere, especially our health-care workers, who have made tremendous sacrifices
to take care of those in need. We've also seen the profound impact
our products have had. People are relying on them more than ever
to remain connected to family and friends, to do their work,
to express their creativity, to be entertained
as well as to entertain others. Today the world is counting on all of us, and on the products and experiences
that we create, to move forward. Because throughout history, great challenges have been met with great
creativity and important breakthroughs. That's why we're so excited
about this year's conference.

This is going to be truly a unique week, delivered unlike any
that we've done before. There will be more than a hundred
engineering-led video sessions, one-on-one consultations
with Apple engineers, developer forums and so much more, delivered to you from different locations
right here at Apple Park. And this year,
the conference will be available to our entire community
of 23 million developers, as well as anyone who is interested,
for free. Presenting the conference in this way
allows us to be more inclusive than ever. Perhaps this will inspire
the next generation of developers. So even though
we can't be together in person, in some ways we're going to be
more together than ever.

Today we're going to push
each of our platforms forward in some exciting and breakthrough ways. With that, let's get started
by sending it over to Craig. Good morning. Great to have you here. As you can see, we've got a lot to cover,
so let's get started with iOS. Together with iPhone, iOS is central to how we navigate
our lives and stay connected. And now we're making it
even more powerful and easier to use. Our new release is iOS 14. This year we've spent time rethinking some of the most iconic elements
of the experience on iPhone. Now it all started here, with a carefully considered Home Screen
that has truly stood the test of time. Of course, over the years, we've kept
the fundamentals largely the same but carefully added features
like folders for organizing your apps, widgets for quick information and personalized experiences
powered by on-device intelligence that serve up just the right thing
at just the right moment.

It's hard to imagine iPhone
without these features now. Well, that brings us to this year. We're doing more on our iPhones today
than ever before, so we've rethought some of
the core elements of iOS to reflect this. Let me give you a quick peek. ["Like This Like That" by
BEGINNERS & Night Panda playing] This is gonna be amazing.

Let's dig in,
starting with the Home Screen. Today's Home Screen works great,
but as we get more and more apps, we can end up with this: lots and lots of pages. And we tend to forget
what's beyond the first couple. Wouldn't it be great if there were a way to organize
all of those apps without doing a thing? Well, this year we're doing just that
with something called the App Library. It's a new space
at the end of your Home Screen pages that automatically organizes all your apps in one simple and easy-to-navigate view. Let me show you. Here's my Home Screen. Now, like you, I have muscle memory built
for the first page or two, but when it comes to all of these pages, well, honestly, I've lost track
of where a lot of things are. And that's where the App Library comes in. You can see that all of my apps
are automatically organized here. In fact, now with the App Library, I actually don't need all those pages
for all my apps. So we created an easy way
to hide app pages.

I just go into jiggle mode,
tap the dots at the bottom, and check this out, I get a zoomed-out view
of all my app pages. I can simply tap
to hide the pages I no longer need. Just like that. And now with those pages hidden, App Library is always
just a swipe or two away. So here in App Library, getting to the app
I'm looking for is really easy. Up at the top I have the Search field, and I get all my apps
organized from A to Z. Now over here on the upper left
I have Suggestions. It uses on-device intelligence to show me the apps
that I'm likely to need next. And on the right is Recently Added, giving me access to the apps that I've
recently downloaded from the App Store. And below are
intelligently curated categories. So I can tap into a category
like Apple Arcade and see all of my apps in that category. Now let's go back. You may notice
that in each of these categories the apps I use most are right here
at the top level, so I can launch one of these directly
with just a tap.

So that's App Library. We think this is gonna make it
easier than ever to get to your apps. Next, let's turn to Widgets. Today, widgets help you get information
at a glance. But a lot has changed
since we first introduced these. Now we have Apple Watch, where we're able
to surface so much information on a small screen
that you wear on your wrist. Well, this year,
we're taking all that we've learned to create a completely reimagined
experience for widgets.

To start,
they're more beautiful and data rich. And we're introducing different sizes, so you can choose one
that best fits your needs. Let's take a look at them in iOS 14. So let's swipe over to Today View
and take a look at our new widgets. They're just beautiful. And the new designs are
more data rich than ever. And you can see
they now come in a variety of sizes. So you can pick just the right level
of information for each one. Now we like these new widgets so much, we wanted to make them
even more accessible. So check this out. I'm just gonna tap and hold
on the Weather widget, and I can drag it out of Today View
and onto my Home Screen. And watch, as I move it around, the apps just dance out of the way
to make space for my new widget. Let's add a second one. Just gonna tap the plus
here in the upper left and grab on to Podcasts. I can drop it just like that.

Now I'm gonna swipe over to page 2 here
and bring back up the Widget Gallery. The gallery is a great place
to explore widgets. Now when I tap on one, I can actually page through
all of the different sizes available. Just like this. But, you know, right now what I want to do
is grab this widget up top. It's a really special one.
It's called the Smart Stack. Just gonna tap it and drop it here. With the Smart Stack,
I can easily swipe through widgets to pick just the one I want
for the moment. But what's really cool is that the Smart Stack can
actually do this for me automatically. So in the morning,
I can get my news briefing. Throughout the day,
find out when I have a meeting coming up. And in the evening, I might get
a summary of my activity for the day. So that's widgets on the Home Screen. We're excited to see how everyone
will customize them in their own way.
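For developers, the small, medium, and large sizes shown in the demo correspond to WidgetKit's widget families. Here is a minimal sketch of a widget declaring all three; the names (`ClockEntry`, `ClockProvider`, "SimpleClock") are illustrative, not from the keynote:

```swift
import WidgetKit
import SwiftUI

// A timeline entry carrying the data the widget displays.
struct ClockEntry: TimelineEntry {
    let date: Date
}

// Supplies entries to WidgetKit; here, a single entry refreshed when it ends.
struct ClockProvider: TimelineProvider {
    func placeholder(in context: Context) -> ClockEntry {
        ClockEntry(date: Date())
    }
    func getSnapshot(in context: Context, completion: @escaping (ClockEntry) -> Void) {
        completion(ClockEntry(date: Date()))
    }
    func getTimeline(in context: Context, completion: @escaping (Timeline<ClockEntry>) -> Void) {
        completion(Timeline(entries: [ClockEntry(date: Date())], policy: .atEnd))
    }
}

struct ClockView: View {
    var entry: ClockEntry
    var body: some View {
        Text(entry.date, style: .time)
    }
}

@main
struct SimpleClockWidget: Widget {
    var body: some WidgetConfiguration {
        StaticConfiguration(kind: "SimpleClock", provider: ClockProvider()) { entry in
            ClockView(entry: entry)
        }
        .configurationDisplayName("Clock")
        .description("Shows the current time.")
        // The three sizes users can pick from in the Widget Gallery.
        .supportedFamilies([.systemSmall, .systemMedium, .systemLarge])
    }
}
```

A widget like this appears in the Widget Gallery at each declared size, and Smart Stack rotation works over whichever widgets the user stacks; the system, not the widget, decides which one to surface.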

Next, we're also bringing
Picture in Picture to iPhone. So you can access apps on your iPhone while watching video
or talking on a FaceTime call. Let me show you. So here on my Home Screen, the Smart Stack
is showing me the TV widget. So I can just tap to start playing a show. Now check this out.
When I swipe to go Home, the video automatically goes
into Picture in Picture right over the Home Screen. And when I launch another app, like Notes,
I can keep watching. Now I can drag the picture
to another part of the screen. If I want to make it bigger,
I can even pinch to zoom.

And as I move between applications
it stays with me. And what's cool is
I can also swipe it to the side, and the audio keeps playing
when it's off-screen. Now here on the Home Screen
I can bring it back out if I want. And I have controls
to get back to full-screen playback, or I can just tap the "X" to close it. And that's Picture in Picture video. It's a great way
to continue enjoying your video while tapping into everything else
your iPhone can do for you.

And that's a quick look at the updates
to the core elements of iOS. We think these new features are gonna
make iOS even more helpful in the moment. Another iconic experience
that's getting a major update is Siri. As much as Siri has advanced
over the years, the visual interface for interacting
with it has remained largely unchanged. When you use Siri, your iPhone
switches to this full-screen UI, obscuring your current context. So this year, we've completely redesigned
the Siri experience with a new compact design. It makes tasks like launching apps
incredibly seamless. For example, if you say, "Open Safari," Siri pops up at the bottom of the screen
and instantly launches the app. Or if you ask for information,
like the weather, results appear at the top of the screen
just like a notification.

Now this is especially great when there's information
you want to reference on-screen. For example, you could ask Siri
to add to your grocery list. So that's the new Siri design in iOS 14. But the UI is only part of the story. To tell you more about
how we're making Siri smarter than ever, I'll hand it off to Yael Garten. Thanks, Craig. Siri's getting smarter
and even more helpful every day, and I'm really excited to share
the latest updates with you. Siri helps you in a ton of little ways
every day: playing the morning news,
ordering a coffee, getting directions, setting the alarm before going to bed
and so much more.

In fact, Siri's helping so many of you with a staggering 25 billion requests
each month. And Siri's getting more helpful every day. Siri's always been great
for getting information and now has over 20 times more facts
than just three years ago. For more complex questions like "How do hybrid cars work?"
or "What causes seasons?", we recently introduced answers
from websites across the Internet, enabling Siri to help you
find even more answers.

Another way Siri helps is
with communication, like sending messages. This year, you can now ask Siri
to send an audio message, and Siri will start recording. This is great when you really want to have
the emotion of your voice come through. Another popular way to send messages
with your voice is using dictation. Keyboard Dictation uses
the same speech recognition as Siri. And leveraging
the power of the Neural Engine, we are now able
to run dictation on-device. This provides great accuracy and privacy. When communicating with someone in another
language, Siri can help with translations.

This year we're expanding
to support many new language pairs. This is hugely popular, but we know our users want
more than just translating phrases. They want to have entire conversations. And we believe conversations
between languages should feel natural and easy
and have the ability to stay private. That's why we're introducing
a new app called Translate. It is designed to be the best
and easiest-to-use app for conversations. And it can work completely off-line,
keeping your conversations private. Using advanced on-device machine learning
and the powerful Neural Engine, you can translate your text and voice between any combination
of these 11 languages.

So I could have a conversation
with someone in Mandarin, and they could have a conversation
with someone in Russian. Just tap on the microphone and say,
"What are your store hours?" [Siri reads Spanish translation] [Yael] You get back
the text and audio right away. And just turn the phone to landscape
to open conversation mode. We've designed a side-by-side view that's easy for two people to know
which side to follow in the conversation. This mode is incredibly intuitive,
with just a single microphone button because the app intelligently detects
the language spoken and shows translation
on the correct side of the screen. Translate will make communicating
between languages easier than ever before, connecting people in new ways.

And we can't wait for you to try it. Thanks, Yael. Next up, Messages. Messages is how many of us communicate
with people most important to us. And now we're using it more than ever. Compared to just a year ago, we have a record number of users
sending a record number of messages. And we've seen people use Messages
more and more to keep in touch
with their closest groups. This year, we're introducing a new way to stay connected
with your most important conversations, giving you new ways
to express your identity with Memoji and making big changes
to how we communicate in groups. To tell you more, here's Stacey Lysik. Thanks, Craig. First, let's get started
with Conversations. From the beginning, Messages was designed to make it really easy
to get to your newest messages.

But with so many active conversations, sometimes it can be tough to get to
the ones that are most important to you. So we are introducing a new way to let you stay connected
to your most important conversations: by letting you pin them
at the top of your list so you can always get to them. And you can see messages as they come in
with a beautiful animation on the pin. Next, let's talk about expressing yourself
with Memoji. There are over one trillion ways
to customize your identity with Memoji. In iOS 14, we're adding even more ways
to create your look with over 20 new hair and headwear styles to let you reflect your hobby, profession
and personality. We've also added something that's
even more relevant today: face coverings. And we're adding more age options too. My favorite way of using Memoji
is with Memoji stickers. And now we have
three brand-new Memoji stickers that let you send a hug, a fist bump
or even a blush to your friends.

Last, let's chat about groups. When you're talking to a group,
sometimes there's so much going on, it can be hard to keep track
of the conversation. So this year, we're gonna help you
bring order to the chaos. First we're adding inline replies that let you reply
directly to a specific message. You can view replies
in the full conversation or you can view them as their own thread
so you can focus in on the specific topic.

To make it even more clear
who a message was meant for, we're introducing mentions. With mentions,
you can just type someone's name to direct a message to them. And now you have the ability
to only be notified when you're mentioned
in the group conversation. And check out
the top of this conversation. We have an all-new design
for how groups appear. It lets you see
all the members of your group, where the most recently active people
are shown largest. And, for the first time ever, you can create a unique visual identity
for your group by setting a group photo or customizing your group's look
with an emoji. Inside the conversation you see group members' photos
around the image. Of course it looks great as a pin. You know who's most recently commented
in the group because their photo will appear
around the outside of the pin. And that's what's coming to Messages
in iOS 14: all-new pinned conversations,
fun updates to Memoji and powerful improvements to groups. Thanks, Stacey.

Next, let's take a look at features
that help us while we're out and about. Now we know that life looks very different
for many of us right now, but it won't always be this way. And as things start to open up, we have a new set of features
that will help us explore the world again, starting with Maps. Apple Maps is the best way to navigate
and explore the world, all while protecting your privacy. Over the past several years,
we've added many great features, and of course we've been
rebuilding our map from the ground up.

Our new map finished rolling out
across the US earlier this year and brought with it better navigation
and far richer detail for roads, pedestrian paths,
landcover and more. The new map also offers
more accurate information for places and allows us to build incredible features
like Look Around. Maps has come a long way,
and people have noticed. Just look at this quote from Fast Company. "Apple Maps has improved
by leaps and bounds and is a formidable rival to Google Maps. It's also arguably got the better UI, and by far– by far–
the better privacy policy." We're excited to announce we're bringing our new map
to more countries later this year, including the UK, Ireland and Canada. In addition to the rich detail
and improved accuracy, the new map serves as the foundation
for many great new features. In iOS 14, we're adding things that will make it easier
for people to find places they love and help them get to where they're going in ways that are better
for the environment. To tell you more,
I'll hand it off to Meg Frost. Thanks, Craig.

First, let's talk about
finding great places. We have millions of people
coming to Maps every day to discover great new places, whether they're planning
their next big vacation or just looking for
something to eat nearby. In iOS 14, the Maps team will be working with
some of the world's most trusted brands to offer amazing Guides. Guides for great places to eat, shop, meet friends or explore
in cities around the world. You can save Guides
so you can easily get back to them later, and the best part is they automatically
update when new places are added, so you always have
the latest recommendations.

In addition to helping you
discover great new places, Maps helps you get there
in a way that's better for the planet. For years,
Maps has made it easy to navigate using environmentally-friendly options
like public transit and walking. With iOS 14,
we're introducing great new features to help our users
reduce their carbon footprint, and our first one is
also our most requested: It's Cycling. We've built an incredible
cycling experience that helps you get around town
on your bike. We're adding a dedicated cycling option
to Maps which allows users to ride their bike
along bike lanes, paths and roads. Maps takes elevation into account to let you know if you're in for
a challenging uphill workout or a leisurely flat ride. You can also see if your route
includes quiet or busy roads.

We'll even let you know
if you have a steep passage coming up or if you'll need to carry your bike
up the stairs. You can also choose
to avoid stairs altogether. With iOS 14, we're bringing cycling
to New York City, LA, the San Francisco Bay Area, along with a number of cities in China
like Shanghai and Beijing. And we'll be adding many more cities
in the coming months. For environmentally conscious drivers,
we're also introducing EV routing. If you have an electric car, Maps is going to help eliminate
range anxiety.

With iOS 14,
Maps will track your current charge and factor in things
like elevation and weather to automatically add charging stops
along your route. And Maps will know
which type of charger works for your car, making sure to only route you
to compatible stations. We're working with a number
of manufacturers to support EV routing in their vehicles, including BMW and Ford, and we'll be adding many more
in the near future. Cities around the world are also working
to improve air quality and reduce traffic, so we're adding
congestion and green zones to Maps to easily see where they are
along with alternate routing options. In addition, drivers in China can securely store their
license plate number on their iPhone, and Maps will let them know which days
they can enter congested city centers based on that number. And those are just some
of the great new features coming to Apple Maps in iOS 14, making Maps the best product to help users explore
and navigate their world. Thanks, Meg. And now, on to CarPlay, which has transformed
the driving experience for iPhone owners by being the smarter, safer way
to use the apps you love in your car.

CarPlay is everywhere,
and it's incredibly popular. Here in the US,
it's available on basically every new car. And worldwide, it's available
on over 80% of new cars sold and has quickly become the default
in-car experience for so many people. People love CarPlay, and we get
some really passionate reactions. Joanna Stern says
it makes her life "infinitely better." We have some great updates for iOS 14. First up, we have new wallpaper options
perfect for the car. And we're adding support
for new categories of CarPlay apps: parking, EV charging
and quick food ordering. In addition to this,
we're really excited for the next step in how we're transforming
your relationship with your car by rethinking car keys. They've been around
for over a hundred years, but they've become big, bulky
and ripe for reimagining. To tell you more about
what we have planned, let's go to the garage
with Emily Schubert.

Thanks, Craig. I'm excited to introduce
a digital version of car keys. Now you can leave your keys at home and unlock and start your car
with your iPhone. And the very first car to support this
will be the new 2021 BMW 5 Series. Let me show you how it works. It's super simple. -It uses NFC, and you just tap to unlock.
-[beeps] And I place my phone on the charging pad
and then push to start.

[chimes] But this goes beyond just one less thing
you have to keep in your pocket. Digital keys have security benefits. They live in the Secure Element
of your iPhone, and if it goes missing, you can turn off
your keys remotely via iCloud. They're even easier to share
than a physical key. Copies don't involve
trips to the dealership. And you can share
from wherever you are with iMessage. Let's give Craig a key so he can
drive home after we're done here. With each key you share,
you can set options, like a restricted driving profile,
perfect for teen drivers. Which is tempting, but we'll give Craig full access. [phone chimes] -[phone chimes]
-Full access? Thanks, Emily. The new BMW will be available
to customers next month. In addition to adding this feature
to iOS 14, we're also enabling it in iOS 13, so customers can use their car keys
even sooner. Of course,
we want this to work in any car, so we've been working on standards
with industry groups. And this is just the beginning. We're working on technology
that will leverage our U1 chip, which uses Ultra Wideband technology
for precise spatial awareness.

So you'll be able to leave your iPhone
in your bag or pocket and still securely
unlock and start your car. We expect to see support for this standard
starting in new cars next year. Now, let's turn to the App Store. Twelve years ago, we revolutionized the industry
with the launch of the App Store. Today we have so many amazing apps
that offer a rich set of experiences, we can truly say
that for everything we want to do, "There's an app for that." So now it's time for us to extend
the success of the App Store and make apps available and accessible
in whole new ways.

What if you could have the right app
you needed at just the right moment? Let's look at what it would be like
if you did. [narrator]
Today, no matter what you want to do,
there's an app for that. But what if you don't have the app
you need right when you need it?
Like when you need to pay for parking.
Well, now there's an App Clip for that.
[car chirps] Ooh. Look, a new coffee shop. There's an App Clip for that. Or a friend sends you a message
with a print you like.
There's an App Clip for that. That looks nice. Looking for somewhere to eat nearby?
App Clip.
Very health conscious of you. Hey, there's a scooter.
Let's take it for a spin.
Yep. App Clip. Mmm. Ice cream. Wait.
There isn't one for that yet?
Well, soon there could be
an App Clip for that.
And that. And that. And that. And that.
-[martial artist] Hi-yah! An App Clip is a small part of an app.

It's light and fast and easy to discover, so you can quickly get what you need
right when you need it. Everything about App Clips
is designed for speed. They start with this card
which quickly pops up. And with just a tap,
you can launch the App Clip. You don't need to enter
credit card numbers because App Clips
can use Apple Pay for payments. And you don't have to manually
log into an account because it can take advantage
of Sign in with Apple.

App Clips won't clutter your Home Screen and will only stay around
as long as you need them. But you can easily launch recently used
App Clips from the new App Library. It's always easy to download the full app, and this makes App Clips
an easy way to discover more of what the App Store has to offer. And discovery is key. App Clips are all about getting to a part
of an app at the moment you need it, so it was critical that we made them
really easy to find. App Clips can be easily discovered
and launched from the web. You can launch App Clips from Messages
when friends share them with you. When you want to order takeout
from a restaurant in Maps, you can launch an App Clip
right from a place card.

You'll be able to tap on NFC tags out in
the world, on things like parking meters. Or you can scan QR codes to launch App Clips
that work with products you purchase. The best way to discover App Clips will be
with the new Apple-designed App Clip code. So when you see one, you'll know
that there's an App Clip waiting for you. They incorporate
both a visual code and NFC, so you tap on them or scan them with
the camera to bring up an App Clip. App Clips will be great for businesses
that already have apps. But we want to be able
to use App Clips everywhere, including smaller spots
that may not have their own app. So we made it possible for apps like Yelp,
which support multiple businesses, to create App Clip experiences
for each of the places they work with. Developers will create App Clips
from a part of an app, using Xcode and the full power of the SDK. To ensure that they launch quickly, they'll need to be
less than 10 megabytes in size.
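The invocation flow described here (an NFC tag, QR code, or link brings up the card, and the clip launches with context) reaches the developer as an `NSUserActivity` carrying the invocation URL. A hedged sketch follows; the domain and the `meter` query parameter are made up for illustration:

```swift
import SwiftUI

@main
struct ParkingClipApp: App {
    @State private var meterID: String?

    var body: some Scene {
        WindowGroup {
            Text(meterID.map { "Pay meter \($0)" } ?? "Scanning…")
                // App Clips launch with a browsing-web user activity whose
                // webpageURL is the invocation URL from the tag, code, or link.
                .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { activity in
                    guard let url = activity.webpageURL,
                          let components = URLComponents(url: url, resolvingAgainstBaseURL: true)
                    else { return }
                    // e.g. https://example.com/park?meter=451 (hypothetical URL)
                    meterID = components.queryItems?
                        .first(where: { $0.name == "meter" })?.value
                }
        }
    }
}
```

Parsing the URL up front is what lets a clip land the user directly on the right parking meter or menu rather than a generic start screen, which is the whole point of the "right app at the right moment" pitch.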

And that's App Clips. Immediately discoverable, small in size, so they launch fast, integrated Apple Pay for easy payment, Sign in with Apple
for quick and privacy-friendly login and the option to download the full app
from the App Store when you want to keep it around. We can't wait to see all the App Clips
developers will create. And that's iOS 14. It's a huge release that transforms
the core experience of iPhone, with redesigned widgets
right on the Home Screen and a new way to organize your apps
with the App Library. It adds incredible updates
to some of the most popular apps, with powerful improvements
to Messages and Maps, and introduces a whole new way
to tap into apps with App Clips.

And next up, iPadOS. ["S.S. Luker's Mom" by Oh Sees playing] Oh, hey, we made it. All right. Well, let's jump right in. iPadOS builds on
all the amazing features of iOS while adding unique capabilities that deliver a distinct experience
for iPad. Like using Apple Pencil for taking notes,
markup and illustration. A reimagined trackpad experience
in a whole new way. And unmatched AR experiences with ARKit and the amazing new LiDAR Scanner
in iPad Pro. All of this combines to put iPad
into a class of its own. iPad excels at "every type of input" and is a "product to do anything
and everything." Which brings us to this year
and our new release, iPadOS 14. Let's start with experience. This year iPadOS delivers
unique made-for-iPad designs that take great advantage
of the iPad's large, multi-touch display. Now iPad has always been about the apps. In the beginning, we focused on
giving the ecosystem of iPhone apps a larger canvas
to deliver new and unique experiences.

This quickly sparked
an entirely new set of apps, designed for iPad first, with immersive experiences that transform this magical sheet of glass into whatever you needed it to be. We're proud of the over one million apps
on the App Store today designed just for iPad. With customers continuing to push
their iPads further than ever before, we're extending
the design language of iPad to make apps more streamlined
and more powerful. To give you a live look
at these enhancements, I'll hand it over to Josh Shaffer. Thanks, Craig. Let's take a look at
some of the enhancements to iPadOS. The first thing that you'll notice are the same redesigned widgets
that you saw in iOS 14.

They look great on iPad as well, and they give you information at a glance
whenever you go Home. But let's see some of the improvements
in the apps, starting with Photos. iPad is the perfect device
for browsing your photos. Its large canvas lets you immerse yourself
in all your favorite memories. And this year, we're making it even easier
to browse and organize your photos with an all-new sidebar. With just a tap of this button,
I can reveal the sidebar, with all the core functionality of the app
in a single location. My photos remain front and center, but now I can quickly tap to move
between parts of the app. The sidebar is a really powerful way
to organize your photos too. I can easily drag a photo to the sidebar and then just drop it
to add it to an album. We've brought this sidebar to many apps
across iPadOS.

Like Notes, where it provides quick and easy access
to all your folders. And Files, where we've consolidated
navigation into the sidebar for a streamlined new design. We've also streamlined the toolbars, adding new drop-down menus
that consolidate functions into a single, easy-to-access button. I can just tap to change views like this. And for even quicker access, I can just tap and drag to change
the sort order, all in a single motion. You'll find this same approach
across other apps, like Calendar, where we've brought controls
into a single toolbar at the top, providing more space for your content and a single unified place to access
all the app's functionality.

Finally, Music has been updated to take even better advantage
of iPad's large screen. The sidebar in Music makes it easy
to move between views. I can quickly jump between
the new Listen Now and my playlists. -And once I start playing a song…
-["Caution" by The Killers playing] …I can bring up the brand-new
full-screen player, where I can see rich album art,
transport controls and lyrics, -all in one single view.
-[song fades] And these are just
some of the enhancements that are coming to apps in iPadOS. We're really loving these new app designs. But there's more, starting with Siri.

The new compact Siri design
that you heard about in iOS 14 is especially useful on iPad. Results appear at the bottom right corner, allowing you to easily reference the app
while using Siri. And we applied this same approach
to other parts of the experience… like calls. Now today when you receive a call
on iPad, you see this. [beeps] Whatever you were working on
is suddenly completely covered with the incoming call screen. Not cool. Wouldn't it be nicer
if instead you saw this? [beeps] Well, that's much better. Now an incoming call is presented
with a compact notification that doesn't take you out of context. And you can simply tap to answer
or flick it away to dismiss. And this applies to all calls,
including those from your iPhone or third-party VoIP apps like Skype.

[rings] And of course we're bringing this
to iOS as well. We think our iPhone customers
are going to love it. Now there's one more key experience
we've redesigned for iPad this year, and that's Search. Today Search is a full-screen experience, and sometimes you can lose track
of your context. So we've redesigned Search
with a new, compact design. You can start a search from anywhere,
like the Home Screen or over any app.

And this makes it easy
to find what you need without feeling like you've left the app
you're working in. But we didn't just redesign it. We've rebuilt Search from the ground up
to be Universal, becoming a single destination
where you can start all of your searches. First, we made it better than ever
as an app launcher. You just start typing a few characters, and you can instantly get
to where you're going. It's also great for finding contacts
to message or call, or for finding documents. You can even search directly into apps
like Keynote, Messages, Mail or Files. Or look up information
about people or places. And it's also a great place to start
all of your web searches as well. As soon as you start typing, you get relevant suggestions
to complete your search. And you can get to your web search results
with just a tap. And Search now makes navigating
to your favorite websites just as easy as launching an app.

Just type a few letters and the top hit
will take you right to Safari. So those are some of our updates
to the iPad experience. And of course,
you also get the new widgets and all the other great app enhancements
from iOS 14. Next, we want to push forward your ability
to express yourself creatively with improvements to Apple Pencil. Apple Pencil is a game-changing tool that turns iPad into
a professional drawing canvas, a great way to mark up and sign documents and the ultimate note-taking device. What many people love most
about taking notes with Apple Pencil is how they can express themselves
in a free-form way. Mixing handwriting and drawings can be
the best way to capture your thoughts. Now the challenge is
when you want to change things afterwards. Here, working with handwriting
just isn't as easy as with typed text. Now, we sometimes take it for granted, but with typed text
it's so easy to select, copy and paste into another document, or even just make space for more text.

Well, this year, we're going to make
handwriting just as easy and just as powerful. But that's not all. Our customers tell us that once they have
an Apple Pencil in their hand, they don't want to put it away. So this year,
we're bringing Scribble to iPad. So you can handwrite into any text field, and it will automatically
be converted to text. To show you all of this in action, I'd like to welcome Jenny Chen
for a live demo. Thanks, Craig. I'm really excited to show you
some great new features we have for Apple Pencil and iPadOS this year. One of the great parts about taking notes
with the Apple Pencil is that it really lets you work
in a free-form way.

I can just start writing anywhere. And it's not just about text. I can also express myself
with drawings or shapes. But sometimes you want that
more professional, cleaned-up look. So now, when I draw a simple shape
and pause at the end, it'll automatically convert
to that ideal shape. And we're smart about it,
retaining the same size and angle that you drew it at. In addition to shapes, we've also made huge improvements
to our handwriting recognition. So now, when I write,
I can easily make a selection using the same gestures
that I use for typed text. I can double tap to select a word
or double tap again to select a line.

Thanks to our advanced,
on-device machine learning, you'll notice how we can
select the handwriting while avoiding the drawings nearby. Now that I have the selection, I can easily change the color
or move it around the document. It's also perfectly easy for me
to make space for more room to write. We think that this will make note-taking
with the Apple Pencil even better. Now, you don't even have to put it down
when you want to do something else. Let's say you want to search
for "Edison bulbs" in Safari. Using Scribble, I can just write
directly into the text field…

And it automatically
gets converted to typed text. It also works in any text field, so I can easily add a new Reminder to my shared Reminders list
with my husband. I've also been learning Chinese, so I want to surprise him
with some of my progress and skills. I'll use Scratch to delete "lights." And then I can use Scribble to write "new"
and then "light fixture" in Chinese. You'll notice how Scribble recognizes
both English and Chinese in the same line. And what's awesome is that
we can build on this technology to deliver other great features,
like Data Detectors. We can automatically detect
what you write, like phone numbers,
so I can make a phone call.

Or addresses, so I can look up directions. We can use these features together
to do even more with your handwriting. Let's say I wanted to use my handwriting
in another app. I can easily select what I want, tap the new Copy as Text
from the callout bar, and then paste it into an app like Pages. And it's automatically converted
to typed text. We're really excited
about these awesome new features. And we think it will let you do
even more with the Apple Pencil. Thanks, Jenny. So those are the enhancements to Pencil. Just one part of an amazing release, with Scribble for handwriting
into any text field, a whole new way to work
with your handwritten notes, broad enhancements to the app experience, and of course, iPad users also benefit from the great features
you already saw in iOS 14 and much more that
we didn't have time to talk about.

So that's iOS and iPadOS. Next, let's talk about AirPods. From the one-click setup to the automatic pairing
with all your devices to how they pause your audio
when you take them out, people love the magical experience
that AirPods deliver. Now we're bringing even more magic
to AirPods. To tell you all about it,
here's Mary-Ann Ionascu. Thanks, Craig. We have some amazing updates
coming to AirPods, starting with Automatic Switching.

AirPods will now seamlessly move
between your devices without you having
to manually switch them. Let's say you just finished
listening to a podcast and you pick up your iPad to watch a show. AirPods will magically switch over. And later you start a video conference
on your Mac. AirPods will automatically switch again. -[ringing]
-And if a phone call comes in, the audio in your AirPods
will route right back to your phone. We also have an exciting new feature
coming to AirPods Pro: spatial audio.

You know the experience
of being in a movie theater with a state-of-the-art
surround sound system, one where the sound
not only comes from in front of you… -[tone pulsing]
-…but also from the left, the right, behind, and even from above you? Well, we are thrilled to bring that same
immersive experience to AirPods Pro. But it turns out it's a lot harder to do when you only have a single earbud
in each ear. So our team created advanced
spatial audio algorithms for AirPods Pro that replicate
the movie theater experience. By applying directional audio filters and subtly adjusting the frequencies
each ear receives, we can place sounds
virtually anywhere in space… [high-speed vehicles passing] …creating an immersive
surround sound experience. But to truly deliver on this promise,
we had to factor in real-life situations.

First, people move their heads. For an authentic
surround sound experience, you need the sound field to stay fixed so the voice feels like
it's coming from the actor and not some random point in space. So we use the accelerometer
and gyroscopes in AirPods Pro to track the motion of your head, remapping the sound field
so it stays anchored to your device, even as your head moves.

And it's not only your head that can move, but you might move
your iPad or iPhone as well. That's why we constantly compare
the motion data from your head and your screen to understand how they are moving
in relation to each other. So if your bus turns the corner
or your plane banks, the sound stays in sync. The result is a surround sound experience that keeps you
in the middle of the action, no matter where you go. Spatial audio for AirPods Pro will work
with content encoded in 5.1, 7.1, and even Dolby Atmos. Thanks, Mary-Ann. I've been using the new spatial audio, and I think you're all
really gonna love it.
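The head-tracking behavior described here — comparing the motion of your head and your screen so the sound field stays anchored to the device — can be illustrated with a simplified, yaw-only sketch. This is not Apple's implementation; the function name, angle convention, and the 2D simplification are all assumptions for illustration.

```python
def render_azimuth(source_azimuth_deg, head_yaw_deg, device_yaw_deg):
    """Return the azimuth at which to render a virtual sound source,
    relative to the listener's head, so the source stays anchored to
    the device rather than to the head.

    All angles in degrees; positive = clockwise (to the listener's
    right). This is a yaw-only (2D) simplification of full 3-DoF
    orientation tracking.
    """
    # How far the head has turned relative to the device.
    relative_yaw = head_yaw_deg - device_yaw_deg
    # Counter-rotate the source so the sound field stays fixed to the
    # device as the head (or the device) moves; wrap to (-180, 180].
    return (source_azimuth_deg - relative_yaw + 180) % 360 - 180

# A source at screen center (0°): turn your head 30° right, and the
# sound should now appear 30° to your left.
assert render_azimuth(0, 30, 0) == -30
# If the whole bus turns (head and device rotate together), the
# source stays put at screen center.
assert render_azimuth(0, 45, 45) == 0
```

The second assertion captures the "bus turns the corner" case from the keynote: when head and screen motion match, nothing needs to be remapped.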

All right. Next let's head
to the Fitness Center to hear the latest on watchOS from Kevin. ["I'm Getting Tired"
by Jacknife Lee playing] [song ends] Thanks, Craig.
Since we launched Apple Watch, it's completely redefined
what a watch can do, and this has been
incredibly meaningful work. Apple Watch not only helps you
stay connected and active, it's become an intelligent guardian
of your health, enabling you to take an ECG, detect falls
and call emergency services for you. It's impacting lives in ways
that were inconceivable five years ago.

The power of Apple Watch
is not only its built-in features, but how developers
have personalized it for you. We introduced the App Store in watchOS 6, and there are now over 20,000
watchOS apps available. These apps bring the information
you care about most to just a glance at your watch face. We're taking this even further
in watchOS 7, starting with complications. Until today, an app could appear in only
one spot at a time on a watch face. In watchOS 7 developers can
enable multiple complications, making even more richly personal
watch faces.

So if you like to use
Dawn Patrol for surfing, you can create your own surf watch, including water temperature,
swell and wind speed predictions for your favorite beach. Or new parents can use Glow Baby to see nap, changing and feeding times
all on one face. While Nike Run Club can display stats
like pace from your last run and your weekly run goals. We're also bringing rich complications
to more faces, including a fresh Chronograph face with an integrated tachymeter and an updated Extra Large face with a huge, rich complication
right in the center. And configuring watch faces
has been redesigned so you can easily select
which information you'd like to see. As developers, with watchOS 7 you can now build your rich complications
with native SwiftUI. While there are so many ways
to configure watch faces, you may not have gone in and set these up for yourself yet. That's okay. With watchOS 7 we're making it super easy
to share watch faces, so you can discover a face
that works perfectly for you.

To do this,
we're introducing Face Sharing. You'll be able to discover curated faces
with third-party apps on the App Store, or discover a new favorite watch face
right on a website, or receive watch faces
directly from friends and family. Let's take a look at how this works. When you see a watch face you'd like,
you just press "Add Apple Watch Face." If the watch face uses some apps
that you don't have yet, you'll be offered each one right here
so you can easily get them if you like. And the new face appears
right on your watch. If you'd like to share a face
you've created yourself, that's also really easy. Just long press on the face, tap "Share," pick a contact and send. Developers can offer preconfigured
watch faces right from their apps. You can even share watch faces
across social media. It's a great way for the community
of Apple Watch wearers to connect and help each other discover all the
amazing things Apple Watch is capable of. Next, let's talk about Maps.

Maps is great for walking, driving
and transit directions, and now in watchOS, just like in iOS 14, you can get cycling directions. You'll see a variety of routes with information like time, distance and whether there are bike lanes. You can preview travel time
and elevation changes and navigate with turn-by-turn directions
that are large and easy to read. Maps can direct you to dismount
and walk your bike or even take the stairs to save time. You can also search for and add places
optimized for cyclists, like bike repair shops. [chimes] Now to tell you about
how we're advancing Workouts, please welcome Julz Arney. Thanks, Kevin. The Workout app uses algorithms
that are smartly tuned to track all aspects of your training. It's one of the most used apps
on Apple Watch, and we've continued to add support
for new workout types every year. -And in watchOS 7 we're adding Dance.
-[dance music playing] Dance is a total body workout
that's great for your heart. It makes you more fit and flexible,
and you're guaranteed to have fun. Whether you're doing Hip Hop, Latin,
Bollywood, or simply Cardio Dance, the Workout app now tracks some of the world's most popular
styles of dance for fitness.

Getting the most accurate credit for Dance
presented a unique challenge. Arm movements aren't always repetitive
or synchronized with leg movements like in running and walking. The solution was to use
advanced sensor fusion. [music continues] In Dance, we combine data from
the accelerometer and the gyroscope to detect the difference between
dancing with just your arms… just your lower body… or when you put it all together
and dance with your entire body. Then we add in heart rate data for the most accurate
calorie burn calculations. watchOS 7 also tracks accurate calories
for Core Training, those exercises for your abs and back, Functional Strength Training, a workout type that helps you get stronger
and move better for everyday activities, and also Cooldowns to add on to another workout when you want to continue
with easy moves and stretches as you bring your heart rate
and breathing back to normal.
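The sensor-fusion idea Julz describes — combining accelerometer and gyroscope data to tell arms-only movement from lower-body and full-body dancing, then blending in heart rate for calorie estimates — can be sketched in toy form. The thresholds, feature definitions, and calorie numbers below are invented for illustration and are not Apple's model.

```python
def classify_dance_movement(arm_motion, body_rotation, threshold=1.0):
    """Toy sensor-fusion classifier: decide whether the wearer is
    dancing with just their arms, just their lower body, or their
    whole body, from wrist-accelerometer energy (arm_motion) and
    gyroscope-derived torso-rotation energy (body_rotation)."""
    arms = arm_motion > threshold
    body = body_rotation > threshold
    if arms and body:
        return "full body"
    if arms:
        return "arms only"
    if body:
        return "lower body"
    return "resting"

def calories_per_minute(movement, heart_rate_bpm):
    """Blend the movement class with heart rate for a rough
    calorie-burn estimate (illustrative MET-style base values)."""
    base = {"resting": 1.5, "arms only": 3.0,
            "lower body": 4.5, "full body": 6.0}[movement]
    # Scale by how elevated the heart rate is over a nominal 60 bpm.
    return base * max(1.0, heart_rate_bpm / 60.0)
```

The point of the two-stage design is the same as in the keynote: motion sensors alone can under-credit a dancer whose arms and legs move independently, so the classifier and the heart-rate signal are combined.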

Of course, you can track your progress
for any of these new workouts inside the Activity app on iPhone, which is completely redesigned
in watchOS 7. The app now features a new Summary tab that gives you an easy way
to see your activity history, workouts and trends,
all in one seamless view. With a new focus on easy navigation
and summary metrics, the app is getting a new name as well: [music continues] Fitness. Back to you, Kevin. -[record scratches]
-[music ends] Thanks, Julz. Apple Watch helps you meet
not only your fitness goals, but also helps support your health with features such as Cycle Tracking,
the Breathe app, and Noise notifications.

And we're going to be adding even more
capabilities this year in watchOS 7. We'd like to share a couple of them
with you today, starting with one of the most-requested
features for Apple Watch: tracking your sleep. To tell you about this, over to Vera Carr. Thanks, Kevin. There are many ways to look at sleep:
scores, advanced monitoring, or sleep cycle analysis. We are taking
a more holistic approach to sleep by leveraging the devices you use
every day to not only track your sleep but to support you in actually meeting
your sleep duration goal. That starts with choosing not only when
you would like to wake up in the morning, but also when you'd like to go to bed. For most of us, setting a goal is easy. But getting to bed on time,
that's the hard part. Experts say that establishing
a bedtime routine helps the body transition
from wakefulness to sleep. So we are offering Wind Down. It can help you get to bed on time
by minimizing distractions and creating a personalized routine. Let's look at how this works.

In the evening, ahead of your bedtime, your phone can display
the Wind Down screen to help you transition mentally
before you go to bed. It creates a calm lock screen experience
and turns on Do Not Disturb for you. You can also set up shortcuts for simple things you may like to do
to help you prepare for bed. These might include
using your favorite meditation app or playing relaxing music. Once it's time for bed, your screen will dim and your watch
will go into Sleep Mode, which looks like this. The screen will be off during time in bed
so it won't bother you, and a tap displays this simple face. When it's time to wake up, you have a selection of gentle
and effective alarm sounds, or a silent Taptic-only wake-up alarm
so you don't disturb your partner.

Once you're up, you'll see a friendly
greeting easing you into the day. It also shows your battery level so you can remember to charge
in the morning. Apple Watch tracks your sleep using a machine-learning model
that senses your motion and even interprets the micro-movements caused by the rise and fall
of your breath, providing signals for when you're awake
and when you're asleep. There's an updated Sleep section
in the Health app, including a view of your trends over time.
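The motion-based sleep sensing described above can be illustrated with a much-simplified sketch: label fixed-length epochs of wrist-accelerometer samples by their motion variance. Apple's actual approach is a trained machine-learning model over far richer features (including breathing micro-movements); the epoch length and threshold here are assumptions for illustration only.

```python
from statistics import pvariance

def label_sleep_epochs(accel_magnitudes, epoch_len=30,
                       motion_threshold=0.001):
    """Label fixed-length epochs of accelerometer magnitude samples
    as 'asleep' or 'awake' based on motion variance: near-stillness
    suggests sleep, larger movement suggests wakefulness."""
    labels = []
    for start in range(0, len(accel_magnitudes), epoch_len):
        epoch = accel_magnitudes[start:start + epoch_len]
        if not epoch:
            break
        variance = pvariance(epoch)
        labels.append("asleep" if variance < motion_threshold
                      else "awake")
    return labels

# A perfectly still epoch followed by a restless one:
samples = [1.0] * 30 + [1.0, 1.5] * 15
assert label_sleep_epochs(samples) == ["asleep", "awake"]
```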

Sleep Schedules, Wind Down, and Sleep Mode are also available on iPhone,
without a watch, in iOS 14. We know you'll enjoy using
your watch throughout the day and now throughout the night. Thanks, Vera. In addition to sleep keeping you healthy, there's another preventative-care item
that's so important, particularly now: handwashing. In watchOS 7, Apple Watch is the first watch
to deliver automatic detection when you start washing your hands
and sensing of how long you actually wash. Our approach here is using
machine-learning models to determine motion
which appears to be handwashing and then using audio
to confirm the sound of running water or squishing soap on your hands. During this, you'll get a little coaching
to do a good job.
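The two-stage confirmation Kevin describes — motion that looks like handwashing, confirmed by the sound of running water — plus the countdown can be sketched as a tiny state machine. The 20-second goal, the event format, and the return values are assumptions for illustration, not Apple's implementation.

```python
def handwash_coach(events, goal_seconds=20):
    """Toy two-stage handwashing coach. `events` is a chronological
    list of (second, motion_looks_like_washing, water_sound_heard)
    tuples, one per second. A second counts toward the goal only
    when motion AND audio agree, mirroring the two-stage
    confirmation described in the keynote."""
    seconds_washed = 0
    for _, motion, audio in events:
        if motion and audio:
            seconds_washed += 1
            if seconds_washed >= goal_seconds:
                return "done"
    # Paused or stopped early: the watch nudges you to keep going.
    return "keep washing" if seconds_washed else "not washing"
```

Requiring both signals is what keeps scrubbing a countertop (motion, no water sound) or rinsing a glass (water sound, no washing motion) from starting the countdown.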

You'll see a countdown,
along with haptics and sounds, to make sure you wash
as long as you're supposed to. If you pause early,
there's a polite note to keep washing. And when you're done,
you'll see, hear and feel it. That's just some of what's coming
this year in watchOS 7, including discovering and sharing faces,
new workout types, sleep and handwashing detection, and other new capabilities
like Siri language translation. And that's Apple Watch. It's time for you to join Craig again to talk about something
that's important to all of us.

Here we go! ["I'm Getting Tired"
by Jacknife Lee playing] [song ends] At Apple, we believe
privacy is a fundamental human right. So we build it into our products
from the beginning of the design process. Privacy matters now more than ever. And because our devices contain
our most sensitive information, all of our product work is grounded
in a set of privacy principles. First, data minimization.

We use innovative technologies
and techniques to minimize the personal data
we or anyone else can access. Second, on-device intelligence. We avoid data collection by processing as much of your information
on your device as we can rather than sending it to a server. Third, security. Security protections are foundational
to everything we do in privacy. And finally, transparency and control. It helps you better understand
the data being collected so that you can make your own choices
about how that data is used. These principles come together
across our products, in our hardware,
our software and our services. At the end of the day, they result
in great privacy and great ease of use. A powerful example of this
is Sign in with Apple, which is designed for simplicity,
security and privacy. And since we launched it last year, users have created
over 200 million accounts across a wide variety of apps
and websites. And developers are seeing great usage
when they adopt it. For example, Kayak has integrated Sign in with Apple and found that their users are now
two times more likely to use it than any other sign-in provider.

Now, one thing we hear a lot
with Sign in with Apple is that people wish they could convert
their existing accounts to use it. So this year we're going to enable
developers to let you do just that. When you upgrade, you get the ease of use and built-in security
of Sign in with Apple while keeping the account
that you already have. This is another big year
of privacy improvements in our products. And to tell you more, here's Katie Skinner
and Erik Neuenschwander. Thanks, Craig. First let's talk about location. This year we're continuing to give you
even more control. In addition to the option
of sharing your precise location, you will have the option to only share
your approximate location with apps. We're also making changes
for mic and camera so you always know when you're recording. In addition to requiring your permission, this year we're adding more visibility
for current or recent mic or camera use. So if an app uses either one,
we'll indicate that in the status bar.

[Katie] Next, let's talk about tracking. Safari's Intelligent Tracking Prevention
has been really successful on the web. And this year, we wanted to help you
with tracking in apps. We believe tracking should always be
transparent and under your control. So moving forward, App Store policy will require apps to ask
before tracking you across apps and websites
owned by other companies. [Erik] Last, let's talk about app privacy. Today we require that apps
have a privacy policy. Wouldn't it be great to even more quickly
and easily see a summary of an app's privacy practices
before you download it? Now, where have we seen
something like that before? For food, you have nutrition labels.

You can see if it's packed with protein
or loaded with sugar, or maybe both, all before you buy it. So we thought it would be great
to have something similar for apps. We're going to require each developer
to self-report their practices. [Katie] We'll show you what they tell us. You can see if the developer is collecting
a little bit of data on you or a lot of data, or if they're sharing data
with other companies to track you, and much more. [Erik] We're going to put this information on product pages in the App Store.

So for each app, you can see
highlights of their privacy information before you download it. And we're going to include this
in all of our App Stores. Back over to you, Craig. Thanks, Katie and Erik. These are some of the ways we're
strengthening the privacy of our platforms and bringing new features
to give users even more control. And one of the places where
privacy matters most is your home. Like most of you, in the past few months I've spent
more time at home than I ever imagined. It's more clear than ever
just how important it is to live in a home
where the technology just works. That brings us to some great new features
we're bringing to the home this year. All of these features share
a few key attributes. First, anything we develop for the home
should be easy, from initial setup to everyday use, just like how a tap of your iPhone
can automatically configure an Apple TV. Second, home products should never
compromise your privacy. That's why your Siri requests use
a random identifier, not your Apple ID. And finally, your devices should
all work better together, like how AirPlay lets you share
from your iPhone straight to the TV.

And when things are easy, private
and work together seamlessly, your home is more enjoyable, whether you're watching TV,
listening to music or getting the most
out of your smart devices. To tell you more
about what we have in store for the home, let me pass it to Yah Cason. Thanks, Craig. Let's start by talking about
the smart home. With HomeKit,
we've given developers a robust framework to create smart home accessories
that are remarkably easy to set up all while being end-to-end encrypted
to your Apple devices. Already there's a rich ecosystem
of devices available. But we want to make it even easier
to build products that work across more homes. So we formed an alliance and partnered with Amazon, Google
and other industry leaders to define a new interoperability
standard for the smart home. We open-sourced HomeKit to ensure its ease of use and privacy
are core to this effort. And any accessory using HomeKit
or this new standard will work incredibly well
across all your Apple devices. And you control it all in the Home app, the most secure way
to manage your smart home.

Adding new devices to your home
has never been easier. Simply tap or scan to set up an accessory. And in iOS 14, after you add an accessory, the Home app will now suggest
useful automations so you can immediately
put your new device to work for you. Automations are rules
that set your home to autopilot, like automatically turning on
your porch lights when motion is detected or having the garage door open
as you arrive home. And now when you open the Home app, you'll see a new visual status
right up top that prioritizes the accessories
which most need your attention. You can easily see if you've left
a door unlocked or the lights on and quickly control them. Let's take a closer look at one of the most popular categories
of HomeKit accessories: Lights. Millions of us have already added
smart bulbs to our homes, many of which can change color on demand. In iOS 14, we're introducing a feature to help you
get the most out of those bulbs: Adaptive lighting.

Adaptive lighting automatically adjusts
the color temperature of your lights throughout the day. Turn it on to ease into the morning
with warm colors, stay focused and alert midday
with cooler ones, and wind down at night
by reducing blue light. Adaptive lighting ensures
you get the right color at the right time. Another popular smart home category
is cameras. With HomeKit Secure Video
your cameras are completely private. And in iOS 14, we're making your cameras
work even harder for you. You'll be able to define Activity Zones
that focus on the most important areas. This is great if you face a busy sidewalk and only want to be alerted when people
actually walk up to your front door.
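The Activity Zones idea — only alerting when a detected person falls inside a user-defined region of the frame — can be sketched with a simple containment check. Real zones can be drawn as arbitrary shapes; the axis-aligned rectangles and normalized coordinates below are a simplification for illustration.

```python
def should_alert(person_center, zones):
    """Return True if the detected person's center point (x, y), in
    normalized frame coordinates (0.0-1.0), falls inside any
    user-defined activity zone. Each zone is an axis-aligned
    rectangle (x_min, y_min, x_max, y_max)."""
    x, y = person_center
    return any(x0 <= x <= x1 and y0 <= y <= y1
               for x0, y0, x1, y1 in zones)

# One zone covering the porch (right half of the frame):
porch = [(0.5, 0.0, 1.0, 1.0)]
assert should_alert((0.8, 0.4), porch)       # on the porch: alert
assert not should_alert((0.2, 0.4), porch)   # on the sidewalk: ignore
```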

Another powerful feature we're bringing
to cameras is face recognition. HomeKit cameras and video doorbells will
now provide even richer notifications, telling you who's there
by leveraging the friends and family you've already tagged in your Photos app. And face recognition extends to HomePod,
announcing who's at the door. And with Apple TV, you'll get a live view
whenever someone rings the bell. In fact, all your HomeKit-enabled cameras
will be directly integrated with tvOS 14 so you can quickly bring them up
in the new home view in Control Center. Or just ask Siri to pull up any camera
at any time.

You can even take any camera full-screen, giving you a great view
of what's going on. And we have even more coming to tvOS 14. Now let me pass it to Cindy Lin
to tell you about it. Thanks, Yah. Apple TV is my favorite way to unwind
and enjoy entertainment with the family. With 4K, HDR, Dolby Vision and Dolby Atmos, you get a theater-like experience
right in your living room, making all your movies and shows
look and sound amazing. But Apple TV goes beyond video. You can access
the entire Apple Music collection and even sing along with timed lyrics. You can play
an incredible selection of games, including all the games in Apple Arcade. And we're making gaming on Apple TV
even more personal by expanding multiuser support. Now you can instantly resume your games
exactly where you left off.

Just open Control Center
to switch between users, and you can now see your game progress,
achievements and friends. And for even more fun,
we're adding support for Xbox Elite 2
and Xbox Adaptive Controllers. Apple TV
also helps you keep active at home with a great selection of fitness apps. And with tvOS 14, we're gonna make your workouts,
and everything you do on Apple TV, even more productive
by extending Picture in Picture across the entire Apple TV experience. So you can keep up with the news
or not miss a second of the big game. You can even have an AirPlay session
show up in a Picture in Picture window. And AirPlay is getting even better. Now the whole family can share
their stunning videos captured on iPhone in their full 4K resolution. With videos, music, games and more, Apple TV truly offers something
for everyone. Of course, nothing compares
to being captivated by a good story. And we've built a service for just that:
Apple TV+.

We've already created
an incredible lineup of Apple Originals. And you can watch them all
in the Apple TV app, which is available
on your favorite Apple devices, on all major streaming boxes and many popular smart TVs. It's already reaching
over a billion screens. And it's coming to Sony and Vizio
smart TVs later this summer. Today, we're really excited
to tell you about a new Apple TV+ Original that we're working on. Almost 70 years ago, Isaac Asimov introduced the world
to a series of epic novels that spanned hundreds of worlds
and thousands of years. Many consider it to be the best
science-fiction series of all time. I'd like to share a sneak peek
with you now.

This is Foundation. [dramatic music playing] They're going to arrest me tomorrow. And you. It's almost a certainty. You're familiar with my work?
Yes, in theory.
But I don't know what it has to do with– It's not a theory. [Salvor] They're worried
you can predict the future.
[whirring] [Hari]
They're worried people believe I can.
[whirring] And they don't like the future I predict. The empire will fall. Order will vanish. There's massive events rushing to meet us. [Salvor] Only we can shorten the darkness. [music fades] Wow. I'm super excited to watch Foundation
when it comes to Apple TV+ next year. With Apple TV, your home
has never been more entertaining. And with adaptive lighting,
face recognition for cameras and these other great features, your time at home
has never been more enjoyable. Now let me hand it back to Craig. Welcome back. Now let's talk about some big changes
coming to macOS. Since its introduction, macOS has revolutionized the experience
of using a computer by combining incredible power
with incredible ease of use.

And it's loved
by all different types of users, from families and students to creative pros, businesspeople
and, of course, software developers. And this year, we're taking
the macOS experience you love even further. But what should we call it? Well, if you're a student of macOS,
you know this question can only be answered by Apple's
legendary crack marketing team. Their drum-circle-fueled,
minibus-driven vision quests have yielded some great names and, sadly, spawned a host of imitators.

The truth is,
we can't responsibly continue to inadvertently lead our competition
to copy these methods when they clearly can't handle the trip. So this year, we're leaving our process
shrouded in mystery and taking you straight
to the glorious destination. Our next release of macOS
is macOS Big Sur. macOS Big Sur introduces
an entirely new design and major updates to some of
the most essential apps on the platform. And just like its name, Big Sur brings you unmatched levels
of power and beauty. Let's start with design, where we're making the biggest change
since the introduction of Mac OS X. To tell you more about the philosophy
and incredible craftsmanship behind the new design, here's a short video with Alan Dye. [piano playing] At Apple, design has always been
about great ideas. Those ideas then are developed
with this obsessive dedication to detail. If we care enough about all the details
that make up a product, then in the end
we will have designed an experience that really feels like
there's no other way it could be. And the best example of this
is macOS Big Sur.

Our goal was to bring even more clarity
to the design of the software while retaining the Mac's
powerful capability and ease of use. We started with the simplest of elements, from the shape of a corner radius
to refinements in buttons and controls. And we brought our unified language
of symbols to the Mac, making them more consistent
and easier to recognize. Depth, shading and translucency are used to create hierarchy. These new materials are rich,
and they're vibrant. They bridge light and dark. We've reduced visual complexity
to keep the focus on users' content. Buttons and controls appear
when you need them, and they recede when you don't. There's a new way
to access system-level controls and a unified space
for notifications and widgets.

We've also created a new suite of sounds. They're familiar to the Mac,
but remastered and more refined. [pulses] [alert notifications chiming] We wanted consistency
throughout the ecosystem, so users can move fluidly
between their Apple devices. But we also love that Mac icons
have a deep history and a distinct look and feel. So we've retained
many of the highly crafted details and the playful elements
that make Mac icons unique. This OS reflects an important history. It's familiar, but it's also
entirely new in every detail.

We love the Mac. It's the tool we use to make all the
products that we put out into the world. And macOS Big Sur is where it starts. [Craig] So that's the thinking
behind our new design. Now let me show it to you
in action with a demo. As Alan said, we've refined some of the most iconic
elements of the Mac experience. Let's start with the Dock. It has an elegant new design that floats
along the bottom of your desktop. And you'll notice that we've created
gorgeous new app icons for all of your favorite apps. Speaking of apps,
let's take a look at the Finder. You notice it has a gorgeous new
top-to-bottom design for the sidebar, and it has a compact,
space-efficient toolbar. Makes it really easy
to get to all of your controls. Next, let's take a look at Mail. You can see that Mail has
all-new glyphs in the sidebar, and you may have noticed
that we've brought color back as well.

Now, each app uses its own key color. That same color is used for the elegant
new rounded row selection style here in the message list. Now, the toolbar makes it really easy
to get to all your controls. Check out how the search bar expands
as I click on it. And, of course,
other operations like filtering, they're just a click away. Next, let's take a look at Photos. It's just stunning. You can get to all your albums
and media types from the sidebar, and the photo grid is backed by Metal, so animations are super smooth whether I'm scrolling, transitioning or zooming all the way in
or all the way out. It's beautiful. Now we've refreshed the design
for all the apps on the system, from apps like Calendar and Notes to Podcasts and Music,
with its new Listen Now pane. And an all-new version of iWork
that features a simplified toolbar. You may have noticed
we've also updated the menu bar. It's now translucent and elegantly takes
on the color of your desktop picture.

And we've updated the layout
of menus as well. We've given all the items
just a little bit more room to breathe. Now, on the Mac, we love our ability
to get directly at controls, like Wi-Fi or Sound. And you can see that we've reworked these
to be even more useful. But we've gone even further this year by giving you one place
to get at all your controls. We've brought Control Center to the Mac. All of my controls are here,
and it's really easy to make adjustments. For instance,
I can change display brightness here or I can click to dive in for more, like turning on Dark Mode
or activating Night Shift. And what's really cool is
that I can customize the menu bar with any of these controls.

So, say I want one-click access
to Do Not Disturb. I can just click and drag it
right into my menu bar and customize just like that. Now we've also reinvented
Notification Center. You can access it by clicking on the time
in the upper right. And as you see, we now have a single view that brings your notifications and widgets
together all in one place. And we now group
related notifications together. You can easily expand them
to take a closer look or clear them all out in one step. And we're bringing our redesigned widgets
to the Mac. They're really beautiful. And you have all-new ways
to customize them. I'm just gonna click "Edit Widgets"
down here at the bottom. And you can see
I have a gallery of all my widgets, and they come in a variety of sizes.

I can select between them just like this. And developers can bring
their own widgets as well, like this one here from Day One. Now adding widgets is easy. Let's start by adding,
say, the World Clock. Maybe I'll add Notes in. And it'd be kind of cool
to add my Reminders list as well. So that's a look at Widgets,
Notification Center and our all-new design in Big Sur. Next, there are exciting updates
for some of the most-used apps on the Mac. First, let's talk about Messages. Messages on the Mac is designed to work
seamlessly with all of your devices, so your SMS and iMessage conversations
are in sync no matter what device you're using. Now this year we're taking Messages
to the next level with a ton of great new features. We're introducing powerful search
to help you find what you're looking for.

We have a redesigned photo picker
to make sharing photos and videos easier. And Memoji. You can now create
and edit your Memoji right on your Mac. And Memoji stickers
bring personality to messages, giving you fun ways to express yourself
in all of your conversations. Message effects help you
celebrate special moments and get your point across. And you're also getting
pinned conversations that are synced across devices
so you can always get to them, along with new Groups enhancements.

So that's what's coming
to Messages on Mac: powerful tools
to manage your conversations and new ways to express yourself. Next up, Maps. Apple Maps is the best way
to explore and navigate the world, whether you're planning a trip on the Mac or using turn-by-turn directions
on your iPhone. Today I'm excited to announce
an all-new version of Maps for the Mac. To start,
Maps features a stunning new design that makes it easy to find your way around
using Apple's detailed new map. And for the first time on the Mac, Favorites like home, work
or that great coffee shop on the corner are now just a click away.

You can now create your own Guides
of all the places you want to visit right on your Mac. And before leaving for the airport, you can check where your gate is located
with indoor maps or explore your destination city
with Look Around, which is incredible on the big screen. We've even brought
other useful features to the Mac, like the ability to see
the progress of friends who have shared their ETA with you. And the Mac gets
all the other new Maps features we just introduced in iOS 14. And that's just a taste of what's coming
in the all-new Apple Maps on the Mac.

Next let's talk about Mac Catalyst. Catalyst gives developers a big head start in creating a Mac app from an iPad app. Take, for instance, our recent release
of Swift Playgrounds for Mac. Catalyst gave us a big head start
creating the app, and we were able
to spend our development time crafting a great Mac experience. Today we have some improvements
to Mac Catalyst I'd like to share. This year, developers will be able
to optimize their apps to fully utilize the native resolution
of the Mac screen, providing total control of every pixel.

We've also given developers
new capabilities, including powerful new menu
and keyboard APIs and updated controls
like checkboxes and date pickers. They look great
with the new design of macOS. In fact, we used Catalyst
to build the new version of Maps. And just as you'd expect,
that runs natively and is designed
in a way that's true to the Mac. So you get multiple resizable windows,
keyboard shortcuts and everything else you'd expect
from a native Mac app. And we did the same thing for Messages. Maps and Messages join
the great set of apps from Apple that already use
the Mac Catalyst technology.

And there's a growing list
of third-party Catalyst apps available on the Mac App Store. Next… Safari. Our users love Safari for its speedy performance,
power efficiency and state-of-the-art privacy protections. And it delivers all of that while making it easy to get to your
bookmarks, tabs and browsing history across all of your devices. This year, we're building
on Safari's amazing performance, elegant design
and pioneering privacy protections to deliver the biggest update to Safari
since it was first introduced. As you know, Safari has long been
the world's fastest desktop browser. This year, Safari's performance
running JavaScript is better than ever and continues to significantly outpace
all other major browsers.

And now, when it comes to page loading,
we're even faster there too. In fact, when loading
frequently visited websites, Safari is now on average
more than 50% faster than Chrome. And Safari delivers
this amazing performance while continuing to deliver
industry-leading battery life. And of course,
Safari is continuing to build on its pioneering track record
of protecting user privacy. Safari was the first browser to introduce
private browsing, cookie blocking, and most recently,
Intelligent Tracking Prevention. This year, we want to give our users
even more visibility into how each site they visit
tries to track them and the ways that Safari protects them. So now users can click on
the Privacy Report button in the toolbar when they visit a site to better understand
how that site is treating their privacy. In addition to monitoring
unwanted tracking, Safari now also securely monitors
your saved passwords to ensure that they haven't been
compromised in a data breach.
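The keynote doesn't describe how Safari's breach monitoring works under the hood, but services of this kind are commonly built on a k-anonymity range query over password hashes (the scheme popularized by Have I Been Pwned): the client sends only a short hash prefix, so the full password, and even its full hash, never leaves the device. A minimal sketch of that idea, with a fabricated breach corpus standing in for the server:

```python
import hashlib

# Hypothetical breach corpus standing in for a server-side database of
# SHA-1 hashes of leaked passwords. (Illustrative only -- the keynote
# does not describe Safari's actual protocol.)
BREACHED = {"password123", "letmein", "hunter2"}
BREACH_DB = {hashlib.sha1(p.encode()).hexdigest().upper() for p in BREACHED}

def server_range_query(prefix):
    """Return the hash *suffixes* of every breached hash starting with
    the 5-character prefix -- the server never sees a full hash."""
    return [h[5:] for h in BREACH_DB if h.startswith(prefix)]

def is_breached(password):
    """k-anonymity check: send only the first 5 hex chars of the hash,
    then compare the remaining suffix locally on the client."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    return suffix in server_range_query(prefix)

print(is_breached("hunter2"))       # in the fabricated corpus -> True
print(is_breached("xk3!Vq9$long"))  # not in the corpus -> False
```

A suffix can only match when the full hash matches, so the server learns nothing beyond a 5-character prefix shared by many unrelated passwords.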

And this is also a big year
for extensions in Safari. We're adding support
for the WebExtensions API so developers can easily
bring over extensions that they built for other browsers. And we're building an all-new category
in the Mac App Store to showcase Safari extensions
so users can easily find them. Now extensions are very powerful but can introduce privacy challenges. In other browsers, extensions can access
every page you visit, every tab you open, even everything you type. So we're doing even more here. In Safari, you choose
which sites each extension can work with, and you can even give them access
just for the day, just for the website or all the time.
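The WebExtensions API Craig mentions is the same cross-browser standard used by Chrome and Firefox, which is what makes porting straightforward. As a rough illustration, a minimal manifest for an extension like the recipe one demoed later might look like this (the names and files here are hypothetical, and Safari additionally packages the extension inside a native Mac app):

```json
{
  "manifest_version": 2,
  "name": "Recipe Filter",
  "version": "1.0",
  "description": "Finds a recipe on the page and shows it in a card.",
  "permissions": ["activeTab"],
  "content_scripts": [
    {
      "matches": ["https://*/*"],
      "js": ["content.js"]
    }
  ]
}
```

The per-site, per-day access controls described above are layered on top by Safari itself, independent of what the manifest requests.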

But improved performance, power efficiency and privacy protections
are only the start. We have a whole slew
of new features this year. From a customizable start page to redesigned tabs
that are more elegant and powerful, and native translation capabilities
built right into Safari. To tell you more,
I'll hand it off to Beth Dakin. Thanks, Craig. I'm so excited to give you
a tour of the brand-new Safari. When you open Safari,
right away you'll notice the new look. It's clean and fresh. I want to show you
one of my favorite new features: the customizable start page. I love it because I can make it my own. It's easy to get to
the customization controls here in the bottom corner. We think a lot of people are gonna want
to set a background image, and there's a beautiful gallery
of curated wallpapers to choose from. You can use one of your own photos, too,
and I know I want a photo of my son. I have a few really cute ones
here in the downloads. I can just drag and drop to set it. That is so perfect. You can add new sections
to the start page too.

Let's add iCloud Tabs and Reading List. Let's take a look. There. This is perfect. I use Reading List all the time,
and now it's so easy to get to. Another way that you can dial in Safari
to suit your specific needs is with extensions. I can't wait for you developers
to bring your web extensions to Safari. I've gone ahead and downloaded
a few web extensions to take a peek. So let's go here to the Safari preferences
to enable them. And I'll enable Power Thesaurus
and Recipe Filter. So each of the extensions
that I just enabled now has its own button
here in the toolbar. Let me show you Recipe Filter.
I love this one. So this extension will search
the web page for a recipe, and if it finds one,
it will pop it up in a little card. It's been a great accelerator for me
when I'm building a grocery list. So the extension hasn't done anything yet
because I haven't granted permission.

So let me click on the toolbar button, and I'm going to allow this extension
for one day. And when I click here, it'll do its thing. So here we go. And there it is. So useful. Okay, so I have my personality pretty thoroughly stamped
over Safari at this point. And you know,
I use Safari for a lot of personal things, and Apple makes sure
my private life stays private. Privacy is essential to everything we do
at Apple, and it's critical on the web. And now you can see
what Safari is doing to protect you. If I click on the
Intelligent Tracking Prevention button, I can see the number of known trackers that Safari protected me from
on this web page.

I can click here to see a list of the known trackers
right here in this popover. And the full privacy report
is just one click away. And that's what we have for privacy
in the new Safari. Next I'd like to talk about tabs. If you love tabs,
you're going to love the new Safari. It's easier and more efficient than ever
to work with lots of tabs. So I have another window here, and right away you'll notice
that there are icons in tabs, which makes it so easy
to spot what you're looking for. And if I open more tabs,
then you'll see more of them at once because the tabs get smaller
and use the space more efficiently. If it's a little hard to find what
you're looking for with this many tabs, that's no problem. You can just hover over tabs
and see a nice preview of the page. I'm ready to clean up now,
and that's easy too.

I can just bring up the context menu here and close all tabs to the right.
Just like that. We are so excited that
the new Safari has built-in translation. Let me show you. So here on this website, on El Mundo, Safari has detected
that this is not in my primary language, and it's added the translation icon
to the smart search field. I can click here,
and let's translate this page to English. It'll happen inline. And as more content is added, that gets translated dynamically too. Those are some highlights, but there is
so much more to the new Safari. Back to you, Craig. Thanks, Beth. So that's Safari. It's a huge release,
with new ways to customize, big improvements
to your browsing experience like tabs and translation, and even stronger privacy protection to keep your browsing your business.

And that's macOS Big Sur, the biggest update to design since
the original introduction of Mac OS X, significant updates to Messages and Maps and the biggest update to Safari ever. But these changes are only the beginning. For years now,
down deep, below the surface, we've been working
on something truly profound. To tell you more,
I'll hand it back to Tim. Thanks, Craig. Big Sur is going to be
a great release of macOS. But that's only part of the story, because today is going to be
a truly historic day for the Mac. Today we're going to tell you
about some really big changes, how we're going to take the Mac
to a whole new level. From the very beginning, the Mac redefined
the entire computer industry. The Mac has always been about innovation
and boldly pushing things forward, embracing big changes to stay
at the forefront of personal computing.

The Mac has had
three major transitions in its history. The move to PowerPC, the transition to Mac OS X and the move to Intel. And now it's time
for a huge leap forward for the Mac, because today is the day we're announcing
that the Mac is transitioning… to our own Apple Silicon. When we make bold changes,
it's for one simple yet powerful reason: so we can make much better products. When we look ahead,
we envision some amazing new products, and transitioning
to our own custom silicon is what will enable us
to bring them to life.

At Apple, integrating hardware and software
is fundamental to everything we do. That's what makes our products so great. And silicon is at the heart
of our hardware. So having a world-class
silicon design team is a game changer. To tell you more about Apple Silicon and how it will take Mac
to the next level, I'd like to send you over to Johny Srouji at one of our labs
in an undisclosed location. ["I'm Getting Tired"
by Jacknife Lee playing] [song ends] Welcome to our lab. We've been building and refining
our Apple Silicon for over a decade. The result is a scalable architecture that is custom-designed
for Apple products, and it leads the industry in features
and performance per watt. So I'd like to tell you how we got here and what it means for the Mac
moving forward.

It all started with the iPhone. The iPhone demanded
performance and capabilities that were seen as impossible
in a device that small. This is where we developed our relentless
focus on performance per watt. Generation after generation,
we pushed the boundaries of technology, which enabled us to improve
performance and energy efficiency, while building advanced
and industry-leading features. Our team delivered ten generations
of increasingly complex and rich designs, always improving performance. In fact, CPU performance in the iPhone has improved by over a hundred times, keeping the iPhone's performance ahead
of every other phone in the industry. Another opportunity for the team
was the iPad. While iPhone chips
could drive our mainstream iPads, we wanted to push the iPad even further. It began with the iPad's Retina display, which demanded a custom chip. So the team scaled our architecture and designed the most-optimized and highest-performance chip
possible for the iPad. Starting with the A5X, we built a line of SoCs
specifically designed for the iPad. We doubled the iPhone's
graphics performance through a larger GPU and a wider memory subsystem. This put the iPad in a class by itself. Compared to the very first iPad,
the latest iPad Pro delivers over 1,000 times faster
graphics performance in just ten years.

This is part of the reason
why the iPad Pro is faster than the vast majority of PC laptops. And this foreshadows how well
our architecture will scale into the Mac. Another place where we applied our focus
was the Watch. We scaled our SoC architecture
to optimize performance for the device's unique
low-power requirements, and we built a chip
perfectly suited for Apple Watch. Our SoCs enable each of these products
with unique features and industry-leading performance per watt, and it makes each of them best in class.

And we do this at an enormous scale. In fact, adding all of the processors
across these three products, we've shipped over two billion
in just ten years. And we've designed and shipped
billions of additional chips that work together with our SoCs to enable our amazing products. And now we're bringing
all of that expertise and that same focused
and disciplined approach to the Mac. The first thing this will do is give
the Mac a whole new level of performance.

Now, when we talk about performance,
we have to talk about power, because all systems built today
are constrained by power consumption, thermals, or both. Among today's consumer systems, desktops deliver the highest performance but consume the most power. Notebooks trade off performance
for lower power, making them portable. As you can see,
normally to get more performance you have to consume more power. When you take a closer look at this chart, you realize you want to operate
in the upper-left corner. You want to deliver
the highest performance at the lowest power consumption. And that's exactly
where we want to take the Mac. Building on our years of experience designing the world's
most energy-efficient chips, our plan is to give the Mac
a much higher level of performance while at the same time
consuming less power.
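The chart Johny describes carries no numbers, but the "upper-left corner" he points at is just the best ratio of performance to power. A quick sketch with hypothetical figures (illustrative only, not Apple data) shows why a chip can win that corner while drawing far less power:

```python
# Hypothetical performance/power figures -- the keynote chart is
# unlabeled, so these numbers are invented for illustration.
systems = {
    "desktop":         {"perf": 100, "watts": 125},
    "notebook":        {"perf": 60,  "watts": 35},
    "hypothetical_soc": {"perf": 90,  "watts": 15},  # the target corner
}

def perf_per_watt(s):
    return s["perf"] / s["watts"]

best = max(systems, key=lambda name: perf_per_watt(systems[name]))
for name, s in systems.items():
    print(f"{name:>16}: {perf_per_watt(s):.2f} perf/W")
print("best:", best)
```

The desktop here has the highest raw performance, but at 0.8 perf/W it sits in the worst corner of the chart; the hypothetical SoC delivers nearly the same performance at 6 perf/W.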

So, much better performance
is reason enough to transition the Mac to Apple SoCs. But that's just part of the story. Our scalable architecture includes
many custom technologies that, when integrated with our software, will bring even more innovation
to the Mac. With our advanced power management, we will maximize performance
and battery life better than ever before. Our Secure Enclave will bring
best-in-class security, and our high-performance GPU is going to bring a whole new level
of graphics performance to every Mac, making them even better
for Pro Applications and really great for games. And combined with our neural engines, our chips will make the Mac
an amazing platform for machine learning. And we're bringing
many other custom technologies, such as our video-display
and image-processing engines, that will help make the Mac
better than ever before. So, what does all of this mean
for the Mac? First, we're designing a family of SoCs
specifically for the Mac product line. Second, just like we did
with the iPhone, iPad and Watch, we're going to bring
great technologies to the Mac. This will give the Mac
a unique set of features and incredible performance.

And third, we'll have
a common architecture across all of our product lines, making it far easier for developers
to write and optimize software for the entire Apple ecosystem. Ultimately we know
that bringing our SoCs to the Mac will allow us to build
much better products, and the Mac will take
another huge leap forward. Now, a key advantage we have is the tight integration
of our silicon with our software. To tell you more about
how macOS will run on Apple SoCs, here is my colleague, Craig. Thanks, Johny. Now let's talk about the technologies
that we've built into macOS Big Sur that will make the transition
to Apple Silicon smooth and seamless for both consumers and developers.

These new Mac systems will be incredible, and users will want their favorite apps to take full advantage of the capabilities
of our custom silicon. And the best way to do that
is with native apps. So of course when we updated
our apps for Big Sur, we built everything as native
for Apple Silicon. And I'm happy to say we have
all of our own Apple apps, including our most demanding Pro Apps
like Final Cut Pro and Logic Pro, up and running as native now, and they'll be ready for customers
on day one. So, how did we do this? We're using Xcode,
just like all our developers will. Everything developers need
to build apps for these new chips is built into the new version of Xcode. To get started, developers just open
their app projects and recompile. The vast majority of developers
can get their apps up and running in just a matter of days. And to deliver these apps, we've created Universal 2. It's a new type of Universal binary that works on both Intel-based Macs
and Macs built on Apple Silicon.
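A Universal binary is a "fat" Mach-O file: a small header lists each architecture slice and where it sits in the file, so one download carries both x86_64 and arm64 code and the loader picks the right slice. The header constants below are the real Mach-O values; the slice data is fabricated for illustration. A toy parser, doing roughly what `lipo -info` reports:

```python
import struct

# Real Mach-O fat-header constants; the slices below are fabricated.
FAT_MAGIC       = 0xCAFEBABE   # fat headers are stored big-endian
CPU_TYPE_X86_64 = 0x01000007
CPU_TYPE_ARM64  = 0x0100000C
NAMES = {CPU_TYPE_X86_64: "x86_64", CPU_TYPE_ARM64: "arm64"}

def build_fat(slices):
    """Pack a minimal fat header: magic, arch count, then one
    (cputype, cpusubtype, offset, size, align) record per slice."""
    blob = struct.pack(">II", FAT_MAGIC, len(slices))
    offset = 4096
    for cputype, size in slices:
        blob += struct.pack(">IIIII", cputype, 0, offset, size, 12)
        offset += size
    return blob

def list_archs(blob):
    """Report the architectures inside the file, like `lipo -info`."""
    magic, count = struct.unpack_from(">II", blob, 0)
    assert magic == FAT_MAGIC, "not a fat binary"
    archs = []
    for i in range(count):
        cputype = struct.unpack_from(">IIIII", blob, 8 + 20 * i)[0]
        archs.append(NAMES.get(cputype, hex(cputype)))
    return archs

universal = build_fat([(CPU_TYPE_X86_64, 50_000), (CPU_TYPE_ARM64, 48_000)])
print(list_archs(universal))  # both slices in one file
```

Because both slices live in one file, the same app bundle "just works" on either kind of Mac, which is what lets a single binary serve all users.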

So developers can tap into the native
power and performance of our new Macs and still support Intel-based Macs, all with a single binary
for all of their users. Some of the biggest Mac developers
have already gotten started. Microsoft is hard at work
on Office for the Mac. And we've been working with Adobe on their flagship Creative Cloud, and many of their apps
are already up and running great. So let's take a look at macOS
running on Apple Silicon. So here we are on the desktop
that we know and love. And I'm just gonna open up About This Mac. And what you see here is that we are
running on our Apple Development Platform. This is a system built
to support early development using the same A12Z processor currently shipping in iPad Pro. Now, I have a confession to make.

This isn't the first time
you've seen macOS running here. In fact, this is the same Mac that Beth and I used to demo
all the new Big Sur features earlier. And as you saw earlier, we've updated all of our Apple apps and they're running great. Of course, a big part
of the Mac experience is third-party apps, and we've been working
with our friends at Microsoft, and they already have Office
up and running natively on our new Macs. Let's take a look at Word. It runs great. Scrolling is super smooth.

Everything you do
is just super responsive. Next let's check out Excel. Just as you'd expect,
complex sheets and elements like this map all update instantly. And next let's take a look at PowerPoint. It's using Metal for rendering,
and it performs great. For instance, check out how I can see
all the layers of my slide in 3D. The animation is perfectly fluid. Now, we've also been working closely
with our friends at Adobe to bring Creative Cloud to our new Macs. Here's Lightroom
running native on Apple Silicon. Navigating large libraries of DNG images
is super fast, and all of Lightroom's editing controls
are available right here. Let's apply an adjustment to this image. Well, that's much better. And we can apply that same edit
to all of these images in a single step.

Looks great. Next, let me show you the app
I know many of you wanna see: Photoshop. Here is a five-gigabyte Photoshop file
by photographer Stephen Wilkes. Now this is a heavy-duty document
with lots of layers. Now let's add one more bird in there. Not totally comfortable with the level of
social distancing, but let's keep going. And let's check out
how smooth the animation is as I zoom out. Wow. Beautiful. Finally, let's turn to
one of our most sophisticated apps: Final Cut Pro. Here it is running on Apple Silicon
for the first time. Let's play back some 4K video. As you can see, playback is super smooth. And all your filters are here,
and you can apply them in real time. Let's try some color correction.

And I can even add
animated titles and lens flare… all during live playback. And Final Cut takes advantage of the unique capabilities
of the Apple Neural Engine with a new feature that analyzes video
and intelligently crops it to keep the most important action
in the frame. But that's not all. Final Cut fully exploits
the system's multicore architecture to let us play back not just one or two,
but three streams of full-resolution 4K ProRes,
all on an A12Z processor. Amazing. So that's a first look
at Universal apps on Apple Silicon. We're really excited to see
so much great work on native apps. These apps even get more amazing
when they're built to take advantage of the Silicon's powerful capabilities. Like its incredible
CPU and graphics performance, a unified memory architecture
and the Neural Engine which accelerates
advanced machine-learning tasks. The transition to Apple Silicon
is also great for developers who've already optimized their apps
for other Apple platforms.

The shared architecture
across our products means that their code
will absolutely sing on our new Macs. And there's even more to the story. We're doing some really important things to make this transition
seamless for our users. Now, while we expect most developers
will go native immediately, we wanna make sure that users
can run all of their apps on day one, even if some apps
haven't yet been updated. Now, we've been down this road before. When we transitioned
from PowerPC to Intel processors, a cornerstone of that transition
was Rosetta, a technology that makes it possible
to run PowerPC apps on Intel-based Macs. macOS Big Sur will include
a new version of Rosetta, Rosetta 2. Rosetta 2 automatically
translates your existing Mac apps so they work on new Macs
with Apple Silicon. And this time Rosetta is even faster,
more powerful and more compatible. It translates the apps
when you install them, so they can launch immediately
and be instantly responsive. And Rosetta 2 can also
translate code on the fly when needed, like for web browsers with just-in-time
JavaScript compilers or for Java code.
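The two modes Craig describes, ahead-of-time translation at install and on-the-fly translation for runtime-generated code, can be sketched with a toy translator. This is a conceptual sketch only, not Apple's implementation: "foreign" instructions are turned into native callables once and cached, so installed code never waits on translation, while unseen code is translated the moment it appears:

```python
# Toy sketch of the idea behind Rosetta 2 (not Apple's implementation):
# translate foreign instructions to native callables, cache the result,
# and fall back to translating on the fly for code generated at runtime.
TRANSLATION_CACHE = {}

def translate(op):
    """'Translate' one foreign instruction into a native callable."""
    if op in TRANSLATION_CACHE:          # already translated at install
        return TRANSLATION_CACHE[op]
    name, n = op.split(":")
    native = {"add": lambda x: x + int(n),
              "mul": lambda x: x * int(n)}[name]
    TRANSLATION_CACHE[op] = native
    return native

def install(program):
    """Ahead-of-time pass: translate the whole binary at install time."""
    for op in program:
        translate(op)

def run(program, value):
    """Installed code hits the cache; unseen ops (e.g. JIT output)
    are translated on the fly as they're encountered."""
    for op in program:
        value = translate(op)(value)
    return value

app = ["add:3", "mul:4"]
install(app)                    # translated once, up front
print(run(app, 1))              # (1 + 3) * 4 = 16, straight from cache
print(run(app + ["add:2"], 1))  # "add:2" translated on the fly -> 18
```

The real system translates machine code rather than toy opcodes, but the caching structure is why translated apps "launch immediately and are instantly responsive," as described above.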

It even handles the most complex
Pro Apps and their plug-ins. Rosetta 2 is transparent to users,
and the performance is amazing. We're also introducing new Virtualization
technologies in macOS Big Sur. So, for developers who wanna run
other environments like Linux or tools like Docker, we have you covered. When you put
all of these technologies together, Universal, Rosetta and Virtualization, you have a system that can run
an amazing diversity of apps. To show you how this all comes together,
I'll hand it over to Andreas Wendker. Thanks, Craig. Let's take a look at some existing apps
running under Rosetta.

Rosetta, of course, works great with
all sorts of apps you use every day. But for our first demo, I'd like to
show you something a bit more challenging. This is Maya, the powerful animation
and modeling software running great here on Apple Silicon. I already have a model open that consists
of over six million polygons and, as you can see,
I can fluidly move around in this scene. So let's make it more challenging
and bring in textures and shaders as well. And still, everything is incredibly fluid. So Rosetta works great,
and the performance is simply fantastic.

But Rosetta isn't just for apps. It also works amazingly well with games. I can even use a game controller. This is Shadow of the Tomb Raider, a high-end AAA game
that's using our Metal APIs. I downloaded it directly from
the Mac App Store, so it's completely unmodified, and it is absolutely beautiful. Let me jump into the water. You can see
some of the lighting effects here. And as I follow the path,
you can see the game is responsive, it's smooth, and the best part is, it's running
at 1080p as a translated app under Rosetta. So these new Macs, they are fast. You can see
some more of the lighting effects here.

It's awesome what Rosetta can do
with existing games. Now, as Craig mentioned, many of our users
rely on apps from other environments. So let me bring up a Linux VM
in Parallels Desktop. You can see the graphical user interface
designed for Linux here. But, of course, many developers
like to use Linux for hosting servers. So let me dive down to the command line
and launch an Apache web server. And now I can simply bring up Safari
I just launched in the Linux VM. Here it is.

Now I want to show you one more
type of app that runs on these new Macs
to run on the same Apple Silicon that we're using on our new Macs, they will run natively,
completely unmodified, on the new Macs as well. Let me show you a few. This here is one of my favorite games,
Monument Valley 2.
It's fun to play here on the new Mac.

And if I want to catch up on my
guitar lessons, I can use Fender Play. [guitar plays] Or if I want to relax
at the end of the day a little bit, I can bring up the Calm app. And that was just a quick look
at Rosetta, Virtualization and support for iPhone and iPad apps,
giving users amazing versatility for running apps and other environments
with macOS on Apple Silicon. Back to you, Craig. Thanks, Andreas. As you saw, Macs built with Apple Silicon will be able
to run iPhone and iPad apps directly. Starting day one, users can download
these apps right from the Mac App Store, and most apps will just work
with no changes from the developer. With everything we're doing, the range of apps that users
will be able to run on these new Macs is truly unprecedented. Together, we have all the technologies in
place to make this an amazing transition. The vast majority of Mac apps can be recompiled as Universal
in a few days, so users can have fast, native apps. Rosetta 2 runs existing Mac apps, our Virtualization technology
makes it easier than ever to bring other environments,
like Linux, to the Mac, and Mac users can, for the first time,
run iOS and iPadOS apps directly, tapping into
the world's most vibrant app ecosystem.

Now, we know our Mac developers will be
eager to get started on this new platform. So to get them going right away,
we're launching a Quick Start Program. The focus of the Quick Start Program is to enable developers
to make their apps Universal and take advantage
of all the capabilities of Apple Silicon. Developers will have access to
documentation and sample code, forums on developer.apple.com, priority DTS support incidents,
and access to labs around the world. This program also includes
new Developer Transition Kit hardware so developers can get going
even before we ship production systems. The DTK hardware
takes the form of a Mac mini, but one with an A12Z SoC inside. It has desktop specs,
including 16 gigabytes of memory, a 512-gig SSD,
and a complement of Mac I/O ports.

Most significantly, it will include the macOS Big Sur developer beta
and Xcode tools. Developers will be able to apply
to the program at developer.apple.com today. We will be shipping units out
starting this week so you can get to work. So that's how macOS Big Sur
is paving the way for a smooth transition to Apple Silicon. This year, we're elevating the Mac
to a whole new level. And it's an incredible opportunity
for developers. I can't wait to see what you all create, and I can't wait until
we can all be together in person again. And now, back to Tim. Thank you, Craig, and thank you, Johny. It truly is a historic day for the Mac. Our vision for the Mac
has always been about embracing breakthrough innovation
and having the courage to make bold changes.

Every time we've done this, the Mac
has come out stronger and more capable. And I have never been more confident about the future of the Mac
than I am today. So, what's the timeline
for this transition? Well, for developers, it begins this week with
the valuable information delivered at this conference as well as
applying for the Quick Start Program. And for customers, we expect to ship our first Mac with
Apple Silicon by the end of this year, and we expect the transition
to take about two years.

We plan to continue to support and release
new versions of macOS for Intel-based Macs for years to come. In fact, we have some new Intel-based Macs
in the pipeline that we're really excited about. What a huge leap forward for the Mac
and for Apple. Apple Silicon will bring
amazing technologies, industry-leading performance, and a common architecture
across all of our products. What an incredible day of announcements. As you've seen,
we haven't stopped innovating. We pushed all of our platforms forward
in some amazing new ways. Our OS releases will be available
as developer betas today. And each of them will have a public beta,
including watchOS for the very first time, starting next month. And all of this great software will be
available to our customers this fall. We hope you've enjoyed
this very special keynote and that you're ready
for the big week ahead, with over 100
engineering-led video sessions, one-on-one consultations
with Apple engineers, and so much more.

We can't wait
to start working with all of you and watch you do
the best work of your lives. At Apple, we've always drawn strength from the diversity of our global community because we truly believe
when we all work together, we can change the world for the better. Thanks to you all for joining us. This has been such a big day, and it's only the beginning
of a huge week to come. So let's have a great WWDC. ["Daydreamer" by Aurora playing] [song fades].

Google I/O Keynote (Google I/O ’17)

love you guys, too. [LAUGHTER] Can't believe it's
one year already. It's a beautiful day.

We're being joined
by over 7,000 people, and we are live streaming
this, as always, to over 400 events
in 85 countries. Last year was the 10th year
since Google I/O started, and so we moved it closer
to home at Shoreline, back where it all began. It seems to have gone well. I checked the Wikipedia
entry from last year. There were some
mentions of sunburn, so we have plenty of
sunscreen all around. It's on us. Use it liberally. It's been a very busy year
since last year, no different from my 13 years at Google. That's because
we've been focused ever more on our core mission
of organizing the world's information. And we're doing it for everyone. And we approach it by
applying deep computer science and technical
insights to solve problems at scale.

That approach has served
us very, very well. This is what allowed
us to scale up seven of our most
important products and platforms to over a billion
monthly active users each. And it's not just
the scale at which these products
are working, users engage with them very heavily. YouTube, not just has
over a billion users, but every single day, users
watch over 1 billion hours of videos on YouTube. Google Maps. Every single day, users navigate
over 1 billion kilometers with Google Maps. So the scale is
inspiring to see, and there are other products
approaching this scale. We launched Google
Drive five years ago, and today, it has over 800
million monthly active users. And every single week, there
are over 3 billion objects uploaded to Google Drive.

Two years ago at Google I/O,
we launched Photos as a way to organize users' photos
using machine learning. And today, we have over
500 million active users, and every single day, users
upload 1.2 billion photos to Google. So the scale of these
products are amazing, but they are all still
working their way up to Android, which,
I'm excited to say, as of this week crossed over 2 billion
active devices.

[APPLAUSE] As you can see, the robot is
pretty happy, too, behind me, so it's a privilege to
serve users of this scale. And this is all
because of the growth of mobile and smartphones, but
computing is evolving again. We spoke last year about this
important shift in computing from a mobile-first to
an AI-first approach. Mobile made us reimagine every
product we were working on. We had to take into account that
the user interaction model had fundamentally changed,
with multi-touch, location, identity, payments, and so on. Similarly, in an
AI-first world, we are rethinking all our products
and applying machine learning and AI to solve user problems. And we are doing this across
every one of our products. So today, if you
use Google Search, we rank differently
using machine learning.

Or if you're using Google
Maps, Street View automatically recognizes restaurant
signs, street signs, using machine learning. Duo with video calling
uses machine learning for low bandwidth situations. And Smart Reply and Allo last
year had great reception. And so today, we
are excited that we are rolling out Smart Reply to
over 1 billion users of Gmail. It works really well. Here's a sample email. If you get an email like this,
the machine learning systems learn to be
conversational, and it can reply, I'm fine with
Saturday, or whatever. So it's really nice to see.
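The Smart Reply behavior described here can be caricatured in a few lines: score some canned replies against the incoming message and surface the closest matches. The canned replies and the bag-of-words scorer below are invented stand-ins for Gmail's actual sequence models.

```python
# Toy illustration of the Smart Reply idea: rank a handful of canned
# replies by word overlap with the incoming email. A real system uses
# learned sequence models; this scorer is a made-up simplification.
import re

CANNED = [
    "I'm fine with Saturday.",
    "Sorry, I can't make it.",
    "Let me check my calendar and get back to you.",
]

def tokens(text):
    # lowercase word set, punctuation stripped
    return set(re.findall(r"[a-z']+", text.lower()))

def suggest(message, k=2):
    words = tokens(message)
    return sorted(CANNED, key=lambda reply: len(words & tokens(reply)),
                  reverse=True)[:k]

email = "Are you free for dinner on Saturday? Let me know."
print(suggest(email))
```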

Just like with every platform
shift, how users interact with computing changes. Mobile brought multi-touch. We evolved beyond
keyboard and mouse. Similarly, we now see
voice and vision as two new important
modalities for computing. Humans are interacting
with computing in more natural and immersive ways. Let's start with voice. We've been using
voice as an input across many of our products. That's because computers
are getting much better at understanding speech. We have had significant
breakthroughs, but the pace, even
since last year, has been pretty amazing to see. Our word error rate
continues to improve, even in very noisy environments.
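Word error rate, the metric cited here, is the word-level edit distance (substitutions, insertions, and deletions) divided by the number of words in the reference transcript. A minimal implementation:

```python
# Word error rate (WER): word-level edit distance between the reference
# transcript and the recognizer's hypothesis, divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# one substitution ("some" -> "sam") over five reference words = 0.2
print(wer("ok google play some music", "ok google play sam music"))
```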

This is why if you speak to
Google on your phone or Google Home, we can pick up
your voice accurately, even in noisy environments. When we were
shipping Google Home, we had originally planned to
include eight microphones so that we could accurately
locate the source of where the user was speaking from. But thanks to deep
learning, we use a technique called neural beamforming. We were able to ship it
with just two microphones and achieve the same quality. Deep learning is what allowed
us about two weeks ago to announce support for
multiple users in Google Home, so that we can recognize up
to six people in your house and personalize the experience
for each and every one. So voice is becoming
an important modality in our products. The same thing is
happening with vision. Similar to speech, we are
seeing great improvements in computer vision. So when we look at
a picture like this, we are able to understand the
attributes behind the picture.

We realize it's your
boy at a birthday party. There was cake and
family involved, and your boy was happy. So we can understand
all that better now. And our computer
vision systems now, for the task of
image recognition, are even better than humans. So it's astounding
progress and we're using it across our products. So if you used the
Google Pixel, it has the best-in-class camera,
and we do a lot of work with computer vision. You can take a low light picture
like this, which is noisy, and we automatically make
it much clearer for you.

Or coming very soon, if you
take a picture of your daughter at a baseball game, and there
is something obstructing it, we can do the hard work
to remove the obstruction– [APPLAUSE] –and– [APPLAUSE] –have the picture of what
matters to you in front of you. We are clearly at an
inflection point with vision, and so today, we are
announcing a new initiative called Google Lens. [APPLAUSE] Google Lens is a set of
vision-based computing capabilities that can understand
what you're looking at and help you take action
based on that information.

We'll ship it first in
Google Assistant and Photos, and it'll come to
other products. So how does it work? So for example, if
you run into something and you want to know
what it is, say, a flower, you can invoke Google
Lens from your Assistant, point your phone at it, and we
can tell you what flower it is. It's great for someone
like me with allergies. [LAUGHTER] Or if you've ever been
at a friend's place and you have
crawled under a desk just to get the username and
password from a Wi-Fi router, you can point your phone at it. [APPLAUSE] And we can automatically
do the hard work for you. Or if you're walking
in a street downtown and you see a set of
restaurants across from you, you can point your phone.

Because we know where you are
and we have our Knowledge Graph and we know what
you're looking at, we can give you the
right information in a meaningful way. As you can see, we're
beginning to understand images and videos. All of Google was built because
we started understanding text and web pages. So the fact that computers can
understand images and videos has profound implications
for our core mission. When we started
working on Search, we wanted to do it at scale. This is why we rethought our
computational architecture. We designed our data
centers from the ground up. And we put a lot
of effort in them. Now that we are evolving for
this machine learning and AI world, we are rethinking our
computational architecture again. We are building what we think
of as AI first data centers. This is why last year,
we launched the tensor processing units. They are custom hardware
for machine learning. They were about 15 to 30 times
faster and 30 to 80 times more power efficient than CPUs
and GPUs at that time. We use TPUs across
all our products, every time you do a search,
every time you speak to Google.

In fact, TPUs are what powered
AlphaGo in its historic match against Lee Sedol. I now see machine learning
as two components. Training, that is, how
we build the neural net. Training is very
computationally intensive, and inference is what
we do at real time, so that when you
show it a picture, we'd recognize whether it's
a dog or a cat, and so on.

Last year's TPUs were
optimized for inference. Training is computationally
very intensive. To give you a sense, each one of
our machine translation models trains on
over three billion words for a week on about 100 GPUs. So we've been working
hard and I'm really excited to announce our next
generation of TPUs, Cloud TPUs, which are optimized for
both training and inference. What you see behind me
is one Cloud TPU board. It has four chips in
it, and each board is capable of 180
trillion floating point operations per second. [WHOOPING] And we've designed it
for our data centers, so you can easily stack them. You can put 64 of these
into one big supercomputer. We call these TPU
pods, and each pod is capable of 11.5 petaflops. It is an important advance
in technical infrastructure for the AI era. The reason we named
it Cloud TPU is because we're bringing it
through the Google Cloud Platform. So Cloud TPUs are
coming to Google Compute Engine as of today. [APPLAUSE] We want Google Cloud to be
the best cloud for machine learning, and so we want
to provide our customers with a wide range
of hardware, be it CPUs, GPUs, including the
great GPUs Nvidia announced last week, and now Cloud TPUs.
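The pod figure follows directly from the per-board number quoted above: 64 boards at 180 teraflops each is 11,520 teraflops, which is the roughly 11.5 petaflops claimed on stage.

```python
# Quick check of the Cloud TPU figures quoted in the talk:
# one board = 180 teraflops, one pod = 64 boards.
tflops_per_board = 180
boards_per_pod = 64
pod_pflops = tflops_per_board * boards_per_pod / 1000  # teraflops -> petaflops
print(pod_pflops)  # 11.52, the "11.5 petaflops" quoted for a pod
```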

So this lays the foundation
for significant progress. So we are focused
on driving the shift and applying AI to
solving problems. At Google, we are bringing
our AI efforts together under Google.ai. It's a collection
of efforts and teams across the company focused on
bringing the benefits of AI to everyone. Google.ai will focus
on three areas, state-of-the-art research,
tools, and infrastructure– like TensorFlow and Cloud TPUs– and applied AI.

So let me talk a little
bit about these areas. Talking about research, we're
excited about designing better machine learning
models, but today it is really time consuming. It's a painstaking effort of a
few engineers and scientists, mainly machine learning PhDs. We want it to be possible
for hundreds of thousands of developers to use
machine learning. So what better way to do
this than getting neural nets to design better neural nets? We call this approach AutoML. It's learning to learn. So the way it works is we take
a set of candidate neural nets. Think of these as
little baby neural nets. And we actually use a neural net
to iterate through them till we arrive at the best neural net.
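The loop described here, generating candidate "baby" nets and iterating toward the best one, can be sketched as follows. The talk says the real system uses reinforcement learning; this toy replaces that controller with plain random search, and the architecture fields and scoring function are invented stand-ins for actually training each candidate.

```python
# Toy sketch of the AutoML search loop: propose candidate architectures,
# score each, keep the best. The real system trains an RNN controller
# with reinforcement learning; random search and a made-up scoring
# function stand in for that machinery here.
import random

random.seed(0)

def candidate():
    # A "baby neural net" described by a few hyperparameters.
    return {
        "layers": random.randint(2, 12),
        "width": random.choice([64, 128, 256, 512]),
        "activation": random.choice(["relu", "swish", "tanh"]),
    }

def score(arch):
    # Stand-in for "train the net and measure validation accuracy":
    # pretend moderately deep, wide ReLU nets do best.
    acc = 0.5 + 0.03 * min(arch["layers"], 8) + arch["width"] / 10000
    return acc + (0.05 if arch["activation"] == "relu" else 0.0)

best = max((candidate() for _ in range(200)), key=score)
print(best)
```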

We use a reinforcement
learning approach. And it's– the
results are promising. To do this is
computationally hard, but Cloud TPUs put it in
the realm of possibility. We are already approaching state
of the art in standard tasks like, say,
image recognition. So whenever I spend
time with the team and think about neural nets
building their own neural nets, it reminds me of one of my
favorite movies, "Inception." And I tell them
we must go deeper.

[LAUGHTER] So we are taking all
these AI advances and applying them to
newer, harder problems across a wide range
of disciplines. One such area is health care. Last year, I spoke about our
work on diabetic retinopathy. It's a preventable
cause of blindness. This year, we
published our paper in the "Journal of the
American Medical Association," and Verily is working
on bringing products to the medical community. Another such area is pathology. Pathology is a
very complex area. If you take an area like
breast cancer diagnosis, even amongst highly
trained pathologists, agreement on some
forms of breast cancer can be as low as 48%.

That's because
each pathologist is reviewing the equivalent of
1,000 10-megapixel images for every case. This is a large data problem,
but one which machine learning is uniquely equipped to solve. So we built neural nets
to detect cancer spreading to adjacent lymph nodes. It's early days,
but our neural nets show a much higher
degree of accuracy, 89% compared to previous
methods of 73%. There are important caveats: we
do have higher false positives. But already, putting this in
the hands of pathologists can improve diagnosis. In general, I think this is
a great approach for machine learning, providing
tools for people to do what they do better. And we're applying it
across even basic sciences. Take biology. We are training
neural nets to improve the accuracy of DNA sequencing.

DeepVariant is a
new tool from Google.ai that identifies genetic variants
more accurately than state-of-the-art methods. Reducing errors is
important in applications. We can more accurately
identify whether or not a patient has genetic disease
and can help with better diagnosis and treatment. We're applying it to chemistry. We're using machine
learning to predict the properties of molecules. Today, it takes an incredible
amount of computing resources to hunt for new
molecules, and we think we can
accelerate timelines by orders of magnitude.

This opens up possibilities
in drug discovery or material sciences. I'm entirely
confident one day, AI will invent new molecules that
behave in predefined ways. Not everything we are
doing is so profound. We are doing even
simple and fun things, like a simple tool which
can help people draw. We call this AutoDraw. Just like today when
you type in Google, we give you suggestions,
we can do the same when you're trying to draw,
even I can draw with this thing.

So it may look
like fun and games, but pushing computers
to do things like this is what helps them
be creative and actually gain knowledge. So we are very excited about
progress even in these areas as well. So we are making
impressive progress in applying machine learning,
and we are applying it across all our products, but
the most important place we are using it is Google
Search and Google Assistant. We are evolving
Google Search to be more assistive for our users. This is why last
year at Google I/O, we spoke about the Assistant,
and since then, we've launched it on Google
Pixel and Google Home. Scott and team are going
to talk more about it, but before that, let's take a
look at the many amazing ways people have been using
the Google Assistant. [VIDEO PLAYBACK] – OK, Google. [MUSIC PLAYING] – Hey, Google? – Hey, Google. – OK, Google. – Hey, Google. [BLING] – Play some dance music. – Sure. [BLING] – This is "Fresh Air." My guest will be– – Kimmy Schmidt on Netflix. [BLING] – OK, Google. Count to 100.

– Sure. 1, 2, 3– – Play vacuum
harmonica on my TV. [VACUUMING] [HARMONICA PLAYS] – –71, 72– – No! – –73– – Play the "Wonder
Woman" trailer. – Hey, Google. Talk to Domino's. – Talk to Lonely Planet. – Talk to Quora. – Show me my photos
from last weekend. [BLING] [SCREAMING] – Your car is parked at 22B. [BEEP BEEP] – Today in the news– [BLING] – Turn the living
room lights on. – OK, turning on the lights. – I'm back, baby. – Hey, Google. Drop a beat. – Flip a coin. – Call Jill. – Set a timer. – Talk to Headspace. [TING] – And then just
for a moment, I'd like you to let go
of any focus at all. Just let your mind do
whatever it wants to do. – Done. – Hey, Google. Good night. – Turning off all the things. See you tomorrow. [END PLAYBACK] [MUSIC PLAYING] [APPLAUSE] SCOTT HUFFMAN: Hey, everyone. Last year at I/O, we introduced
the Google Assistant, a way for you to have a
conversation with Google to get things done
in your world.

Today, as Sundar
mentioned, we're well on our way,
with the Assistant available on over
100 million devices. But just as Google
Search simplified the web and made it more
useful for everyone, your Google Assistant
simplifies all the technology in your life. You should be able
to just express what you want
throughout your day and the right things
should happen. That's what the Google
Assistant is all about. It's your own individual Google. So that video we
saw really captures the momentum of this project. We've made such big strides
and there's so much more to talk about today. The Assistant is becoming
even more conversational, always available wherever you
need it, and ready to help get even more things done.

First, we fundamentally believe
that the Google Assistant should be, hands
down, the easiest way to accomplish tasks, and
that's through conversation. It comes so naturally to
humans, and now Google is getting really good
at conversation, too. Almost 70% of requests
to the Assistant are expressed in
natural language, not the typical keywords that
people type in a search box. And many requests are follow-ups
that continue the conversation. We're really starting to crack
the hard computer science challenge of conversationality
by combining our strengths in speech recognition, natural
language understanding, and contextual meaning. Now recently, we made
the Assistant even more conversational, so each
member of the family gets relevant
responses just for them by asking with their own voice. And we're continuing to make
interacting with your Assistant more natural. For example, it doesn't always
feel comfortable to speak out loud to your Assistant,
so today, we're adding the ability to type to
your Assistant on the phone.

Now, this is great when
you're in a public place and you don't want
to be overheard. The Assistant's also learning
conversation beyond just words. With another person,
it's really natural to talk about what
you're looking at. Sundar spoke earlier about
how AI and deep learning have led to tremendous
strides in computer vision. Soon, with the smarts
of Google Lens, your Assistant will be able to
have a conversation about what you see. And this is really cool,
and Ibrahim is here to help me show you a couple
of examples of what we'll launch in the coming months. So, last time I
traveled to Osaka, I came across a line of
people waiting to try something that smelled amazing.

Now, I don't speak Japanese,
so I couldn't read the sign out front, but Google Translate
knows over 100 languages, and my Assistant will help
with visual translation. I just tap the Google Lens
icon, point the camera, and my Assistant can instantly
translate the menu to English. And now, I can continue
the conversation. IBRAHIM ULUKAYA: What
does it look like? GOOGLE ASSISTANT: These
pictures should match. SCOTT HUFFMAN: All right. It looks pretty yummy. Now notice, I never had to
type the name of the dish.

My Assistant used visual
context and answered my question conversationally. Let's look at another example. Some of the most tedious
things I do on my phone stem from what I see– a business card I
want to save, details from a receipt I need
to track, and so on. With Google Lens,
my Assistant will be able to help with
those kinds of tasks, too. I love live music,
and sometimes I see info for shows around
town that look like fun.

Now, I can just tap
the Google Lens icon and point the camera
at the venue's marquee. My Assistant instantly
recognizes what I'm looking at. Now, if I wanted to, I could
tap to hear some of this band's songs, and my Assistant offers
other helpful suggestions right in the viewfinder. There's one to buy
tickets from Ticketmaster, and another to add the
show to my calendar. With just a tap, my Assistant
adds the concert details to my schedule. GOOGLE ASSISTANT: Saving event.

Saved Stone Foxes for
May 17th at 9:00 PM. SCOTT HUFFMAN: Awesome. [APPLAUSE] My Assistant will help me
keep track of the event, so I won't miss the
show, and I didn't have to open a bunch of
apps or type anything. Thanks Ibrahim. So that's how the
Assistant is getting better at conversation– by understanding language and
voices, with new input choices, and with the power
of Google Lens. Second, the
Assistant is becoming a more connected experience
that's available everywhere you need help, from your living
room to your morning jog, from your commute to
errands around town, your Assistant should
know how to use all of your connected
devices for your benefit. Now, we're making good progress
in bringing the Assistant to those 2 billion
phones, and other devices powered by Android, like TVs,
wearables, and car systems. And today, I'm
excited to announce that the Google Assistant is
now available on the iPhone. [APPLAUSE] Woo. So no matter what
smartphone you use, you can now get help from
the same smart assistant throughout the day at
home, and on the go. The Assistant brings together
all your favorite Google features on the iPhone.

Just ask to get package
delivery details from Gmail, watch videos from your
favorite YouTube creators, get answers from Google
Search, and much more. You can even turn on the
lights and heat up the house before you get home. Now, Android devices and iPhones
are just part of the story. We think the Assistant should
be available on all kinds of devices where people
might want to ask for help. The new Google Assistant SDK
allows any device manufacturer to easily build the Google
Assistant into whatever they're building. Speakers, toys,
drink-mixing robots, whatever crazy device
all of you think up, now can incorporate
the Google Assistant. And we're working with many
of the world's best consumer brands and their
suppliers, so keep an eye out for the badge that says,
"Google Assistant built-in" when you do your holiday
shopping this year. Now obviously, another aspect
of being useful to people everywhere is support
for many languages. I'm excited to announce
that starting this summer, the Google Assistant
will begin rolling out in French, German,
Brazilian Portuguese, and Japanese on both
Android phones and iPhones.

By the end of the
year, we'll also support Italian,
Spanish and Korean. So that's how the Assistant is
becoming more conversational, and how it will be available
in even more contexts. Finally, the
Assistant needs to be able to get all kinds of
useful things done for people. People sometimes ask if
the Assistant is just a new way to search. Now of course, you
can ask your Assistant to get all sorts of
answers from Google Search, but beyond finding
information, users are also asking
the Assistant to do all sorts of things for them.

Now as you've already
seen, the Assistant can tap into capabilities across
many Google Apps and services, but Google's features are
just part of the story. We also open the Assistant
to third-party developers who are building some
really useful integrations. I'll turn it over to Valerie
to share more about how the developer platform
is getting stronger. [MUSIC PLAYING] [APPLAUSE] VALERIE NYGAARD: Hi. OK, so with the actions
on Google Platform, it's been awesome to
see how developers like you have been engaging
with the Google Assistant. Like honestly, you've built
some really cool integrations. Like, I can ask Food Network
about the recipe that's on TV right now. I can work out with
Fitstar, ask CNBC about the news, or
my husband and I can play name that tune
with SongPop, which he is surprisingly good at.

Until now, these
experiences have been available through the
Assistant on Google Home. But today, we're
also bringing them to Android phones and iPhones. It's over 100 million
devices on Android alone. So now people can get
to Google features and third-party
services from anywhere, and they can even pick up where
they left off across devices. So, not only are
third-party integrations available in more places. They'll be able to do more. Starting today,
actions on Google will be supporting transactions. It's a complete end-to-end
solution for developers, including payments, identity,
notifications, receipts, even account creation. The platform handles
all the complexity. Let me show you
how one will work. GOOGLE ASSISTANT:
Hi, how can I help? VALERIE NYGAARD: I'd like
delivery from Panera. PANERA: Hi, this is Panera. I'll need your delivery address.

Which one can I get from Google? GOOGLE ASSISTANT: We'll
go with 1600 Amphitheater. PANERA: What can I
get you started with? VALERIE NYGAARD: I'll have the
strawberry poppy seed salad with steak instead of chicken. PANERA: Got it. How about one of
these cool drinks? VALERIE NYGAARD: And here, I can
just swipe through my options. See what looks good. Agave lemonade. PANERA: Great. Are you ready to check out? VALERIE NYGAARD: Yep. PANERA: OK, the total is $18.40. Are you ready to
place the order? VALERIE NYGAARD: Yes. I'll just scan my fingerprint to
pay with Google, and that's it. [APPLAUSE] PANERA: Thanks. You're all set. VALERIE NYGAARD:
Yeah, super easy, like I was talking to
someone at the store. So here I was a new
Panera customer. I didn't have to install
anything or create an account. You've also probably
noticed I didn't have to enter my address
or my credit card.

I just saved those
earlier with Google, and Panera used
built-in platform calls to request the information. Now, I was in control over what
I shared every step of the way. So– AUDIENCE: Woo! VALERIE NYGAARD: [CHUCKLES]
The developer platform's also getting much stronger for
home automation integrations. Actions on Google can now
support any smart home developer that wants to
add conversational control. Today, over 70 smart
home companies work with the Google Assistant,
so now in my Google Home or from my phone, I can lock my
front door with August locks, control a range
of LG appliances, or check in on my son's room
by putting the Nest cam on TV. All right, now
that we're talking about making your home smarter,
we also have a lot of news to share today about Google
Home, our own smart speaker with the Google
Assistant built in. Here to tell you more
is Rishi Chandra.

[MUSIC PLAYING] [APPLAUSE] RISHI CHANDRA: Thanks, Valerie. You know, it's really
hard to believe we launched Google Home a
little over six months ago, and we've been really
busy ever since. Since launch, we've added
50 new features, including some my favorites like
support for Google Shopping, where I can use my voice
to order items from Costco right to my front door. Or I can get step-by-step
cooking instructions from over 5 million recipes. Or I can even play my favorite
song just by using the lyrics. Now in April, we launched in
the UK to some great reviews. And starting this
summer, we're going to be launching in
Canada, Australia, France, Germany, and Japan. [APPLAUSE] And with support
for multiple users, we can unlock the full
potential of Google Home to offer a truly
personal experience. So now, you can schedule
a meeting, set a reminder, or get your own daily
briefing with My Day by using your own voice.

And get your commute, your
calendar appointments, and your news sources. Now today, I'd like to
share four new features we'll be rolling out
over the coming months. So first, we're
announcing support for proactive assistance
coming to Google Home. Home is great at providing
personally relevant information for you when you
ask for it, but we think it'd be even more
helpful if it can automatically notify you of those timely
and important messages. And we do this by understanding
the context of your daily life, and proactively looking for
that really helpful information, and providing it for you
in a hands-free way. So for example, let's say I'm
relaxing and playing a game with the kids. Well, I can see that the Google
Home lights just turned on. Hey, Google, what's up? GOOGLE ASSISTANT: Hi, Rishi. Traffic's heavy
right now, so you'll need to leave in 14 minutes
to get to Shoreline Athletic Fields by 3:30 PM. RISHI CHANDRA:
That's pretty nice. The Assistant saw the game
coming up on my calendar, and got my attention
because I had to leave earlier than normal.

So now, my daughter can
make it to that soccer game right on time. Now, we're going
to start simple, with really important messages
like reminders, traffic delays, and flight status changes. And with multiple-user
support, you have the ability to control the
type of proactive notifications you want over time. All right, and second,
another really common activity we do in the home today is
communicate with others. And a phone call is still the
easiest way to reach someone. So today, I'm excited to
announce hands-free calling coming to Google Home. [CHEERING AND APPLAUSE] It's really simple to use. Just ask the Google
Assistant to make a call, and we'll connect you. You can call any landline
or mobile number in the US or Canada completely free. And it's all done
in a hands-free way. For example, let's say I forgot
to call my mom on Mother's Day. Well now, I can
call her while I'm scrambling to get the kids
ready for school in the morning. I just say, hey Google. Call mom. GOOGLE ASSISTANT:
Sure, calling mom.

SPEAKER 1: Finally calling. Mother's Day was three days ago. RISHI CHANDRA: Yeah,
sorry about that. They made me rehearse
for I/O on Mother's Day. Speaking of which, you're
on stage right now. Say hi to everyone. SPEAKER 1: Oh, hi, everyone. AUDIENCE: Hi. RISHI CHANDRA: So, hopefully,
this makes up for not calling, right? SPEAKER 1: No, it doesn't. You still need to visit
and bring flowers.

RISHI CHANDRA: OK, I'm on it. Bye. SPEAKER 1: Bye. RISHI CHANDRA: It's that simple. We're just making a standard
phone call through Google Home. So mom didn't need to learn anything new. She just needs to answer her phone. There's no additional setup,
apps, or even phone required. And since the Assistant
recognized my voice, we called my mom.

If my wife had asked,
we would have called her mom. We can personalize calling
just like everything else. And now, anyone home can
call friends, family, even businesses. Maybe even a local florist to
get some flowers for your mom. Now, by default, we're going to
call out with a private number, but you also have the option
to link your mobile number to the Google Assistant. And we'll use that
number whenever we recognize your voice. So whoever you call will
know it's coming from you. Now, we're rolling out
hands-free calling in the US to all existing
Google Home devices over the next few months. It's the ultimate
hands-free speakerphone. No setup required, call anyone,
including personal contacts or businesses, and even dial out
with your personal number when we detect your voice. We can't wait for
you to try it out.
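The personalization described here, the same spoken name resolving to a different contact per recognized speaker, and the caller ID falling back to private when no number is linked, can be sketched as a couple of lookups. All names and numbers below are made up for illustration; this is not Google's implementation:

```python
# Per-speaker contact books: "mom" means a different person for each
# recognized voice in the household. Numbers are fictional.
CONTACTS = {
    "rishi": {"mom": "650-555-0101"},
    "jess":  {"mom": "650-555-0202"},
}
# Optional linked mobile numbers used as outgoing caller ID.
LINKED_NUMBERS = {"rishi": "650-555-0303"}  # jess has not linked a number

def place_call(speaker, spoken_name):
    """Return (callee_number, caller_id) for a recognized speaker."""
    callee = CONTACTS[speaker][spoken_name]
    caller_id = LINKED_NUMBERS.get(speaker, "private")
    return callee, caller_id

print(place_call("rishi", "mom"))  # ('650-555-0101', '650-555-0303')
print(place_call("jess", "mom"))   # ('650-555-0202', 'private')
```

The key design point from the demo is that voice recognition selects the contact book before any dialing happens, so a shared device still behaves personally.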

OK, third, let's talk a
little about entertainment. We designed Google Home
to be a great speaker, one that you can put in any
room in the house or wirelessly connect to other
Chromecast built-in speaker systems. Well today, we're
announcing that Spotify, in addition to their
subscription service, will be adding their free
music service to Google Home, so it's even easier to play
your Spotify playlists. [APPLAUSE] We'll also be adding support
for SoundCloud and Deezer, two of the largest global
music services today. [APPLAUSE] And these music
services will join many of the others
already available through the Assistant. And finally, we'll be
adding Bluetooth support to all existing
Google Home devices. So you can play any audio from
your iOS or Android device. AUDIENCE: Yes! [APPLAUSE] But Google Home can do
much more than just audio. Last year, we
launched the ability to use your voice to play
YouTube, Netflix, and Google Photos right on your TV.

And today, we're announcing
additional partners, including HBO NOW. [APPLAUSE] So just say what you want to watch,
and we'll play it for you, all in a hands-free way. With Google Home, we want to
make it really easy to play your favorite entertainment. OK, finally, I want
to talk a little bit about how we see the Assistant
evolving to help you in a more visual way. Voice responses are great,
but sometimes a picture is worth a thousand words. So today, we're announcing
support for visual responses with Google Home. Now to do that,
we need a screen. Well, fortunately,
many of us already have a ton of screens in
our home today, our phones, our tablets, even our TVs.

The Google Assistant
should smartly take advantage of all
these different devices to provide you the best
response on the right device. For example, with Google
Home, I can easily get location information. OK, Google. Where is my next event? GOOGLE ASSISTANT:
Your Pokemon GO hike is at Rancho San
Antonio Reserve. RISHI CHANDRA: It's for my kids. GOOGLE ASSISTANT: It's
at 11:00 AM today. RISHI CHANDRA: It's for my kids. Relax. [LAUGHTER] But if I want to
view the directions, the best place to do
it is on my phone. Well soon, you could
just say, OK, Google. Let's go. GOOGLE ASSISTANT: All right,
I'm sending the best route to your phone. RISHI CHANDRA: And it will
automatically notify your phone,
whether it's Android or iOS, and take you straight
to Google Maps.

So you can glance at directions,
interact with the map, or just start navigation. It's really simple. Now TVs are another
natural place to get help from the
Google Assistant, and we have a great place to start
with over 50 million Chromecast and Chromecast built-in devices. So today, we're
announcing that we'll be updating Chromecast to show
visual responses on your TV when you ask for help
from Google Home. For example, I can
now say, OK, Google. Show my calendar for Saturday. GOOGLE ASSISTANT:
Showing it on your TV. RISHI CHANDRA: It'll show
up right on the TV screen. I'll immediately get
results from the Assistant. [APPLAUSE] And since the Assistant
detected my voice, we're showing my calendar. Others would see their
calendar by using their voice. We can personalize the
experience, even on the TV. And I can continue
the conversation with follow-ups. Looks like I have a
biking trip to Santa Cruz.

What's the weather in
Santa Cruz this weekend? GOOGLE ASSISTANT: This
weekend in Santa Cruz, it will be clear and
sunny most of the time. RISHI CHANDRA: So
it's really easy. It's all hands-free. Your Assistant can provide
a visual response on the TV to a lot of different
types of questions. We talked about how
easy it is to play what you want to watch
on the TV screen, but what about those times
you don't know what to watch? Well, soon, you could
just ask, hey, Google. What's on YouTube? GOOGLE ASSISTANT: Here you go.

RISHI CHANDRA: And it'll show
me my personalized results right on the TV screen. If I don't like
any of the options, I can continue the
conversation with my voice. Show my Watch Later list. GOOGLE ASSISTANT: All right. RISHI CHANDRA: Play
"Send My Love." GOOGLE ASSISTANT: Playing
"Send My Love" from YouTube. [MUSIC – "SEND MY LOVE"] RISHI CHANDRA:
It's really simple. Again, no remotes
or phone required. In a short conversation, I found
something really interesting to watch using Google Home. I can even do it
with other things. OK, Google. What's on my DVR? GOOGLE ASSISTANT: Here you go. RISHI CHANDRA:
Here we're showing how it works with YouTube
TV, a new live TV streaming service that gives you
live sports and shows from popular TV networks. And YouTube TV
includes a cloud DVR, so I can easily play
my saved episodes. Everything can be done
in a hands-free way all from the
comfort of my couch. And over time, we're going
to bring all those developer actions that Valerie had already
talked about right to the TV screen.

So we'll do even more over
time with Google Home. And that's our update
for Google Home. Proactive assistance will bring
important information to you at the right time, simple
and easy hands-free calling, more entertainment
options, and evolving the Assistant to provide
visual responses in the home. Next up is Anil, who's going
to talk about Google Photos. [APPLAUSE] [MUSIC PLAYING] ANIL SABHARWAL:
Two years ago, we launched Google Photos
with an audacious goal– to be the home for
all of your photos, automatically organized
and brought to life so that you could easily
share and save what matters.

In doing so, we took a
fundamentally different approach. We built a product from the
ground up with AI at its core. And that's enabled
us to do things in ways that only Google can. Like when you're looking for
that one photo you can't find, Google Photos
organizes your library by people, places, and things. Simply type, "Anil
pineapple Hawaii," and instantly find this gem. [LAUGHTER] Or when you come home
from vacation, overwhelmed by the hundreds of
photos you took, Google Photos will
give you an album curated with only the
best shots, removing duplicates and blurry images. This is the secret ingredient
behind Google Photos, and the momentum we've seen
in these two short years is remarkable. As Sundar mentioned, we now
have more than half a billion monthly active users, uploading
more than 1.2 billion photos and videos per day. And today, I'm
excited to show you three new features
we're launching to make it even easier
to send and receive the meaningful
moments in your life.

Now, at first glance, it
might seem like photo sharing is a solved problem. After all, there's no shortage
of apps out there that are great at keeping you
and your friends and family connected, but we
think there's still a big and different problem
that needs to be addressed. Let me show you what I mean. [VIDEO PLAYBACK] – If there's one
thing you know, it's that you're a
great photographer. If there's a second
thing you know, it's that you're kind
of a terrible person. – What? – Yeah, you heard me. The only photo of the
birthday girl in focus? Never sent it. The best picture of
the entire wedding? Kept it to yourself. This masterpiece of
your best friend? We were going to
send it, but then you were like, oh,
remember that sandwich? I love that sandwich.

If only something could say,
hey, Eric looks great in these. You want to send them to him? And you can be like, great idea. Well, it can. Wait, it can? Yup. With Google Photos. [END PLAYBACK] [APPLAUSE] ANIL SABHARWAL:
So today, to make us all a little less
terrible people, we're announcing Suggested
Sharing, because we've all been there, right? Like when you're
taking that group photo and you insist that it be
taken with your camera, because you know if
it's not on your camera, you are never seeing
that photo ever again. [LAUGHTER] Now thanks to the machine
learning in Google Photos, we'll not only remind you so
you don't forget to share, we'll even suggest
the photos and people you should share with. In one tap, you're done. Let's have a look at
Suggested Sharing in action. I'm once again joined onstage
by my friend, and Google Photos product lead, David Leib. [APPLAUSE] All right, so here
are a bunch of photos Dave took while bowling
with the team last weekend.

He was too busy
enjoying the moment, so he never got around
to sharing them. But this time, Google
Photos sent him a reminder via
notification, and also by badging the new Sharing tab. The Sharing tab is
where you're going to be able to find all of
your Google Photos sharing activity, and at the top,
your personal suggestions based on your sharing habits and
what's most important to you. Here is the Sharing
Suggestion that Dave got from his day bowling. Google Photos recognized
this was a meaningful moment, it selected the right
shots, and it figured out who he should send it to based
on who was in the photos. In this case, it's Janvi,
Jason, and a few others who were also at the event. Dave can now review
the photos selected, as well as update
the recipients.

Or if he's happy with
it, he can just tap Send. And that's it. Google Photos will even
send an SMS or an email to anyone who
doesn't have the app. And that way, everyone can view
and save the full resolution photos, even if they don't
have Google Photos accounts. And because Google
Photos sharing works on any device,
including iOS, let's have a look at what
Janvi sees on her iPhone. She receives a notification,
and tapping on it lets her quickly jump
right into the album. And look at all the photos
that Dave has shared with her. But notice here at
the bottom, she's asked to contribute the photos
she took from the event, with Google Photos automatically
identifying and suggesting the right ones.

Janvi can review the suggestions
and then simply tap Add. Now all of these photos
are finally pulled together in one place, and Dave gets
some photos he's actually in. [LAUGHTER] Which is great, because a
home for all your photos really should include
photos of you. Now, though Suggested Sharing
takes the work out of sharing, sometimes there's a
special person in your life whom you share just
about everything with. Your partner, your best
friend, your sibling. Wouldn't it be great if
Google Photos automatically shared photos with that person? For example, I would love it
if every photo I ever took of my kids was automatically
shared with my wife. And that's why today, we're also
announcing Shared Libraries. [APPLAUSE] Let me show you how it works. So here, we're now looking
at my Google Photos account. From the menu, I
now have the option to go ahead and
share my library, which I'm going to go ahead
and do with my wife, Jess.

Importantly, I have complete
control over which photos I automatically share. I can share them all,
or I can share a subset, like only photos of
the kids, or only photos from a
certain date forward, like when we first met. In this case, I'm going
to go ahead and share all. [LAUGHTER] [LAUGHS] We did not meet today. [LAUGHTER] And that's all there is to it. I've now gone ahead and shared
my library with my wife, Jess. So, let's switch to her phone
to see what the experience looks like from her end. She receives a notification,
and after accepting, she can now go to see
all the photos that I've shared with her, which she
can access really easily from the menu. If she sees something
she likes, she can go ahead and
select those photos and simply save
them to her library.

We'll even notify
her periodically as I take new photos. Now, this is great,
but what if Jess doesn't want to have to keep
coming back to this view and checking if I shared
new photos with her? She just wants every photo
I take of her or the kids to automatically be
saved to her library, just as if she took
the photos herself. With Shared Libraries,
she can do just that, choosing to autosave
photos of specific people. Now, any time I
take photos of her or the kids, without either
of us having to do anything, they'll automatically appear
in the main view of her app. Let me show you. Now, I couldn't justify
pulling the kids out of school today just to have
their photo taken, but I do have the
next best thing.

[APPLAUSE] Let me introduce you to
[? Eva ?] and [? Lilly. ?] All righty here. So I'm going to go ahead,
take a photo with the girls. Smile, kids! [LAUGHTER] Wow, fantastic. And since this is too
good of an opportunity, I'm going to have to
take one with all of you here, too, all right? [CHEERING] Here we go. Woo! Brilliant. All right. OK, so thank you, girls. Much appreciated. Back to school we go. [LAUGHTER] All right. So, using nothing more
than the standard camera app on my phone, I've
gone ahead and taken one photo with my kids and
one photo with all of you here in the audience. Google Photos is going to
back these two photos up. It's going to share
them with Jess, and then it's going to
recognize the photo that has my kids in it
and automatically save just that one to her library,
like you can see right here. [APPLAUSE] Now finally, Jess and I can
stop worrying about whose phone we're using to take the photos. All the photos of our family
are in my Google Photos app, and they automatically
appear in hers too.
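The auto-save rule just described, a shared photo lands in the recipient's own library only when it contains someone they chose to follow, can be sketched as a simple set intersection. The person labels here are illustrative stand-ins for the face groups Google Photos actually uses:

```python
def should_autosave(people_in_photo, autosave_people):
    """True when the shared photo shows at least one followed person."""
    return bool(set(people_in_photo) & set(autosave_people))

# Jess chose to auto-save photos of the kids.
jess_follows = {"Eva", "Lilly"}

print(should_autosave({"Anil", "Eva", "Lilly"}, jess_follows))  # True: saved
print(should_autosave({"Anil", "audience"}, jess_follows))      # False: shared only
```

This matches the demo: the photo with the kids is saved to Jess's library automatically, while the audience photo stays in the shared view unless she saves it by hand.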

And best of all,
these family photos are part of both of
our search results, and they're included in
the great collages, movies, and other fun creations that
Google Photos makes for us. But notice how only the
photos with the kids showed up in Jess's main view. But because I shared my
entire library with her, I can simply go to the
menu, and Jess can now see all of the photos, including
the one with all of you. [APPLAUSE] And that's how easy sharing
can be in Google Photos. Spend less time worrying
about sharing your memories, and more time actually
enjoying them. Suggested Sharing
and Shared Libraries will be rolling out on
Android, iOS, and web in the coming weeks.

Finally, we know
sharing doesn't always happen through apps and screens. There's still something
pretty special about looking at and even gathering around
an actual printed photo. But printing photos and
albums today is hard. You have to hunt across
devices and accounts to find the right
photos, select the best among the duplicates
and blurry images, upload them to a
printing service, and then arrange them
across dozens of pages. It can take hours of sitting
in front of a computer just to do one thing. Thankfully, our machine
learning in Google Photos already does most of
this work for you, and today, we're
bringing it all together with the launch of Photo Books. [APPLAUSE] They're beautiful, high quality
with a clean and modern design, but the best part
is that they're incredibly easy to make,
even on your phone. What used to take hours
now only takes minutes. I recently made a book
for Jess on Mother's Day. And let me show you just
how easy and fast that was. First, thanks to
unlimited storage, all my life's moments are
already here in Google Photos. No need to upload them to
another website or app.

So I'll select a
bunch of photos here. And the good news is I
don't have to figure out which are the right photos
and which are the good ones because this is where
Google Photos really shines. I'm just going to go
ahead and hit plus. Select Photo book. I'm going to pick
a hardcover book. We offer both a softcover
and a hardcover. And notice what happens. Google Photos is going to
pick the best photos for me automatically, automatically
suggesting photos: 40, in this case. [APPLAUSE] How awesome is that? And it's even going to go ahead
and lay them all out for me. All that's left for me to do
is make a couple of tweaks, check out, and in
a few days, I'll end up with one of these
beautiful printed photo books. [APPLAUSE] And soon, we'll make it
even easier to get started, applying machine learning
to create personalized photo books you'll love.

So when you go to Photo
Books from the menu, you'll see pre-made books
tailored just for you. Your trip to the
Grand Canyon, time with your family during
the holidays, or your pet, or even your kids' artwork,
all easily customizable. We'll even notify you when
there are new Photo Books suggestions. AUDIENCE: [INAUDIBLE] ANIL SABHARWAL: Photo Books
are available today in the US on photos.google.com,
and they'll be rolling out on Android
and iOS next week, and will be expanding
to more countries soon.

[APPLAUSE] I am really excited about this
launch, and I want all of you to be the first to try it out. And that's why
everyone here at I/O will be receiving a free
hardcover photo book. [APPLAUSE] It's a great example of
machine learning at work. AUDIENCE: [? $10? ?] Take
that photo [INAUDIBLE] ANIL SABHARWAL: So those are
the three big updates related to sharing in Google Photos. Suggested Sharing, Shared
Libraries, and Photo Books. Three new features built
from the ground up with AI at their core. I can't wait for all of you
to try them out real soon. Now before I go, I want to
touch on what Sundar mentioned earlier, which is the way we're
taking photos is changing.

Instead of the occasional
photo with friends and family, we now take 30 identical
photos of a sunset. We're also taking different
types of photos, not just photos to capture
personal memory, but as a way to
get things done– whiteboards we want to remember,
receipts we need to file, books we'd like to read. And that's where Google Lens
and its vision-based computing capabilities come in. It can understand
what's in an image and help you get things done. Scott showed how Google
Lens and the Assistant can identify what you're looking
at and help you on the fly. But what about after
you've taken the photo? There are lots of photos
you want to keep, and then look back on later to
learn more and take action.

And for that, we're
bringing Google Lens right into Google Photos. Let me show you. So let's say you took
a trip to Chicago. There's some beautiful
architecture there. And during your boat tour
down the Chicago River, you took lots of
photos, but it's hard to remember which
building is which later on. Now, by activating
Lens, you can identify some of the cool
buildings in your photos, like the second
tallest skyscraper in the US, Willis Tower. You can even pull up
directions and get the hours for the viewing deck. And later, while visiting
the Art Institute, you might take photos of a
few paintings you really love. In one tap, you can learn
more about the painting and the artist. And the screenshot that
your friend sent you of that bike rental place? Just activate Lens, and you
can tap the phone number and make the call
right from the photo.

[APPLAUSE] Lens will be rolling out in
Google Photos later this year, and we'll be continually
improving the experience so it recognizes
even more objects and lets you do
even more with them. And those are the updates
for Google Photos. [CHEERING AND APPLAUSE] Now, let's see what's
next from YouTube. [MUSIC PLAYING] SUSAN WOJCICKI: All right. Good morning, everyone. I am thrilled to be
here at my first ever I/O on behalf of YouTube. [APPLAUSE] Thank you. So that opening video
that we all just saw, that's a perfect glimpse into
what makes YouTube so special– the incredible
diversity of content. A billion people
around the globe come to YouTube every
month to watch videos from new and unique voices. And we're hard at
work to make sure that we can reach
the next billion viewers, which you'll hear about
in a later I/O session today.

We want to give
everyone the opportunity to watch the content on YouTube. So, YouTube is different
from traditional media in a number of ways. First of all, YouTube is open. Anyone in the world can upload
a video that everyone can watch. You can be a vlogger
broadcasting from your bedroom, a gamer live streaming
from your console, or a citizen
journalist documenting events live from your
phone on the front lines. And what we've seen
is that openness leads to important
conversations that help shape society,
from advancing LGBTQ rights to highlighting
the plight of refugees, to encouraging body positivity. And we've seen in our
numbers that users really want to engage with this
type of diverse content.

We are proud that last year we
passed a billion hours a day being watched on YouTube,
and our viewership is not slowing down. The second way that
YouTube is different from traditional media is that
it's not a one-way broadcast. It's a two-way conversation. Viewers interact directly
with their favorite creators via comments, mobile live
streaming, fan polls, animated GIFs, and VR. And these features enable
viewers to come together, and to build communities
around their favorite content. So one of my favorite stories
of a YouTube community is the e-NABLE network. A few years ago, an
engineering professor named Jon Schull saw a YouTube
video about a carpenter who had lost two of his fingers. The carpenter worked
with a colleague for over a year to build
an affordable 3D-printed prosthesis that would enable
him to go back to work.

They then applied
this technology for a young boy who was
born without any fingers. So inspired by this
video, the professor posted a single
comment on the video asking for volunteers
with 3D printers to help print
affordable prostheses. The network has since grown
into a community of over 6,000 people who have
designed, printed, and distributed these
prosthetics to children in over 50 countries. [APPLAUSE] So today, thousands
of children have regained the ability
to walk, touch, play, and all because
of the one video– one comment– and that
incredible YouTube community that formed to help. And that's just one example of
the many passionate communities that are coming together
on YouTube around video. So, the third feature
of this new medium is that video works
on-demand on any screen. Over 60% of our watchtime now
comes from mobile devices. But actually our
fastest growing screen isn't the one in your pocket. It's the one in
your living room. Our watchtime in our living room
is growing at over 90% a year. So, let's now welcome Sarah Ali,
Head of Living Room Products, to the stage to talk about the
latest features in the living room.

[MUSIC PLAYING] [APPLAUSE] SARAH ALI: Thank you, Susan. So earlier today,
you heard from Rishi about how people
are watching YouTube on the TV via the Assistant. But another way
people are enjoying video is through the
YouTube app, which is available on over half a billion
smart TVs, game consoles, and streaming devices. And that number continues
to grow around the world. So, when I think
about why YouTube is so compelling
in the living room, it isn't just about
the size of the screen. It's about giving
you an experience that TV just can't match. First, YouTube offers
you the largest library of on-demand content. Second, our recommendations
build channels and lineups based on your
personal interests, and what you enjoy watching. And third, it's a two-way
interactive experience with features like
voice control.

And today, I'm super
excited to announce that we're taking the
interactive experience a step further by introducing
360 video in the YouTube app on the big screen. And you know that
you can already watch 360 videos on your phone
or in your Daydream headset. But soon, you'll be
able to feel like you're in the middle of the action,
right from your couch, and on the biggest
screen you own. Now, one of my personal
interests outside of work is to travel. And one place I'd
love to visit is Alaska to check out
the Northern Lights. So, let's do a voice search. Aurora Borealis 360. Great. Let's choose that first video. And now, using my TV remote, I'm
able to pan around this video, checking out this awesome
view from every single angle. Traveling is great,
especially when I don't have to get on a flight,
but 360 is now a brand-new way to attend concerts. I didn't make it to Coachella,
but here I can experience it like I was on stage.

And to enhance the
experience even further, we are also introducing
live 360 in the living room. Soon, you'll be able to
witness moments and events as they unfold in a new,
truly immersive way. So whether you have a Sony
Android TV, or an Xbox One console, soon, you'll
be able to explore 360 videos right from
the comfort of your couch and along with your
friends and family. And now, to help
show you another way we're enabling
interactivity, please join me in welcoming Barbara McDonald,
who's the lead of something we call Super Chat. [MUSIC PLAYING] [APPLAUSE] BARBARA MACDONALD:
Good morning I/O, and to everybody
on the live stream.

As Susan mentioned, what
makes YouTube special is the relationships
that creators are able to foster with their fans. And one of the best ways to
connect with your fans is to bring them live, behind
the scenes of your videos, offering up can't-miss content. In the past year, the
number of creators live streaming on
YouTube has grown by 4x. This growth is
awesome, and we want to do even more to deepen the
connection between creators and their fans
during live streams. That's why earlier this year,
we rolled out a new feature called Super Chat. When a creator is
live streaming, fans can purchase Super
Chats which are highlighted, fun, chat messages. Not only do fans
love the recognition, but creators earn
extra money from it. In the past three
months since launch, we've been amazed by
the different ways creators are using Super Chat. Even April, our favorite
pregnant giraffe, who unfortunately could
not be here with us today, has raised tens of
thousands of dollars for her home, the
Animal Adventure Park.

But, OK. [CLAPPING] OK, we can clap for that. [APPLAUSE] [LAUGHS] But enough talking from me. We are going to do a live
stream right here, right now, to show all of you
how Super Chat works. And to help me, I am
very excited to introduce top YouTube creators with
9 million subscribers and over 1 billion
lifetime channel views. On the grass back
there, The Slow Mo Guys! [CHEERING AND APPLAUSE] GAVIN FREE: Hello, everyone. DANIEL GRUCHY: Wow, hey. Happy to be here. How's it going? BARBARA MACDONALD:
It's great to have you. So let's pull up
their live stream. And just look. Chat is flying. Now, I love The
Slow Mo Guys, and I want to make sure that
they see my message, so I'm going to Super Chat them. Pulled up the stream. And right from within live chat,
I am able to enter my message, select my amount, make
the purchase, and send.

Boom. See how much that
message stands out? And it gets to the top. It's cool, right? DANIEL GRUCHY: Yeah,
thanks, Barbara. It's actually lovely
at the minute. Although, I feel like there's
a high chance of showers. GAVIN FREE: Very local
showers, like, specifically to this stage. DANIEL GRUCHY: Very sudden. Yeah. BARBARA MACDONALD:
Ooh, I wonder. I wonder. Well, because we know developers
are incredibly creative, we wanted to see what you can
do to make Super Chat even more interactive. So we've launched an API for it.
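The balloon rig shown next makes the shape of such an integration concrete: a Super Chat event comes in, and its amount drives real-world triggers, one balloon per dollar. The sketch below is hypothetical; the dict mimics the micro-unit currency amounts YouTube's API uses, but the field name and trigger strings are made up for illustration:

```python
def handle_super_chat(event):
    """Map one Super Chat event to a list of real-world trigger commands."""
    dollars = event["amount_micros"] // 1_000_000  # 1,000,000 micros = $1
    actions = ["lights_on", "horn"]               # fire the stage effects
    actions.append(f"release_balloons:{dollars}")  # one balloon per dollar
    return actions

# Barbara's $500 Super Chat from the demo:
print(handle_super_chat({"amount_micros": 500_000_000}))
# ['lights_on', 'horn', 'release_balloons:500']
```

In a real integration, a poller would fetch new Super Chat events from the live stream and pass each one through a handler like this to drive the hardware.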

And today, we're taking
it to the next level with a new developer
integration that triggers actions in the real world. This means that when a fan
sends a Super Chat to a creator, things can happen in real life,
such as turning the lights on or off in the creator's
studio, flying a drone around,
or pushing buttons on their toys and gadgets. The Slow Mo Guys are going to
create their next slow motion video using Super Chat's API. We have now rigged things up so
that when I send my next Super Chat, it will
automatically trigger the lights and a big horn
in this amphitheater, OK? And that is going to signal our
friends back there on the lawn to unleash a truckload of water
balloons at The Slow Mo Guys. GAVIN FREE: I'm scared. [CHEERING] DANIEL GRUCHY: Yeah. BARBARA MACDONALD: Yeah. [LAUGHS] DANIEL GRUCHY: That's right. For every dollar, we're going
to take another balloon. So, more money
means more balloons. Although, I did hear
a guy over here go, oh, we're going to
really nail these guys.

All right, that's going to
be at least $4 right there. So, yeah. Each dollar donated goes to
the causes Susan mentioned earlier, the e-NABLE network. BARBARA MACDONALD: OK, so, how
much do you think we can send? I can start at $1 and go
anywhere upwards from there. So, it's for charity. How do we think– $100. How's that sound? AUDIENCE: More. BARBARA MACDONALD: OK,
higher, higher. $200? $200? GAVIN FREE: How about
$500 for 500 balloons? BARBARA MACDONALD: $500? I can do that. I can do that. OK. So I'm going to send my
Super Chat and hit Send. $500. Boom. [HORN BLOWS] DANIEL GRUCHY: Oh! Balloons, oh [INAUDIBLE] god! Agh! BARBARA MACDONALD: [LAUGHS] DANIEL GRUCHY: Ugh. Yep. All right. All right. BARBARA MACDONALD: Keep going. Keep going. DANIEL GRUCHY: Oh! BARBARA MACDONALD: It's 500. DANIEL GRUCHY: It's finished. It's finished. GAVIN FREE: It never ends, ah! DANIEL GRUCHY: Ah! [INAUDIBLE] BARBARA MACDONALD:
That was amazing.

Thank you, everybody,
for your help. So this obviously just
scratches the surface of what is possible using
Super Chat's open APIs. And we are super excited
to see what all of you will do with it next. So Susan, how about
you come back out here, and let's check out the

[APPLAUSE] Thank you, Slow Mo Guys. Thank you, Barbara. I'm really happy to
announce that YouTube is going to match The
Slow Mo Guys' Super Chat earnings from today
100x to make sure that we're supplying
prosthetics to children in need around the world. [APPLAUSE] So that 360 living room demo
and the Super Chat demo– those are just two
examples of how we are working to connect
people around the globe together with video.

Now, I hope that what
you've seen today is that the future of media
is a future of openness and diversity. A future filled with
conversations, and community. And a future that works
across all screens. Together with creators,
viewers, and partners, we are building the
platform of that future. Thank you, I/O, and please– [APPLAUSE] Please welcome
Dave Burke, joining us to talk about Android. [CHEERING AND APPLAUSE] [VIDEO PLAYBACK] [MUSIC – JACKIE WILSON, "HIGHER AND HIGHER"] [END PLAYBACK] DAVE BURKE: It's great to be here
at Google I/O 2017. As you can see, we
found some new ways to hardware accelerate Android.

This time, with jet packs. But seriously, 2 billion
active devices is incredible. And that's just
smartphones and tablets. We're also seeing new momentum
in areas such as TVs, and cars, and watches, and
laptops, and beyond. So let me take a
moment and give you a quick update on how Android
is doing in those areas. Android Wear 2.0 launched
earlier this year with a new update for
Android and iPhone users. And with new partners like
Emporio Armani, Movado, and New Balance, we now enable
24 of the world's top watch brands. Android Auto. We've seen a 10x user
growth since last year. It's supported by more than 300 car
models and the Android Auto mobile app. And just this week,
Audi and Volvo announced that their
next generation nav systems will be powered by
Android for a more seamless, connected car experience. Android TV. We partnered with over 100
cable operators and hardware manufacturers around the world. And we're now seeing 1
million device activations every two months.

And there are more than
3,000 Android TV apps in the Play Store. This year, we're releasing a
brand-new launcher interface, and bringing the Google
Assistant to Android TV. Android Things previewed
late last year, and already there are thousands
of developers in over 60 countries using it to
build connected devices with easy access to the
Google Assistant, TensorFlow, and more. The full launch is
coming later this year. Chromebooks comprise almost 60%
of K-12 laptops sold in the US, and the momentum is
growing globally. And now, with the added
ability to run Android apps, you get to target laptops, too. Now, of course,
platforms are only as good as the apps they run. The Google Play ecosystem
is more vibrant than ever. Android users installed a
staggering 82 billion apps and games in the past year. That's 11 apps for every
person on the planet. All right, so let's come
back to smartphones. And the real reason I'm here
is to talk about Android O. Two months ago, we launched our
very first developer preview. So you could kick the tires
on some of the new APIs.

And of course, it's very
much a work in progress, but you can expect the
release later this summer. Today, we want to walk you
through two themes in O that we're excited about. The first is something
called Fluid Experiences. It's pretty incredible what you
can do on a mobile phone today, and how much we rely on them
as computers in our pockets. But there are still
certain things that are tough to do
on a small screen, so we're doing a
couple of features in O that we think will
help with this, which I'll cover
in just a moment. The second theme is
something we call Vitals. And the concept here is to
keep vital system behavior in a healthy state so we can
maximize the user's battery, performance, and reliability. So let's jump
straight in and walk through four new
fluid experiences, with live demos,
done wirelessly.

What could possibly go wrong? [LAUGHTER] All right. These days, we do a lot of
multitasking on our phones, whether it's paying
for groceries while reading a text
message you just received, or looking up guitar chords
while listening to a new song. But conventional
multi-window techniques don't translate well to mobile. They're just too fiddly to
set up when you're on the go. We think Picture-in-Picture
is the answer for many cases. So let's take a look.

My kids recently asked me
to build a lemonade stand. So I opened up YouTube, and I
started researching DIY videos. And I found this one. Now, at the same
time, I want to be able to jot down the
materials I need to build this lemonade stand. So to multitask, all I do
is press the Home button, and boom, I get Picture-in-Picture.

You can think of it as a kind
of automatic multi-window. I can move it out of the
way, I can launch Keep, I can add some more materials. So I know I need to get
some wood glue, like so. Then when I'm done, I just
simply swipe it away like that. It's brilliant. Picture-in-Picture lets you
do more with your phone. It works great when
video calling with Duo.

For example, maybe I
need to check my calendar while planning a
barbecue with friends. And there are lots of
other great use cases. For example,
Picture-in-Picture for Maps navigation, or watching
Netflix in the background, and a lot more. And we're also excited
to see what you come up with for this feature. We're also making
notification interactions more fluid for users. From the beginning,
Android has really blazed a trail when it comes
to its advanced notification system. In O, we're extending the
reach of notifications with something we call
Notification Dots. It's a new way
for app developers to indicate that there's
activity in their app, and to drive engagement. So take a look. You'll notice that the Instagram
app icon has a dot in it. And this is
indicating that there's a notification
associated with the app. So if I pull down the
shade, sure enough, you can see there's
a notification. In this case,
someone's commented on a photo I'm tagged in. What's really cool is I can
long press the app icon, and we now show the
notification in place.

One of the things I really
like about the Notification Dot mechanism is that it works
with zero effort from the app developer. We even extract the color
of the dot from your icon. Oh, and you can erase
the dot by simply swiping the notification away like that. So you're always in control. Another great feature in O that
helps make your experience more fluid is Autofill. Now, if you use
Chrome, you're probably already familiar with Autofill
for quickly filling out a username and
password, or credit card information with a single tap. With O, we've extended
Autofill to apps. Let's say I'm setting up a
new phone for the first time, and I open Twitter.

And I want to log in. Now, because I use twitter.com
all the time on Chrome, the system will automatically
suggest my username. I can simply tap it. I get my password. And then, boom. I'm logged in. It's pretty awesome. [APPLAUSE] Autofill takes the
pain out of setting up a new phone or tablet. Once the user opts
in, Autofill will work for most applications. We also provide
APIs for developers to customize Autofill
for their experience. I want to show you
one more demo of how we're making Android more fluid
by improving copy and paste. The feature is called
Smart Text Selection.

So let's take a look. In Android, you typically
long press or double tap a word to select it. For example, I can open Gmail. I can start composing. If I double tap the word "bite,"
it gets selected like so. Now, we know from user
studies that phone numbers are the most copy-and-pasted items. The second most common are
named entities like businesses, and people, and places. In O, we're applying
on-device machine learning– in this case, a feedforward
neural network– to recognize these more
complicated entities. So watch this. I can double tap anywhere on
the phrase, "Old Coffee House," and all of it is
selected for me. No more fiddling around
with text selection handles. [APPLAUSE] It even works for addresses. So if I double tap on the
address, all of it is selected. And what's more– [APPLAUSE] There is more. What's more is the
machine learning model classifies
this as an address and automatically suggests Maps. So I can get directions
to it with a single click. And of course, it works as
you'd expect for phone numbers. You get the phone
dialer suggested.
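The flow just demonstrated, classify the selected text and then suggest the matching app, can be mimicked with a toy stand-in. The real feature uses an on-device feedforward neural network; the regex rules and app names below are purely illustrative assumptions, not how Android actually classifies entities.

```python
import re

# Toy stand-in for Smart Text Selection's classify-then-suggest step.
# Android's real classifier is an on-device neural network; these regex
# patterns and app names are illustrative assumptions only.

def suggest_app(text):
    """Map a selected entity to an app to suggest (or None)."""
    if re.fullmatch(r"[\w.+-]+@[\w-]+\.[\w.]+", text):
        return "gmail"          # looks like an email address
    if re.fullmatch(r"\+?[\d\s().-]{7,}", text):
        return "dialer"         # looks like a phone number
    if re.search(r"\b(St|Ave|Blvd|Rd|Road|House)\b", text):
        return "maps"           # looks like an address or place name
    return None

print(suggest_app("dave@example.com"))           # gmail
print(suggest_app("+1 650-253-0000"))            # dialer
print(suggest_app("Old Coffee House, 5th Ave"))  # maps
```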

And for email addresses,
you get Gmail suggested. All of this neural
networking processing happens on-device in real time,
and without any data leaving the device. It's pretty awesome. Now, on-device
machine learning helps to make your phone smarter. And we want to help
you build experiences like what you just saw. So we're doing two
things to help. First, I'm excited to
announce that we're creating a specialized version
of TensorFlow, Google's open source machine
learning library, which we call TensorFlow Lite. It's a library for apps
designed to be fast and small, yet still enabling
state-of-the-art techniques like convnets and LSTMs. Second, we're introducing
a new framework in Android to hardware accelerate
neural computation. TensorFlow Lite will leverage
a new neural network API to tap into silicon-specific
accelerators. And over time, we expect to
see DSPs specifically designed for neural network
inference and training.
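A large part of what makes an inference library "fast and small" is 8-bit quantization: storing weights as small integers plus a scale factor instead of 32-bit floats, shrinking models roughly 4x. The sketch below shows generic affine quantization; it illustrates the technique, not TensorFlow Lite's actual implementation.

```python
# Generic affine quantization sketch: map float weights to 8-bit values
# (0..255) plus a scale and zero point, then recover approximations.
# Illustrative only -- not TensorFlow Lite's implementation.

def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0      # avoid div-by-zero for constant weights
    zero_point = round(-lo / scale)     # the 8-bit value representing 0.0
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Every restored weight is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
print(q)
```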

We think these new
capabilities will help power our next
generation of on-device speech processing, visual search,
augmented reality, and more. TensorFlow Lite will soon
be part of the open source
the neural network API will be made available later
in an update to O this year. OK, so that's a
quick tour of some of the fluid experiences
in O.

Let's switch gears and talk about Vitals. So to tell you more,
I want to hand it over to Steph, who's been
instrumental in driving this project. Thank you. [MUSIC PLAYING] STEPHANIE SAAD
CUTHBERTSON: Hi, everyone. OK, so all the features
Dave talked about are cool. But we think your phones'
foundations are even more important–
battery life, security, startup time, and stability. After all, if your battery dies
at 4:00 PM, none of the other features that Dave talked
about really matter. So in O, we're investing
in what we call Vitals, keeping your phone secure
and in a healthy state to maximize power
and performance. We've invested in three
foundational building blocks– security enhancements,
OS optimizations, and tools to help
developers build great apps. First, security. Android was built with
security in mind from day one with application sandboxing. As Android has matured, we've
developed vast mobile security services. Now, we use machine learning to
continuously comb apps uploaded to Play, flagging
potentially harmful apps.

Then, we scan over 50
billion apps every day, scanning every installed app
on every connected device. And when we find a
potentially harmful app, we disable it or remove it. And we found most
Android users don't know these services
come built-in with Android devices with Play. So for greater
peace of mind, we're making them more
visible and accessible, and doubling down
on our commitment to security, with the
introduction of Google Play Protect. [APPLAUSE] So here, you can see
Play Protect has recently scanned all your apps. No problems found. That's Google Play Protect.

It's available out of the
box on every Android device with Google Play. Second, OS optimizations. The single biggest visible
change in O is boot time. On Pixel, for example,
you'll find, in most cases, your boot time is
now twice as fast. And we've made all
apps faster by default. We did this through extensive
changes to our runtime. Now, this is really cool stuff,
like concurrent compacting garbage collection
and code locality. But all you really need
to know is that your apps will run faster and smoother. Take Google Sheets–
aggregate performance over a bunch of common actions
is now over two times as fast. And that's all from the OS. There are no changes to the app. But we found apps
could still have a huge impact on performance. Some apps were running
in the background, and they were consuming tons
of system resources, especially draining battery.

So in O, we're
adding wise limits to background location
and background execution. These boundaries put
sensible limits on usage. They're protecting battery
life and freeing up memory. Now, our third theme is helping
developers build great apps. And here, I want
to speak directly to all the developers
in the audience. Wouldn't it be cool if Android's
engineering team could show you what causes performance issues? Today, we've launched
Play Console Dashboards that analyze every
app and pinpoint six top issues that
cause battery drain, crashes, and slow UI. For each issue the app
has, we show how many users are affected and provide
guidance on the best way to fix. Now, imagine if developers could
also have a powerful profiler to visualize what's
happening inside the app. In Android Studio, we've also
launched new unified profiling tools for network,
memory, and CPU. So, developers can
now see everything on a unified timeline, and
then dive into each profiler. For example, on CPU, you
can see every thread.

You can look at the call
stack, and the time every call is taking. You can visualize
where the CPU is going. And you can jump to
the exact line of code. OK, so that's Android Vitals. [APPLAUSE] How we're investing
in your phone's foundational security
and performance. Later today, you'll see
Android's developer story from end to end. Our hard work to
help developers build great apps at every stage– writing code, tuning,
launching, and growing.

But there is one more thing. One thing we think would
be an incredible complement to the story. And it is one thing our team
has never done for developers. We have never added a
new programming language to Android. And today, we're making
Kotlin an officially supported language in Android. [APPLAUSE] So, Kotlin– Kotlin is one our
developer community has already asked for. It makes developers so
much more productive. It is fully Android
runtime compatible. It is totally interoperable
with your existing code. It has fabulous IDE support. And it's mature and
production ready from day one. We are also announcing our
plans to partner with JetBrains, creating a foundation
for Kotlin. I am so happy JetBrains CEO,
Max Shafirov, is here today. [APPLAUSE] This new language is
wonderful, but we also thought we should increase
our investment in our existing languages. So we're doing that, too. Please join us at the
developer keynote later today to hear our story
from end to end.

OK, so let's wrap up. There are tons more features
in Android O, which we don't have time to go into today. Everything from
redesigned settings, to Project Treble, which
is one of the biggest changes to the
foundations of Android to date, to downloadable fonts
with new emoji, and much more. If you want to try some of
these features for yourself– and you do– I'm happy to announce we're
making the first beta release of O available today. Head over to android.com/beta. [APPLAUSE] But there's more. [LAUGHS] You probably
thought we were done talking about
Android O, but, I'd like you to hear some
more about Android. And from that, please
welcome Sameer. Thank you. [MUSIC PLAYING] [APPLAUSE] SAMEER SAMAT: Thanks, Steph. Hi, everyone. From the beginning,
Android's mission has been to bring the power
of computing to everyone. And we've seen tremendous
growth over the last few years, from the high end to
entry-level devices, in countries like
Indonesia, Brazil and India. In fact, there are now
more users of Android in India than there
are in the US.

And every minute,
seven Brazilians come online for the first time. Now, all this
progress is amazing. For those of us who
have a smartphone, we intuitively understand
the profound impact that computing is having
on our daily lives. And that's why our team
gets so excited about how we can help bring this
technology to everyone. So we took a step back
to think about what it would take to get
smartphones to more people. There are a few
things that are clear. Devices would need to
be more affordable, with entry-level prices
dropping significantly. This means hardware that uses
less powerful processors and far less memory
than on premium devices. But the hardware is
only half the equation. The software also
has to be tuned for users' needs around
limited data connectivity and multilingual use. We learned a lot
from our past efforts here with Project
Svelte and KitKat, and the original
Android One program.

But we felt like the time was
right to take our investment to the next level. So today, I'm
excited to give you a sneak peek into a
new experience we're building for entry-level
Android devices. Internally, we
call it Android Go. Android Go focuses
on three things. First, optimizing the
latest release of Android to run smoothly on
entry-level devices, starting with Android
O. Second, a rebuilt set of Google Apps that
use less memory, storage space, and mobile data. And third, a version
of the Play Store that contains the
whole app catalog, but highlights the apps
designed by all of you for the next billion users. And all three of these
things will ship together as a single experience starting
on Android O devices with 1 gigabyte or less of memory.

Let's take a look at
some of the things we're working on for Android Go. First, let's talk about
the operating system. For manufacturers to make more
affordable entry-level devices, the prices of their
components have to come down. Let's take one example. Memory is an
expensive component. So we're making a
number of optimizations to the system UI and the
kernel to allow an Android O device built with
the Go configuration to run smoothly with as
little as 512 megabytes to 1 gigabyte of memory.

Now on-device
performance is critical, but data costs and
intermittent connectivity are also big
challenges for users. One person put it best
to me when she said, mobile data feels like currency. And she wanted more control
over the way she spent it. So on these devices, we're
putting data management front and center in Quick Settings. And we've created an API that
carriers can integrate with, so you can see exactly how much
prepaid data you have left, and even top up right
there on the device. But beyond the OS,
the Google Apps are also getting
smarter about data. For example, on these devices,
the Chrome Data Saver feature will be turned on by
default. Data Saver transcodes content on the
server and simplifies pages when you're on a
slow connection. And, well, now we're
making the savings more visible here in the UI.

In aggregate, this
feature is saving users over 750 terabytes
of data every day. I'm really excited that the
YouTube team has designed a new app called YouTube Go for
their users with limited data connectivity. Feedback on the new YouTube
app has been phenomenal, and we're taking many of the
lessons we've learned here and applying them to
several of our Google Apps. Let me show you some of the
things I love about YouTube Go. First, there's a new
preview experience, so you can get a sneak
peek inside a video before you decide to spend
your data to watch it. And when you're sure this
is the video for you, you can select the
streaming quality you want, and see exactly how much mobile
data that's going to cost you. But my favorite
feature of YouTube Go is the ability to save videos
while you're connected. So you can watch them
later when you might not have access to data. And if you want to share any
of those videos with a friend, you can use the built-in
peer-to-peer sharing feature to connect two of your
devices together directly, and share the files
across without using any of your mobile data at all.

[APPLAUSE] But beyond data
management, the Google Apps will also make it
easier to seamlessly go between multiple
languages, which is a really common use case
for people coming online today. For example, Gboard now
supports over 191 languages, including the recent addition
of 22 Indian languages. And there's even a
transliteration feature, which allows you to
spell words phonetically on a QWERTY keyboard to type
in your native language script. Now, Gboard is super cool,
so I want to show it to you. I grew up in the US, so for any
of my family that's watching, don't get too
excited by the demo. I haven't learned Hindi yet. And I'm sorry, mom, OK? [LAUGHTER] So let's say, I want to send a
quick note to my aunt in India. I can open up Allo,
and using Gboard, I can type how it
sounds phonetically. [HINDI], which means
"how are you" in Hindi. And transliteration
automatically gives me Hindi script. That's pretty cool. Now, let's say I want to ask
her how my I/O speech is going, but I don't know how to
say that in Hindi at all.
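The transliteration step in the demo, Latin syllables in and Devanagari out, can be illustrated with a greedy longest-match lookup. The three-entry syllable table below is a toy assumption covering one word; Gboard's real transliteration is model-driven, not a lookup table.

```python
# Toy transliteration: map phonetic Latin chunks typed on a QWERTY keyboard
# to Devanagari script. The table is an illustrative assumption; real
# systems like Gboard use learned models, not lookup tables.

SYLLABLES = {"na": "न", "ma": "म", "ste": "स्ते"}

def transliterate(latin):
    out, i = [], 0
    while i < len(latin):
        for size in (3, 2, 1):            # greedy longest match first
            chunk = latin[i:i + size]
            if chunk in SYLLABLES:
                out.append(SYLLABLES[chunk])
                i += size
                break
        else:
            out.append(latin[i])          # pass unmapped characters through
            i += 1
    return "".join(out)

print(transliterate("namaste"))  # नमस्ते
```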

I can use the built-in
Google Translate feature to say, "how is this going?" And seamlessly, I
get Hindi script, all built right
into the keyboard. [APPLAUSE] My family is apparently
a tough audience. All right. Well, the Google Apps
are getting Go-ified, what had always
propelled Android forward is the apps from all of you. And no surprise, many of
our developer partners have optimized
their apps already.

So to better connect users
with these experiences, we'll be highlighting
them in the Play Store. One example is right
here on Play's home page. To be eligible for
these new sections, we published a set
of best practices called "Building for Billions,"
which includes recommendations we've seen make a big difference
in the consumer experience. Things such as designing
a useful offline state, reducing your APK size to
less than 10 megabytes, and using GCM or JobScheduler
for better battery and memory performance. And also in "Building
for Billions," you'll find best practices for
optimizing your web experience. We've seen developers
build amazing things with new technologies, such
as progressive web apps. And we hope you can come
to our developer keynote later today to learn
a whole lot more.

OK, that was a quick walkthrough
of some of the things coming in Android Go. Starting with Android
O, all devices with 1 gigabyte of RAM or less
will get the Go configuration. And going forward,
every Android release will have a Go configuration. We'll be unveiling much
more later this year, with the first devices
shipping in 2018. We look forward to
seeing what you'll build, and how we can bring computing
to the next several billion users. Next up– next up, you'll
be hearing from Clay on one of Google's newest platforms
that we're really excited about– VR and AR. Thank you. [APPLAUSE] [MUSIC PLAYING] CLAY BAVOR: Thank you, Sameer. So, Sundar talked about how
technologies like machine learning and
conversational interfaces make computing more intuitive by
enabling our computers to work more like we do. And we see VR and AR
in the same light. They enable us to
experience computing just as we experience
the real world. Virtual reality can
be transporting. And you can experience
not just what it's like to see
someplace, but what it's like to really be there.

And augmented reality uses
your surroundings as context, and puts computing
into the real world. A lot has happened since
Google I/O last year, and I'm excited to share a
bit of what we've been up to. So let's start with VR. Last year, we announced
Daydream, our platform for mobile virtual reality. And then in October, to
kick-start the Daydream ecosystem, we released
Daydream View, a VR headset made by Google. And it's super comfortable. It's really easy to use. And there's tons to do with it. You can play inside
alternate worlds, and games like "Virtual
Virtual Reality." And you can see any
part of our world with apps like Street View.

And you can visit other worlds
with apps like Hello Mars. There's already a great
selection of Daydream phones out there, and we're
working with partners to get Daydream on even more. First, I'm pleased
that LG's next flagship phone, which launches later this
year, will support Daydream. And there's another. I'm excited to announce that
the Samsung Galaxy S8 and S8 Plus will add Daydream support
this summer with a software update. [APPLAUSE] So, Samsung, of
course, they make many of the most popular
phones in the world.

And we're delighted to have
them supporting Daydream. So great momentum in
Daydream's first six months. Let's talk about what's next. So with Daydream,
we showed that you can create high
quality mobile VR experiences with
just a smartphone and a simple headset. And there are a lot of nice
things about smartphone VR. It's easy. There aren't a bunch of cables
and things to fuss with. You can choose from a bunch
of great compatible phones. And of course, it's portable. You can throw your
headset in a bag. We asked, how we take the
best parts of smartphone VR and create a kind of device
with an even better experience? Well, I'm excited to announce
that an entirely new kind of VR device is coming to Daydream–
what we call standalone VR headsets. And we're working with
partners to make them. So what's a standalone headset? Well, the idea is that
you have everything you need for VR built right
into the headset itself. There's no cables, no phone,
and certainly, no big PC. And the whole device is
designed just for VR. And that's cool for
a couple of reasons.

First, it's easy to use. Getting into VR is as easy
as picking the thing up. And it's one step
and two seconds. And second, presence. And by that, I mean really
feeling like you're there. By building every part of the
device specifically for VR, we've been able to optimize
everything– the displays, the optics, the sensors– all to deliver a stronger
sense of being transported. And nothing
heightens the feeling of presence like
precise tracking– how the headset
tracks your movement. And we've dramatically improved
tracking with the technology that we call WorldSense. So WorldSense enables what's
known as positional tracking. And with it, your view
in the virtual world exactly matches your
movement in the real world. And it works by using
a handful of sensors on the device that look
out into your surroundings. And that means it
works anywhere. There's no setup.

There's no cameras to install. And with it, you really
feel like you're there. Now, just as we did with
Daydream-ready smartphones, we're taking a platform approach
with standalone headsets, working with partners to
build some great devices. To start, we worked
with Qualcomm to create a Daydream
standalone headset reference design, a sort of
device blueprint that partners can build from. And we're working closely
with two amazing consumer electronics companies to
build the first headsets. First, HTC, the company
that created the VIVE. [APPLAUSE] We're excited about it, too. [CHEERING AND APPLAUSE] They're a leader
in VR, and we're delighted to be working
with them on a standalone VR headset for Daydream. And second, Lenovo. We've been partners for years,
working together on Tango. And now, we're excited
to work with them on VR.

These devices will start to
come to market later this year. So that's the update on VR. Great momentum with apps,
more Daydream-ready phones on the way, and a new category
of devices that we think people are going to love. So let's turn to
augmented reality. And a lot of us were
introduced to the idea of AR last year with Pokemon GO. And the app gave
us a glimpse of AR, and it showed us
just how cool it can be to have digital
objects show up in our world. Well, we've been working
in this space since 2013 with Tango, a sensing
technology that enables devices to understand
space more like we do. Two years ago in 2015, we
released a developer kit. And last year, we shipped the
first consumer-ready Tango phone. And I'm excited to announce
that the second generation Tango phone, the ASUS ZenFone AR
will go on sale this summer.

Now, looking at the slides,
you may notice a trend. The devices are getting smaller. And you can imagine
far more devices having this capability in the future. It's been awesome to
see what developers have done with the technology. And one thing we've
seen clearly is that AR is most
powerful when it's tightly coupled to the real
world, and the more precisely, the better. That's why we've been
working with the Google Maps team on a service that
can give devices access to very precise location
information indoors. It's kind of like
GPS, but instead of talking to satellites
to figure out where it is, your phone looks for
distinct visual features in the environment, and it
triangulates with those.

So you have GPS. We call this VPS, Google's
Visual Positioning Service. And we think it's going
to be incredibly useful in a whole bunch of places. For example, imagine you're at
Lowe's, the home improvement store that has
basically everything. And if you've been there,
you know it's really big. And we've all had
that moment when you're struggling to find that
one, weird, random screwdriver thing. Well, imagine in the
future, your phone could just take you to
that exact screwdriver and point it out to
you on the shelf. Turns out we can
do this with VPS.

And let me show you how. And this is working today. So here we are walking
down an aisle at Lowe's. And the phone will find
these key visual feature points, which you can
see there in yellow. By comparing the feature points
against previously observed ones, those colorful
dots in the back, the phone can figure out exactly
where it is in space down to within a few centimeters. So GPS can get you to
the door, and then VPS can get you to the exact
item that you're looking for.
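The localization idea described above, comparing the feature points the camera sees now against previously observed ones to pin down where the device is, can be sketched in a simplified 2D form. Real VPS recovers a full six-degree-of-freedom pose from 3D landmarks; this toy version assumes known point correspondences and estimates only a 2D offset.

```python
# Simplified 2D sketch of VPS-style localization: given feature points from
# a stored map and the same points as currently observed, the device's
# offset is the mean displacement between corresponding points.
# Toy model -- real VPS solves a full 6-DoF pose from 3D landmarks.

def estimate_offset(map_points, observed_points):
    """Estimate the device's 2D offset from matched feature points."""
    n = len(map_points)
    dx = sum(o[0] - m[0] for m, o in zip(map_points, observed_points)) / n
    dy = sum(o[1] - m[1] for m, o in zip(map_points, observed_points)) / n
    return dx, dy

map_pts = [(0.0, 0.0), (4.0, 1.0), (2.0, 5.0)]   # previously observed map
seen = [(1.0, 2.0), (5.0, 3.0), (3.0, 7.0)]      # what the camera sees now
print(estimate_offset(map_pts, seen))  # (1.0, 2.0)
```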

Further out– [APPLAUSE] Further out, imagine
what this technology could mean to people with
impaired vision, for example. VPS and an audio-based
interface could transform how they make
their way through the world. And it combines so many things
that Google is good at– mapping, computer vision,
distributed computing. And we think precise
location will be critical for
camera-based interfaces. So VPS will be one of the core
capabilities of Google Lens. We're really excited about
the possibilities here. So the last thing
I wanted to share is something that
we've been working on that brings many
of these capabilities together in a really
important area. And that's education. Two years ago, we
launched Expeditions, which is a tool for teachers
to take their classes on virtual reality field trips.

And 2 million
students have used it. Today, we're excited
to announce that we're adding a new capability
to Expeditions– AR mode, which enables kind
of the ultimate show-and-tell right in the classroom. If we could roll
the video, please. [VIDEO PLAYBACK] – All right, who wants
to see a volcano? 3, 2, 1. – Whoa! – Look at that lava. Look at that smoke
coming out of that. Pretend you're an airplane
and fly over the tornado. – That's the top of it. – What do you see? – It's either a
asteroid, meteorite– – We're learning
about DNA and genes– things that we can't see. And so, the most exciting thing
for me with the AR technology was that I could see
kids get an "aha" moment that I couldn't get by
just telling them about it. – The minute I saw it
pop up on the screen, it made me want to
get up and walk to it. – You actually get
to turn around and look at things from all angles, so
it gave us a nice perspective.

– See if you can
figure out what that might be based on what you know
about the respiratory system. – I got to see where the
alveoli branched off, and I could look inside them
and see how everything worked, which I never saw before. And it was really, really cool. [END PLAYBACK] CLAY BAVOR: We're just
delighted with the response we're seeing so far. And we'll be rolling this
out later in the year. So, VR and AR, two
different flavors of what you might call immersive
computing– computing that works more like we do. We think that's a big idea. And in time, we see VR
and AR changing how we work and play, live and learn.

And all that I
talked about here, these are just the first steps. But we can see where
all of this goes, and we're incredibly
excited about what's ahead. Thanks so much. Back to Sundar. [APPLAUSE] [VIDEO PLAYBACK] – We wanted to make machine
learning an open source project so that everyone
outside of Google could use the same system
we're using inside Google. [MUSIC PLAYING] [END PLAYBACK] [APPLAUSE] SUNDAR PICHAI: It's incredible
[? with ?] any open source platform, when you see what
people can do on top of it. We're really excited about the
momentum behind TensorFlow. It's already the most popular
ML repository on GitHub. And we're going to
push it further. We are also announcing the
TensorFlow Research Cloud.

We are giving away
1,000 cloud TPUs, which is 180 petaflops
of computing to academics and researchers for free so that
they can do more stuff with it. I'm always amazed by the stories
I hear from developers when I meet them. I want to highlight
one young developer today, Abu Qader from Chicago. He has used TensorFlow to help
improve health for everyone. Let's take a look. [VIDEO PLAYBACK] [MUSIC PLAYING] [CHATTER] – My name is Abu. I am a high school student. 17 years old. My freshman year, I remember
Googling machine learning. I had no clue what it meant. That's a really cool
thing about the internet, is that someone's already doing
it, so you can just YouTube it, and [CLICK] it's right there. Within a minute, I really saw
what machine learning can do. It kind of like hit
something within me. This need to build
things to help people.

My parents are immigrants
from Afghanistan. It's not easy coming in. The only reason we made it
through some of the times that we did was because people
showed acts of kindness. Seeing that at an early
age was enough for me to understand that
helping people always comes back to you. [INAUDIBLE] – How are you? – And then it kind of hit me– a way where I could actually
genuinely help people. Mammograms are the cheapest
imaging format there is. It's the most accessible to
people all around the world. But one of the biggest problems
that we see in breast cancer is misdiagnosis. So I decided I
was going to build a system for early detection
of breast cancer tumors, that's accessible to everyone,
and that's more accurate. How was I going to do it? Machine learning. The biggest, most extensive
resource that I've used, is this platform
called TensorFlow. And I've spent so
many hours going really deep into these
open source libraries and just figuring
out how it works. Eventually, I wrote
a whole system that can help radiologists
make their decisions. All right. – Ready? – Yeah. I'm by no means a wizard
at machine learning.

I'm completely self-taught. I'm in high school. I YouTubed and just
found my way through it. You don't know about
that kid in Brazil that might have a groundbreaking
idea, or that kid in Somalia. You don't know that
they have these ideas. But if you can open
source your tools, you can give them a
little bit of hope that they can actually conquer
what they're thinking of.

[END PLAYBACK] SUNDAR PICHAI: Abu started this as a school project, and he's continued to
build it on his own. We are very, very fortunate
to have Abu and his family here with us today. [CHEERING AND APPLAUSE] Thank you for joining us. Enjoy I/O. We've been talking
about machine learning in terms of how it will power
new experiences and research. But it's also important we think
about how this technology can have an immediate
impact on people's lives by creating opportunities
for economic empowerment. 46% of US employers say
they face talent shortages and have issues filling open job
positions while job seekers may be looking for openings
right next door.

There is a big disconnect here. Just like we focused
our contributions to teachers and students
through Google for Education, we want to better connect
employers and job seekers through a new initiative,
Google for Jobs. Google for Jobs
is our commitment to use our products to
help people find work. It's a complex,
multifaceted problem, but we've been investing
a lot over the past year, and we have made
significant progress. Last November, we announced
the Cloud Jobs API. Think of it as the first
fully end-to-end, pre-trained, vertical machine learning
model through Google Cloud, which we give to employers– FedEx, Johnson & Johnson,
HealthSouth, CareerBuilder, and we're expanding to
many more employers. On Johnson &
Johnson's career site, they found that applicants
were 18% more likely to apply to a job, suggesting the matching
is working more efficiently. And so far, over 4
and 1/2 million people have interacted with this API. But as we started
working on this, we realized the first
step for many people when they start looking for
a job is searching on Google.

So, it's like other
Search challenges we have worked on in the past. So we built a new feature
in Search with a goal that no matter who you
are or what kind of job you are looking for, you can
find the job postings that are right for you. And as part of this
effort, we worked hard to include jobs across
experience and wage levels, including jobs that have
traditionally been much harder to search and classify– think retail jobs,
hospitality jobs, et cetera. To do this, well, we have
worked with many partners– LinkedIn, Monster, Facebook,
CareerBuilder, Glassdoor, and many more. So let's take a look
at how it works. Let's say you come to
Google and you start searching for retail jobs. And you're from Pittsburgh. We understand that. You can scroll down and click
into this immersive experience. And we immediately start showing
the most relevant jobs for you. And you can filter. You can choose Full-time. And as you can see, you
can drill down easily.
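
The drill-down described here (city, employment type, how recently the job was posted) is, at heart, a chain of filters over job postings. As a minimal illustrative sketch only — not Google's actual implementation; the `JobPosting` shape and all data below are hypothetical — it might look like:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class JobPosting:
    # Hypothetical posting record, for illustration only.
    title: str
    city: str
    full_time: bool
    posted: date

def filter_jobs(jobs, *, city, full_time=None, posted_within_days=None, today=None):
    """Narrow postings the way the demo does: by location,
    employment type, and how recently the job was posted."""
    today = today or date.today()
    results = [j for j in jobs if j.city == city]
    if full_time is not None:
        results = [j for j in results if j.full_time == full_time]
    if posted_within_days is not None:
        cutoff = today - timedelta(days=posted_within_days)
        results = [j for j in results if j.posted >= cutoff]
    return results

jobs = [
    JobPosting("Retail Sales Associate", "Pittsburgh", True, date(2017, 5, 16)),
    JobPosting("Store Clerk", "Pittsburgh", False, date(2017, 5, 15)),
    JobPosting("Barista", "Philadelphia", True, date(2017, 5, 16)),
    JobPosting("Store Manager", "Pittsburgh", True, date(2017, 5, 1)),
]

# Retail jobs in Pittsburgh, full-time, posted within the last three days.
hits = filter_jobs(jobs, city="Pittsburgh", full_time=True,
                   posted_within_days=3, today=date(2017, 5, 17))
print([j.title for j in hits])
```

Each optional argument simply tightens the result set, mirroring how each tap on a filter chip narrows the listings.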

I want to look at jobs which are
posted in the past three days. So you can do that. Now, you're looking at retail
jobs in Pittsburgh, posted within the last three days. You can also filter
by job titles. It turns out employees
and employers use many different terminologies. For example, retail could
mean a store clerk, a sales representative, store manager. We use machine
learning to cluster these automatically so that we
can bring you all the relevant jobs. As you scroll through it,
you will notice that we even show commute times. It turns out to be an important
criterion for many people. And we'll soon add a
filter for that as well.
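
The title clustering mentioned above presumably relies on learned models in the real system. Purely as a toy stand-in — token overlap instead of machine learning, with a made-up similarity threshold — the idea of grouping "store clerk", "sales representative", and "store manager" style titles can be sketched like this:

```python
def jaccard(a: str, b: str) -> float:
    # Similarity between two titles, measured as shared lowercase tokens.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def cluster_titles(titles, threshold=0.2):
    """Greedy clustering: each title joins the first cluster whose
    representative (first member) is similar enough, else starts its own."""
    clusters = []
    for title in titles:
        for cluster in clusters:
            if jaccard(title, cluster[0]) >= threshold:
                cluster.append(title)
                break
        else:
            clusters.append([title])
    return clusters

titles = [
    "retail store clerk",
    "retail sales representative",
    "store manager",
    "elementary school teacher",
    "substitute teacher",
]
for group in cluster_titles(titles):
    print(group)
```

A production system would use embeddings so that, say, "barista" and "coffee shop attendant" cluster together despite sharing no tokens; token overlap here is only a stand-in for that idea.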

And if you find something
that's of interest to you– so maybe the retail
position at Ross. You can click on it, and you
end up going to it right away. You can scroll to find more
information if you want. And you're one click away from
applying there. It's a powerful tool. We are addressing jobs of every
skill level and experience level. And we are committed to making
these tools work for everyone. As part of building
it, we literally talked to hundreds of people. So whether you are in a
community college looking for a barista job, a
teacher who is relocating across the country and you
want teaching jobs, or someone who is looking for
work in construction, the product should
do a great job of bringing that
information to you. We are rolling this out in
the US in the coming weeks, and then we are
going to expand it to more countries in the future.

I'm personally enthusiastic
about this initiative because it addresses
an important need and taps our core
capabilities as a company, from searching and
organizing information, to AI and machine learning. It's been a busy morning. We've talked about
this important shift from a mobile-first
to an AI-first world. And we're driving it forward
across all our products and platforms so that all of you
can build powerful experiences for new users everywhere.

It will take all of
us working together to bring the benefits of
technology to everyone. I believe we are on the verge
of solving some of the most important problems we face. That's our hope. Let's do it together. Thanks for your time today,
and enjoy Google I/O. [APPLAUSE] [MUSIC PLAYING]

As found on YouTube

Surface Duo first look: Microsoft’s foldable Android phone

– Remember Windows Phone
from way back when? Well, Microsoft is kind
of getting back into making smartphones. This is Surface Duo, and it runs Android. Not Windows or Windows Phone. That's right. Microsoft is making a
Surface phone with Android. If that sounds surprising, it's because it really is. But we'll get back into the Android side in a minute. Duo is part of two new futuristic dual-screen devices that
Microsoft announced today. And they're coming in Holiday 2020. Surface Duo has two 5.6 inch displays that fold out into an
8.3 inch device overall. And it's just 4.8 millimeters thin. It folds like many two-in-one laptops thanks to a 360-degree hinge. And it's designed to
get more done on the go. It looks tiny for this type of device, and it felt kind of like a Galaxy Note in my pocket. Now, I wasn't allowed to play around with the software on this device, but it looks and feels
like a tiny pocket tablet that's also a phone. The difference between this and any other Android phone, except maybe the Galaxy Fold, is visually obvious.

But Microsoft thinks this is part of a new category of devices that allow people to do a lot more with tablets and phones
than they do today. As part of this idea, Microsoft also announced a
Surface Neo device today. Which has two larger 9 inch displays. The Duo and the Neo share a very similar design, but they don't share a
common operating system. Neo, the larger dual-screen device, runs Windows 10 X, and has all your familiar
desktop and tablet apps. The reason this isn't
running Windows Phone is because Microsoft gave up on that operating system years ago, when it couldn't convince developers to create apps for its devices. Now we sat down with Microsoft's Chief Product Officer, Panos Panay, on the Vergecast this week, to talk about why Microsoft
chose Android this time for the Surface Duo.

– [Panay] Well because, those are the apps you want. I don't know how to answer it differently for you. Because there's hundreds
of thousands of apps and you want them. Satya and I talked about it; it's about meeting our
customers where they are. And I don't think the, you know, the mobile application platform's going anywhere any time soon, you need the apps. – So you'll get the apps you'd expect from a phone inside
the dual-screen device, but how is this different from any other smartphone? I mean it obviously looks different. And the main idea is making use of these two displays in ways we're only starting to see other Android phone makers explore. You could run a game on one side, and a game pad on the other, or multi-task by dragging
and dropping content between apps. Microsoft hasn't thought
of everything you'd do with the Surface Duo just yet, but that's why it's announcing it now so developers can fill in the gaps.

They're really aiming to introduce a new form factor here, and a way for a device to
adjust itself on the go, no matter the task. We've seen foldable devices from Huawei and Samsung, but the Duo has two separate displays that are made of glass, rather than foldable plastic. Given the issues
with Samsung's Galaxy Fold, that might be a good choice right now. Microsoft has been
working on this hardware for three years, and Panos Panay tells us that this device won't change much by the time it debuts late next year.

The real key question will be whether Android app developers create the apps and
experiences that really take advantage of this dual-screen device. And whether consumers
want this type of hardware in a phone form factor in the first place. That's why Microsoft also has its larger Surface Neo device running Windows. And it really feels like the company wants to offer a Surface in every shape and size. Microsoft also seems to be implying that the operating system really doesn't matter for
Surface devices anymore.

And it's willing to partner
with Google and others to offer what makes sense. So does that mean that
Android is the future for Microsoft? – [Panay] (clears throat)
No no no no no no. You want to give customers what they want in the form factor that they're using. We've learned this, you know, the right operating system
on the wrong product or the other way around, pick your words, but what's the right operating system for the form factor? And in this case, in mobile devices, Android's
the obvious choice. But anything above that, Windows is everything.

Superior for me. – So, will the Surface
Duo and the Surface Neo combine in the future? Will there be a smartphone
that turns into a tablet, that then turns into a laptop, then you dock and turns into a real PC? We're years away from anything even getting close to that. But it opens up the questions about where this dual-screen and foldable hardware is going exactly. And they're really hard
questions to answer right now. Microsoft will now need to convince app developers and consumers that these dual-screen devices are truly the new device category that we've
all been waiting for.

Wherever things end up, it looks like Microsoft wants to be ready at every point with Surface. You want a phone that's a little bit more than a phone that
has an extra display? Surface Duo. You want a tablet that
transforms into a laptop? Surface Neo or Surface Pro. Microsoft is covering
every hardware base here, and it's leaving it up to you to decide what device you actually need.

– [Panay] You know, I think like anything, look at the product you think is most interesting to you and where you think you can be more creative, that's what I would push. And I think this product's gonna be there next year. Not in a hurry, you know, hang out. Take photos or do whatever it is you do on your phone today
for a little bit longer and then, see if we can convince you that you can be more
creative on this product. – It's been a crazy day of Surface devices and there's a bunch of hands-on videos you should check out
on our YouTube channel.

Be sure to also definitely
check out the Vergecast, 'cause it has the full
interview with Panos Panay, and you don't wanna miss it.


Apple announces the new iPhone 7 and 7 Plus, AirPods, and a new Apple Watch

Today Apple unveiled the new iPhone 7 and 7 Plus, as well as their new Apple Watch and wireless earbuds. If you missed the livestream of the presentation, here’s a quick wrap-up video posted by Apple showcasing all the new products in 107 seconds.



The phones themselves are confirmed to have upgraded storage options compared to the previous iPhones: starting at 32GB of internal storage, then jumping up to the 128GB and 256GB options. The phones also come in two ‘new’ darker colours: a sort of anodized Black, and a new glossy option that Apple is calling Jet Black. The devices are also all IP67 rated, which is a welcome addition, making them the first ever iPhones to be water resistant.

If you would like to watch the full keynote in its entirety, here’s a link to their past broadcast: http://www.apple.com/apple-events/september-2016/

The phones are slated to be available September 16th. If you have any questions, you can always contact us by calling (780) 998-9551, or come into our store. We will update when more news becomes available.