Google’s YouTube is the first streaming app that will actually tell users to stop watching. The company at its Google I/O conference this week introduced a series of new controls for YouTube that will allow users to set limits on their viewing, and then receive reminders telling them to “take a break.” The feature is rolling out now in the latest version of YouTube’s app along with others that limit YouTube’s ability to send notifications, and soon, one that gives users an overview of their binge behavior so they can make better-informed decisions about their viewing habits.
With “Take a Break,” available from YouTube’s mobile app Settings screen, users can set a reminder to appear every 15, 30, 60, 90 or 180 minutes, at which point the video will pause. You can then choose to dismiss the reminder and keep watching, or close the app.
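The interval logic described above is simple enough to sketch. The following is a hypothetical illustration of the kind of check an app might run against a user's chosen interval, not YouTube's actual code; the function and constant names are made up.

```python
# Hypothetical sketch of the "Take a Break" logic described above: given a
# chosen reminder interval and continuous watch time, decide when to pause
# playback and show the reminder. Names are illustrative, not YouTube's.

ALLOWED_INTERVALS = (15, 30, 60, 90, 180)  # minutes, per the article

def should_take_break(minutes_watched: int, interval: int) -> bool:
    """Return True each time an uninterrupted session hits the interval."""
    if interval not in ALLOWED_INTERVALS:
        raise ValueError(f"interval must be one of {ALLOWED_INTERVALS}")
    return minutes_watched > 0 and minutes_watched % interval == 0
```

The user's choice to dismiss the reminder and keep watching would simply reset the count toward the next multiple of the interval.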
The setting is optional.
Given Google’s recent rebranding mode and a few pieces of news trickling out over the past week, it seemed safe to expect some key updates for the operating system formerly known as Android Wear. But the company only really mentioned Wear OS in passing at yesterday’s keynotes.
Google’s effort to offer an open, Android-like experience for wearable devices has stagnated a bit in recent years, along with the category itself, but the company is rolling out some key updates for developers this week at I/O. Over on the Android Developers blog, the company is highlighting the main features of Developer Preview 2, which launches this week.
The biggest news here is the addition of an “enhanced battery saver mode.” Battery life is certainly one of the chief concerns on these devices, thanks to their relatively small size. While in the new mode, the device will sport a
Google today launched a new tool for teachers and their students called Tour Creator, which allows anyone to create their own VR tour using imagery from Google’s Street View or their own 360-degree photos. The new app is designed to work with Google Cardboard and Google’s existing VR “field trips” app Expeditions.
The goal with Expeditions is to let people virtually travel the world to see far-off places they may never have the chance to visit in person – like Antarctica or Machu Picchu, for example. Google says that since Expeditions’ arrival in 2015, over three million students have virtually visited places around the globe.
Now, the idea is to let students and teachers themselves build their own VR experiences and stories without needing technical knowledge.
Instead, using Tour Creator, anyone can build an immersive 360-degree tour from their computer.
To use the service, you click to get started,
App revenue continues to climb year-over-year, a large part of which can be attributed to the growth of subscription services. Now, Google is looking to make subscribing to apps easier for both consumers and developers alike, with a series of new features announced today at Google’s I/O conference.
On the user’s side of things, the company is launching a new app discovery experience for finding subscription-based apps and tools for managing existing subscriptions.
As the company explained during a breakout session at I/O, consumers are often hesitant to sign up for subscription services because they’re concerned it will be too much of a hassle to cancel — they feel trapped.
Google will address this with a new “subscription center” in Google Play, where users will be able to both explore new subscription apps to try, as well as manage their current subscriptions. Here, they can address issues like updating a
Google managed to elicit an audible gasp from the crowd at I/O today when it showed off a new augmented reality feature for Maps. It was a clear standout during a keynote that contained plenty of iterative updates to existing software, and proved a key glimpse into what it will take to move AR from interesting novelty to compelling use case.
Along with the standard array of ARCore-based gaming offerings, the new AR mode for Maps is arguably one of the first truly indispensable real-world applications. As someone who spent the better part of an hour yesterday attempting to navigate the long, unfamiliar blocks of Palo Alto, California, by following an arrow on a small blue circle, I can personally vouch for the usefulness of such an application.
It’s still early days — the company admitted that it’s playing around with a few ideas here. But it’s easy to see how
Google today showed off new ways it’s combining the smartphone camera’s ability to see the world around you, and the power of A.I. technology. The company, at its Google I/O developer conference, demonstrated a clever way it’s using the camera and Google Maps together to help people better navigate around their city, as well as a handful of new features for its previously announced Google Lens technology, launched at last year’s I/O.
The maps integration combines the camera, computer vision technology, and Google Maps with Street View.
The idea is similar to how people navigate without technology – they look for notable landmarks, not just street signs.
With the camera/Maps combination, Google is doing that now, too. It’s like you’ve jumped inside Street View, in fact.
In this mode, the Google Maps interface is at the bottom of the screen, while the camera is showing you what’s in
Google I/O is nowhere near done. While the main keynote just ended, the company is about to unveil the next big things when it comes to APIs, SDKs, frameworks and more.
The developer keynote starts at 12:45 PM Pacific Time (3:45 PM on the East Coast, 8:45 PM in London, 9:45 PM in Paris) and you can watch the live stream right here on this page.
If you’re an Android developer, this is where you’ll get the juicy details about the next version of Android. You can expect new possibilities and developer tools for you and your company. We’ll have a team on the ground to cover the best bits right here on TechCrunch.
Along with the A.I.-powered changes coming to the Google Photos app, Google today also announced a new way for developers to work with the Google Photos service. The company is launching a developer API that will allow other apps and services to connect to, upload to, and share to a user’s Google Photos library, as well as a Partners Program for Google Photos.
Early partners include HP, Legacy Republic, NixPlay, Xero and TimeHop.
“This is really about allowing users to use their content across the apps and products that they own or that they love – to take that magic of Google Photos and make it helpful to them wherever they need it,” said Google Photos Product Manager Ben Greenwood.
The Google Photos Library API lets developers help users easily find photos based on what’s in the photo, where it was taken or other attributes; upload directly to users’
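The search-by-content capability described above can be sketched against the Library API's REST surface. This is a minimal illustration assuming the `photoslibrary.googleapis.com/v1` endpoint and an OAuth 2.0 access token obtained separately; the category value and the helper's name are placeholders, not prescribed by Google.

```python
# Sketch of searching a user's library by content category with the Google
# Photos Library API, as described above. Assumes an OAuth token with an
# appropriate photoslibrary scope; names here are illustrative.

SEARCH_URL = "https://photoslibrary.googleapis.com/v1/mediaItems:search"

def build_search_body(categories, page_size=25):
    """Build the JSON body for a content-category search request."""
    return {
        "pageSize": page_size,
        "filters": {
            "contentFilter": {"includedContentCategories": list(categories)}
        },
    }

# Sending it would look roughly like this (requires the `requests` package
# and a real access_token):
#   resp = requests.post(
#       SEARCH_URL,
#       headers={"Authorization": f"Bearer {access_token}"},
#       json=build_search_body(["LANDSCAPES"]),
#   )
#   items = resp.json().get("mediaItems", [])
```

Uploading and sharing follow the same pattern against their own endpoints, with the user granting access per app.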
Google unveiled some of the new features in the next version of Android at its developer conference. One feature looked particularly familiar. Android P will get new navigation gestures to switch between apps. And it works just like the iPhone X.
“As part of Android P, we’re introducing a new system navigation that we’ve been working on for more than a year now,” VP of Android Engineering Dave Burke said. “And the new design makes Android multitasking more approachable and easier to understand.”
While Google has probably been working on a new multitasking screen for a year, it’s hard to believe that the company didn’t copy Apple. The iPhone X was unveiled in September 2017.
On Android P, the traditional home, back and multitasking buttons are gone. There’s a single pill-shaped button at the center of the screen. If you swipe up from this button, you get a new
Google today announced at its I/O developer conference a new suite of tools for its Android P operating system that will help users better manage their screen time, including a more robust Do Not Disturb mode and ways to track your app usage.
The biggest change is introducing a dashboard to Android P that tracks all of your Android usage, labeled under the “digital wellbeing” banner. Users can see how many times they’ve unlocked their phones, how many notifications they get, and how long they’ve spent on apps, for example. Developers can also add in ways to get more information on that app usage. YouTube, for example, will show total watch time across all devices in addition to just Android devices.
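At its core, the dashboard described above is an aggregation over raw usage events: unlocks, notifications, and per-app sessions rolled up into daily totals. Here is a toy sketch of that aggregation; the event shape is invented for illustration and this is not Android's actual API.

```python
# Illustrative roll-up of usage events into the kind of summary a "digital
# wellbeing" dashboard shows. Event format is invented: (kind, app, minutes)
# where kind is 'app_session', 'unlock', or 'notification'.

from collections import defaultdict

def summarize_usage(events):
    """Aggregate raw events into unlock/notification counts and
    per-app screen-time totals."""
    summary = {
        "unlocks": 0,
        "notifications": 0,
        "minutes_per_app": defaultdict(float),
    }
    for kind, app, minutes in events:
        if kind == "unlock":
            summary["unlocks"] += 1
        elif kind == "notification":
            summary["notifications"] += 1
        elif kind == "app_session":
            summary["minutes_per_app"][app] += minutes
    return summary
```

An app like YouTube reporting cross-device watch time would simply contribute sessions from other devices into the same per-app total.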
Google says it has designed all of this to promote what developers call “meaningful engagement,” trying to reduce the kind of idle screen time that might not necessarily be
Google wants to bundle its voice assistant into every device and app. And it’s true that it makes sense to integrate Google Assistant in Google Maps. It’ll be available on iOS and Android this summer.
At Google I/O, director of Google Assistant Lilian Rincon showed a demo of Google Maps with Google Assistant. Let’s say you’re driving and you’re using Google Maps for directions. You can ask Google Assistant to share your ETA without touching your phone.
You can also control the music with your voice, for instance. Rincon even played music on YouTube, but without the video element, of course. It lets you access YouTube’s extensive music library while driving.
If you’re using a newer car with Android Auto or Apple CarPlay, you’ve already been using voice assistants in your car. But many users rely exclusively on their phone. That’s why it makes sense to integrate Google Assistant
No more rudely yelling at your Google Home smart speaker, kids. Google today announced at its I/O developer conference a new Google Assistant setting for families called “Pretty Please.” The feature will teach children to use polite language when interacting with the Google Assistant; in return, they’ll receive thanks from the virtual assistant.
For example, when children say “please,” the Assistant will respond with some sort of positive reinforcement while performing the requested task.
During a brief demo, the Assistant was shown interacting with kids, and saying things like “thanks for saying please,” “thanks for asking so nicely,” or “you’re very polite.”
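The behavior on display reduces to detecting polite phrasing in a request and prepending reinforcement to the reply. The following is a toy sketch of that idea; the marker phrases and responses are made up for illustration and are not Google's implementation.

```python
# Toy sketch of the "Pretty Please" behavior described above: detect polite
# phrasing and add positive reinforcement before the normal answer.
# Phrases and responses are illustrative, not Google's.

POLITE_MARKERS = ("please", "thank you", "thanks")

def respond(request: str, answer: str) -> str:
    """Prepend praise to the answer when the request is polite."""
    text = request.lower()
    if any(marker in text for marker in POLITE_MARKERS):
        return "Thanks for asking so nicely! " + answer
    return answer
```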
The feature arrives at a time when parents are growing concerned that kids are learning to treat the virtual assistants in smart speakers rudely, and that this rudeness could carry over into their interactions with people.
Amazon recently addressed this problem with an Alexa update called Magic Word, which
Google is adding morse code input to its mobile keyboard. It’ll be available as a beta on iOS and Android later today. The company announced that new feature at Google I/O after showing a video of Tania Finlayson.
Finlayson has had a hard time communicating with other people due to her condition. She found a great way to write sentences and talk with people using Morse code.
Her husband developed a custom device that analyzes her head movements and translates them into Morse code. When she triggers the left button, it adds a short signal (a dot), while the right button adds a long signal (a dash). Her device then converts the resulting text into speech.
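The pipeline just described – short and long signals accumulating into letters, letters into words – can be sketched as a simple decoder. This is an illustration of the technique, not Gboard's implementation; the table covers A–Z only.

```python
# Minimal Morse decoder illustrating the short/long-signal pipeline described
# above. Letters are space-separated; '/' separates words. A-Z only.

MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E", "..-.": "F",
    "--.": "G", "....": "H", "..": "I", ".---": "J", "-.-": "K", ".-..": "L",
    "--": "M", "-.": "N", "---": "O", ".--.": "P", "--.-": "Q", ".-.": "R",
    "...": "S", "-": "T", "..-": "U", "...-": "V", ".--": "W", "-..-": "X",
    "-.--": "Y", "--..": "Z",
}

def decode(signals: str) -> str:
    """Decode a dot/dash string into text; unknown tokens become '?'."""
    words = []
    for word in signals.split("/"):
        letters = [MORSE.get(token, "?") for token in word.split()]
        words.append("".join(letters))
    return " ".join(words)
```

Word suggestions, as in the real keyboard, would sit on top of this: rank dictionary words whose Morse prefix matches the signals typed so far.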
Google’s implementation will replace the keyboard with two areas for short and long signals. There are multiple word suggestions above the keyboard just like on the normal keyboard. The company has also created a Morse poster so that
Google Photos already makes it easy for users to correct their photos with built-in editing tools and clever, A.I.-powered features for automatically creating collages, animations, movies, stylized photos, and more. Now the company is making it even easier to fix photos with a new version of the Google Photos app that will suggest quick fixes and other tweaks – like rotations, brightness corrections, or adding pops of color, for example – right below the photo you’re viewing.
The changes, which are being introduced on stage at the Google I/O developer conference today, are yet another example of this year’s theme of bringing A.I. technology closer to the end user.
In Google Photos’ case, that means no longer just hiding the A.I. away within the “Assistant” tab, but putting it directly in the main interface.
The company says that the idea to add the fix suggestions to
Scheduling tech conferences is hard, especially in May, when seemingly every company wants to hold a major event, including Google and Microsoft. Typically, Google I/O and Microsoft Build, the flagship developer conferences for both companies, happen within a week or two of each other in May. But not this year. Microsoft today announced that its Build conference in Seattle will run from May 7…
A week ago, Microsoft held its Build developer conference in its backyard in Seattle. This week, Google did the same in an amphitheater right next to its Mountain View campus. While Microsoft’s event felt like it embodied the resurgence of the company under the leadership of Satya Nadella, Google I/O — and especially its various, somewhat scattershot keynotes — fell flat…
We’re here at day two of Google I/O 2017, and the show starts with a keynote specific to Google’s work in VR, led by its head of virtual reality and augmented reality, Clay Bavor. The keynote gets started at 9:30 AM PT (12:30 PM ET), so tune in.
Google just kicked off its annual Google I/O developer conference. It’s a great opportunity to get a sneak peek at what’s next for Android and other platforms. This year was no different as the company announced a ton of stuff.
In case you missed it, here’s everything Google announced today.
There are 2 billion Android devices currently in use around the world. Google is now thinking about the next 2 billion devices. In order to do this, Google has a new project called Android Go. It’s a lightweight version of the upcoming Android release (Android O) with optimized apps and an optimized Play Store.
Google focused on devices with very low specs, users with limited connectivity and…