Communicating with your apps and services through voice commands given to Google Assistant could soon be a reality, as Google plans to release its Assistant developer platform in December. Google Assistant, integrated into the Pixel phones, Allo and Google Home, is seen as a leading step in home automation and artificial intelligence, and Google intends to turn it into an ecosystem for apps and services.
The Actions on Google API is scheduled to open to developers next month. Google currently specifies three types of Actions:
1) Direct Actions – triggered by Google Assistant when a voice command doesn't require a follow-up question. These actions suit home automation, media and communications.
2) Conversation Actions – involve a "back and forth" interaction with the user. These can reportedly be built using the Actions on Google API.
3) Embedded Google Assistant SDK – a software development kit that lets developers build Google Assistant directly into devices such as the Raspberry Pi, among other consumer products.
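The "back and forth" flow of a Conversation Action can be pictured as a handler that answers one turn at a time, either asking a follow-up question or ending the dialogue. The sketch below is purely illustrative: the `handle_turn` function and the request/response dictionary shapes are hypothetical, since the actual Actions on Google API has not yet been published.

```python
# Hypothetical sketch of a Conversation Action's turn handler.
# All names and field shapes here are illustrative assumptions,
# not the real Actions on Google API.

def handle_turn(request):
    """Return a response dict for one conversational turn.

    `request` is assumed to carry the user's utterance and a flag
    indicating whether this is the first turn of the conversation.
    """
    utterance = request.get("utterance", "").lower()

    if request.get("first_turn", True):
        # Open the dialogue and keep listening for a reply.
        return {"speech": "Hi! Would you like to book a table?",
                "expect_reply": True}

    if "yes" in utterance:
        # Ask a follow-up question: the conversation continues.
        return {"speech": "Great. For how many people?",
                "expect_reply": True}

    # Otherwise close the conversation: no reply expected.
    return {"speech": "Okay, maybe next time. Goodbye!",
            "expect_reply": False}
```

In a real deployment, requests like these would presumably arrive as JSON from Google Assistant over HTTPS, with the developer's service returning the spoken response and a signal for whether the microphone should stay open.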
Third-party developers typically get restricted access to digital voice assistants such as Google Now and Cortana. Amazon was one of the first to grant developers greater access to its Alexa assistant, which opened up the potential for a range of services. Apple has since realised the potential of Siri and opened up access to third-party developers. Google appears to have something similar in mind with its new Assistant.
XDA reports that Google is working with a number of brands, such as Spotify, CNN, Uber and OpenTable, to familiarise them with the system. Google Assistant already makes use of app indexing, deep linking and the Voice Interaction API to fulfil user requests. So far, third-party apps do not take advantage of the voice assistant, and opening it up to developers is reported to streamline those interactions.