From March 8th to 10th, Google Cloud Next ’17 is being held, an event where Google showcases its latest Cloud research and projects, with special emphasis on the potential of artificial intelligence.
In this context, Google has surprised once again with its demonstrations and with what it has achieved using artificial intelligence. Its latest bet is to use artificial intelligence to identify objects in videos, building context and analyzing the elements of each scene.
This may sound familiar, since Google has long done the same with photographs. This technology is applied, for example, in Google Photos, which recognizes elements within images. That is why typing “dog”, “boat” or any other term into the search box brings up pictures containing those elements.
Bringing this capability to video, however, was a great challenge. Google’s AI has now done it, as today’s demonstrations showed. Specifically, Google has introduced the Cloud Video Intelligence API, in beta for developers, which lets them recognize objects in videos and even identify the context to yield more accurate results.
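As a rough sketch of how a developer might call such an API over REST, the snippet below builds the JSON body for an annotation request. The endpoint path, field names, and bucket URI are assumptions based on the beta announcement, not details confirmed in this article; check Google’s documentation for the current format.

```python
import json

# Hypothetical endpoint for the beta API (an assumption; verify in the docs).
ANNOTATE_URL = "https://videointelligence.googleapis.com/v1/videos:annotate"

def build_annotate_request(gcs_uri, features):
    """Build the JSON body for a video annotation request.

    gcs_uri  -- a Google Cloud Storage URI such as "gs://my-bucket/tiger.mp4"
    features -- list of analysis features, e.g. ["LABEL_DETECTION"]
    """
    return {"inputUri": gcs_uri, "features": features}

# Example request body (the bucket and file name are illustrative only).
body = build_annotate_request("gs://my-bucket/tiger.mp4", ["LABEL_DETECTION"])
print(json.dumps(body))
```

The request would then be POSTed with the developer’s credentials; the API answers asynchronously with a long-running operation that eventually contains the detected labels.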
For example, in the image the API not only identifies that it is a tiger but, by analyzing the rest of the elements in the video, can also interpret the setting: whether the animal is in a zoo or in the wild.
You can also search by keywords, and the tool will return the scenes where those objects appear. More details on this API are available on the Google blog.
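To illustrate the keyword-search idea, here is a minimal sketch that filters an annotation response for a search term and returns the matching scenes. The response shape, label names, and timestamps are simplified assumptions for illustration, not the API’s actual wire format.

```python
# Hypothetical, simplified annotation response: each label carries the
# video segments (start and end, in seconds) in which it was detected.
annotations = [
    {"description": "tiger", "segments": [(12.0, 18.5), (40.0, 44.2)]},
    {"description": "zoo",   "segments": [(0.0, 60.0)]},
    {"description": "boat",  "segments": [(70.0, 75.0)]},
]

def find_scenes(annotations, keyword):
    """Return the (start, end) segments whose label matches the keyword."""
    keyword = keyword.lower()
    return [seg
            for label in annotations
            if keyword in label["description"].lower()
            for seg in label["segments"]]

# Searching "tiger" yields the two segments where the tiger was detected.
print(find_scenes(annotations, "tiger"))
```

This is the same dynamic as the Google Photos search described above, only applied to points in time within a video instead of to whole images.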