Google joins the empowered edge with Cloud IoT Edge

The internet of things has been a rapidly growing segment of technology over the past decade. Ever since Apple made the smartphone a consumer success with its first iPhone, users have grown comfortable carrying technology in their hands and pockets. The resulting IoT-filled world has created new opportunities and challenges.

According to IDC, connected devices will generate over 40 trillion gigabytes (roughly 40 zettabytes) of data by 2025. This is too much of a good thing, especially if IoT devices remain only collectors and not processors. To help process this flood of data closer to where it is gathered, Google has announced its Cloud IoT Edge platform, as well as a new hardware chip called the Edge TPU (tensor processing unit).

What are Google’s new announcements?
Google described its decision to move forward on the Cloud IoT Edge platform as “bringing machine learning to the edge.” Essentially, current edge devices, such as drones and sensors, transmit most of the data they collect back to a central location for processing. This approach consumes a lot of bandwidth and slows the rate at which decisions can be drawn from the data. It also depends heavily on constant network connectivity, as any downtime can result in lost information.

Google’s new software solution would allow this data processing to happen right at the data source. It will also enable advanced technology, such as machine learning and artificial intelligence, to operate on these edge devices. Enter the Edge TPU: This chip is designed to maximize performance per watt. According to Google, the Edge TPU can run TensorFlow Lite machine learning models at the edge, accelerating the “learning” process and making the software on these devices more efficient.
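The bandwidth argument is easy to quantify. As a rough illustration, compare streaming every raw sensor sample to the cloud against running the model on the device and sending only the result. All numbers below are hypothetical toy figures, not Google's:

```python
# Toy illustration of why on-device inference saves bandwidth.
# Every constant here is a hypothetical example value, not a
# measurement of any real device or the Edge TPU.

RAW_SAMPLE_BYTES = 4          # one 32-bit sensor reading
SAMPLES_PER_WINDOW = 16_000   # e.g. one second of high-rate telemetry
RESULT_BYTES = 8              # a class label plus a confidence score

def bytes_sent_raw(windows: int) -> int:
    """Device streams every raw sample back for central processing."""
    return windows * SAMPLES_PER_WINDOW * RAW_SAMPLE_BYTES

def bytes_sent_edge(windows: int) -> int:
    """Device runs the model locally and sends only the inference result."""
    return windows * RESULT_BYTES

windows = 3600  # one hour of one-second windows
raw = bytes_sent_raw(windows)
edge = bytes_sent_edge(windows)
print(f"raw upload:  {raw:,} bytes")
print(f"edge upload: {edge:,} bytes")
print(f"reduction:   {raw // edge}x")
```

With these toy numbers the edge device uploads several thousand times less data per hour, which is the core economic case for chips like the Edge TPU.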

Google is seen as one of the big three when it comes to cloud infrastructure solutions.

How does this compare with the greater market?
In this announcement, Google is following in the path of Microsoft. Released globally in July, Azure IoT Edge accomplishes many of the same tasks that the Cloud IoT Edge solution intends to. The two aim to empower edge devices with greater machine learning performance and reduce the amount of data that must be transmitted to be understood.

However, as Microsoft has been in the hardware space much longer than Google, no dedicated chip was needed to accompany the Azure IoT Edge release. It is possible that Google may gain an advantage by releasing hardware designed to optimize the performance of its new platform.

Amazon’s AWS Greengrass also brings machine learning capabilities to IoT devices. However, unlike the other two, this platform has existed for a while and seen modular updates and improvements (rather than a dedicated new release).

The presence of all three cloud platform giants in the edge space signifies a shift toward at-location data processing. Cloud networks have already been enjoying success for their heightened security features and intuitive resource sharing. As these networks become more common, it has yet to be fully seen how Microsoft, Amazon and Google deal with the increased vulnerabilities of many edge devices. However, with all three organizations making a sizeable effort to enter this market space, businesses should prepare to unlock the full potential of their edge devices and examine how this technology will affect workflows and productivity.

The potential of Project Kinect for Azure

When Microsoft first debuted its Kinect hardware in 2010, the product had nothing to do with edge computing, AI or machine learning. The Kinect served as a controller interface for Microsoft's Xbox 360 video game console. (Later versions were released for Windows PC and Xbox One.) Using cameras and sensors, it registered a player's body movements and translated these gestures into controls. While it was innovative, Kinect struggled to gain a foothold.

Despite going through various upgrades, it was fully discontinued as a consumer product in 2017. However, Microsoft did not fully abandon its Kinect hardware. At this year's Build developer conference, the company revealed a new use for its one-time video game accessory: edge computing.

Specifically, the new Kinect project factors into the greater themes of Build 2018, namely combining cognitive computing, AI and edge computing. 

"Microsoft has ambitious plans to bring its Cognitive Services software to Azure IoT Edge."

Microsoft at Build 2018
Edge computing is at the forefront of technological innovation. Capitalizing on the internet of things, this method of data processing de-emphasizes a central hub. Remote sensors receive computer processing power to analyze the data near its source before sending it back, greatly reducing bandwidth needs. This system is also more dependable because the sensors store the data, at least for a limited time span. Network outages or dropped connections won't result in lost or fragmented information.
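The dependability described above comes from local buffering: readings are queued on the device whenever the uplink is down and flushed once connectivity returns, so an outage delays data rather than losing it. A minimal sketch of that behavior (a hypothetical model, not any vendor's API):

```python
from collections import deque

class EdgeSensor:
    """Minimal sketch of an edge device that buffers readings locally
    during network outages. Hypothetical illustration, not a real API."""

    def __init__(self):
        self.buffer = deque()   # local store; bounded in real hardware
        self.uplink = []        # stands in for the central hub

    def record(self, reading, connected: bool):
        """Queue a reading, then deliver the backlog if the link is up."""
        self.buffer.append(reading)
        if connected:
            self.flush()

    def flush(self):
        """Send all buffered readings upstream, oldest first."""
        while self.buffer:
            self.uplink.append(self.buffer.popleft())

sensor = EdgeSensor()
sensor.record(21.5, connected=True)    # delivered immediately
sensor.record(22.1, connected=False)   # outage: held locally
sensor.record(22.4, connected=False)   # still buffering
sensor.record(22.0, connected=True)    # reconnect: backlog flushes
print(sensor.uplink)  # all four readings arrive, in order
```

The key design point is that the device, not the network, owns the data until delivery is confirmed, which is why dropped connections do not produce fragmented records.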

However, these sensors are, at the moment, fairly basic equipment. Microsoft aims to change that. At Build 2018, the company announced ambitious plans to bring its Cognitive Services software to its edge computing solution, Azure IoT Edge. According to TechCrunch, the first of these programs will be the Custom Vision service.

Implementation of this software with Azure IoT Edge can allow unmanned aerial vehicles, such as drones, to perform more complex tasks without direct control from a central data source. It will give these devices the ability to "see" and understand the environment around them, analyzing new visual data streams. This technology can also be used to improve advanced robotics, autonomous vehicles and industrial machines.

This advanced method of machine learning can increase productivity because all of these devices will be able to continue to perform complicated, vision-based tasks even with network connection disruptions.

Microsoft has also partnered with Qualcomm to bring cognitive vision developer tools to devices like home assistants, security cameras and other smart devices.

However, these technologies, the Qualcomm developer tools and the Custom Vision service, while useful, only work with devices equipped with sensors and cameras that can process visual data. To increase the variety of edge sensors that can benefit from these new tools and software services, Microsoft resurrected the Kinect.

Allowing advanced robotics to "see" will enable them to perform far more complex actions, even without a constant relay of instructions.

The power of the Kinect 
In an introduction on LinkedIn, Microsoft Technical Fellow Alex Kipman discussed Project Kinect for Azure. In his piece, Kipman outlined the company's reasoning for opting to return to the commercial failure. First, Kinect has a number of impressive features that make it ideal as a sensor.

These benefits include its 1024×1024 pixel depth resolution, the highest among sensor cameras of this kind. Kinect also comes with a global shutter that helps the device record accurately in sunlight. Its cameras capture images with automatic per-pixel gain selection, which allows the Kinect to capture objects at various ranges cleanly and without distortion. It features multiphase depth calculation to further improve its image accuracy, even when dealing with power supply variation and the presence of lasers. Lastly, the Kinect is a low-power piece of hardware thanks to its high modulation frequency and contrast.

Utilizing the Kinect sensors for cognitive computing makes sense. Looking at the product history, Microsoft had already developed more than half the specifications needed to create an effective sensor. The Kinect was designed to track and process human movement, differentiate users from animals or spectators in the room and operate in numerous real-world settings. It was also made to endure drops and other household accidents. Essentially, the Kinect was a hardy, specialized sensor interface in a market where it had to compete with precise button pressing.

In an industrial space, Kinect can fare far better. Augmenting existing data collection sensors with this visual aid will increase the amount of actionable data that is recorded. The Kinect brings with it a set of "eyes" for any machine. This advantage will let developers and engineers get creative as they seek to build the advanced edge computing networks of the future.