
Should companies embrace Microsoft’s Azure IoT Edge?

As of late June 2018, one of Microsoft's newest software platforms, Azure IoT Edge, is generally available. This means that commercial enterprises and independent consumers now have access to it and, thanks to Microsoft's decision to take the platform open source, can begin modifying the technology to fit specific needs.

Every innovation brings new opportunity and unforeseen challenges, and there is no reason to suspect that Azure IoT Edge will be any different. Even programs created by technology industry leaders like Microsoft have their potential disadvantages. 

What exactly is Azure IoT Edge?
Simply put, Azure IoT Edge represents Microsoft's plan to move data analytics from central processing centers to internet of things-enabled devices. This edge computing technology can equip IoT hardware with cognitive computing capabilities such as machine learning and computer vision. It also frees up enormous bandwidth by moving data processing onto the device itself, and it allows IoT devices to perform more sophisticated tasks without constant human monitoring.

According to Microsoft, there are three primary components at play:

  1. A cloud-based interface allows the user to remotely manage and monitor all Azure IoT Edge devices.
  2. The IoT Edge runtime runs on every IoT Edge device and controls the modules deployed to each piece of IoT hardware.
  3. Each IoT Edge module is a container that runs Azure services, third-party software or a user's own code. Modules are deployed to IoT Edge devices and run locally on that hardware, as sketched below.
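
To make the module concept more concrete, here is a minimal sketch of a filtering module written against the azure-iot-device Python SDK. The route names ("input1", "output1"), the temperature threshold and the module's purpose are illustrative assumptions, not anything prescribed by the platform.

```python
# Minimal sketch of an IoT Edge module, assuming the azure-iot-device SDK.
from azure.iot.device import IoTHubModuleClient, Message

# The IoT Edge runtime injects connection details into the container's
# environment, so no explicit connection string is needed here.
client = IoTHubModuleClient.create_from_edge_environment()
client.connect()

while True:
    # Block until a message is routed to this module's "input1" endpoint.
    incoming = client.receive_message_on_input("input1")
    # Assumes the upstream sensor sends a plain numeric payload.
    reading = float(incoming.data.decode("utf-8"))

    # Run the analytics locally, on the device, instead of in the cloud.
    if reading > 30.0:  # illustrative threshold
        alert = Message('{"alert": "temperature above threshold"}')
        # Forward only the interesting results along the "output1" route.
        client.send_message_to_output(alert, "output1")
```

In a real deployment, code like this would be packaged into a container image and pushed to devices through the cloud-based interface described above; the sketch only shows the shape of what runs inside the container.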

Overall, Azure IoT Edge represents a significant step forward in cloud computing and IoT operations, empowering devices with functionality that wasn't previously possible.

Devices like drones will be able to carry out more sophisticated tasks using Azure IoT Edge.

The cybersecurity concerns of Azure IoT Edge
It is worth remembering that IoT hardware has a long and complicated history with cybersecurity standards. Because the bulk of IoT adoption has been driven by consumer rather than enterprise products, issues like security and privacy have often taken a back seat to interface design and price point.

Research firm Gartner found that 20 percent of organizations had reported at least one IoT-centered data breach in the three years leading up to 2018. That risk is driving IoT security spending expected to reach $1.5 billion globally in 2018. Companies still scrambling to secure their existing IoT hardware may want to prioritize that work before incorporating Microsoft's newest software platform.

Another potential issue is Microsoft's decision to make the platform open source. The source code is now publicly available for anyone to inspect and modify for their own use. While this flexibility will help the product's user base expand, open source programs have not historically been the best protected against cybercriminals.

Magento, an open source platform that powers many ecommerce websites, became the target of a brute force password attack in 2018 that ultimately proved successful. The resulting data breach led to thousands of compromised accounts and stolen credit card information.

A Black Duck Software report tracked open source programs as they have become more widespread. While the overall quality of open source code is improving, the study found that many organizations do not properly monitor and protect the code once it has been put in place, leaving it vulnerable to exploitation from outside sources.

"Microsoft annually invests $1 billion in cybersecurity research."

The Microsoft advantage
However, Microsoft is arguably in a position to address the major security concerns surrounding its Azure IoT Edge platform. The company invests over $1 billion in cybersecurity research each year. According to Azure Government CISO Matthew Rathbun, much of this money is spent with Azure in mind:

"Ninety percent of my threat landscape starts with a human, either maliciously or inadvertently, making a mistake that somehow compromises security," Rathbun told TechRepublic. "In an ideal state, we're going eventually end up in a world where there'll be zero human touch to an Azure production environment."

Azure IoT Edge represents a bold step forward in empowering IoT technology and improving automated productivity. While there are risks associated with every innovation, Microsoft remains committed to staying at the forefront and protecting its platforms. Companies should be willing to invest in Azure IoT Edge while remaining vigilant about the possible risks. 

The potential of Project Kinect for Azure

When Microsoft first debuted its Kinect hardware in 2010, the product had nothing to do with edge computing, AI or machine learning. The Kinect served as a controller interface for Microsoft's Xbox 360 video game console. (Later versions were released for Windows PC and Xbox One.) Using cameras and sensors, it registered a player's body movements and translated those gestures into controls. While innovative, the Kinect struggled to gain a foothold.

Despite going through various upgrades, it was discontinued as a consumer product in 2017. However, Microsoft did not fully abandon its Kinect hardware. At this year's Build developer conference, the company revealed a new use for its one-time video game accessory: edge computing.

Specifically, the new Kinect project factors into the greater themes of Build 2018, namely combining cognitive computing, AI and edge computing. 

"Microsoft has ambitious plans to bring its Cognitive Services software to Azure IoT Edge."

Microsoft at Build 2018
Edge computing is at the forefront of technological innovation. Capitalizing on the internet of things, this method of data processing de-emphasizes a central hub. Remote sensors are given enough processing power to analyze data near its source and send back only the results, greatly reducing bandwidth needs. The approach is also more dependable because the sensors store data locally, at least for a limited time span, so network outages or dropped connections won't result in lost or fragmented information.
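
The store-and-forward idea behind that resilience can be sketched in a few lines of Python. The one-hour buffer, the send_to_cloud helper and the link_up flag are hypothetical placeholders for a real uplink, included only to illustrate the pattern.

```python
# Store-and-forward sketch: readings are kept locally and drained to the
# cloud whenever the link is available.
import collections

buffer = collections.deque(maxlen=3600)   # roughly an hour of 1 Hz readings
link_up = False                           # toggled by a real connectivity check

def send_to_cloud(reading):
    """Placeholder uplink; a real one would call a cloud SDK."""
    if not link_up:
        raise ConnectionError("uplink unavailable")
    print("uploaded", reading)

def record(reading):
    buffer.append(reading)                # always persist locally first
    try:
        while buffer:                     # drain oldest-first while the link holds
            send_to_cloud(buffer[0])
            buffer.popleft()
    except ConnectionError:
        pass                              # outage: readings stay buffered

record(21.4)          # buffered while the link is down
link_up = True
record(21.6)          # both readings are uploaded once the link returns
```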

However, these sensors are, at the moment, fairly basic equipment. Microsoft aims to change that. At Build 2018, the company announced ambitious plans to bring its Cognitive Services software to its edge computing solution, Azure IoT Edge. According to TechCrunch, the first of these programs will be the Custom Vision service.

Implementation of this software with Azure IoT Edge can allow unmanned aerial vehicles, such as drones, to perform more complex tasks without direct control from a central data source. It will give these devices the ability to "see" and understand the environment around them, analyzing new visual data streams. This technology can also be used to improve advanced robotics, autonomous vehicles and industrial machines.
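
As a rough illustration of what that looks like on the device, the sketch below runs a vision model that has been exported from the cloud and copied onto local hardware. It assumes an ONNX export named model.onnx, a labels.txt file and a 224×224 RGB input; the actual input shape, channel order and preprocessing depend on the specific export.

```python
# Local inference on the edge device with an exported vision model.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
labels = open("labels.txt").read().splitlines()

# Resize the camera frame to the assumed input size and scale to [0, 1].
frame = np.asarray(Image.open("frame.jpg").resize((224, 224)),
                   dtype=np.float32) / 255.0
frame = frame.transpose(2, 0, 1)[np.newaxis, ...]   # HWC -> NCHW batch of one

scores = session.run(None, {input_name: frame})[0][0]
print("device sees:", labels[int(np.argmax(scores))])
```

Because the model runs entirely on the device, the drone or robot can keep classifying what it sees even when its connection to the cloud drops.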

This advanced method of machine learning can increase productivity because these devices will be able to keep performing complicated, vision-based tasks even during network disruptions.

Microsoft has also partnered with Qualcomm to bring cognitive vision developer tools to devices like home assistants, security cameras and other smart devices.

However, these technologies, the Qualcomm partnership and the Custom Vision service, useful as they are, only work with devices equipped with sensors and cameras that can capture visual data. To increase the variety of edge sensors that can benefit from these new tools and services, Microsoft resurrected the Kinect.

Allowing advanced robotics to "see" will enable them to perform far more complex actions, even without a constant relay of instructions.

The power of the Kinect 
In an introduction on LinkedIn, Microsoft Technical Fellow Alex Kipman discussed Project Kinect for Azure. In his piece, Kipman outlined the company's reasoning for opting to return to the commercial failure. First, Kinect has a number of impressive features that make it ideal as a sensor.

These benefits include a 1024×1024 pixel depth resolution (roughly 1 megapixel), the highest of any such sensor camera. Kinect also comes with a global shutter that helps the device record accurately in sunlight. Its cameras capture images with automatic per-pixel gain selection, which allows the Kinect to capture objects at various ranges cleanly and without distortion. It features multiphase depth calculation to further improve image accuracy, even in the presence of laser and power supply variation. Lastly, the Kinect is a low-power piece of hardware thanks to its high modulation frequency and contrast.

Utilizing the Kinect sensors for cognitive computing makes sense. Looking at the product's history, Microsoft had already developed more than half the specifications needed to create an effective sensor. The Kinect was designed to track and process human movement, differentiate users from animals or spectators in the room, and operate in numerous real-world settings. It was also made to endure drops and other household accidents. Essentially, the Kinect was a hardy, specialized sensor interface competing in a market that prized precise button pressing.

In an industrial space, the Kinect can fare far better. Augmenting existing data collection sensors with this visual aid will increase the amount of actionable data recorded. The Kinect brings a set of "eyes" to any machine. This advantage will let developers and engineers get creative as they build the advanced edge computing networks of the future.

Is a hybrid cloud solution right for your company?

Over the last decade, many companies have been shifting IT responsibilities to the cloud, a solution that allows various users and hardware to share data over vast distances. Cloud programs frequently take the form of infrastructure as a service. A company that can't afford in-house servers or a full-sized IT team can use cloud solutions to replace these hardware and personnel limitations.

Large companies like Amazon, Microsoft and Google are all behind cloud services, propelling the space forward and innovating constantly. However, there are still limitations when it comes to cloud adoption. As convenient as these services are, they are designed for ubiquitous usage. Organizations that specialize in certain tasks may find a cloud solution limited in its capabilities.

Businesses wishing to support service-oriented architecture may want to consider a hybrid cloud solution, a service becoming widespread across various enterprise applications. As its name suggests, a hybrid cloud solution combines the power of a third-party cloud provider with the versatility of in-house software. While this sounds like an all-around positive, these solutions are not for every organization.

"Before businesses discuss a hybrid solution, they need three separate components."

Why technical prowess matters for hybrid cloud adoption
TechTarget listed three essentials for any company attempting to implement a hybrid cloud solution. Organizations must:

  1. Have on-premises private cloud hardware, including servers, or a signed agreement with a private cloud provider.
  2. Support a strong and stable wide area network connection.
  3. Have purchased an agreement with a public cloud platform such as AWS, Azure or Google Cloud.

Essentially, before businesses can discuss a hybrid solution, they need all three components in place. An office with its own server room will still struggle with a hybrid cloud solution if its WAN cannot reliably link the private system with the third-party cloud provider. And here is the crux: companies without skilled IT staff need to think long and hard about what that connection would entail.
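
One simple way to start that conversation is to measure the link itself. The snippet below is a rough, assumption-laden sketch that samples TCP connect latency from the office network to a public cloud endpoint; the endpoint, sample count and what counts as "good enough" are all illustrative.

```python
# Sample round-trip TCP connect latency from the office WAN to a cloud endpoint.
import socket
import statistics
import time

ENDPOINT = ("management.azure.com", 443)   # any public-cloud endpoint works here
samples = []

for _ in range(10):
    start = time.monotonic()
    with socket.create_connection(ENDPOINT, timeout=5):
        pass                                # measure the TCP handshake only
    samples.append((time.monotonic() - start) * 1000)
    time.sleep(0.5)

print(f"median latency: {statistics.median(samples):.1f} ms, "
      f"worst: {max(samples):.1f} ms")
```

Consistently high or wildly variable numbers are a sign the WAN, not the cloud provider, will be the bottleneck in a hybrid design.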

Compatibility is a crucial issue. Businesses can have the most sophisticated, tailored in-house cloud solution in the world but, if it doesn't work with the desired third party cloud software, the application will be next to useless. It isn't just a matter of software. Before a hybrid cloud solution can be considered feasible, equipment like servers, load balancers and a local area network all need to be examined to see how well they will function with the proposed solution.

After this preparation is complete, organizations will need to deploy a hypervisor to maintain virtual machine functionality. Once this is accomplished, a private cloud software layer is needed to enable many essential cloud capabilities. Then the whole interface needs to be reworked with the average user in mind to create a seamless experience.

In short: in-house, skilled IT staff are essential to successfully utilizing a hybrid cloud solution. If businesses doubt the capabilities of any department, or question whether they have enough personnel to begin with, it may be better to hold off on hybrid cloud adoption.

A poorly implemented solution could cause delays, lost data and, worst of all, potentially disastrous network data breaches.

Cloud technology has been designed to keep business data secure. Poorly installing a hybrid solution could weaken this stability.

The potential benefits of the hybrid cloud
However, if implemented the right way, a hybrid cloud solution brings a wide array of advantages to many enterprises, particularly those working with big data. According to the Harvard Business Review, hybrid cloud platforms can bring the best of both solutions, including unified visibility into resource utilization. This improved overview empowers companies to track precisely which employees are using what and for how long. Workload analysis and cost optimization will ultimately improve as organizations better direct internal resources and prioritize higher-performing workers.

Overall platform features and computing needs will also be fully visible, allowing businesses to scale with greater flexibility. This is especially helpful for enterprises that see "rush periods" near the end of quarter/year. As the need rises, the solution can flex right along with it.

Hybrid cloud services are also easier to manage. If implemented properly, IT teams can harmonize the two infrastructures into one consistent interface. This will mean that employees only need to become familiar with one system, rather than learning different apps individually.

Companies processing big data can segment their processing needs, according to the TechTarget report. Information like accumulated sales, test and business data can be retained privately while the third-party solution runs analytical models, which can scale to larger data collections without compromising in-office network performance.

As The Practical Guide to Hybrid Cloud Computing noted, this type of solution allows businesses to tailor their capabilities and services in a way that directly aligns with desired company objectives, all while ensuring that such goals remain within budget.

Organizations with skilled, fully formed IT teams should consider hybrid cloud solutions. While not every organization needs this specialized, flexible data infrastructure, many businesses stand ready to reap considerable rewards from the hybrid cloud.