Cybersecurity tips at a glance: Managing IoT devices

As the realm of the internet of things grows, it is important to understand all aspects of the technology’s performance. Companies and industries that see only the benefits open themselves up to data breaches, public embarrassment and even legal action. IoT technology can boost productivity when done right but lead to costly and unnecessary expenses if utilized without proper foresight.

The possible downsides of exercise wearables
Employee wellness is a trend that is sweeping across industries. These initiatives have shown positive results, such as increasing worker morale and promoting healthy behaviors. One study from the Journal of Occupational and Environmental Medicine even found that employee wellness diet programs can reduce health risks.

To this end, exercise wearables, such as Fitbit, appear to make sense. These devices can track heart rate, body temperature, calorie consumption and sleep quality. Many come with a social aspect, as well, allowing co-workers to engage in friendly competition to see who is the most active within the office.

For many industries, these wearables have no real downside. However, employers should know that the data gathered by many fitness wearables can be used to track employee location. This vulnerability has been problematic, especially for those working for the U.S. armed forces. According to The Washington Post, several previously secret military bases were revealed when data gathered by GPS tracking company Strava was made public.

The U.S. Army had been using these fitness wearables for their advantages without fully understanding how the technology could be exploited. Most commercial hardware is designed for ease of use and affordability. These traits are part of the reason why IoT has famously encountered cybersecurity concerns over the past several years.

For enterprises working with sensitive and classified materials, IoT wearables may have a downside. Outside parties, benign and malicious, can track employee movement, knowing more about workers than may be deemed safe.

Augmented reality glasses can also potentially leak vital secrets, as they see and record everything the employee does.

Know where backup data is stored
Many IoT devices provide extra “eyes” on the field. Drones have been performing various types of reconnaissance missions for decades, whether for government contractors or farmers wishing to understand more about their soil. These unmanned aerial vehicles, or UAVs, are built to capture, transmit and store data.

While useful, drones have several serious cybersecurity concerns. They can be intercepted, and if so, their data is easily accessible. This risk is especially problematic for devices that store backups on board. A report from Syracuse University indicates concerns that data stored on Chinese-manufactured drones could be accessed by the Chinese government and would be out of U.S. control.

Using IoT devices has many advantages, but executives must always consider the full picture before implementation.

How cave fish may help prevent IoT jamming

Jamming is a potentially crippling blow to internet of things-enabled hardware. It can bring down drones from the sky, disrupt network connections and lead to economic downtime. In the cybersecurity arena, jamming is more commonly known as a distributed denial of service (DDoS) attack. According to a Corero DDoS trends report, this method of cyberattack increased by an incredible 91 percent in 2017.

IoT devices are behind this surge in DDoS attacks, as many lack comprehensive cybersecurity protocols and can be easily jammed. While this deterrent is not enough to slow the pace of IoT adoption, enterprises hoping to make use of mass IoT market penetration must be aware of the risks, as well as what is being done to prevent IoT jamming.

Luckily, a recent study published in Optics Express gives some hope against rampant DDoS cybercrime. As with many technological innovations, the potential salvation is inspired by a system that already works inside the animal kingdom.

Studying the Eigenmannia
The Eigenmannia are a species of cave fish that exist in total darkness. Without light, these creatures need another way to hunt, communicate and otherwise “see” within the perpetual darkness. The researchers studying these fish discovered that they emitted an electric field to sense the environment and communicate with other fish.

Because two or more of these animals could emit the field near one another, the species had to have a way to stop the signals from disrupting each other; otherwise, the fish couldn't thrive. The scientists learned that the Eigenmannia have the ability to alter their signals, a capability due to a unique neural algorithm in their brain activity. The purpose and function of the field remain intact, but its frequency is changed just enough to avoid confusion.

This same trait can be harnessed to help create a light-based jamming avoidance response device.

If jammed, drones run the risk of damaging hardware and products.

Creating a jamming avoidance response device
When two IoT devices operating on the same frequency come close to each other, the fields become crossed, and jamming occurs. The closer the two pieces of hardware drift, the more the disruption intensifies.

However, with a JAR device, similar to the natural solution used by Eigenmannia, these IoT components could adjust their frequency, preserving the function of the signal while avoiding jamming. Using a light-based system would enable IoT devices to shift through a wide range of frequencies.
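
To make the idea concrete, here is a minimal sketch, in Python, of how a JAR-style adjustment might look in software. The function names, step sizes and frequency values are illustrative assumptions, not details of the device described in the study; the point is only that the signal's role is preserved while its frequency is nudged away from interference.

```python
# Hypothetical sketch of a jamming-avoidance-style frequency shift.
# All names, units and thresholds are illustrative, not from the published device.

def choose_frequency(own_freq, sensed_freqs, min_separation=50.0, step=25.0):
    """Nudge our transmit frequency away from nearby signals, keeping its function intact."""
    def too_close(f):
        return any(abs(f - other) < min_separation for other in sensed_freqs)

    freq = own_freq
    while too_close(freq):
        # Move away from the nearest interfering signal, one small step at a time.
        nearest = min(sensed_freqs, key=lambda other: abs(freq - other))
        freq += step if freq >= nearest else -step
    return freq

# Two devices drifting onto nearly the same channel: the second shifts down until clear.
print(choose_frequency(2400.0, [2410.0, 2455.0]))  # 2350.0
```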

The resulting machine, created by the research team, shows promise.

“This could allow a smarter and more dynamic way to use our wireless communication systems without the need for the complicated coordination processes that currently prevent jamming, by reserving whole sections of bandwidth for specific phone carriers or users such as the military,” said team lead Mable P. Fok.

While it won’t single-handedly eliminate the threat of DDoS attacks, JAR device usage on a large scale has some advantages. Essentially, it is a low-cost solution for any agency that utilizes a plethora of IoT content. In addition to the aforementioned military use case, health care facilities like hospitals, air traffic control towers and even educational institutions could find immediate value in this technology.

Since a JAR device would likely lower the bandwidth needed for IoT hardware interaction, defending against DDoS attacks could become less expensive. As these attacks continue to become more prevalent, the value of this research will likely increase. Designing IoT devices on software that can shift frequency will reduce costs and, hopefully, create a more secure IoT landscape.

The potential of Project Kinect for Azure

When Microsoft first debuted its Kinect hardware in 2010, the product had nothing to do with edge computing, AI or machine learning. The Kinect served as a controller interface for Microsoft's Xbox 360 video game console. (Later versions were released for Windows PC and Xbox One.) Using cameras and sensors, it registered a player's body movements and translated these gestures into controls. While it was innovative, Kinect struggled to gain a footing.

Despite going through various upgrades, it was fully discontinued as a consumer product in 2017. However, Microsoft did not fully abandon its Kinect hardware. At this year's Build developers conference, the company revealed a new use for its one-time video game accessory: edge computing.

Specifically, the new Kinect project factors into the greater themes of Build 2018, namely combining cognitive computing, AI and edge computing. 

"Microsoft has ambitious plans to bring its Cognitive Services software to Azure IoT Edge."

Microsoft at Build 2018
Edge computing is at the forefront of technological innovation. Capitalizing on the internet of things, this method of data processing de-emphasizes a central hub. Remote sensors are given enough processing power to analyze data near its source before sending results back, greatly reducing bandwidth needs. This system is also more dependable because the sensors store the data, at least for a limited time span. Network outages or dropped connections won't result in lost or fragmented information.
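
As a rough illustration of that pattern, the sketch below shows an edge node summarizing raw readings locally, sending only the summary upstream and holding it in a buffer when the connection drops. The send_to_hub callable and the buffer size are assumptions for the example, not part of any particular Azure service.

```python
# Illustrative edge-node pattern: summarize locally, send little, buffer through outages.
import statistics
from collections import deque

pending = deque(maxlen=1000)  # locally stored summaries; oldest drop first if full

def summarize(readings):
    # Ship a handful of numbers instead of every raw sample.
    return {"mean": statistics.mean(readings), "max": max(readings), "count": len(readings)}

def report(readings, send_to_hub):
    pending.append(summarize(readings))
    try:
        while pending:
            send_to_hub(pending[0])  # hypothetical uplink call
            pending.popleft()
    except ConnectionError:
        pass  # network is down; summaries stay on the device until it returns
```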

However, these sensors are, at the moment, fairly basic equipment. Microsoft aims to change that. At Build 2018, the company announced ambitious plans to bring its Cognitive Services software to its edge computing solution, Azure IoT Edge. According to TechCrunch, the first of these programs will be the Custom Vision service.

Implementation of this software with Azure IoT Edge can allow unmanned aerial vehicles, such as drones, to perform more complex tasks without direct control from a central data source. It will give these devices the ability to "see" and understand the environment around them, analyzing new visual data streams. This technology can also be used to improve advanced robotics, autonomous vehicles and industrial machines.

This advanced method of machine learning can increase productivity because all of these devices will be able to continue to perform complicated, vision-based tasks even with network connection disruptions.
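
As a sketch of what that looks like in practice, the snippet below runs an image-classification model locally on the device, so a drone or robot can keep labeling frames even during a network outage. Custom Vision models can generally be exported for offline use; the file name, input shape and preprocessing here are assumptions for illustration.

```python
# Hypothetical on-device inference: classify frames locally, no round trip to the cloud.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("custom_vision_export.onnx")  # assumed exported model file
input_name = session.get_inputs()[0].name

def classify(frame: np.ndarray) -> int:
    """Return the index of the most likely class for one preprocessed frame."""
    # Assumes the frame has already been resized and normalized to the model's input shape.
    batch = frame[np.newaxis].astype(np.float32)
    outputs = session.run(None, {input_name: batch})
    return int(np.argmax(outputs[0]))
```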

Microsoft has also partnered with Qualcomm to bring cognitive vision developer's tools to devices like home assistants, security cameras and other smart devices.

However, these technologies, the Qualcomm developer tools and the Custom Vision service, only work with devices equipped with sensors and cameras that can process visual data. To increase the variety of edge sensors that can benefit from these new tools and software services, Microsoft resurrected the Kinect.

Allowing advanced robotics to "see" will enable them to perform far more complex actions, even without a constant relay of instructions.

The power of the Kinect 
In an introduction on LinkedIn, Microsoft Technical Fellow Alex Kipman discussed Project Kinect for Azure. In his piece, Kipman outlined the company's reasoning for opting to return to the commercial failure. First, Kinect has a number of impressive features that make it ideal as a sensor.

These benefits include a 1024×1024 pixel depth-sensor resolution (about 1 megapixel), which Microsoft describes as the highest among any such sensor camera. Kinect also comes with a global shutter that helps the device record accurately in sunlight. Its cameras capture images with automatic per-pixel gain selection, which allows the Kinect to capture objects at various ranges cleanly and without distortion. It features multiphase depth calculation to further improve image accuracy, even when dealing with power supply variation and the presence of lasers. Lastly, the Kinect is a low-power piece of hardware thanks to its high modulation frequency and contrast.

Utilizing the Kinect sensors for cognitive computing makes sense. In terms of product history, Microsoft had already developed more than half the specifications needed to create an effective sensor. The Kinect was designed to track and process human movement, differentiate users from animals or spectators in the room and operate in numerous real-world settings. It was also made to endure drops and other household accidents. Essentially, the Kinect was a hardy, specialized sensor interface in a market where it had to compete with precise button pressing.

In an industrial space, Kinect can fare far better. Augmenting existing data collection sensors with this visual aid will increase the amount of actionable data that is recorded. The Kinect brings with it a set of "eyes" for any machine. This advantage will let developers and engineers get creative as they seek to build the advanced edge computing networks of the future.

How a holistic approach to data analytics benefits cybersecurity

Almost everyone, regardless of industry, recognizes the growing importance of cybersecurity. Cyberattacks are on the rise and growing increasingly varied and sophisticated. According to data collected by Cybersecurity Ventures, the annual cost of cybercrime is estimated to reach roughly $6 trillion by 2021. An effective information security policy is, in many cases, the only thing standing between companies and possible financial ruin.

The danger is especially real for small- to medium-sized businesses. Data from the U.S. Securities and Exchange Commission found that only 40 percent of SMBs survive longer than six months after a successful data breach. For these types of organizations, cybersecurity is literally a matter of life and death.

The good news: Many businesses recognize the need for effective cybersecurity strategies and are investing heavily in personnel and software solutions. The bad news: Many of these same companies are only reacting, not thinking about how to best deploy this protective framework. Effective cybersecurity isn’t as simple as applying a bandage to a cut.

It can be better equated to introducing a new nutritional supplement to the diet. The whole procedure is vastly more effective if integrated into every meal. To best use modern cybersecurity practices, businesses must rethink their approaches to corporate data structure. Data analytics is a vital tool in providing the best in information protection.

“Segmenting data spells disaster for an effective cybersecurity policy.”

Siloed data is unread data
As organizations grow, there is a tendency to segment. New branches develop, managers are appointed to oversee departments – in general, these groups tend to work on their projects and trust that other arenas of the company are also doing their jobs. The responsibility is divided and thus, easier to handle.

While this setup may make the day-to-day routine of the business easier on executives, it spells disaster for an effective cybersecurity policy. This division process creates siloed or segmented data pools. While a department may be very aware of what it is doing, it has far less knowledge of other corporate branches.

Many organizations may figure that an in-house IT team or chief information security officer can oversee everything, keeping the company running at full-tilt. However, this assumption is only half-true. While these staff members can and do oversee the vast majority of business operations, they will lack the data to make comprehensive decisions. A report from the Ponemon Institute found that 70 percent of cybersecurity decision-makers felt they couldn’t effectively act because of a surplus of jumbled, incoherent data.

Data analytics, or the study of (typically big) data, provides the facts behind sound reasoning. To gather this information, companies need systems and software that talk to one another. Having the best-rated cybersecurity software won't make a difference if it can't easily communicate with the company's primary OS or reach data from several remote branches.

CISOs or other qualified individuals can make practical, often less-expensive strategies with a clear view of the entire company. Without this type of solution, a business, no matter its resources or personnel, will essentially be operating its cybersecurity strategy through guesswork.

Separated data creates bubbles where information can be misplaced or duplicated, resulting in a slower data analysis process.

Centralized businesses may miss real-time updates
Businesses face another challenge as they expand. Data collection has, in the past, slowed with remote locations. Before IoT and Industry 4.0, organizations were bound to paper and email communications. Remote branches typically grouped data reports by week or, more likely, by month.

This approach meant that the central location effectively made decisions with month-old information. When it comes to minimizing the damage from data breaches, every hour matters. Luckily, many institutions can now provide data streaming in real time. Those that can't must prioritize improving information flow immediately. Cybercriminals look for the weakest aspect within a company and try to exploit the deficiency.

For data analytics to work properly, businesses need access to the full breadth of internal data. The more consistent and up to date this information is, the better CISOs and IT departments can make coherent and sensible decisions.

Visibility may not sound like the answer to fighting cyberattacks, but it is a crucial component. Companies need to be able to look within and adapt at a moment’s notice. This strategy requires not just the ability to see but also the power to make quick, actionable adjustments. Those organizations that still segment data will find this procedure difficult and time consuming.

As cybercrime becomes an expected aspect of business operations, those who still think in siloed brackets must change their mindsets or face expensive consequences.

Is a hybrid cloud solution right for your company?

Over the last decade, many companies have been shifting IT responsibilities to the cloud, a solution that allows various users and hardware to share data over vast distances. Cloud programs frequently take the form of infrastructure as a service. A company that can't afford in-house servers or a full-sized IT team can use cloud solutions to replace these hardware and personnel limitations.

Large companies like Amazon, Microsoft and Google are all behind cloud services, propelling the space forward and innovating constantly. However, there are still limitations when it comes to cloud adoption. As convenient as these services are, they are designed for ubiquitous usage. Organizations that specialize in certain tasks may find a cloud solution limited in its capabilities.

Those businesses wishing to support service-oriented architecture may wish to consider a hybrid cloud solution, a new service becoming widespread throughout various enterprise applications. As its name suggests, a hybrid cloud solution combines the power of a third-party cloud provider with the versatility of in-house software. While this sounds like an all-around positive, these solutions are not for every organization.

"Before businesses discuss a hybrid solution, they need three separate components."

Why technical prowess matters for hybrid cloud adoption
TechTarget listed three essentials for any company attempting to implement a hybrid cloud solution. Organizations must:

  1. Have on-premise private cloud hardware, including servers, or else a signed agreement with a private cloud provider.
  2. Support a strong and stable wide area network connection.
  3. Have purchased an agreement with a public cloud platform such as AWS, Azure or Google Cloud.

Essentially, before businesses can discuss a hybrid solution, they need all three separate components. An office with its own server room will still struggle with a hybrid cloud solution if its WAN cannot reliably link the private system with the third-party cloud provider. And here is the crux: Companies without skilled IT staff need to think long and hard about what that connection would entail.
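
One simple, concrete piece of that homework is checking whether the WAN can even reach the public cloud endpoint quickly and consistently. The short sketch below measures TCP connection times from the office to a placeholder endpoint; the host name and sample count are assumptions, and a real assessment would also look at throughput and sustained load.

```python
# Rough WAN sanity check: how long do connections to the public cloud endpoint take?
import socket
import statistics
import time

def connect_times_ms(host, port=443, samples=20):
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return times

times = connect_times_ms("public-cloud-endpoint.example.com")  # placeholder host
print(f"median {statistics.median(times):.1f} ms, worst {max(times):.1f} ms")
```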

Compatibility is a crucial issue. Businesses can have the most sophisticated, tailored in-house cloud solution in the world, but if it doesn't work with the desired third-party cloud software, the application will be next to useless. It isn't just a matter of software. Before a hybrid cloud solution can be considered feasible, equipment like servers, load balancers and the local area network all need to be examined to see how well they will function with the proposed solution.

After this preparation is complete, organizations will need to deploy a hypervisor to maintain virtual machine functionality. Once this is accomplished, a private cloud software layer will be needed to enable many essential cloud capabilities. Then the whole interface will need to be reworked with the average user in mind to create a seamless experience.

In short: in-house, skilled IT staff are essential to successfully utilizing a hybrid cloud solution. If businesses doubt the capabilities of any department, or question whether they have enough personnel to begin with, it may be better to hold off on hybrid cloud adoption.

A poorly implemented solution could cause delays, lost data and, worst of all, potentially disastrous network data breaches.

Cloud technology has been designed to keep business data secure. Poorly installing a hybrid solution could weaken this stability.

The potential benefits of the hybrid cloud
However, if created the right way, a hybrid cloud solution brings a wide array of advantages to many enterprises, particularly those working with big data. According to the Harvard Business Review, hybrid cloud platforms can bring the best of both solutions, including unified visibility into resource utilization. This improved overview will empower companies to track precisely which employees are using what and for how long. Workload analysis reports and cost optimization will ultimately be improved as organizations can better direct internal resources and prioritize workers with stronger performances.

Overall platform features and computing needs will also be fully visible, allowing businesses to scale with greater flexibility. This is especially helpful for enterprises that see "rush periods" near the end of quarter/year. As the need rises, the solution can flex right along with it.

Hybrid cloud services are also easier to manage. If implemented properly, IT teams can harmonize the two infrastructures into one consistent interface. This will mean that employees only need to become familiar with one system, rather than learning different apps individually.

Companies processing big data can segment processing needs, according to the TechTarget report. Information like accumulated sales, test and business data can be retained privately while the third party solution runs analytical models, which can scale larger data collections without compromising in-office network performance.

As The Practical Guide to Hybrid Cloud Computing noted, this type of solution allows businesses to tailor their capabilities and services in a way that directly aligns with desired company objectives, all while ensuring that such goals remain within budget.

Organizations with skilled, fully formed IT teams should consider hybrid cloud solutions. While not every agency needs this specialized, flexible data infrastructure, many businesses stand ready to reap considerable rewards from the hybrid cloud.

Exploring the true value of a CISO

As cybersecurity issues become more prevalent, one position within the corporate ladder is gaining new attention: the chief information security officer. The financial burden of data breaches continues to rise. One recent report from Accenture stated that the average global cost of cybercrime reached $11.7 million in 2017, a 27.4 percent increase from $9.5 million in 2016.

Along with the rising expenses of cyberattacks, companies have been spending more on protection, primarily on CISOs. Security Current data indicated that the overall average salary for a CISO was $273,033 at the end of 2016, and this number is only expected to have increased. As organizations continue to pay more for CISO expertise, the question becomes: What value do CISOs truly bring to the organizations they serve?

Distilling decision-making to one person
Cybercriminals have certain inherent advantages over the companies they target. For one, they operate anonymously. Hackers typically research an organization's staff, as this aids with spear phishing and other data breach initiatives. By contrast, businesses have no certainty they're even being targeted until they've been attacked.

Another crucial advantage on the side of hackers is that many corporations, especially small- to medium-sized businesses, don't have CISOs. This means that all cybersecurity policies and initiatives must go through the IT department or another group. When a chief technology officer has to deal with cybersecurity on top of other duties, the initiatives can be slowed, in some cases encountering month-long delays or more.

Cybercriminals are constantly adapting and incorporating new malicious software into their arsenals. In order to keep pace with this rapid innovation, one person within the organization must function as the hacker's opposite, keeping the company's cybersecurity policies fluid and responsive. As Help Net Security pointed out, CISOs must not only be leaders but also serve as the link between innovation and defense. A single, dedicated person can do this much more effectively than a distracted team.

Having a leader creates a clear, authoritative flow for decision-making.

Presenting a single, unified cybersecurity vision
Likewise, a C-level executive is typically the only class of employee capable of making real, impactful decisions within a corporate structure. Unfortunately, many executives and decision-makers remain uneducated about issues of cybersecurity. A BAE Systems survey found only 42 percent of executives felt they were very or extremely knowledgeable about their company's cybersecurity policies.

In order to create comprehensive, overarching information security standards, businesses need a respected voice in the room who can articulate and educate other executives on the need for cybersecurity initiatives. CISOs have this presence and, unlike CTOs, they are not hindered by distractions that can occur in other business segments. 

"Think in terms of 'when' instead of 'if.'"

Creating and updating corporate response strategy
Experts agree that companies that develop cyberattack response strategies minimize losses and more quickly seal breach points. While it is nice to hope that your organization will never be affected, the far more prudent strategy is to think in terms of "when" instead of "if." When a cyberattack occurs, organizations must have a clear, itemized response plan.

According to Risk Management, the best plans are proactive, changing biannually or even quarterly to adapt to new methods of cyberattack. A comprehensive plan includes steps like workforce education, breach detection tools, consumer alerts and legal recourse tools.

Once a data incursion occurs, the CISO and his or her team must be able to detect it immediately. With cyberattacks, the longer they go unnoticed, the worse they are. Placing a CISO in charge of maintaining and updating this response plan will ensure that it gets done and comes from a point of clear authority.

When a data breach occurs, the last thing that decision-makers want or need is to be arguing about what to do and who should do it.

Allowing the IT team to focus
IT teams within companies are frequently overburdened. In addition to maintaining and updating company software, IT personnel regularly respond to the daily crises of other employees. Every hardware, email or other type of problem distracts IT groups from performing their primary duties.

While typical employees tend not to notice whether or not an operating system is updated, it is these performance checks that ultimately help keep company networks safe from unauthorized access.

Bringing in a CISO allows the IT group more time to focus on its core responsibilities. The CISO may even operate alongside regular IT staff at certain times; however, it is best not to overlap duties too much. CISOs can handle red flags, such as phishing emails and embedded malware, that may otherwise escape detection or occupy IT manpower.

CISOs don't need to be paid a quarter million dollars a year to be valuable. Essentially, they act as a point person in the realm of cybersecurity, a clear head that can dictate commands and formulate strategy. Too often, companies take a relaxed approach to cybersecurity, which almost always results in lost income and damaged reputation.

For organizations that cannot afford to keep a full-time CISO, other options remain. Cloud solutions tend to be more secure than in-office networks and some managed IT providers offer the same level of oversight and proactive planning. Regardless of who or what is in charge of information security, companies must prioritize all compliance and protection development as crucial issues.

How will the GDPR affect your business?

After two years of preparation, the European Union's General Data Protection Regulation is set to go into effect May 25, 2018. Designed to replace the Data Protection Directive of 1995, this legal framework will provide substantial protection for EU citizens' data by imposing heavy fines on any company found to be in violation of the GDPR.

While large companies within the EU have been bracing themselves for impact, many organizations feel unprepared. A report from information security provider Varonis found that 55 percent of businesses worldwide were worried about incurring fines for a GDPR violation. Given that these penalties can be severe – with a maximum fine of €20 million or 4 percent of annual worldwide turnover – organizations may have reason for alarm.

However, arguably the group most at risk are smaller businesses not based in the EU, or companies that don't primarily deal with data. After all, the GDPR is all about regulating data privacy. Yet these organizations may be in the crossfire. Any business that collects data, any amount of it, from an EU citizen or the EU market must fully comply with GDPR standards.

Who needs to comply with the GDPR?
According to the New York University School of Law, any U.S. organization possessing an entity of any kind (person or office) should ascertain whether it will be required to follow the new GDPR policy. GDPR standards will apply to all businesses that process any amount of "personal data" from individuals located in, or protected by, the EU.

The definition here of personal data is broad. According to the initiative, personal data is now any information, not just personally identifying information, that relates to a natural person, identified or identifiable. These new standards apply to log-in information, vehicle ID numbers and IP addresses.

"Any operation or set of operations which is performed on personal data or on sets of personal data" will be regulated by the new standard, according to the articles of the GDPR. These broad definitions and regulations have been purposely worded to incorporate not just companies within the EU but global organizations as well. While the GDPR is a Euro-centric law, its implications may create a new global standard of internet data security.

Businesses with remote employees who are citizens of the EU should investigate whether they will be bound to GDPR policy.

How prepared generally is the U.S.?
Unfortunately, many businesses in the U.S. simply are not sufficiently informed regarding the coming measure. The Varonis report found that U.S. awareness of the GDPR was only at 65 percent, below the overall average of 79 percent. Only 30 percent of U.S. respondents reported being in full compliance with the upcoming laws, and over 10 percent of organizations still didn't know whether the regulation would affect them.

When looking at overall compliance, the majority of U.S. companies affected by the GDPR have re-evaluated data breach detection procedures, as the GDPR mandates that a breach be reported within 72 hours of its detection. A little less than 60 percent of U.S. organizations have also conducted a comprehensive assessment of personal data stored within their organization.

This procedure is highly recommended for all companies that may even remotely store some sort of personal data from the EU. It is only after such an assessment has been performed that an organization can be sure whether or not it will be affected by the GDPR.

About 7 percent of U.S. businesses had completed no significant measures to comply with the GDPR.

What does the GDPR mean for data collection?
Personal data collection will become more transparent under GDPR guidelines. Everyone, personally and professionally, is familiar with user agreements, such as those used by Facebook and Google. These documents have been full of dense legalese designed to disguise their intentions and limit consumer knowledge of the websites' activities.

Under the GDPR, these wordy documents will be made illegal, replaced by concise, comprehensible wording that will alert the "data subject" of exactly what information is being collected. The individual will reserve the right to leave said data contract at any time, with no negative repercussions allowed. In short, the naive early days are over, and the GDPR will arm at least EU consumers with the tools needed to determine what, if any, information they allow to be shared for commercial purposes.

Data protection by design will also be mandated. Companies will have to factor in information security at every stage of data collection software development, instead of relegating it to outside software or hardware.
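
One common way teams apply that principle is to pseudonymize identifiers, IP addresses for instance, at the moment of collection so raw personal data never reaches downstream systems. The sketch below is a simplified illustration of the idea; the keyed hash, the secret handling and the field names are assumptions, not GDPR requirements.

```python
# Illustrative "data protection by design": pseudonymize identifiers at collection time.
import hashlib
import hmac

PSEUDONYM_KEY = b"example-key-kept-in-a-managed-secret-store"  # assumption: rotated, not hard-coded

def pseudonymize(value: str) -> str:
    """Keyed hash that lets records be linked without storing the raw identifier."""
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Only the pseudonym, not the raw IP address, is passed on to analytics.
event = {"visitor": pseudonymize("203.0.113.7"), "page": "/pricing", "consent": True}
```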

How the GDPR will impact overall data collection remains to be seen. However, what is clear now is that many organizations still have work to do before May 25. With such steep penalties for failure to comply, businesses cannot afford to be asleep on this issue, or even to drag their feet. The fundamental nature of information security could well change from this act. Hopefully, it will be for a better, more secure data privacy marketplace. 

Data Madness: The importance of deleting/removing critical data from old devices

You arrive at work and get an immediate call to see the CEO. Upon entering the office, you notice that the CIO and other executives are in the room, as well as several people in suits you don’t recognize. Everyone is looking stressed, brows furrowed and heads bent.

Those new people in suits are lawyers planning the company’s defense to the major data breach that was just detected. The malicious activity occurred last month and the hacker supposedly used your information.

After frantic moments of head scratching, you remember: You sold your smartphone last month. While it was a personal device, you used it to check office email and it had stored access to the company network password.

While data madness often happens when vital data goes missing, it can also occur when data isn’t properly disposed of. Too often, organizations fail to stress the importance of information security at every phase of the hardware’s life cycle. Before a machine can be decommissioned, data must first be thoroughly purged and, in some cases, destroyed.

A broken phone can still house perfectly working data.

Sanitizing data vs. deleting data
In some companies, the temptation is to delete data by moving it to the recycling bin and pressing “empty.” However, this is not enough. According to Secure Data Recovery, data emptied from the recycling bin is not permanently deleted – at least not right away. The computer simply deletes the pathing and labels the information as “free space,” meaning that it can be overwritten by new data.

For all intents and purposes, data deleted from the recycling bin is gone, at least as far as the layperson is concerned. Those with computer programming expertise, specialized skills or recovery software, however, can recover the information and restore it. If you've ever searched for "data recovery," you will see that these skills are not in short supply.

Yet companies make this mistake all the time. A survey conducted by Blancco found that almost half of all hard drives carried at least some residual data. The same was true for over a third of smartphones. Files such as emails, photos and sensitive company documents were recovered from these devices. To securely delete files requires a more thorough process.

The University of California, Riverside defines data sanitization as "the process of deliberately, permanently, and irreversibly removing or destroying the data stored on a memory device." Sanitized data drives typically carry no residual data, even with the aid of recovery tools. However, this solution often requires additional software that will erase and rewrite information multiple times.
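
The sketch below shows the basic overwrite-then-delete idea in Python. It is illustrative only: the pass count is arbitrary, and on solid state drives or copy-on-write file systems an in-place overwrite is not guaranteed to reach every physical copy, which is why dedicated sanitization tools or physical destruction may still be required.

```python
# Simplified overwrite-then-delete; not a substitute for certified sanitization tools.
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3, chunk: int = 1024 * 1024) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(secrets.token_bytes(n))  # replace contents with random bytes
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # push each pass to the physical device
    os.remove(path)
```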

Companies have a wide variety of options to choose from when it comes to selecting data sanitization software. Microsoft even provides an in-house solution in the form of its Data Eraser tool, which has been optimized for PCs and tablets. It's important to remember that different types of data drives will only be compatible with certain software.

Given the sensitive nature of the material in question, companies should only choose data sanitization software from trusted organizations.

Recycling bins – like their physical counterparts – are not known for permanently disposing of trash.

When physical destruction may be needed
However, for some kinds of data, sanitization may not be enough. This can be dictated by internal business policy (such as classifying employee payroll information as the most sensitive data) or by government regulations like HIPAA, which mandate timely data destruction.

In this case, the storage device matters more. Hard disk drives, commonly found in computers and servers, are the easiest to destroy as they operate on magnetic fields. A hard drive degausser can permanently alter these fields, leaving the device completely unreadable.

Solid state drives and flash media are more difficult. Their data storage is circuit-based, rendering a degausser ineffective. These drives should be shredded or destroyed by quality equipment expressly designed for the task. Hard drive data can be recovered after improper destruction, even in extreme cases. ComputerWorld reported that data was restored from the wreckage of the Columbia space shuttle tragedy, illustrating the hardiness of certain drives and the effectiveness of professional data recovery tools.

Safely disposing of data is no easy task and innovations like the internet of things have made it more difficult. Cybercriminals may be developing more sophisticated ransomware but they are also still routinely diving in dumpsters and scoping out secondhand stores for improperly deleted data. Make sure your company is taking the necessary steps to avoid data madness.

Data Madness: Exploring the reliability of in-house data vs. cloud servers

Much is made today about choosing the right kind of data storage. When you’re running a team, the last thing you want is for some crucial information to go missing. Such a setback can be disastrous, especially if the data lost was from a survey or customer response. In addition, you have the added anxiety of only hoping the data was lost, not stolen.

As data madness continues, we're exploring the most secure methods to back up essential data. In today's article, we're putting the two most popular solutions under a microscope: in-house servers and cloud data storage. For many companies, success literally hinges on data security. Know the best method and keep your organization running.

How to keep in-house servers running effectively
The longer a server is in operation, the more likely it is to break down. A Statista report found that only 5 percent of servers fail in their first year of operation. By the fourth year, that number has more than doubled, and by year seven, nearly 20 percent of servers have failed. While the likelihood of a failure is still relatively low after seven years, organizations relying on aging hardware are clearly taking a huge risk. Executives at such a company might as well tell their employees that there is only an 80 percent chance for productivity each day.

Servers should be continually replaced and upgraded to be effective at securely housing data. However, age is not the only factor that can cause a server to malfunction. RocketIT stressed the need to continuously upgrade server software to keep it protected and compatible with modern systems.

Since servers are gold mines of confidential data, they are the prime targets for any malicious hacker. Keeping servers up to date not only keeps them running smoothly, it also reduces the risk of viruses and malware being able to infiltrate the hardware.

Lastly, if your business opts for servers, then it needs a dedicated, maintained space in which to house them. According to Serverscheck, the ideal server room temperature is between 64 and 80 degrees Fahrenheit with no more than 60 percent humidity. Servers work best under constant conditions, so any change could impact device functionality. In addition, if there is a flood or water leakage in the room, then the organization is at serious risk of data loss.
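
A small monitoring check against those thresholds might look like the sketch below. The temperature and humidity readings would come from whatever environmental sensors the server room already has; the thresholds simply mirror the Serverscheck figures above.

```python
# Illustrative environmental check using the thresholds cited above.
TEMP_RANGE_F = (64.0, 80.0)       # acceptable server room temperature, Fahrenheit
MAX_HUMIDITY_PCT = 60.0           # maximum acceptable relative humidity

def environment_alerts(temp_f: float, humidity_pct: float) -> list:
    alerts = []
    if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
        alerts.append(f"Temperature out of range: {temp_f:.1f} F")
    if humidity_pct > MAX_HUMIDITY_PCT:
        alerts.append(f"Humidity too high: {humidity_pct:.0f}%")
    return alerts

print(environment_alerts(83.2, 55.0))  # ['Temperature out of range: 83.2 F']
```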

Servers need dedicated, environmentally-controlled space in order to function at peak levels.

Choosing the right professional cloud services provider
If your company instead opts for a cloud service provider, it must choose the right provider. There are currently numerous options in the field, with Amazon and Microsoft standing out as the dominant players.

Many cloud service providers use physical servers themselves. Essentially, they handle all the maintenance, storage and cybersecurity responsibilities and charge clients for the operations. While some providers, like Cisco in a recent fiasco, have lost client data, the problem has so far been a rare occurrence, according to The Register.

However, there is another side to cloud data. It can persist even after the order is given for deletion, as some celebrities learned in an unfortunate way, according to Wired. If an organization is going to store data through a cloud provider, it should be very careful about if and when additional backups are made. Data that survives its intended expiration can be dangerous, especially if the parent company has no idea it exists.

And the most secure data storage method is…
Oxford Dictionaries chronicled the phrase "you can't have your cake and eat it too" as a way of saying that you must choose only one option. With data storage, you can eat as much of your cake as you want while still having an infinite supply left over. For companies serious about safeguarding data, the best option is simply both.

Backing up data to multiple sources is one of the best ways to ensure that it is never accidentally deleted. Just be sure that every copy is secure, to keep classified information out of malicious hands.
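
One lightweight way to act on that advice is to verify, from time to time, that the copies actually match. The sketch below hashes each backup copy and flags a mismatch; the file paths are examples only, and in practice the copies should also be encrypted and access-controlled.

```python
# Quick integrity check across multiple backup copies; paths are examples only.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

copies = ["/mnt/nas/backups/crm.db.bak", "/mnt/cloud-sync/backups/crm.db.bak"]
if len({sha256_of(p) for p in copies}) != 1:
    print("Backup copies differ - investigate before trusting either one.")
```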

Storing data in multiple sites ensures that it lasts longer.

Video: Bringing IT All Together

Click this fun animation to see how ISG supports IT teams with managed services and infrastructure solutions. Our experts help you manage innovation projects such as shared storage, virtualization, disaster recovery, security, mobility and UC collaboration.
