Desktop virtualization: Why companies need to stop dragging their feet

Desktop virtualization is a necessary investment that reflects the changing technological paradigm. With employees increasingly mobile and companies more globalized, personnel need to be able to access their desktop operating system and applications from anywhere. Many organizations are eagerly sending data storage to the cloud and investing in as-a-service solutions to better manage and protect growing application environments. However, this accelerated investment wanes when it comes to desktop virtualization. Why? Shouldn't location-independent services extend to the level of the end user?

Cost continues to be an impediment to desktop virtualization in the eyes of many companies. While organizations acknowledge that the Internet offers a much more cost-effective and centralized medium through which to provide enterprise application and information access, they are worried about the expenses involved in reconfiguring enterprise infrastructure to make it compatible, according to a recent TechNavio report. While it's true that this can represent a sizeable capital investment, the long-term operational savings are enormous.

Bearing this in mind, ZDNet contributor Ken Hess wrote that it's surprising that companies are "still having this conversation" about the merits of desktop virtualization. Many of the companies worried about the costs of deploying virtual desktops and supporting infrastructure are the same ones clinging to hardware that is approaching or past its fifth year in use. Old equipment breaks down more frequently and often costs more to repair, and the more outdated hardware is, the harder it is to transition to a new IT program. Newer hardware, by contrast, likely has virtualization capacity built in. It makes sense to upgrade now rather than wait until the transition becomes unavoidable – and far more complex.

Curing data management issues in the healthcare sector

Data management in the healthcare industry is reaching a tipping point. According to CDW Healthcare, the medical sector is gearing up for massive data growth – the 500 petabytes of data stored in 2013 are set to rise to 25,000 PB by 2020. By 2015, the average hospital could be producing around 665 terabytes of data.

It's not just the amount of data that's the issue, but the types of information organizations collect. About 80 percent of healthcare data is unstructured, with imaging, scans and video requiring huge amounts of server space. Many providers are also storing redundant information – the average hospital holds 800,000 total records, of which as many as 96,000 are duplicates. These duplicates are costly to store, making filing systems and data management efforts more complex without delivering additional security.

While big data offers potential benefits in patient care, research and treatment, the healthcare sector is flailing. In part, it's due to a relatively unique set of circumstances. The healthcare sector is traditionally fairly tech-averse – that acres of file cabinets containing patient records in manila folders still persist is a testament to how difficult it is to go digital. Initiatives such as electronic health records and healthcare information exchanges that increase the value of data have to contend with a slew of compliance, privacy and confidentiality issues.

Data management services can help healthcare organizations wield their vast information reserves in a cost-effective and secure way. Modern information technology infrastructure and business intelligence tools are critical to the effective utilization and protection of game-changing data-driven strategies, wrote Forbes contributor John Foley. Not only are massive file systems difficult to back up in a comprehensive way, many medical providers don't have any idea how long it would take to make files available following an unplanned incident. A data management services provider can help the organization establish a customized storage and backup system that prioritizes continuity and compliance. With people's lives potentially hanging in the balance, it's vital that healthcare providers alleviate big data headaches.

Colocation provides balance in a precarious world

Colocation is an increasingly popular choice for companies that want to cut down on data center spending without relinquishing control over their equipment. The market for wholesale and retail colocation is expected to surpass $43 billion by 2018, according to MarketsandMarkets. This represents a compound annual growth rate of 11 percent from 2013 to 2018. Retail colocation, in which businesses lease space in a large data center that services multiple clients, is rising in demand, with retail colocation deals often topping 1 megawatt of critical power to satisfy scaling client needs.
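The cited growth projection can be sanity-checked with simple compound-growth arithmetic. A minimal sketch (the dollar figures are the report's; the implied 2013 base is derived here, not stated in the article):

```python
# Back-of-the-envelope check of the MarketsandMarkets projection:
# an ~11% compound annual growth rate over the five years 2013-2018.
def compound_growth(base, rate, years):
    """Project a value forward at a fixed compound annual growth rate."""
    return base * (1 + rate) ** years

# Implied 2013 market size if 2018 surpasses ~$43 billion at 11% CAGR
base_2013 = 43e9 / (1.11 ** 5)
print(f"Implied 2013 market: ${base_2013 / 1e9:.1f} billion")
print(f"Projected 2018 market: ${compound_growth(base_2013, 0.11, 5) / 1e9:.1f} billion")
```

At an 11 percent CAGR, the implied 2013 market works out to roughly $25.5 billion, consistent with the $43 billion 2018 figure.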

Many organizations that have little experience with massive infrastructure needs are now faced with increasing convergence between business and IT. This dive into the deep end can quickly subvert budgeting, resourcing, tech support and data strategies that companies have carefully planned. Colocation provides an alternative to an endless cycle of purchasing new equipment, building additions to onsite data centers and retraining staff. As Computer Weekly contributor Clive Longbottom pointed out, it makes little sense to build a facility given so much uncertainty, when it’s nearly impossible to predict demand even a few years down the road.

Unlike managed services, in which a company outsources the oversight of its infrastructure to a provider, colocation enables it to use its own servers and retain control of installation, maintenance and management. This can be a good first step for an organization that may have less experience with IT outsourcing but knows that it can’t subsist much longer on the status quo.

3 ways cloud storage solves IT complexity issues

Cloud storage enables businesses to exert more control over increasingly complex IT environments. Many IT departments are struggling with the management-related issues and costs stemming from infrastructure expansion. It's a physical problem, in terms of the storage equipment and support needed for big data and application provisioning. It's also an issue of management, as rising device and networking demands put more pressure on IT resourcing and policymaking capacities. At the same time, pressure to keep costs down can leave IT systems fractured or bloated. 

Cloud storage is critical to reducing the costs and complications of IT for a better bottom line. Here are three ways it makes a difference:

  1. Simplifies backup and recovery: Many organizations struggle to get employees to back up files in anything approaching real time. This reality is compounded by growing IT environments, wrote ZDNet senior editor Jason Perlow. Cloud storage offers organizations scalable storage space that expands as a business's needs do, plus automated syncing and backup to ensure real-time recovery availability.
  2. Reduces CAPEX and OPEX: The cloud can reduce storage-related capital and operating expenses in one fell swoop, observed CSO Online contributor Gordon Makryllos. Cloud storage offers upfront advantages to organizations by drastically reducing the amount of equipment they need to buy. Its scalability also offers OPEX cost benefits through streamlined security management, greater flexibility and more centralized IT support that provides continuity as organizations' priorities change.
  3. Improves collaborative potential: Communication and collaboration are more critical than ever to establishing a vibrant, successful organization. By centralizing file storage in a cloud server instead of on individual devices, employees can view, edit and share documents and files easily and in real time. IT departments can also leverage cloud environments to provide enhanced encryption and other security measures, automating access and preserving data integrity in the face of cyberthreats.

As complexity and costs rise, cloud storage can help relieve IT departments of many of the daily tasks that take up an increasing amount of their time. It enables them to spend more time on business-critical projects, with this alignment serving as another way to boost margins and take control of changing technological imperatives.

Real-world business continuity: The soaring costs of downtime

Many organizations approach business continuity as an afterthought. When a company is building up its hardware footprint and application investments in support of a growing business model, contingency plans are often relegated to the backseat and linger there. These organizations discover the costs of prolonged downtime – and the difficulty of righting the ship – only in the aftermath of an unplanned event. One recent report offers some fairly chilling statistics about the widespread shortcomings and expensive consequences of ignoring business continuity planning.

The Ponemon Institute report on the cost of data center outages in 2013 found that organizations lose $7,900 per minute of downtime. The mean cost of a single data center outage is $627,418 and the maximum amount lost to a single incident was more than $1.7 million. The total and per-minute costs correlated to the size of the facility and the duration of the outage, while IT equipment failure represented the most expensive root cause of unplanned data center downtime. Financial hits were worse for companies in data center-dependent industries such as e-commerce, financial services and telecommunications.
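The per-minute and per-incident figures above can be tied together with some rough arithmetic. A sketch (the cost constants are the report's; the implied outage duration and the four-hour scenario are derived here for illustration):

```python
# Rough arithmetic on the Ponemon figures cited above.
COST_PER_MINUTE = 7_900       # dollars lost per minute of downtime (report figure)
MEAN_INCIDENT_COST = 627_418  # mean cost of a single outage (report figure)

def downtime_cost(minutes, cost_per_minute=COST_PER_MINUTE):
    """Estimate the total cost of an outage of the given length."""
    return minutes * cost_per_minute

# The two report figures together imply an average outage duration
implied_minutes = MEAN_INCIDENT_COST / COST_PER_MINUTE
print(f"Implied average outage: {implied_minutes:.0f} minutes")
print(f"Cost of a 4-hour outage: ${downtime_cost(240):,.0f}")
```

Dividing the mean incident cost by the per-minute figure suggests the average outage in the study ran roughly 80 minutes, while a four-hour outage at that rate would approach $1.9 million – in line with the report's $1.7 million worst case.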

Costs can quickly escalate as a business recovers from an unplanned incident. From detection and containment to lost revenues and dwindled productivity, the expenditures can be immense. An organization will suffer more for each area of its business continuity planning that is lackluster or poorly thought out. 

These findings convey the importance of having an effective business continuity approach in place. The approach is twofold: prevention and recovery. Eliminating root causes of downtime is vital, especially expensive ones like IT equipment failure, which can be mitigated through more effective management. Visibility and redundancy, meanwhile, can help streamline efforts to get systems back on track after a surprise incident.

Virtualization can be a great asset to both aspects of business continuity planning, as a recent webinar pointed out. It provides a more manageable, agile environment for continuity efforts, mitigates hardware vulnerabilities by slashing equipment needs and helps a company access its safely stored systems and applications immediately following an unplanned occurrence.

Online backup services: A cost-effective insurance policy

Say you have a car. It's not just any car – it's the 2014 Mercedes-Benz SLS sports car. It's an investment sure to turn the heads of everyone in the neighborhood. It's also going to entail costs far beyond purchase price – at an annual average of $7,040, the SLS commands the highest (non-exotic) insurance prices of any 2014 model, according to Forbes. Now pretend you have a fleet of them – think about what it costs to insure 20. 

The obvious rejoinder here is: why would anyone buy 20 of the same car? Doing so would certainly change the conversation about car ownership. Big data poses a similar question. Data has grown astronomically in recent years, and because that growth has felt like a natural progression, it may not seem so crazy – but big data and application environments have created zettabytes of information beyond anyone's expectations just a few years ago. The conversation about information storage has transformed almost completely, and legacy models have to change. A traditional insurance policy doesn't make much sense, logistically or financially, when extrapolated to an environment this expansive.

Online backup services are the data management version of a smart, cost-effective insurance policy. Storing and backing up data on conventional physical storage devices will likely hamper an organization's ability to accumulate and safeguard information, or result in unmanageable costs. According to ITProPortal contributor Neil Rubenking, many companies are still reluctant to invest in online backup services due to security concerns. However, nothing is more insecure than failing to archive data properly. As an insurance policy against the growing dangers of data leaks and breaches, online backup services represent the most cost-effective and user-friendly option.

"Online backup is quite simply convenient. It takes a lot of the pain out of the backup process by doing away with media swapping and a lot of the drudgery – you just set it and forget it," Rubenking wrote. "Your files are safe on somebody else's servers. If your home implodes into a Stone Age burial mound you can still recover those backups."

How desktop virtualization satisfies changing business needs

Desktop virtualization is the natural evolution of a changing business operations landscape. With flexibility and availability becoming higher priorities than ever before, making user interfaces available anywhere can increase productivity, enhance management strategies and cut costs – three outcomes vital to continue fielding a competitive business model. 

In a nutshell, desktop virtualization involves the abstraction of a user's desktop workspace from the physical machine onto a server. It allows the user to log on to his or her desktop interface from anywhere, connecting to applications and files via a data center instead of a single PC.

The nature of work with respect to technology has evolved rapidly in the last few years. Many employees use multiple devices for business, connect through the cloud and conduct a high percentage of their business activity via the Internet. Adherence to traditional workstation technologies would effectively cap how fast an organization can move, mobilize and communicate. A business model based entirely on virtualization and Internet connectivity – which would include laptops, mobile devices and voice over Internet Protocol – could recreate the office for employees located anywhere, BusinessNewsDaily contributor Sara Angeles pointed out.

In today's business world, leaner is better. Changing environments, however, tend to make it easier to add than subtract. More devices, more applications, more security concerns, more equipment – many organizations are hard-pressed to keep costs from escalating out of control. Desktop virtualization helps companies not only corral escalating spending, but keep budgets down in the long term. Organizations can use existing infrastructure or small, targeted investments to build up a robust, virtualized environment quickly and problem-free, wrote ZDNet contributor Simon Bisson. Cost-effectiveness is a significant driver for cloud investment, and virtualized infrastructure enables businesses to extend the cloud's benefits across all of their infrastructure. It gives companies the resources and breathing room to invest in emerging developments and attack new challenges.

Managed services key to making disaster recovery planning stick

Managed services can help organizations eliminate one of their biggest pain points – disaster recovery. Establishing and upholding continuity and contingency plans can be complicated and resource-intensive. Many businesses, especially fledgling ones, choose to push disaster recovery planning to the back burner. Over time, that lack of attention puts organizations at risk.

According to a recent study by The Disaster Recovery Preparedness Council, many organizations are woefully unprepared for disaster to strike. Its global survey of more than 1,000 organizations, from small businesses to large corporations, found that a whopping 73 percent of organizations do not have adequate disaster recovery plans in place. Its other findings include:

  • 64 percent of respondents said that their organizations’ disaster recovery efforts are underfunded.
  • More than 60 percent do not have fully documented plans.
  • Among the 40 percent that do have documented plans, 23 percent have never actually tested them to see if they work.
  • Of respondents that experienced outages, almost 30 percent lost data center functionality for days or weeks at a time.

Since there’s no way of knowing when and how a potential disaster may occur, companies are gambling with their future every day they don’t do something about their disaster recovery and business continuity planning efforts. Being proactive is the only way to successfully combat the effects of unplanned events.

Managed services can help organizations establish a meaningful, up-to-date disaster recovery system. They can provide concentrated data backup and system recovery services beyond those a business has the budget or time to uphold, noted MSPmentor. Keeping systems current, especially when an organization adds a new application or hardware, is key to eliminating vulnerabilities that stem from outdated disaster recovery plans.

Proactive risk mitigation is important. Managed services providers can help organizations develop recovery time objectives for business-critical applications and conduct automated recovery testing. Having dedicated IT staff on hand relieves companies of having to navigate the difficult science of disaster recovery and business continuity planning alone.

Connecting the dots: Bandwidth as a business model

Few developments have affected businesses in the past few years as much as the burning desire for bandwidth. As enterprise environments expand, complications are inevitable. Proper information storage and security are increasingly vital as more businesses transition to data-driven initiatives. They're also becoming harder to attain. Many organizations find themselves caught in a tangled web of carriers, data centers, service providers and connectivity requirements. A lack of interoperability between services and poor communication among stakeholders can make undoing these knots an expensive and resource-intensive slog. It induces broadband rage and burns a lot of bandwidth in the process.

Optimizing connectivity needs to be a foremost concern in today's business model. In theory, it means providing enough bandwidth to create sufficient breathing room for all locations and stakeholders. In practice, an organization needs to centralize its connectivity support. Data Center Knowledge contributor Bill Kleyman recently discussed some fundamental changes in information technology that should compel companies to consider building their business model around their data center network. 

"Business used to establish their practices and then create their IT department. Now big (and smart) businesses are approaching data centers and technology from a completely different angle," Kleyman wrote. "These visionaries see that the future revolves around complete mobility and true device-agnostic connectivity."

Examples Kleyman highlighted included cloud-based data distribution models, which support expanding application development and processing environments. He also observed that new ways of computing, such as virtualization and software-defined networking, place more emphasis on minimizing granular infrastructure management and centralizing IT. Complexity in digital compliance and data governance can also be assuaged by a centralized connectivity platform.

Looking at bandwidth as a business model involves seeing technology as a critical role player rather than simply as a means to get things done. Connectivity infrastructure can and should contribute directly to bottom-line thinking. Paring down the number of service providers to a basic carrier-agnostic data center model can provide more bandwidth integrity and fewer headaches. 

How to choose a colocation provider

Colocation is an advantageous infrastructure model for any company concerned about supporting its data storage needs. Among the variety of data center, server placement and management options available, it's the one that directly marries an organization's desire to maintain control over its equipment with its need for better network and security support.

In a colocation environment, an organization leases data center space for servers it owns. The data center provider offers server racks, power, bandwidth and physical security. The organization retains control over server management, unless it chooses to outsource these needs to the provider as well. 

Simple, right? Because the colocation business is booming, it attracts a lot of upstart providers. Not all of them offer the same level of service. That's just the reality of the situation. Additionally, one provider's solutions may be right for one organization and match up poorly with another's needs. Misfiring on this selection can be a costly decision, not only in wasted capital expenses but potentially down the road if business continuity is affected, according to ComputerWeekly. 

Determining its most pressing concerns is a company's first step. For example, a company headquartered in an area susceptible to natural disasters should look for a colocation facility in a safer region. Connectivity is another issue. While every business wants to stay online, some may be able to accept less than 99.999 percent uptime ("five-nines uptime") in exchange for a more cost-effective colocation plan. A financial services firm or federal entity, by contrast, may need to pay a premium to ensure servers are always available. It's simply a matter of weighing financial costs against the price of availability.

Security is a near-universal concern, while many organizations may be dealing with increased complications related to industry compliance, according to Data Center Knowledge contributor Bill Kleyman. A company needs to make sure its colocation provider is certified for adherence to compliance standards. A variety of physical and facility safeguards can provide additional protection, which may be the way to go if a company's colocation center is in a more populated area.