Optimizing data center strategies for financial services firms

Data center investment strategy is the lifeblood of financial services organizations. While finance firms have long used proprietary or third-party data centers for information storage and business continuity, big data has introduced a new set of complications and considerations, not least a variety of regulatory and compliance measures that restrict information storage and archival practices. New technologies, rising costs and data management challenges are exposing the limits of traditional data center models, and financial services firms need to adapt.

Data management in finance is a problem with several moving parts that impact each other. Accumulating and storing data is a relatively straightforward issue, albeit a resource-intensive one. Under the traditional model, a firm would procure additional servers for its onsite facility or enlarge its third-party data center investment, either through colocation or leasing the provider’s equipment.

The deluge of data can make this approach prohibitively costly, forcing organizations to rethink their infrastructure approach, Wall Street & Technology editorial director Greg MacSweeney wrote. Firms with proprietary data centers now stand to save significantly by outsourcing their storage, architecture and management demands. A third-party data center can provide state-of-the-art server hardware, but more importantly has the infrastructure to deploy next-gen network solutions such as virtualization, which drastically reduces the amount of physical equipment needed to contain rising petabytes of data and information-crunching applications.

Working with a third-party data center provider also helps businesses tackle more rapidly moving targets – data integrity and compliance. Data quality and validation are some “small data” issues that grow more problematic as firms accumulate more information from a wider source pool, said software developer Oleg Komissarov, according to a recent FierceFinanceIT article.

Keeping data clean, complete and consistent is a tough task that requires powerful tools and a dedicated team. A managed data center services provider can help offer this level of attention. It can also help in compliance efforts, as any blind spots or inconsistency in information or reporting leave the door open for compliance issues to crop up. As big data expands and accelerates, financial services firms need their data centers to stay one step ahead.
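The "clean, complete and consistent" checks described above can be sketched in a few lines of code. This is a minimal illustration only; the record fields and validation rules here are hypothetical, not drawn from any particular firm's data model or validation toolkit.

```python
# Minimal data-quality checks: completeness and validity.
# Field names and rules are hypothetical illustrations.

REQUIRED_FIELDS = {"trade_id", "symbol", "quantity", "price"}

def validate_record(record):
    """Return a list of data-quality problems found in one record."""
    problems = []
    # Completeness: every required field must be present and non-empty.
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            problems.append(f"missing {field}")
    # Validity: quantity and price must be positive when present.
    for field in ("quantity", "price"):
        value = record.get(field)
        if isinstance(value, (int, float)) and value <= 0:
            problems.append(f"non-positive {field}")
    return problems

records = [
    {"trade_id": "T1", "symbol": "ABC", "quantity": 100, "price": 9.5},
    {"trade_id": "T2", "symbol": "", "quantity": -5, "price": 9.6},
]
report = {r["trade_id"]: validate_record(r) for r in records}
```

At real scale these checks run continuously across every inbound feed, which is exactly the dedicated tooling and attention a managed services provider supplies.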

Managed services can help organizations avoid top 10 business hazards

Managed services enable businesses to more successfully navigate a threat-laden enterprise landscape. Although an organization’s biggest IT, operations and security anxieties vary by region, industry and company size, what they’re most afraid of is generally the same across the board – lost profitability, client churn and a tarnished reputation.

In the Twitter age, no confirmed threat goes unpublished or unanalyzed, and it’s difficult for an organization to escape blame even if it’s only affected as a byproduct of another incident. The woes of retailer Target, which reported a 22 percent decrease in its client base in January following a massive data breach during the 2013 holiday season, serve to underscore consumer response to an enterprise that demonstrates less-than-exemplary information security, data management and business continuity.

According to a recent Business Continuity Institute study of nearly 700 enterprise respondents in 82 different countries, the top 10 most common perceived threats to disaster recovery and business continuity are:

  1. Unplanned IT outages
  2. Cyberattacks
  3. Data breaches
  4. Adverse weather effects
  5. Utility supply interruptions
  6. Fires
  7. Security compromises
  8. Health or safety incidents
  9. Acts of terrorism
  10. New laws or regulations

How managed services assuage anxiety
Managed services offer vast potential to mitigate problems in many areas because a provider's solutions are customized to the needs of the company. The list above covers incidents stemming from a company's location, industry, employee behavior and general security management. Overseeing prevention and contingency plans that effectively respond to all of these hazards is time-consuming, resource-intensive and costly. While it's impossible to prevent adverse weather or control regulatory measures, it is possible to keep these threats from doing any real damage.

Managed services are scalable, so the extent of a provider’s involvement can correspond exactly to a company’s anxieties and potential hazards. One organization may simply require online backup services via an offsite server in order to strengthen its data loss prevention activities. Another may want to virtualize nearly all of its infrastructure so its employees can stay connected and productive during a wave of bad weather. As a company’s needs change over time, it doesn’t have to rearrange its entire back-end infrastructure in order to keep danger at bay.

Differentiating effective IT business continuity from disaster recovery

With constant threats posed by extreme weather and external attackers, companies have increasingly recognized the importance of protecting their IT assets in the wake of a disaster. But the nature of that protection plan is often up for debate. Recovering from disaster means leveraging tools like online backup services at the very least. However, true resilience in the face of a disaster requires a more all-encompassing business continuity approach.

The plan goes beyond data protection and recovery
While backing up data so it can be restored in the wake of an outage is the bedrock of any business continuity plan, it's only half the battle. Depending on a business's approach, its backup solution may do it little good in the event of an actual disaster. For instance, some businesses relying on off-site tape storage have found themselves unable to restore their files at a secondary location after a storm because they couldn't physically travel to the tape storage facility due to flooding, industry expert Jarrett Potts explained in a column for Data Center Knowledge. Having a plan that encompasses the full recovery process is essential.

"IT disaster recovery plans are very important when one considers how intertwined organizations are with technology, but it is important to note that IT disaster recovery plans are not, by themselves, a complete business continuity strategy," Continuity Central contributor Michael Bratton explained in a recent article.

The solution is oriented toward application uptime
A key differentiator between disaster recovery and business continuity is that the latter's focus is keeping core business operations running. As Bratton noted, this approach goes beyond simply IT. However, from a tech perspective, it primarily means keeping critical applications running with as little interruption as possible. Through technologies like virtualization and a distributed network of colocation facilities, businesses can establish a flexible application hosting model that can easily weather unexpected events. The exact nature of the plan is likely to vary from company to company, so working with a third-party solution provider to develop a custom response can also be beneficial.

Real-world business continuity: The soaring costs of downtime

Many organizations approach business continuity as an afterthought. When a company is building up its hardware footprint and application investments in support of a growing business model, contingency plans are often relegated to the backseat and linger there. These organizations discover the costs of prolonged downtime, and the difficulty of righting the ship, only in the aftermath of an unplanned event. One recent report offers some chilling statistics about the widespread shortcomings and expensive consequences of ignoring business continuity planning.

The Ponemon Institute report on the cost of data center outages in 2013 found that organizations lose $7,900 per minute of downtime. The mean cost of a single data center outage is $627,418 and the maximum amount lost to a single incident was more than $1.7 million. The total and per-minute costs correlated to the size of the facility and the duration of the outage, while IT equipment failure represented the most expensive root cause of unplanned data center downtime. Financial hits were worse for companies in data center-dependent industries such as e-commerce, financial services and telecommunications.
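The report's figures make the per-incident arithmetic easy to check. Here is a quick sketch using the per-minute cost quoted above; the 80-minute outage duration is a hypothetical example chosen to show how quickly costs approach the reported mean.

```python
COST_PER_MINUTE = 7_900  # Ponemon 2013 reported average, USD

def outage_cost(minutes):
    """Estimated cost of an unplanned outage at the reported average rate."""
    return minutes * COST_PER_MINUTE

# A hypothetical 80-minute outage lands near the reported
# mean incident cost of $627,418.
print(outage_cost(80))  # 632000
```

In other words, at the reported average rate an outage need only last about an hour and twenty minutes to cost more than $600,000.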

Costs can quickly escalate as a business recovers from an unplanned incident. From detection and containment to lost revenues and dwindled productivity, the expenditures can be immense. An organization will suffer more for each area of its business continuity planning that is lackluster or poorly thought out. 

These findings convey the importance of having an effective business continuity approach in place. The approach is twofold – prevention and recovery. Eliminating root causes of downtime is vital, especially expensive ones like IT equipment failure, which can be curbed through more effective management. Visibility and redundancy can then streamline efforts to get systems back on track following a surprise incident.

Virtualization can be a great asset to both aspects of business continuity planning, as a recent CIO.com webinar pointed out. It provides a more manageable, agile environment for continuity efforts, mitigates hardware vulnerabilities by slashing equipment needs and helps a company access its safely stored systems and applications immediately following an unplanned occurrence.

Managed services key to making disaster recovery planning stick

Managed services can help organizations eliminate one of their biggest pain points – disaster recovery. Establishing and upholding continuity and contingency plans can be complicated and resource-intensive. Many businesses, especially fledgling ones, choose to push disaster recovery planning to the back burner. Over time, that lack of attention puts organizations at risk.

According to a recent study by The Disaster Recovery Preparedness Council, many organizations are woefully unprepared for disaster to strike. Its global survey of more than 1,000 organizations, from small businesses to large corporations, found that a whopping 73 percent of organizations do not have adequate disaster recovery plans in place. Its other findings include:

  • 64 percent of respondents said that their organizations’ disaster recovery efforts are underfunded.
  • More than 60 percent do not have fully documented plans.
  • Among the 40 percent that do have documented plans, 23 percent have never actually tested them to see if they work.
  • Of respondents that experienced outages, almost 30 percent lost data center functionality for days or weeks at a time.

Since there’s no way of knowing when and how a potential disaster may occur, companies are gambling with their future every day they don’t do something about their disaster recovery and business continuity planning efforts. Being proactive is the only way to successfully combat the effects of unplanned events.

Managed services can help organizations establish a meaningful, up-to-date disaster recovery system. They can provide concentrated data backup and system recovery services beyond those a business has the budget or time to uphold, noted MSPmentor. Keeping systems current, especially when an organization adds a new application or hardware, is key to eliminating vulnerabilities that stem from outdated disaster recovery plans.

Proactive risk mitigation is important. Managed services providers can help organizations develop recovery time objectives for business-critical applications and conduct automated recovery testing. Having a dedicated IT staff on hand relieves companies of having to make their forays into the difficult science of disaster recovery and business continuity planning alone.
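The recovery time objectives mentioned above only matter if they are tested. Below is a hypothetical sketch of the kind of automated check a provider might run after a failover test, comparing measured recovery times against per-application RTOs. The application names, RTOs and measured times are all invented for illustration.

```python
# Hypothetical RTOs (seconds) for business-critical applications.
rto = {"payments": 300, "trading": 120, "reporting": 3600}

# Measured recovery times (seconds) from the latest automated failover test.
measured = {"payments": 240, "trading": 180, "reporting": 900}

def rto_breaches(rto, measured):
    """Return applications whose measured recovery time exceeded their RTO."""
    return sorted(app for app, limit in rto.items() if measured[app] > limit)

print(rto_breaches(rto, measured))  # ['trading']
```

Running a check like this after every recovery drill turns disaster recovery from a static document into something continuously verified, which is precisely where a managed services provider's automation pays off.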

How to choose a colocation provider

Colocation is an advantageous infrastructure model for any company concerned about supporting its data storage needs. Among the variety of data center, server placement and management options available, it's the one that directly marries an organization's desire to maintain control over its equipment with its need for better network and security support.

In a colocation environment, an organization leases data center space for servers it owns. The data center provider offers server racks, power, bandwidth and physical security. The organization retains control over server management, unless it chooses to outsource these needs to the provider as well. 

Simple, right? Not quite. Because the colocation business is booming, it attracts plenty of upstart providers, and not all of them offer the same level of service. Moreover, one provider's solutions may suit one organization well and match another's needs poorly. Misfiring on this selection can be costly, not only in wasted capital expenses but potentially down the road if business continuity is affected, according to ComputerWeekly.

Determining the most pressing concerns is a company's first step. For example, a company with its central location in an area more susceptible to natural disasters should look for a colocation facility in a safer area. Connectivity is another issue. While every business wants to stay online, some may be able to afford less than 99.999 percent uptime ("five-nines uptime") in exchange for a more cost-effective colocation plan. A financial services firm or federal entity may need to pay a premium to ensure servers are always available. It's simply a matter of weighing financial costs with the price of availability.
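The uptime percentages above translate directly into allowable downtime, which is what the premium actually buys. A short sketch of that arithmetic:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability):
    """Maximum unavailable minutes per year at a given availability fraction."""
    return (1 - availability) * MINUTES_PER_YEAR

# "Five nines" (99.999%) allows roughly 5.3 minutes of downtime per year,
# while 99.9% allows nearly nine hours.
print(round(downtime_minutes_per_year(0.99999), 1))      # 5.3
print(round(downtime_minutes_per_year(0.999) / 60, 1))   # 8.8
```

Seen this way, the gap between a budget SLA and a five-nines guarantee is the difference between minutes and hours of outage per year, which frames the cost-versus-availability trade-off concretely.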

Security is a near-universal concern, while many organizations may be dealing with increased complications related to industry compliance, according to Data Center Knowledge contributor Bill Kleyman. A company needs to make sure its colocation provider is certified for adherence to compliance standards. A variety of physical and facility safeguards can provide additional protection, which may be the way to go if a company's colocation center is in a more populated area.

ISG Announces Executive Responsibility Changes

The Twin Valley family of companies announced leadership changes among their C-level executives, effective for 2014. John Gunn, co-founder of ISG Technology, will transition from his current role as President and COO to Chief Strategy Officer of ISG Technology, focusing on supporting and defining the company's strategy. Gunn currently serves, and will remain, on the board of directors for all three companies. Ben Foster, current CEO of all Twin Valley companies, will assume the general management of ISG Technology previously handled by Gunn. Foster will remain President and CEO of the Twin Valley companies.

“When Twin Valley acquired ISG Technology in 2011 the roadmap and evolution of our company was clear to both John and me,” said Foster. “John has served and will continue to serve as a critical business leader and advisor to the Twin Valley family of companies.  This is a carefully planned transition that will enhance our clients’ ability to attain their organizational goals.”

Additionally, Scott Cissna has joined the Twin Valley companies as CFO, effective January 6, 2014.  Cissna brings more than 20 years of finance experience from the telecommunications, cloud services, and wireless industries.  With his broad base of experience ranging from performing sophisticated analysis to building scalable teams, Cissna is an analytical leader and decision maker hired to help navigate the complex, growing, and rapidly changing landscape where the companies operate.

The Twin Valley family of companies is a privately owned business with diversified investments and holdings in the telecommunications and IT space. The company operates through three separate entities: Twin Valley Telephone, Twin Valley Communications, and ISG Technology.

Together, Twin Valley Telephone and Twin Valley Communications form the largest privately owned independent telecommunications company in the state of Kansas, providing voice and advanced services in rural areas for more than 65 years. Over state-of-the-art FTTP infrastructure, Twin Valley's portfolio includes voice, broadband, television, cloud services, hosted voice, wireless and security. Through strategic acquisitions and organic growth, the company has grown tenfold in the past decade.

ISG Technology is a full-spectrum data center and IT infrastructure partner uniquely positioned to provide complete solutions from the client premises all the way to the cloud. With more than 30 years of experience, ISG provides IT equipment (IT Infrastructure), cloud services through a regional network of data centers at https://isgtech.com/data-centers (Data Center Services), the connectivity to tie it all together (Bandwidth), and the ongoing support to help manage clients' IT environments (ITaaS). ISG places a premium on enduring client relationships and reference-ability – attributes only achievable through a deep understanding of client business objectives, an authoritative vision of what's possible and a relentless commitment to delivering successful solutions.