Hybrid IT Interconnection Priorities: Insights from Cyxtera

Choosing whether to take your enterprise to public cloud, private cloud, or a combination of both can be a convoluted and nuanced journey without support from a trusted partner like Unitas. Cyxtera hosted a webinar diving into the topic of interconnection featuring Craig Matsumoto, Senior Analyst, Datacenter Networking at 451 Research. Matsumoto shared insights into what’s fueling interconnection, key connectivity trends, optimizing cost versus latency, and interconnection priorities and next steps to consider.

Here are the most compelling tips from the webinar to help you on your journey to the cloud:

 

Consider Fluidity with Software-Programmable Interconnection

The world is changing at breakneck speed, and so are the needs of your business. Be it remote working or global interactions, data needs to be accessible from anywhere in the world, and it needs to be able to move into and between clouds fast. For some operations the Internet will suffice, but if an enterprise needs lower latency, stronger security, or specific analytics applications, the Internet alone, or a single cloud, may not be enough. Interconnection and hybrid cloud solutions are key trends enterprises are exploring.

 

According to a 451 Research Voice of the Enterprise quarterly report, 72% of organizations using public cloud have more than one vendor in place (the majority use AWS or Azure), and 48% migrate workloads as needed between on-premises and public cloud environments, based on requirements for data residency, cost, speed/agility/innovation, and security/risk. This fluidity is important to plan for: as needs change, so might where your data needs to live. You don’t want to be bound to one cloud’s pricing scheme and feature set, so keeping data in one cloud permanently is not a great long-term strategy. That doesn’t mean data needs to move back and forth between clouds daily (though there are use cases where it might); the more likely scenario is that something within the enterprise changes, or a cloud’s capabilities change, and there is an advantage to moving the data somewhere else. Why waste time or money being stuck in one place when you can build flexibility into your strategy now?

 

The future is in software-programmable interconnection; it is a way to abstract away location as a barrier. Unless everything lives in a private data center on premises, there are many different places (public and private clouds plus data centers) sitting at various distances from the enterprise, and you need to be able to reach the data in them swiftly and easily. Sometimes a set of data centers is physically far apart and you need to connect to their various clouds; you might need to connect to a SaaS service (for example, Salesforce.com: when you use that application you are connecting to its cloud); or you might have edge compute deployments. The default is to reach SaaS applications over the Internet, which may work for a small operation or an individual working remotely, but many enterprises require a more reliable, secure, and controllable solution.

 

Automation through software is the key to connectivity: it is very fast, can be self-service, and does not need to be pre-provisioned. The beauty of automation is that you can create and tear down connections in a few minutes, even when security measures must be applied, should you need to shift a workload away from a particular data center or type of cloud.
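To make that model concrete, here is a minimal sketch of what creating and tearing down a connection might look like through a software-programmable API. The endpoint, field names, and port identifiers are illustrative assumptions, not any particular provider's interface; real interconnection fabrics and data center portals each have their own APIs.

```python
# Illustrative only: a hypothetical REST API for software-programmable
# interconnection. Endpoint paths and fields are assumptions, not a real product's API.
import requests

API = "https://interconnect.example.com/v1"        # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <api-token>"}  # placeholder credential

def create_connection(src_port, cloud_onramp, mbps):
    """Request a virtual cross connect from a colo port to a cloud on-ramp."""
    resp = requests.post(
        f"{API}/connections",
        headers=HEADERS,
        json={"source": src_port, "destination": cloud_onramp, "bandwidth_mbps": mbps},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]   # connection is typically live within minutes

def delete_connection(conn_id):
    """Tear the connection down again once the workload has moved."""
    requests.delete(f"{API}/connections/{conn_id}", headers=HEADERS, timeout=30).raise_for_status()

# Example: stand up a 1 Gbps link to a cloud on-ramp, then remove it later.
conn = create_connection("colo-port-7", "aws-us-east-1-onramp", 1000)
# ... shift the workload ...
delete_connection(conn)
```

The point is less the specific calls than the operational model: connections become short-lived, scriptable resources rather than weeks-long provisioning projects.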

 

Unitas simplifies cloud connectivity. For more information on our connectivity solution, click here.

 

Is Latency Worth the Spend?

According to a 451 Research Voice of the Enterprise survey, the top factor influencing workload venue selection is cost, so is latency worth spending an exorbitant amount of money to reduce? No one likes to wait, but when you run the cost-benefit analysis, latency may be a non-issue and not worth the spend. Most applications can tolerate it: typically the added latency is only a fraction of a second, and unless you are running a mission-critical operation, it amounts to a negligible amount of lag. For many processes the end user does not even perceive the gap. A suitable way to save money could be adopting a cloud-adjacent model rather than paying the premium to have data close to the end user.
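As a rough sanity check (a back-of-the-envelope estimate of ours, not a figure from the webinar): light in fiber covers roughly 200 km per millisecond, so distance alone adds on the order of 1 ms of round-trip time per 100 km.

```python
# Back-of-the-envelope propagation delay. Ignores routing, queuing, and processing,
# which add more in practice. Light in fiber travels roughly 200,000 km/s.
FIBER_KM_PER_MS = 200.0   # ~200 km of fiber per millisecond, one way

def round_trip_ms(distance_km):
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (50, 500, 2000):
    print(f"{km:>5} km away  ->  ~{round_trip_ms(km):.1f} ms added round trip")
# ~0.5 ms, ~5 ms, ~20 ms: negligible for most business applications,
# but worth engineering around for genuinely latency-critical workloads.
```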

 

Enterprises pay a premium to access their data from a data center that has a cloud point of presence (PoP). For example, some public cloud providers charge an egress fee to move your data out, but an enterprise that leverages cloud connectivity through an on-ramp provider, such as Unitas, can get colocation space at roughly a 30% savings (which Matsumoto notes is a conservative estimate, as it varies by market). To put concrete numbers on it, the typical monthly cost of being in a data center with a cloud PoP can total $2,411.54 depending on location (for a 4 kW cabinet, fiber cross connect, Megaport 10 Gbps, Virtual XC 1 Gbps, AWS network cost, and the AWS cost to transmit 10 TB). The cloud-adjacent alternative comes to $1,897.33 a month for all the same features, a savings of just over $500 (about 21%) on the total monthly bill. AWS data costs are what they are, but by being cloud-adjacent you are not paying a premium to get to that cloud or to be co-resident with that cloud PoP.
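A quick comparison of the two monthly totals quoted above (the underlying line items will vary by market, so treat this as an illustration of the math rather than a quote):

```python
# Monthly totals quoted in the webinar example (4 kW cabinet, fiber cross connect,
# Megaport 10 Gbps, Virtual XC 1 Gbps, AWS network cost, 10 TB of AWS egress).
in_cloud_pop_monthly   = 2411.54   # colocated with a cloud PoP
cloud_adjacent_monthly = 1897.33   # cloud-adjacent facility via an on-ramp

monthly_savings = in_cloud_pop_monthly - cloud_adjacent_monthly
pct_savings = monthly_savings / in_cloud_pop_monthly

print(f"Monthly savings: ${monthly_savings:,.2f} ({pct_savings:.0%} of the total bill)")
print(f"Annual savings:  ${monthly_savings * 12:,.2f}")
# -> roughly $514 a month (about 21% of the total), or ~$6,170 a year;
#    the ~30% figure cited in the webinar refers to the colocation space itself.
```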

 

Questions to Ask Yourself When Selecting a Connectivity Option for a Particular Workload

Matsumoto advises asking the following questions to evaluate which connectivity option is right for your workload:

 

#1: Is direct internet access (DIA) good enough, or do you need new connectivity options?

Consider where your employees are accessing data; for instance, is it over their personal Internet connections? That might be enough to complete their workloads. But do you need something more reliable, controllable, or private than a personal Internet connection? If not, there is no need to spend the money and energy on anything more.

 

#2: Does your enterprise have a strong internal IT team?

You could consider self-service data management; however, that means someone on your team has to push the buttons and oversee those processes. Otherwise you may need a partner, such as a data center operator or systems integrator, to help build out your connectivity picture.

 

#3: How many clouds, on-ramps and outside parties are involved?

Which places do you know you have to reach, and how many do you suspect you will need to reach? The answer depends on what business your enterprise is in, whether you rely heavily on SaaS, and whether you perform edge computing. Also, because connectivity is fluid, many enterprises do not even know all the clouds they are actually connected to.

 

#4: How much bandwidth is needed?

If you need significant bandwidth (such as a full 10 Gbps pipe all the time), you may be outside the sphere of software-programmable interconnection. It is great if you need many connections that will change over time, or burstable connections where you want to turn on a tunnel for a few hours and then shut it down and erase that connectivity. If you need something larger or more sustained, you will want to speak to a data center provider or cloud provider, because you will need a customized solution.
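One rough way to frame the bandwidth question (an illustration of ours, not a figure from the webinar) is to estimate how long a given data move would take at a given link speed; if you need hours of a saturated multi-gigabit link every day, you are closer to dedicated-circuit territory than to burstable, software-provisioned tunnels.

```python
# Rough transfer-time estimate: how long does moving `tb` terabytes take
# on a link of `gbps` gigabits per second, assuming ~80% effective throughput?
def transfer_hours(tb, gbps, efficiency=0.8):
    bits = tb * 8e12                      # 1 TB ~= 8e12 bits (decimal units)
    return bits / (gbps * 1e9 * efficiency) / 3600

for gbps in (1, 10):
    print(f"10 TB over {gbps:>2} Gbps: ~{transfer_hours(10, gbps):.1f} hours")
# ~27.8 hours at 1 Gbps vs ~2.8 hours at 10 Gbps: a nightly 10 TB move fits a
# burstable 10 Gbps tunnel comfortably, while a constant firehose does not.
```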

 

Next Steps + Priorities

If you’ve found a connectivity option that could be a fit for you, Matsumoto recommends taking the following steps.

 

#1: Assume your cloud setup is going to change.

Wherever you have an application running, give yourself the flexibility to move it eventually if needed; future-proof your solution. Your data might not move every day (though in some use cases it could), but over time you will probably look at that option. Don’t assume you are frozen into any one cloud and its capabilities forever. The needs of the enterprise may change, along with what you do with your data, and various clouds may offer options in the future that you will want to take advantage of to maximize your data’s potential.

 

#2: Explore data center providers’ connectivity options.

Look at the data center environment you colocate in to evaluate whether the software-programmable interconnection route is open to you. This kind of interconnection only works where software-defined networking is in place, and that is still an advanced capability: not all data centers offer it yet, and not all facilities are fully wired up to provide that kind of instant connectivity. You will also want to know how the data center handles cross connects and what services are available in its ecosystem, and it is good to know which data center locations are nearby should you need to get your data into various places.

 

#3: Assess the potential to optimize TCO vs latency. 

How urgent is latency for you? If you must minimize it, there may still be TCO levers you can adjust. If reducing latency is urgent, weave it into your connectivity picture and put things as close as you can to the end users or to the data you need to access.

 

To view the full webinar, click here.