
Vendor Lock-In and Your Business Strategies

Look out for the Dragons!

One of the phrases I keep seeing come up in social media posts and articles is “Vendor Lock-In”. You can see some of the ones I found via a quick search of the interwebs below, each with its own theme and position.

https://www.geeks2u.com.au/geekspeak/beware-the-risks-of-vendor-lock-in/

http://www.forbes.com/sites/joemckendrick/2011/11/20/cloud-computings-vendor-lock-in-problem-why-the-industry-is-taking-a-step-backwards/

http://www.wisegeek.com/what-is-vendor-lock-in.htm

http://www.zdnet.com/10-steps-to-avoid-cloud-vendor-lock-in-7000017971/

To me a lot of this may as well be a map from the 1500s saying “here be dragons” (or, to be accurate, “HC SVNT DRACONES” in the original Latin). There is something to be scared of when making business decisions, but it isn’t dragons…I mean vendor lock-in; it is not properly reviewing our business requirements before making a decision.


So how real is Vendor Lock-In?


Very real. I can guarantee that any purchase you make will have some level of lock-in to a technology, a process, or something else that will be difficult to change:

  • house location
  • car model
  • phone

From my perspective I’m looking at the technology side, but we are, of course, as free as we want to believe we are.

“A puppet is free as long as he loves his strings.” – Sam Harris, Free Will

Is there a way out?!


Docker and Cloud Foundry are two organisations talking about the ability to remove vendor lock-in. This is great for giving you portability between public and private clouds; however, it brings up that little issue…you are now locked into their solution.

https://www.docker.com/

http://www.cloudfoundry.org/

Even if there is little or no cost, there is a considerable amount of time and effort you have to put into such a solution to get it up and running, and then, if a feature or the entire product is somehow impacted, you have to be able to migrate away from it.

The only solution…choose wisely!

In short, we have to make our decisions wisely and weigh up all the options, without emotion, based on what is best for our business. You could build on Azure PaaS, for example, and have a close working relationship with Microsoft giving you access to updates, new features, automatic scale, hyper-scale and so on (shameless plug), or you could look at utilising Docker to enable you to switch between public cloud providers like Azure, Google and AWS depending on where the wind is blowing in terms of pricing and geography requirements.

Whatever you do, just make sure you properly consider what you are trying to achieve and relate this to your design decisions and, this is a big one, when you do make your decision, take responsibility for it. Also make sure you have prepared at least a high-level exit plan: if vendor X decides to deprecate a product you are using, have an idea of what it would take to migrate off it.

And the ultimate vendor lock in?

Grant Orchard (https://twitter.com/grantorchard) from VMware made a comment about the ultimate vendor lock-in, and nailed it! It is…drumroll…kids! Hopefully that is one kind of vendor lock-in that you do like (I know I do).



SNIAfest2 – Dimension Data

First cab off the rank for the blogfest was Dimension Data (DD). I was hoping to get this one up sooner, but Justin Warren beat me to the punch (http://www.eigenmagic.com/) by a fair bit, as I’ve been pretty slack.

We kicked this off with coffee and breakfast downstairs (thanks to SNIA) and then made our way up to see Martin Aungle (Corporate Communications Manager, Marketing); all apart from Rodney, that is.

For some reason Rodney was excluded because he works for a competitor, yet Chris, Sachin and I were welcome, even though we work for competitors as well. In my opinion the SNIA bloggers day is meant to be an open forum, with no confidential information exposed, presenting what is publicly available. The people attending are meant to have their SNIA hats on and be looking at things through impartial, independent eyes. It is just a shame, as this was a blight on what was an interesting presentation/discussion. Now back to the DD presentation.

DD had David Hanrahan (General Manager, Virtual Data Centres and Microsoft Solutions) and John Farrell (Enterprise Architect) presenting from Melbourne via Cisco Telepresence. If you haven’t used a Cisco Telepresence room it is worth checking one out at a Cisco office. Still not as good as being there in person, but a huge improvement on existing video conferencing facilities.

This was more of a conversation than a presentation, which I think the bloggers appreciated. DD definitely touched on some common themes and trends that I am seeing out in the customer landscape. Also, kudos to DD for avoiding any buzzwords until near the end of the session…only after “Big Data” was first raised by Sachin (@sachchahal) did we briefly descend into the depths of marketing buzzwords.

Themes

The conversation was quite open and unstructured, without a PowerPoint slide in sight (well done, guys). Some of the key themes discussed were:

  • More discussions with C-level staff around the entire “stack”
  • Businesses requiring compressed delivery time frames
  • Convergence of teams, data and technology
  • “Black Magic”

Compressed Time Frames and CxOs

Delivery of entire solutions that are low risk, simple to report on and completed within a limited amount of time is becoming more of a theme. For example, one company wanted to be at the operate phase within 30 days of signing the PO (design, hardware delivery, implementation, operational documentation). To quote one CIO: “I know I want a unicorn…but I still want it”.

Previously this would have been nearly impossible to do, just from a hardware lead time perspective, but vendors are now starting to understand and come to the table.

DD see VCE as their A game for delivering this. “Design guides are great, but not always followed by engineers and therefore still have inherent risk” (John Farrell). With VCE they can have something ready to hit the floor, with minimal integration required, in short time frames. That said, they have other options to best fit the customer if VCE is not right.

This change in the way businesses and IT departments operate is leading to larger budgets, at least for a single project. This is down to customers buying the entire integrated stack, rather than just running isolated projects for one area.

Convergence, in more ways than one

Convergence is something that DD is seeing more and more across different areas. It is common for convergence to be happening across the traditional IT silos (network team, security team, storage team, server team, etc.). These silos are now getting to the stage where they can be quite counterproductive at times, and this has led to CxOs having conversations about full stacks and getting more involved. They particularly need converged teams to achieve compressed time frames and to de-risk solutions. This is also challenging DD, as they need to ensure that their staff are trained well enough to talk to the different teams as well as the business folks.

The other meaning of convergence, it seems, is that data is getting more and more intermingled. The advent of VDI means business and personal data are being mixed together on data centre storage, where previously the personal data would stay on the user’s device. “How did my iTunes library end up on the SAN…” was how John Farrell phrased it. This then leads to the challenge of managing that data.

“If you can’t drive the management costs down you aren’t truly making a saving. Can we afford to manage the environments we are creating?” was a question posed. The same thing could happen with storage as happened with virtualisation (VM sprawl), and this is a real risk.

Black Magic and Roadmaps

“Black magic” was the way some solutions used to be sold. You would buy an appliance and it just worked; customers didn’t care or want to know how it worked, as long as it did. David and John believe that you can no longer sell black magic. Customers are smarter and understand that you still need smart people to put it together and make it work. The customer sees the value in the out-of-the-box solution but also understands the value that integrators or internal staff add to it, and that they are necessary to make the solution meet expectations.

With customers spending money on staff augmentation, residency services and training for their own people to get the cloud benefits, a roadmap is key. AWS is now the benchmark for internal IT, and internal IT needs to meet its users’ expectations. It is not possible to simply take that jump to “private cloud”; the challenge is showing a roadmap with the individual components/decisions that will take the customer to that end goal. To this end, DD are seeing a lot of customers now doing showback (few doing chargeback though), measuring the ROI achieved (in excess of 50% of customers) and developing greater awareness around cost and allocation.

And on SNIA and standards

I asked about a few open standards and DD’s opinion on these. First up was the Cloud Data Management Interface (CDMI), a standard that SNIA are currently working on. They believe this is a key standard, as you need a common framework across the industry and it shouldn’t/couldn’t come from a specific vendor. If you haven’t read up on CDMI you can have a look here: http://www.snia.org/cdmi.
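
To make this a bit more concrete, below is a minimal sketch of what talking CDMI looks like from Python, using the requests library. The endpoint URL and credentials are hypothetical and the version header follows the CDMI 1.0.x spec; the point is that the same two calls should work against any CDMI-compliant provider.

    import json
    import requests

    # Hypothetical CDMI endpoint and credentials -- substitute your provider's.
    CDMI_ROOT = "https://cdmi.example.com/cdmi"
    AUTH = ("user", "password")

    # Create a container (roughly a folder/bucket). The same request should
    # work against any CDMI-compliant cloud, which is the point of the standard.
    resp = requests.put(
        CDMI_ROOT + "/demo_container/",
        auth=AUTH,
        headers={"X-CDMI-Specification-Version": "1.0.2",
                 "Content-Type": "application/cdmi-container",
                 "Accept": "application/cdmi-container"},
        data=json.dumps({"metadata": {}}),
    )
    print(resp.status_code, resp.reason)

    # Store a small data object inside the container.
    resp = requests.put(
        CDMI_ROOT + "/demo_container/hello.txt",
        auth=AUTH,
        headers={"X-CDMI-Specification-Version": "1.0.2",
                 "Content-Type": "application/cdmi-object",
                 "Accept": "application/cdmi-object"},
        data=json.dumps({"mimetype": "text/plain", "value": "hello CDMI"}),
    )
    print(resp.status_code, resp.reason)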

This then moved over to the networking side – what about L2 portability? Several options are available, but all are vendor-specific…I can’t remember quite where this trailed off to, but we all agreed that it is important and that any mobility assumes Layer 2 portability.

Summary

DD had a very consultative approach, and this was more a conversation than a presentation (which is good). They are definitely seeing a lot of the same issues and requirements that many of us (as competitors) see out with customers, and they have their own way of approaching them. A key theme I’ve seen in the integrator space is repeatable, low-risk solutions that are easy to report on and provide a quicker time to market/operate.

Another thing coming out of the day was that the days of the specialist (very technically minded) guys driving the purchasing and strategy discussions are numbered; the generalist, who understands the entire stack and the business, is now the one most likely to get the budget for an uplift.

The only negative of the visit was Rodney not being allowed to join us, as mentioned previously.

Merch: notebook and pen, which I didn’t take as I already had my own, and some morning tea.

NetApp VSC – simplifying vSphere storage configuration

I was setting up a demo in our lab for a customer recently and decided to use it as an opportunity to rebuild the VMware vSphere environment to a different design.

I’ve used the VSC before, but it has been a while since I installed/configured it, and I’m still impressed by its ease of use and ability to simplify tasks. Following the steps within the NetApp Technical Report TR-3749, within 30 minutes I had:

  • Created a role with the appropriate permissions I wanted (provisioning, cloning and modifying datastores)
  • Created a group, assigned the role to the group and assigned a user to the group (see the scripted sketch after this list)
  • Installed the VSC
  • Registered the VSC within vCenter
  • VSC discovered the array, including vFilers
  • Entered the newly created account, with the customised permissions created earlier, for the VSC to access the NetApp array
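
For reference, the role/group/user creation on a 7-Mode controller boils down to three useradmin commands, and these can be scripted. Below is a rough sketch using Python’s paramiko over SSH; the hostname, credentials and role/group/user names are made up, and the capability list is trimmed right down (TR-3749 documents the full set the VSC actually needs). Note that the user add step prompts for a password, so in practice it may need interactive handling.

    import paramiko

    FILER = "filer01.example.com"  # hypothetical controller

    # Data ONTAP 7-Mode useradmin commands; the capability list here is
    # trimmed for brevity -- TR-3749 lists the full set the VSC requires.
    COMMANDS = [
        "useradmin role add vsc_role -a login-http-admin,api-*",
        "useradmin group add vsc_group -r vsc_role",
        "useradmin user add vsc_user -g vsc_group",  # prompts for a password
    ]

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(FILER, username="root", password="password")
    for cmd in COMMANDS:
        stdin, stdout, stderr = ssh.exec_command(cmd)
        print(cmd, "->", stdout.read().decode() or stderr.read().decode())
    ssh.close()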

After this quick and simple installation I went through and had a look at the datastores I had already set up. At this stage I had configured the igroups (LUN masking) for ALUA on the array, but had not configured the correct ALUA or HBA settings within vSphere, so I decided to use the VSC to do this.

To confirm manually that the paths to the LUNs weren’t configured for Round Robin, I checked the path selection configuration of a datastore before utilising the VSC (a scripted way to run the same check is sketched below). I also confirmed my queue depth settings, and these were not in line with NetApp recommendations.
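
If you would rather not click through every datastore, the same check can be scripted. Here is a rough pyVmomi sketch (the vCenter address and credentials are placeholders) that prints the path selection policy for every LUN on every host, so anything not showing VMW_PSP_RR stands out immediately.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only: skips cert verification
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    for host in view.view:
        storage = host.configManager.storageSystem.storageDeviceInfo
        # Map each LUN's internal key to its canonical (naa.*) name.
        names = {lun.key: lun.canonicalName for lun in storage.scsiLun}
        for mp_lun in storage.multipathInfo.lun:
            # e.g. VMW_PSP_RR (Round Robin), VMW_PSP_FIXED, VMW_PSP_MRU
            print(host.name, names.get(mp_lun.lun), mp_lun.policy.policy)

    Disconnect(si)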

The process I followed was to select the NetApp icon under “Solutions and Applications”.

I was then shown a list of my controllers and vFilers and could see the status of each datastore (sorry, I forgot to take a screenshot, but the MPIO and adapter settings were in red).

Then I selected all datastores with status errors and selected “Set Recommended Values”.

I launched it to set the recommended values for NFS, MPIO and the HBA/CNA adapters. It then worked through the process, taking a couple of minutes to do this for multiple datastores (both NFS and VMFS) across two clusters and 8 x ESXi servers (each server was booting from the array via FCP).
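
Under the covers, part of what “Set Recommended Values” does on the NFS side is adjust host advanced settings. As a rough illustration only, here is how one such option could be set by hand with pyVmomi (reusing the host object from the earlier sketch); NFS.MaxVolumes is a real ESXi advanced setting, but the value shown is illustrative, and TR-3749 remains the authority on what the VSC actually sets for your versions.

    from pyVmomi import vim

    # Assumes `host` is a vim.HostSystem from the earlier sketch.
    opt_mgr = host.configManager.advancedOption
    opt_mgr.UpdateOptions(changedValue=[
        # Raise the NFS volume limit; 256 is illustrative -- check TR-3749
        # for the values NetApp recommends for your ESXi/ONTAP versions.
        vim.option.OptionValue(key="NFS.MaxVolumes", value=256)
    ])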

Upon completion the status came up all green in the VSC but, just to confirm, I went and looked at the path selection policy on the LUNs being utilised and found that it had changed to the correct setting.

I should also point out that most of my VM datastores, with VMs residing on them, were actually being presented via NFS. The information and administration options for NFS datastores are also very useful.

In conclusion, the NetApp VSC not only provides VMware and storage administrators with the ability to monitor, configure and manage storage from within vCenter, but also makes it simple to comply with the vendor-recommended configuration practices and to get optimal performance and stability out of your VMware and storage environment.