Author: bdiqual

Vendor Lock In and your Business Strategies

Look out for the Dragons!

So one of the phrases I keep seeing come up in different social media posts and articles is "Vendor Lock In". You can see some of the ones I've found via a quick search of the interwebs below, each with its own theme and position.

https://www.geeks2u.com.au/geekspeak/beware-the-risks-of-vendor-lock-in/

http://www.forbes.com/sites/joemckendrick/2011/11/20/cloud-computings-vendor-lock-in-problem-why-the-industry-is-taking-a-step-backwards/

http://www.wisegeek.com/what-is-vendor-lock-in.htm

http://www.zdnet.com/10-steps-to-avoid-cloud-vendor-lock-in-7000017971/

To me a lot of this may as well be a map from the 1500s saying "here be dragons" (or to be accurate "HC SVNT DRACONES" as it was in Latin). There is something out there to be scared of when making business decisions, but it isn't dragons…I mean vendor lock in; it is not properly reviewing our business requirements before making a decision.

[Image: "here be dragons" map]

 So how real is Vendor Lock In?


Very real. I can guarantee that any purchase you make will have some level of lock in to a technology, a process or something else that will be difficult to change:

  • house location
  • car model
  • phone

From my perspective I’m looking at the technology side; but we are of course as free as we want to believe we are.

“A puppet is free as long as he loves his strings.” – Sam Harris, Free Will

Is there a way out?!

[Image: "press to exit" button]

Docker and Cloud Foundry are two organisations talking about the ability to remove vendor lock in. This is great for giving you portability between public and private clouds; however, it brings up that little issue…you are locked into their solution.

https://www.docker.com/

http://www.cloudfoundry.org/

Even if there is little or no cost, there is a considerable amount of time and effort you have to put into such a solution to get it up and running, and then, if a feature or the entire product is impacted somehow, you have to be able to migrate away from it.

The only solution….choose wisely!

In short, we have to make our decisions wisely and weigh up all the options, without emotion, based on what is best for our business. You could build on Azure PaaS, for example, and have a close working relationship with Microsoft giving you access to updates, new features, automatic scale, hyper-scale and so on (shameless plug), or you could look at utilising Docker to enable you to switch between public cloud providers like Azure, Google and AWS depending on where the wind is blowing with respect to pricing and geography requirements.

Whatever you do, just make sure you properly consider what you are trying to achieve and relate this to your design decision and, this is a big one, when you do make your decision take responsibility for it. Also make sure you have prepared at least a high level exit plan. If vendor X decides to deprecate a product you are using, have an idea of what it would take to migrate off it.

And the ultimate vendor lock in?

Grant Orchard (https://twitter.com/grantorchard) from VMware made a comment about the ultimate vendor lock in, and nailed it! It is…..drumroll……kids! Hopefully that is one kind of vendor lock in that you do like (I know I do).


What really matters to small business – PaaS vs SaaS vs IaaS

I was on twitter the other day and someone with a server virtualisation background (IaaS if you are talking in a cloud sense) made the comment that small and medium businesses will never use PaaS; IaaS is the only thing that is really relevant to them.

I disagreed with this comment and half forgot about it. Last week I had a friend talking to me about just this scenario;  maybe a little confirmation bias at play here. They went through this journey about six months ago and it was interesting to see the decisions they made and why. I must point out that this is not a Microsoft technology stack story, and they chose some of our competitors for parts of their solutions in order to best meet their business requirements.

Their current situation was:

  • Small business (less than 20 employees) that specialised in art installation systems
  • A lot of repeat customers but also new customers purchasing directly from their website
  • A single server under a desk running a common SMB ERP package, a website (running on WordPress), a file server and Exchange (Windows Small Business Server). To those of us with an Enterprise background this lack of high availability is unimaginable, but the reality is a lot of small businesses run this way
  • Server backup to a USB drive every week, with 3 drives on rotation that were taken to the owner's house. Better rigour than some small organisations
  • The server was already at end of support on the hardware
  • No dedicated IT staff…as expected in an organisation of this size
  • An ad hoc consultant they used for their WordPress development work
  • Website interfaced with the ERP application to process online purchases and to manage inventory

So they started looking at this process from the perspective of what their business objectives were. Some of the things that became clear were:

  • They didn’t want to manage hardware
  • They didn’t want to manage technology (any more than they had to)
  • They didn’t want to manage applications (at least any more than they had to)
  • They wanted Enterprise availability and the ability to scale when the business grew

So where did they end up with their investigation? They chose the below technology options.

 Email and File Storage

Google Apps and Google Drive. They didn’t need the additional functions offered in a similar offering, like Microsoft Office 365, that a lot of Enterprises would require, so this was the most cost effective option for them. Google Drive across the few office workers they had provided more than enough capacity to store, share and protect their files.

It is very easy to scale a service like this, as they just add more users, with the added bonus that they can access it from anywhere on any device.

Winner: SaaS

Accounting/ERP Package

To the despair of the rather old-school bookkeeper they had, they moved away from their previous accounts package and selected Xero. Xero are an interesting company based out of New Zealand that have built a cloud ERP system from the ground up for small and medium businesses. You can read more information about Xero here.

The vendor they were using at the time had a SaaS offering as well, but it didn’t provide an easily addressable API stack for their website, so it was out; hence the move to Xero.

So what were their challenges with moving ERP applications? It turns out they were retraining the bookkeeper and importing their records, both things they managed to do in the space of a week. Like the previous selection, this also allowed them to access their ERP system from any location on any device, something never previously available to them (unless they RDP’d to the server under the desk).

Winner: SaaS

 Website

So the final step was their website. They wanted to make sure it was highly available, low cost and took as little administrative effort as possible.

They looked at some IaaS based services first, including AWS, Azure and some smaller Australian IaaS providers. All these solutions could do what they wanted, but they wanted to keep their costs and management effort down.

When they spoke to their WordPress consultant, Azure Websites was recommended as an option. They looked at it and found it matched their requirements. They provisioned a site on a 30-day trial and tested the service. Within the space of a few hours (with the help of their WordPress consultant) they had provisioned a website running WordPress, copied over the customised WordPress content, tested it, and had a fully functional WordPress website.

Their consultant spent a bit more time on interfacing with the Xero API stack and changing some of the website workflows, a step that would have had to be done in any scenario they had chosen due to the change of ERP system. They then moved it to production and have been running it since.
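
I never saw the consultant's code, but conceptually that integration work boils down to pushing website orders into the cloud ERP over HTTPS. Here is a minimal Python sketch of the idea; the base URL, endpoint, payload fields and token handling are placeholders of my own for illustration, not a description of Xero's actual API.

```python
import requests

# Illustrative only: endpoint shape, headers and payload fields are placeholders,
# not the real ERP API. A real integration would also need OAuth consent/refresh.
API_BASE = "https://api.example-erp.com/v2"      # hypothetical base URL
ACCESS_TOKEN = "..."                             # obtained out of band

def create_sales_order(customer_name, items):
    """Push a web-shop purchase into the cloud ERP as a sales order."""
    payload = {
        "customer": {"name": customer_name},
        "lines": [
            {"sku": sku, "quantity": qty, "unit_price": price}
            for sku, qty, price in items
        ],
    }
    resp = requests.post(
        f"{API_BASE}/sales-orders",
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()   # e.g. the created order's ID for the website to store

# Example: an order placed on the WordPress site
# create_sales_order("Jane Citizen", [("ART-RAIL-2M", 3, 49.95)])
```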

In the interests of fairness, I also believe AWS offer a PaaS website offering (Elastic Beanstalk), but I’m not sure if the company looked at this option or not.

Winner: PaaS

 Conclusion

So here we see an example of what matters to a small business: their business needs, not technology, and most definitely not whether they have a preference for IaaS, PaaS or SaaS. They chose what worked for their requirements and required low touch from an ongoing technology perspective. This is pretty powerful when you consider that 96% of all businesses in Australia are small businesses and have similar challenges (source here).

There were some further comments on twitter from others with an IaaS focus that it is “all about the application”. As we see from this example this is patently wrong: companies don’t care about the application; they only care about what they need to run their businesses successfully. The applications are just tools they utilise to help achieve this outcome.

What are your thoughts? Agree or disagree, I would love to hear them.

Microsoft Azure StorSimple – Cloud Integrated Storage AND DR in the Cloud

StorSimple, since acquisition by Microsoft at the end of 2012, has been a great success so far with over a thousand customers now adopting the technology world-wide to reduce their financial challenges around data storage, data management and data growth.

It is with a lot of excitement that we have today launched our next version, Microsoft Azure StorSimple. This brings a range of advances in the platform, management and DR capabilities. The most recent IDC/EMC Digital Universe study puts data growth at 40% year over year, on average. This ever-expanding data growth means hardware upgrades that never stop and the threat of going over a capacity cliff, as well as increasing software licensing costs, administration effort and facilities costs.

The Microsoft Azure StorSimple Solution

StorSimple continues to be an on-premises storage array that integrates seamlessly with existing physical and virtual machines. It can be put into production in a matter of hours and, with no application modification, allows you to start leveraging the cloud for storing cold data, backing up data via cloud snapshots and as a location to retrieve data from in the event of a DR.

Microsoft Azure StorSimple also provides intelligence around how it treats data. As per my previous blog posts on StorSimple, Microsoft Azure StorSimple starts data off in SSD and then intelligently tiers data, at a block level, between SSD, SAS and the cloud; but it also provides inline deduplication, compression and (prior to moving data to the cloud) encryption.

In summary the StorSimple solution provides the below, without the need for any application modification or additional software.

 

  • Highly available primary storage array
  • Optimisation of data via in-line deduplication and compression
  • Tiering (and encryption) of cold data to the Cloud
  • Backing up data to the cloud via cloud snapshots
  • Ability to recover data from the cloud, for DR, from anywhere

Platform Changes

The new Microsoft Azure StorSimple platform, labelled the 8000 series, introduces three new models and changes to the management. So what do these new releases bring us?

  • 10GbE interfaces – this is a feature which has been requested numerous times by our customers
  • Unified management of multiple appliances, via the Microsoft Azure StorSimple manager
  • Increased performance – 2.5 times increase in internet bandwidth capabilities
  • Higher capacity hybrid storage arrays
    • The 8100 comes with 15TB of usable capacity (before dedupe or compression, as well as flash optimisation) and can address up to 200TB of Cloud storage
    • The 8600 comes with 40TB of usable capacity (before dedupe or compression, as well as flash optimisation) and can address up to 500TB of Cloud storage
  • A virtual appliance available as a service in Azure that can access data that has been uploaded by an 8000 series array

Use Cases

With a new platform comes expanded use cases. Previously the main use cases were file, archive and SharePoint (and other document management products).

With the 8000 series we now include support for SQL Server and virtual machine use cases. Also, thanks to the virtual appliance, we can now start running some Azure-specific use cases as well, using your on-premises data. This includes DR (and DR testing), cloud applications and dev/test workloads.

Disaster Recovery and IT agility

So now you have a copy of your data in Azure via cloud snapshot…what can you do next? This is one of the best parts of the new release: the ability to access your data within Azure. How can this be used?

Disaster Recovery

Having a dedicated DR site, especially for small and mid-size organisations, is a huge financial strain. The ability to access and present your data inside Azure means that customers no longer need a secondary site and storage array. They can present the data up to the virtual appliance and very quickly access their data from their last cloud snapshot.

Development and Testing

Data has mass and moving it to compute can be slow. With the StorSimple solution a copy of your data already resides in Azure, so you have the ability to quickly spin up VMs in Azure IaaS and have them access your data for testing and development. You could potentially have a full test/dev environment up and running, in the cloud, in a matter of minutes or hours, rather than days or weeks.

On Demand Infrastructure

There is no need to provision infrastructure in advance; with a full copy of all your enterprise data available in Azure you can spin up VMs for projects and special requirements quickly and easily.

 

Other information

Some other great blogs, articles and information can be found below.

 

StorSimple home page http://microsoft.com/storsimple

Scott Guthrie’s blog http://weblogs.asp.net/scottgu

The StorSimple blog http://blogs.technet.com/b/cis/

Takeshi Numoto’s blog http://blogs.technet.com/b/server-cloud/archive/2014/07/09/introducing-microsoft-azure-storsimple.aspx

 

News Articles

http://www.theregister.co.uk/2014/07/09/microsoft_storsimple_azure/

http://www.storagereview.com/microsoft_azure_storsimple_8100_and_8600_storsimple_manager_and_virtual_appliance_announced

http://www.theinquirer.net/inquirer/news/2354415/microsoft-lifts-the-lid-on-the-azure-storsimple-8000-series

http://windowsitpro.com/azure/storsimple-renamed-azure-storsimple-gets-big-refresh-august

http://www.zdnet.com/microsoft-to-roll-out-new-azure-storsimple-cloud-storage-arrays-7000031407/

http://searchcloudstorage.techtarget.com/news/2240224202/Microsoft-StorSimple-tightens-integration-with-Azure

http://www.pcworld.idg.com.au/article/549676/microsoft_debuts_cloud_storage_service_enterprises/

http://blogs.wsj.com/cio/2014/07/09/microsoft-azure-moves-deeper-into-hybrid-cloud/

http://techcrunch.com/2014/07/09/microsoft-updates-azure-with-2-new-u-s-regions-improved-hybrid-storage-solution-and-more/

http://rcpmag.com/articles/2014/07/09/microsoft-preps-storsimple-azure-storage.aspx

StorSimple Deep Dive 3 – Backup, restore and DR from Azure

Previously on StorSimple deep dive

So we’ve talked about the underlying hardware solution that StorSimple provides in my first deep dive here, and then we moved on to the storage efficiencies, life of a block and cloud integration here. In this third, and for now final, deep dive post I’m going to touch on how StorSimple provides the mechanism to efficiently back up, restore and even provide a DR solution without the need for secondary or tertiary sites and data centres.

 

Fingerprints, chunks and SnapShots…we know where your blocks live

StorSimple fingerprints, chunks and tracks all blocks that are written to the appliance. This allows it to take very efficient local snapshots that take up no space and have no performance impact. It doesn’t have to go out and read through all the metadata to work out what blocks or files have changed, like a traditional backup. Reading through file information is one of the worst enemies of backing up unstructured data; if you have millions of files (which is common) it can take hours just to work out what files have changed before you back up a single file. So StorSimple can efficiently give you local points in time that are near instant to back up and restore from.
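
To make that concrete, here is a toy Python model of the idea (my own illustration, not StorSimple’s code, and assuming SHA-256 fingerprints purely for the example): blocks are fingerprinted as they are written, so a local snapshot is nothing more than a copy of the current block map.

```python
import hashlib

class BlockStore:
    """Toy model of fingerprint-based block tracking (illustration only)."""

    def __init__(self, block_size=64 * 1024):
        self.block_size = block_size
        self.chunks = {}        # fingerprint -> block bytes (deduplicated store)
        self.volume_map = []    # ordered fingerprints describing the live volume
        self.snapshots = {}     # snapshot name -> copy of the volume map

    def write_block(self, index, data):
        # Fingerprint on write, so change tracking is effectively free later.
        fp = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(fp, data)
        while len(self.volume_map) <= index:
            self.volume_map.append(None)
        self.volume_map[index] = fp

    def snapshot(self, name):
        # A local snapshot is just a copy of the block map: no data is read,
        # scanned or copied, which is why it is near instant and space free.
        self.snapshots[name] = list(self.volume_map)
```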

 

I disagree! A snapshot doesn’t count!

However it is my opinion that a snapshot is not a backup…so why the heck is my blog title about backup, restore and DR?! It is because I also believe that a snapshot is a backup if it is replicated. StorSimple provides another option for snapshots called “Cloud Snapshots”. This takes a copy of all the data on a single volume, or multiple volumes, up to Windows Azure, including all the metadata. Obviously the first cloud snapshot is the whole data set; we make this easier as all the data is deduplicated, compressed and protected with AES 256-bit encryption. After this first baseline only unique block changes, which are optimised with dedupe and compression and then encrypted, are taken up to Windows Azure. These cloud snapshots are policy based and can be kept for hours, days, weeks, months or years as required.
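
Continuing the toy model from above, a cloud snapshot then only has to push the chunks the cloud has never seen. The compression step is real in this sketch, while the AES-256 encryption is only noted in a comment; again, this is my own illustration of the concept, not the actual implementation.

```python
import zlib

def cloud_snapshot(store, cloud_chunks):
    """
    Sketch of an incremental cloud snapshot against the BlockStore above.
    cloud_chunks: dict of fingerprint -> blob already uploaded to cloud storage.
    Only chunks the cloud has never seen are uploaded, compressed first;
    a real system would also AES-256 encrypt each blob before upload.
    """
    manifest = list(store.volume_map)
    new_fps = [fp for fp in set(manifest) if fp and fp not in cloud_chunks]
    for fp in new_fps:
        blob = zlib.compress(store.chunks[fp])   # compress (encryption omitted here)
        cloud_chunks[fp] = blob                  # stand-in for a blob PUT to Azure storage
    return {"manifest": manifest, "uploaded": len(new_fps)}
```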

Data is offsite, multiple points in time are available and backup windows are generally reduced. Once data gets into Azure we further protect your information. Azure storage, by default, is configured with geo-replication turned on. This means that three copies of any block of data are kept in the primary data centre and three copies are also kept in the partner data centre; even if you turn geo-replication off you still have three copies of your data sitting in the primary data centre. This means at least three, but generally six, copies of all data reside in Azure.

So we have simple, efficient and policy-driven snapshots, with all snapshot data replicated six times across different geographies…I think I can safely call this a backup, and probably one with more resiliency than most legacy tape or local disk based backup systems customers are using now.

 

And now how do I restore my data?

So the scenario is that someone requires some files back from months ago, or even a year ago. It is maybe a few GB at most, but we still want to get it back quickly and easily, and the user also wants to search the directory structure for some relevant information.

StorSimple offers the ability to connect to any of the cloud snapshots, create a clone and mount it to a server. This clone will not pull down any data, apart from any metadata which is not already on the StorSimple appliance, so it is extremely efficient. All data however will appear local, and you can browse the directory structure and only copy back the files that are required…and the blocks that constitute these files are deduplicated and compressed, AND only the blocks which are unique and not already located on the StorSimple appliance need to be copied back.

The process is as simple as going to your cloud snapshots in the management console, selecting the point in time you wish to recover and selecting “clone”. You will then be prompted for a mount point or drive letter and within seconds the drive is mounted up. Couldn’t be simpler!
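
Conceptually the clone behaves like the sketch below (same caveat as the earlier sketches: an illustration of the idea, not the product’s code). The manifest of fingerprints is the only thing downloaded up front, and block data is only pulled when something actually reads it.

```python
import zlib

class SnapshotClone:
    """Toy model of mounting a cloud snapshot clone: only metadata comes down
    up front; block data is fetched (and decompressed) lazily on first read."""

    def __init__(self, manifest, cloud_chunks, local_chunks):
        self.manifest = manifest          # fingerprints only: tiny, fast to download
        self.cloud_chunks = cloud_chunks  # compressed blobs sitting in cloud storage
        self.local_chunks = local_chunks  # blocks already on the appliance

    def read_block(self, index):
        fp = self.manifest[index]
        if fp not in self.local_chunks:
            # Only now does any data cross the wire, and only this one block.
            self.local_chunks[fp] = zlib.decompress(self.cloud_chunks[fp])
        return self.local_chunks[fp]
```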

[Screenshots: cloning a cloud snapshot and mounting it from the management console]

 

How does this provide a DR solution?

Cloud Snapshots can be set up with an RPO as low as 15 minutes (rate of change and bandwidth dependent). In the event of a DR where your primary data centre is a smoking hole in the ground, or washed away in a cataclysmic tidal wave/tsunami, another StorSimple appliance can connect up to any one of those cloud snapshots and mount it. All it needs is an internet connection, the Azure storage blob credentials and the encryption key that was used to encrypt the data.

The StorSimple appliance then only pulls down the metadata, which is a very small subset and very quick to download, and bingo, all your data and files can be presented and appear to be local. Then, as users start opening their old photos of their cats to create new memes, the optimised blocks are downloaded and cached locally on the StorSimple appliance. In this fashion the appliance starts re-caching the hot data which is requested and doesn’t have to pull down data which is cold as well.

My personal opinion is that we will only see enhancements to this solution; imagine being able to do this DR scenario entirely from within Windows Azure. Suddenly a physical DR site and hardware no longer matter…now that would be cool.

 

Extra benefit…faster tiering to the cloud!

In the previous deep dive here I talk about how StorSimple tiers data to the cloud based on its weighted storage layout algorithm, but tries to keep as much data as possible locally so it provides optimal performance for hot and warm data. In the event that you want to copy a large amount of data to a StorSimple appliance, more than the available space left on the appliance, you won’t have to wait for data to be tiered to the cloud if you have been taking cloud snapshots.

Where there is already a copy of a block of data in the cloud, from a cloud snapshot, and it has to be tiered off, only the metadata changes to point to the block in the cloud and no data has to be uploaded, letting you have your cake and eat it too. You get the efficiencies of tiering cold data to the cloud but still have the ability to copy large amounts of data to the appliance without large data transfers immediately following the process.

Have your say

Agree or disagree with me about a snapshot being a backup? Don’t like me using stupid sayings? Give your opinion below.

Creating a Virtual Machine on Azure IaaS

Azure offers a whole range of features, which you may or may not already have looked at. I shared a document a few days back which shows some examples of application architecture which can be found here https://cdrants.wordpress.com/2013/06/17/a-picture-is-worth-a-thousand-words/

Since I set up my MSDN account a few weeks ago I’ve been using it as my test lab to try out some things, including re-acquainting myself with Windows Server clustering, but with 2012, which is brand spanking new to me. I thought I’d do a quick post to show just how simple it is to create a virtual machine within Azure with the newly available IaaS functionality. It is an amazingly simple process and you can go from logging in to Azure to connecting to your new VM in under 5 minutes.

Sign up

Set up your Azure account. You can do this with a credit card, and get a 60 day free trial and then pay as you go, or purchase an MSDN subscription and get a significant amount of Azure credits to utilise each month.

Creating VM

Once you have created your Azure account, log in to the Portal, which you can see below. This is pretty straightforward to use and you can see some of the services available on the left-hand side. In this case select “Virtual Machines”. You can see a Windows VM I created earlier is running right now.

[Screenshot: Azure portal showing the Virtual Machines list]

A whole range of default VMs are available, as you can see below. This includes Windows and Linux VMs and many of them have been configured with the application service you require, even SharePoint Server 2013. For the new VM I’ve created I’m selecting Ubuntu server 13.04 as I haven’t touched Ubuntu in a few years and wanted to see what has changed.

[Screenshots: selecting a VM image from the gallery]

Now I select my settings around version release date, host name, certificate, local credentials, size of VM (memory/CPU), storage pool to use (including data centre region), public DNS record and whether I want to make this part of an availability set.

[Screenshots: VM configuration screens]

This is it. Now it kicks off the provisioning of the virtual machine.

[Screenshot: provisioning status]

Once created at any stage you can go in and change the performance configuration, disk attached, view performance metrics and shut down/start up the VM.

[Screenshot: the VM’s dashboard in the portal]

My VM is created, now how do I connect and start using it?!

For my Ubuntu VM I use Putty, connect to the public DNS record created by Azure and then simply log in and get working. If this was a Windows server I would use RDP and connect to the DNS name and RDP port specified by Azure. Could not be easier!
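
If you prefer scripting the connection rather than clicking through Putty, something like this Python/paramiko sketch does the same job. The host name, username and key path below are placeholders for whatever you configured when provisioning the VM.

```python
import os
import paramiko

# Hypothetical values: substitute the public DNS name and the credentials
# you chose when you provisioned the VM.
HOST = "myubuntuvm.cloudapp.net"
USER = "azureuser"
KEY_FILE = "~/.ssh/id_rsa"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a test lab
client.connect(HOST, username=USER, key_filename=os.path.expanduser(KEY_FILE))

# Run a quick command to confirm we are on the new Ubuntu box.
stdin, stdout, stderr = client.exec_command("lsb_release -a")
print(stdout.read().decode())
client.close()
```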

[Screenshot: Putty session connected to the Ubuntu VM]

A picture is worth a thousand words….

Sometimes a visual helps to understand just how things work, and this Windows Azure poster is no exception! If you are curious about the different services and examples of architecture that you can get within Azure, this is a great place to start before diving deeper. The screen grab below shows part of the poster, but you can download the full .pdf file at http://www.microsoft.com/en-us/download/details.aspx?id=35473

[Image: excerpt from the Windows Azure poster]

StorSimple Deep Dive 2 – Data Efficiencies

It has been an exciting few weeks since my first deep dive post into StorSimple last month, and a very busy end to the Microsoft year! It has been great seeing the interest from customers when we start discussions about being able to leverage the cloud without having to change their application architecture.

In my first StorSimple deep dive, found here, I talked about StorSimple from a hardware and platform perspective. In this post I want to talk about the efficiencies that StorSimple offers.

What do you mean by efficiency?

When you talk about efficiencies it can mean many things to many different people. In this case I’m talking about the way StorSimple optimises data for performance and capacity, and moves data between tiers based on a smart algorithmic approach where it views data at a block (sub LUN/file) level. That is really good…but of course other companies do this as well, so what differentiates StorSimple? The biggest differentiator here is that the lowest tier isn’t SATA/NL-SAS but the cloud. And when we are talking about low cost, highly available storage, it is hard to beat the economies of scale that the cloud can offer, and of course it goes without saying <AzurePlug> that Windows Azure is the best example of this, with our price structure the same for any DC in the world and the fact we geo-replicate all data by default </AzurePlug>.

How much of my data is really hot?

When data is created it is hot and will be referenced quite often; however, after some time, this data generally grows cold very quickly. You only want to look at those old photos of your cat, when you have a new meme to create, on rare occasions. Generally anything north of 85% of the data can be cold. Keeping this data unoptimised and on local disk is obviously not the most effective use of your technology budget.

Life of a Block with StorSimple

So into the deep dive bit! I’m going to talk about the life of a block and how we treat this block, from a StorSimple perspective. I’ve done a series of whiteboards…or drawn on my screen with <MS_Plug>my touch-enabled Windows 8 laptop</MS_Plug> to explain how we treat a block (a rough code sketch of the demotion path also follows the list below). Block sizes on StorSimple are variable, and we generally select the block size based on the kind of workload running on a specific volume (LUN). StorSimple deduplicates, compresses and encrypts data before it is tiered off to the cloud; this generally provides between 3x and 20x space savings, dependent on workload.

  1. A block is first written into NVRAM (battery-backed DRAM that is replicated between controllers).
  2. The blocks are then written down onto a linear tier, which is eMLC SSD drives, so low latency and high IO.
  3. Blocks of data generally don’t stay on the linear tier for long, unless they are subject to continuous IO requests. Blocks are taken, near immediately, down to the dedupe tier. This data remains on SSD, with the low latency and performance you expect, but the data is deduplicated in line, at a block level, before arriving here, providing significant space savings.
  4. As the blocks start to cool they are then taken to a SAS tier and compressed in line on the way down. This all happens at a block (sub file/LUN) level, so if a VM system file, for example, was located on StorSimple, the parts of that file which are hot remain on SSD while the majority of the capacity that is infrequently accessed will be taken to SAS.
  5. As the StorSimple appliance starts to use up its local capacity it will then encrypt the coldest blocks of data and tier them off to the cloud using RESTful APIs. When this is Windows Azure it means three copies of the data will be kept in the primary data centre and three copies of the data will be in the partner data centre by default. Suddenly you can use the cost and availability efficiencies of the cloud, without having to change your application, operating system or, most importantly, the way you view your files. The data is encrypted with AES 256-bit encryption and the key is specified by the customer.
  6. Then, in the event that a block is called back from the cloud, it is a seamless process. The metadata, which is always stored locally on the appliance, knows exactly which block of data is required. StorSimple will make a RESTful API call over HTTPS to bring the data back to the appliance in a very efficient manner, as the data is compressed and deduplicated and there is no need to search for the location of the block. Not only will the block you require be recalled, but other corresponding blocks will be pre-fetched for further performance optimisation based on the read pattern. In my whiteboard example, block “F” is recalled to the local appliance and stored on the deduped SSD tier, as it is now hot data once more. This back-end process is totally transparent and the only thing that will occur is slightly higher latency on the blocks which have been tiered to the cloud.
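
If it helps to see the demotion path as pseudo-logic, here is a rough Python sketch. The tiers mirror the steps above, but the temperature thresholds are invented for illustration and are not StorSimple’s actual values.

```python
from enum import Enum

class Tier(Enum):
    SSD_LINEAR = 1   # raw landing zone, lowest latency
    SSD_DEDUPE = 2   # still SSD, but deduplicated
    SAS = 3          # compressed on the way down
    CLOUD = 4        # deduplicated, compressed, then encrypted before upload

def next_tier(block):
    """Rough model of the demotion path a cooling block follows.
    'block' is a dict with a 'tier' and a 'temperature' score in [0, 1]
    (a weight like the BlockRank sketch later in this post would produce);
    the thresholds below are made-up numbers purely for illustration."""
    if block["tier"] == Tier.SSD_LINEAR:
        return Tier.SSD_DEDUPE                      # happens near immediately
    if block["tier"] == Tier.SSD_DEDUPE and block["temperature"] < 0.5:
        return Tier.SAS                             # compress as it cools
    if block["tier"] == Tier.SAS and block["temperature"] < 0.1:
        return Tier.CLOUD                           # encrypt, then tier off via REST
    return block["tier"]
```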

How do we know who is insane…er, what data is cold, and how?

Easy! Whoever has this stamp is insane…


Now that my obsession with the Simpsons is addressed, how do we decide what blocks are cold, when they are tiered, and what happens, from a performance perspective, when blocks of data need to be accessed and they are located with a cloud provider?

StorSimple uses a Weighted Storage Layout (WSL). This goes through the below process (a rough sketch of the weighting follows the list):

  • BlockRank – all volume data is dynamically broken into “chunks”, analysed and weighted based on frequency of use, age and other factors
  • Frequently used data remains on SSD for fast access
  • Less frequently used data is compressed and stored on SAS
  • As the appliance starts to fill up, optimised data is encrypted and tiered to the cloud
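
As promised, here is a rough sketch of what a recency/frequency/age weighting could look like. The factors match the bullet points above, but the formula and constants are invented purely for the example and are not the actual BlockRank algorithm.

```python
import time

def block_weight(last_access, access_count, created, now=None):
    """Illustrative weighting only: recency, frequency and age combined into a
    single score. Higher means hotter, so the block stays on SSD longer."""
    now = now if now is not None else time.time()
    recency = 1.0 / (1.0 + (now - last_access) / 3600.0)    # decays over hours
    frequency = min(access_count / 100.0, 1.0)              # saturates at 100 accesses
    age = 1.0 / (1.0 + (now - created) / 86400.0)           # older data scores lower
    return 0.5 * recency + 0.4 * frequency + 0.1 * age

# Blocks with the lowest weight are the first candidates to be compressed onto SAS
# and, as the appliance fills, to be encrypted and tiered to the cloud.
```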

But what about what I think?!

Automation is great, but what if I want to manually influence the priority with which data sets are tiered to the cloud? StorSimple offers a solution for this as well. You can manually specify a volume as “local preferred” so it is the last data set that will be tiered to the cloud, and it is only tiered off if all other data sets have been tiered off and the appliance is reaching its local capacity.

Examples of data sets you might set to prefer local are:

  • Log Files
  • DataBase Files
  • VM System files