Category: Azure

Microsoft Azure StorSimple – Cloud Integrated Storage AND DR in the Cloud

Since its acquisition by Microsoft at the end of 2012, StorSimple has been a great success, with over a thousand customers world-wide now adopting the technology to reduce the financial challenges of data storage, data management and data growth.

It is with a lot of excitement that we have today launched our next version, Microsoft Azure StorSimple. This brings a range of advances in the platform, management and DR capabilities. The most recent IDC/EMC Digital Universe study puts data growth at 40% year over year, on average. That ever-expanding growth means hardware upgrades that never stop, the threat of going over a capacity cliff, and rising software licensing costs, administration effort and facilities costs.

The Microsoft Azure StorSimple Solution

StorSimple continues to be an on-premises storage array that integrates seamlessly with existing physical and virtual machines. It can be put into production in a matter of hours and, with no application modification, allows you to start leveraging the cloud for storing cold data, backing up data via cloud snapshots, and as a location to retrieve data from in the event of a DR.

Microsoft Azure StorSimple also provides intelligence around how it treats data. As per my previous blog posts on StorSimple, Microsoft Azure StorSimple starts data off in SSD and then intelligently tiers data, at a block level, between SSD, SAS and the cloud; but it also provides inline deduplication, compression and (prior to moving data to the cloud) encryption.

In summary, the StorSimple solution provides the following, without the need for any application modification or additional software:

 

  • Highly available primary storage array
  • Optimisation of data via inline deduplication and compression
  • Tiering (and encryption) of cold data to the Cloud
  • Backing up data to the cloud via cloud snapshots
  • Ability to recover data from the cloud, for DR, from anywhere

Platform Changes

The new Microsoft Azure StorSimple platform, labelled the 8000 series, introduces three new models and changes to the management. So what do these new releases bring us?

  • 10GbE interfaces – this is a feature which has been requested numerous times by our customers
  • Unified management of multiple appliances, via the Microsoft Azure StorSimple manager
  • Increased performance – 2.5 times increase in internet bandwidth capabilities
  • Higher capacity hybrid storage arrays
    • The 8100 comes with 15TB of usable capacity (before dedupe, compression and flash optimisation) and can address up to 200TB of cloud storage
    • The 8600 comes with 40TB of usable capacity (before dedupe, compression and flash optimisation) and can address up to 500TB of cloud storage
  • A virtual appliance available as a service in Azure that can access data that has been uploaded by an 8000 series array

Use Cases

With a new platform comes expanded use cases. Previously the main use cases were file, archive and SharePoint (and other document management products).

With the 8000 series we now add support for SQL Server and virtual machine use cases. Also, thanks to the virtual appliance, we can now start running some Azure-specific use cases as well, using your data from on-premises. This includes DR (and DR testing), cloud applications and dev/test workloads.

Disaster Recovery and IT agility

So now you have a copy of your data in Azure via cloud snapshot…what can you do next? This is one of the best parts of the new release: the ability to access your data within Azure. How can this be used?

Disaster Recovery

Having a dedicated DR site, especially for small and mid-size organisations, is a huge financial strain. The ability to access and present your data inside Azure means that customers no longer need a secondary site and storage array. They can present the data up to the virtual appliance and very quickly access their data from their last cloud snapshot.

Development and Testing

Data has mass and moving it to compute can be slow. With the StorSimple solution a copy of your data already resides in Azure, so you can quickly spin up VMs in Azure IaaS and have them access your data for testing and development. You could potentially have a full test/dev environment up and running, in the cloud, in a matter of minutes or hours, rather than days or weeks.

On Demand Infrastructure

There is no need to provision infrastructure in advance: with a full copy of all your enterprise data available in Azure, you can spin up VMs for projects and special requirements quickly and easily.

 

Other information

Some other great blogs, articles and information can be found here:

 

StorSimple home page http://microsoft.com/storsimple

Scott Guthrie’s blog http://weblogs.asp.net/scottgu

The StorSimple blog http://blogs.technet.com/b/cis/

Takeshi Numoto’s blog http://blogs.technet.com/b/server-cloud/archive/2014/07/09/introducing-microsoft-azure-storsimple.aspx

 

News Articles

http://www.theregister.co.uk/2014/07/09/microsoft_storsimple_azure/

http://www.storagereview.com/microsoft_azure_storsimple_8100_and_8600_storsimple_manager_and_virtual_appliance_announced

http://www.theinquirer.net/inquirer/news/2354415/microsoft-lifts-the-lid-on-the-azure-storsimple-8000-series

http://windowsitpro.com/azure/storsimple-renamed-azure-storsimple-gets-big-refresh-august

http://www.zdnet.com/microsoft-to-roll-out-new-azure-storsimple-cloud-storage-arrays-7000031407/

http://searchcloudstorage.techtarget.com/news/2240224202/Microsoft-StorSimple-tightens-integration-with-Azure

http://www.pcworld.idg.com.au/article/549676/microsoft_debuts_cloud_storage_service_enterprises/

http://blogs.wsj.com/cio/2014/07/09/microsoft-azure-moves-deeper-into-hybrid-cloud/

http://techcrunch.com/2014/07/09/microsoft-updates-azure-with-2-new-u-s-regions-improved-hybrid-storage-solution-and-more/

http://rcpmag.com/articles/2014/07/09/microsoft-preps-storsimple-azure-storage.aspx


StorSimple Deep Dive 3 – Backup, restore and DR from Azure

Previously on StorSimple deep dive…

So far we've talked about the underlying hardware solution that StorSimple provides in my first deep dive here, and then moved on to the storage efficiencies, life of a block and cloud integration here. In this third and, for now, final deep dive post I'm going to touch on how StorSimple provides the mechanism to efficiently back up, restore and even provide a DR solution without the need for secondary or tertiary sites and data centres.

 

Fingerprints, chunks and snapshots…we know where your blocks live

StorSimple fingerprints, chunks and tracks all blocks that are written to the appliance. This allows it to take very efficient local snapshots that take up no space and have no performance impact. It doesn't have to read through all the metadata to work out which blocks or files have changed, like a traditional backup does. Reading through file information is one of the worst enemies of backing up unstructured data: if you have millions of files (which is common) it can take hours just to work out which files have changed before you back up a single one. So StorSimple can efficiently give you local points in time that are near instant to create and to restore from.
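To make the idea concrete, here is a minimal Python sketch of fingerprint-based change tracking. It is an illustration of the concept only, not StorSimple's actual implementation; the 64KB chunk size and SHA-256 fingerprint are assumptions. Because fingerprints are recorded inline as blocks are written, taking a snapshot never requires walking the file system.

import hashlib

BLOCK_SIZE = 64 * 1024  # assumed chunk size, for illustration only

class BlockTracker:
    """Records block fingerprints at write time, so a snapshot never has to
    rescan the data set to discover what changed."""

    def __init__(self):
        self.changed_since_snapshot = {}   # fingerprint -> block bytes
        self.snapshots = []                # each snapshot is a set of fingerprints

    def write(self, data: bytes):
        # Fingerprint each block inline, as it is written.
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fingerprint = hashlib.sha256(block).hexdigest()
            self.changed_since_snapshot[fingerprint] = block

    def take_snapshot(self):
        # The snapshot is simply the set of fingerprints written since the
        # last one - no directory walk, no per-file metadata reads.
        changed = set(self.changed_since_snapshot)
        self.snapshots.append(changed)
        self.changed_since_snapshot = {}
        return changed

tracker = BlockTracker()
tracker.write(b"some newly written data" * 10000)
print(f"blocks to protect in this snapshot: {len(tracker.take_snapshot())}")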

 

I disagree! A snapshot doesn’t count!

However, it is my opinion that a snapshot is not a backup…so why the heck is my blog title about backup, restore and DR?! It is because I also believe that a snapshot is a backup if it is replicated. StorSimple provides another option for snapshots called "Cloud Snapshots". This takes a copy of all the data on a single volume, or multiple volumes, up to Windows Azure, including all the metadata. Obviously the first cloud snapshot is the whole data set; we make this easier as all the data is deduplicated, compressed and protected with AES 256-bit encryption. After this first baseline, only unique block changes, optimised with dedupe and compression and then encrypted, are taken up to Windows Azure. These cloud snapshots are policy based and can be kept for hours, days, weeks, months or years as required.

Data is offsite, multiple points in time are available and backup windows are generally reduced. Once data gets into Azure we further protect your information. Azure storage, by default, is configured with geo-replication turned on. This means three copies of every block of data are kept in the primary data centre and a further three copies are kept in the partner data centre; even if you turn geo-replication off, you still have three copies of your data sitting in the primary data centre. So at least three, but generally six, copies of all data reside in Azure.

So we have simple, efficient and policy-driven snapshots, with all snapshot data replicated six times across different geographies…I think I can safely call this a backup, and probably one with more resiliency than most legacy tape or local disk-based backup systems customers are using now.

 

And now how do I restore my data?

So the scenario is that someone needs some files back from months ago, or even a year ago. It is maybe a few GB at most, but we still want to get it back quickly and easily, and the user also wants to search the directory structure for some relevant information.

StorSimple offers the ability to connect to any of the cloud snapshots, create a clone and mount it to a server. This clone will not pull down any data, apart from any metadata which is not already on the StorSimple solution, so it is extremely efficient. All data, however, will appear local; you can browse the directory structure and copy back only the files that are required. All the blocks that constitute these files are deduplicated and compressed, and only the blocks which are unique and not already located on the StorSimple solution need to be copied back.

The process is as simple as going to your cloud snapshots in the management console, selecting the point in time you wish to recover and selecting “clone”. You will then be prompted for a mount point or drive letter and within seconds the drive is mounted up. Couldn’t be simpler!

[Screenshots: cloning a cloud snapshot from the management console and mounting the clone]

 

How does this provide a DR solution?

Cloud Snapshots can be set up with an RPO as low as 15 minutes (rate of change and bandwidth dependent). In the event of a DR where your primary data centre is a smoking hole in the ground, or washed away in a cataclysmic tidal wave/tsunami, another StorSimple appliance can connect to any one of those cloud snapshots and mount it up. All it needs is an internet connection, the Azure storage blob credentials and the encryption key that was used to encrypt the data.

The StorSimple solution then pulls down only the metadata, which is a very small subset and very quick to download, and bingo: all your data and files can be presented and appear to be local. Then, as users start opening their cat photos and other images to create memes, the optimised blocks are downloaded and cached locally on the StorSimple appliance. In this fashion the appliance re-caches the hot data that is actually requested and doesn't have to pull down the cold data as well.
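To give a feel for the kind of calls involved, here is a hedged Python sketch using the current azure-storage-blob SDK. It is not what the appliance does internally (and this SDK did not exist in this form at the time); the account, container and blob names are made up. The point is simply that recovery starts with a small metadata object and individual data blocks are only fetched when something actually reads them.

# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

# Placeholder values - in a real DR you would supply the storage account
# credentials and the encryption key configured on the original appliance.
ACCOUNT_URL = "https://mystorageaccount.blob.core.windows.net"
ACCOUNT_KEY = "<storage-account-key>"

service = BlobServiceClient(account_url=ACCOUNT_URL, credential=ACCOUNT_KEY)
container = service.get_container_client("storsimple-snapshots")  # hypothetical name

# Step 1: pull down only the (small) metadata map for the chosen cloud snapshot.
metadata = container.get_blob_client("vol01/snapshot-2014-07-09/metadata.map") \
                    .download_blob().readall()

# Step 2: fetch an individual block only when a user actually opens a file
# that needs it; cold blocks stay in the cloud.
def fetch_block(block_id: str) -> bytes:
    blob = container.get_blob_client(f"vol01/blocks/{block_id}")
    return blob.download_blob().readall()  # still deduped, compressed and encrypted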

My personal opinion is that we will only see enhancements to this solution. Imagine being able to do this DR scenario entirely from within Windows Azure; suddenly a physical DR site and hardware no longer matter…now that would be cool.

 

Extra benefit…faster tiering to the cloud!

In the previous deep dive here I talk about how StorSimple tiers data to the cloud based on its weighted storage layout algorithm, while trying to keep as much data as possible locally so it provides optimal performance for hot and warm data. If you want to copy a large amount of data to a StorSimple appliance, more than the available space left on the appliance, you won't have to wait for data to be moved to the cloud if you have been taking cloud snapshots.

Where there is already a copy of a block in the cloud from a cloud snapshot and that block has to be tiered, only the metadata changes, to point to the block in the cloud, and no data has to be uploaded, letting you have your cake and eat it too. You get the efficiencies of tiering cold data to the cloud and the ability to still copy large amounts of data to the appliance without large data transfers immediately following the process.

Have your say

Agree or disagree with me about a snapshot being a backup? Don't like me using stupid sayings? Give your opinion below.

Creating a Virtual Machine on Azure IaaS

Azure offers a whole range of features, which you may or may not already have looked at. I shared a document a few days back showing some examples of application architecture, which can be found here: https://cdrants.wordpress.com/2013/06/17/a-picture-is-worth-a-thousand-words/

Since I set up my MSDN account a few weeks ago I've been using it as my test lab to try out some things, including re-acquainting myself with Windows Server clustering, but with 2012, which is brand spanking new to me. I thought I'd do a quick post to show just how simple it is to create a virtual machine within Azure with the newly available IaaS functionality. It is an amazingly simple process and you can go from logging in to Azure to connecting to your new VM in under 5 minutes.

Sign up

Set up your Azure account. You can do this with a credit card, getting a 60-day free trial and then pay-as-you-go, or purchase an MSDN subscription and get a significant amount of Azure credit to utilise each month.

Creating VM

Once you have created your Azure account, log in to the portal, which you can see below. It is pretty straightforward to use and you can see some of the services available on the left-hand side. In this case select "Virtual Machines". You can see a Windows VM I created earlier is running right now.

[Screenshot: Azure portal, Virtual Machines view]

A whole range of default VMs is available, as you can see below. This includes Windows and Linux VMs, and many of them come configured with the application service you require, even SharePoint Server 2013. For the new VM I'm selecting Ubuntu Server 13.04, as I haven't touched Ubuntu in a few years and wanted to see what has changed.

[Screenshots: selecting a VM image from the gallery]

Now I select my settings: version release date, host name, certificate, local credentials, size of VM (memory/CPU), storage pool to use (including data centre region), public DNS record and whether I want to make this part of an availability set.

[Screenshots: VM configuration settings]

This is it. Now it kicks off the provisioning of the virtual machine.

Once the VM is created you can, at any stage, go in and change the performance configuration, attach disks, view performance metrics and shut down or start up the VM.

[Screenshot: VM dashboard and performance metrics]

My VM is created, now how do I connect and start using it?!

For my Ubuntu VM I use PuTTY to connect to the public DNS record created by Azure, then simply log in and get working. If this were a Windows server I would use RDP and connect to the DNS name and RDP port specified by Azure. Could not be easier!
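If you'd rather script the connection than use PuTTY, something like the following works from Python. It is a sketch only; the DNS name, username and password are placeholders for whatever you chose when creating the VM, and it assumes SSH is exposed on the default port.

# pip install paramiko
import paramiko

HOST = "myubuntuvm.cloudapp.net"   # placeholder: the public DNS name Azure assigned
USER = "azureuser"                 # placeholder: the local credentials you created
PASSWORD = "<your-password>"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, port=22, username=USER, password=PASSWORD)

# Run a quick command to prove we're on the new Ubuntu box.
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()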

[Screenshot: SSH session connected to the Ubuntu VM]

A picture is worth a thousand words….

Sometimes a visual helps to explain just how things work, and this Windows Azure poster is no exception! If you are curious about the different services and examples of architecture available within Azure, this is a great place to start before diving deeper. The screen grab below shows part of the poster, but you can download the full PDF at http://www.microsoft.com/en-us/download/details.aspx?id=35473

[Image: Windows Azure poster (partial screen grab)]

StorSimple Deep Dive 2 – Data Efficiencies

It's been an exciting few weeks since my first deep dive post into StorSimple last month, and a very busy end to the Microsoft year! It's been great seeing the interest from customers when we start discussions about being able to leverage the cloud without having to change their application architecture.

In my first StorSimple deep dive, found here, I talked about StorSimple from a hardware and platform perspective. In this post I want to talk about the efficiencies that StorSimple offers.

What do you mean by efficiency?

When you talk about efficiencies it can mean many things to many different people. In this case I'm talking about the way StorSimple optimises data for performance and capacity, and moves data between tiers based on a smart algorithmic approach that views data at a block (sub-LUN/file) level. That is really good…but of course other companies do this as well, so what differentiates StorSimple? The biggest differentiator here is that the lowest tier isn't SATA/NL-SAS but the cloud. And when we are talking about low-cost, highly available storage, it is hard to beat the economies of scale that the cloud can offer; and of course it goes without saying <AzurePlug> that Windows Azure is the best example of this, with our price structure the same for any DC in the world and the fact that we geo-replicate all data by default </AzurePlug>.

How much of my data is really hot?

When data is created it is hot and will be referenced quite often; after some time, however, it generally grows cold very quickly. You only want to look at those old photos of your cat on the rare occasion you have a new meme to create. Generally anything north of 85% of the data can be cold. Keeping this data unoptimised and on local disk is obviously not the most effective use of your technology budget.

Life of a Block with StorSimple

So into the deep dive bit! I'm going to talk about the life of a block and how we treat this block, from a StorSimple perspective. I've done a series of whiteboards…or drawn on my screen with <MS_Plug>my touch-enabled Windows 8 laptop</MS_Plug> to explain how we treat a block; a rough code sketch of the optimisation pipeline also follows the numbered list below. Block sizes on StorSimple are variable, and we generally select the block size based on the kind of workload running on a specific volume (LUN). StorSimple deduplicates, compresses and encrypts data before it is tiered off to the cloud; this generally provides between 3x and 20x space savings, dependent on workload.

  1. A block is first written into NVRAM (battery-backed DRAM that is replicated between controllers).
  2. The blocks are then written down to the linear tier, which is eMLC SSD drives, so low latency and high IO.
  3. Blocks of data generally don't stay on the linear tier for long, unless they are subject to continuous IO requests. Blocks are taken, near immediately, down to the dedupe tier. This data remains on SSD, with the low latency and performance you expect, but it is deduplicated inline at a block level before arriving here, providing significant space savings.
  4. As the blocks start to cool they are taken to a SAS tier and compressed inline on the way down. This all happens at a block (sub-file/LUN) level, so if a VM system file, for example, was located on StorSimple, the parts of it which are hot remain on SSD while the majority of the capacity that is infrequently accessed is taken to SAS.
  5. As the StorSimple appliance starts to use up its local capacity it will encrypt the coldest blocks of data and tier them off to the cloud using RESTful APIs. When this is Windows Azure it means three copies of the data will be kept in the primary data centre and three copies in the partner data centre by default. Suddenly you can use the cost and availability efficiencies of the cloud without having to change your application, operating system or, most importantly, the way you view your files. The data is encrypted with AES 256-bit encryption and the private key is specified by the customer.
  6. In the event that a block is called back from the cloud it is a seamless process. The metadata, which is always stored locally on the appliance, knows exactly which block of data is required. StorSimple makes a RESTful API call over HTTPS to bring the data back to the appliance in a very efficient manner, as the data is compressed and deduplicated and there is no need to search for the location of the block. Not only will the block you require be recalled, but other corresponding blocks will be pre-fetched for further performance optimisation based on the read pattern. In this example I've shown block "F" being recalled to the local appliance; this data is stored on the deduped SSD tier as it is now hot data once more. This back-end process is totally transparent and the only thing that will occur is slightly higher latency on the blocks which have been tiered to the cloud.
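To make the ordering concrete, here is a minimal Python sketch of that optimisation pipeline: fingerprint for dedupe, then compress, then encrypt with AES-256 before anything leaves the appliance. It is an illustration only, not StorSimple's code; the chunk size, hash and cipher mode are assumptions, and the upload itself is left as a placeholder.

# pip install cryptography
import hashlib
import os
import zlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK = 64 * 1024                          # assumed block size, for illustration
key = AESGCM.generate_key(bit_length=256)  # stands in for the customer-held AES-256 key
aesgcm = AESGCM(key)
seen = set()                               # fingerprints of blocks already stored

def upload_to_cloud(block_id: str, payload: bytes):
    pass  # placeholder for the HTTPS PUT to blob storage

def optimise_and_tier(data: bytes):
    """Dedupe -> compress -> encrypt each block, in that order."""
    for i in range(0, len(data), CHUNK):
        block = data[i:i + CHUNK]
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint in seen:            # duplicate block: only metadata changes
            continue
        seen.add(fingerprint)
        compressed = zlib.compress(block)
        nonce = os.urandom(12)
        ciphertext = aesgcm.encrypt(nonce, compressed, None)
        upload_to_cloud(fingerprint, nonce + ciphertext)

optimise_and_tier(b"example data " * 50000)
print(f"unique blocks tiered: {len(seen)}")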

How do we know who is insane…I mean, what is cold…and how?

Easy! Whoever has this stamp is insane…

[Image: the "INSANE" stamp from The Simpsons]

Now that my obsession with the Simpsons is addressed: how do we decide which blocks are cold, when they are tiered, and what happens, from a performance perspective, when blocks of data that live with a cloud provider need to be accessed?

StorSimple uses a Weighted Storage Layout (WSL), which goes through the process below (a rough sketch of the ranking follows the list):

  • BlockRank – All volume data is dynamically broken into “chunks”, analyzed and weighted based on frequency of use, age, and other factors
  • Frequently used data remains on SSD for fast access
  • Less frequently used data is compressed and stored on SAS
  • As the appliance starts to fill up, optimised data is encrypted and tiered to the cloud
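Here is a rough sketch of how such a weighting could look, just to make the idea concrete. It is not the actual BlockRank algorithm (the real weights and factors are not public); it simply scores blocks on frequency and recency of access and picks the coldest ones to tier once the appliance passes an assumed capacity threshold.

import time

CAPACITY_THRESHOLD = 0.80   # assumed: start tiering once the appliance is 80% full

class Block:
    def __init__(self, block_id):
        self.block_id = block_id
        self.access_count = 0
        self.last_access = time.time()

    def touch(self):
        self.access_count += 1
        self.last_access = time.time()

    def weight(self, now):
        # Hypothetical weighting: frequent and recent access keeps a block hot.
        age_seconds = now - self.last_access
        return self.access_count / (1.0 + age_seconds)

def blocks_to_tier(blocks, used_fraction):
    """Return the coldest blocks to encrypt and push to the cloud."""
    if used_fraction < CAPACITY_THRESHOLD:
        return []
    now = time.time()
    ranked = sorted(blocks, key=lambda b: b.weight(now))   # coldest first
    batch = max(1, len(ranked) // 10)                      # assumed 10% eviction batch
    return ranked[:batch]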

But what about what I think?!

Automation is great, but what if I want to manually influence the priority with which data sets are tiered to the cloud? StorSimple offers a solution for this as well. You can manually set a volume to "local preferred" so it is the last data set to be tiered to the cloud, and it is only tiered off if all other data sets have already been tiered and the local capacity of the appliance is nearly exhausted.

Examples of data sets you might set to prefer local are:

  • Log files
  • Database files
  • VM system files

No more data sovereignty blues…Azure coming to Australia

“Rest “Azure-d” we’re coming Down Under!
We’re excited to announce that we are expanding Windows Azure with a new region for Australia. This will allow us to deliver all Windows Azure services from within Australia, to make the cloud work even better for you!
The new Windows Azure Australia Region will be made up of two sub-regions in New South Wales and Victoria. These two locations will be geo-redundant, letting you store and back up data across the two sites. We know you’ve been asking for it, and we look forward to making this available to all Windows Azure users.”

With today's announcement of Windows Azure regions for Australia, customers looking at true cloud services have another option where data can reside in Australia; this is particularly relevant for those with concerns about where their data resides or about latency.

In my opinion this is a good thing for Australian consumers: where there is competition, the consumer wins through price and innovation. It also helps me when talking about Cloud Integrated Storage (CIS) with customers.

Data Sovereignty

With StorSimple we obfuscate and encrypt any data before it goes into Azure, with the customer having sole knowledge and ownership of the private keys to decrypt it, but there are still many instances of customers insisting that data remain on Australian soil. There are different driving factors behind this thought process, some of which I believe to be valid and some not, but it is still a hurdle we face. With local regions on the way, this challenge also goes away.

Network Cost and Performance

Data sovereignty is not the only thing that having local DCs helps with; it also helps with network performance (latency) and potentially cost. Suddenly it is possible to get below 100ms latency from anywhere in the country, and far less if you are located in Sydney or Melbourne. It also brings about the possibility of peering agreements with network service providers, including Telstra, AARNet, Pipe and others. I'm not saying when this will happen, or even that it will happen, as I don't know, but it is a real possibility with DCs that are onshore.

Azure vs competition*

Since starting with Microsoft a little over two months ago I've been more impressed with every new feature and piece of functionality we can provide with Azure. When I look at what we provide with Azure for Australian-hosted customers I see some clear value propositions:

1) Cost – other cloud providers with Australian DCs will charge different usage rates depending on which Geo you use (at least when I checked last week). With Azure we charge the same rate around the world, in US dollars, no matter which Geo you select.

2) Protection and redundancy – Azure regions are always released in pairs…and for a very simple reason. Everything in Windows Azure Storage is replicated twice (to form a total of three copies) on separate hardware within the local region. Windows Azure then takes care of data centre redundancy, taking fault domains into consideration, and geo-replication keeps a full copy of your data in the paired region. So if you are using the data centres in Australia you will have three copies of your data in Sydney and three copies of your data in Melbourne by default.

3) Breadth – The variety of services we can provide with Azure is huge: from PaaS (which supports Python, .NET, Java and many other development languages), Websites, IaaS, Mobile Services, Media and Storage, through to Big Data as a Service.

Some of my colleagues have also blogged about this and you can read their blogs here:

http://blogs.msdn.com/b/ausblog/archive/2013/05/16/windows-azure-expands-downunder.aspx

http://blogs.msdn.com/b/davidmcg/

http://blogs.msdn.com/b/rockyh/

* When talking about cloud service providers I work on the assumption that a cloud service provider can provide PaaS, IaaS, storage and other services which are manageable and accessible via a self-service portal AND a RESTful API stack.

StorSimple Deep Dive 1 – An Enterprise iSCSI SAN

In my last post I did an introduction to Cloud Integrated Storage, the value proposition and how we address this at Microsoft. I figured for the next few posts I'd get into a bit of detail about StorSimple, which is the way we provide Cloud Integrated Storage to customers. This is a company that Microsoft acquired in October last year, with much of the core team based out of Mountain View, CA. I was lucky enough to get over there in my second week in the role and meet the team; it was great to hear how the solution came about and some of the success they have already had. This included a soothing ale with Marc Farley, who is currently writing a book on Cloud Integrated Storage.

StorSimple is a highly available, enterprise-class iSCSI SAN. It is flash optimised, with redundant controllers, disks configured in RAID 10 and hot spares…the things you'd expect from your enterprise SAN.

[Image: StorSimple appliance hardware]

We currently provide between 2TB and 20TB of usable capacity on premises, which is flash optimised, deduplicated and compressed, meaning you will be able to realise much more capacity both on premises and within Azure; generally anything from 3x to 20x space savings. There are soft limits on the amount of Azure capacity we can address, ranging from 100TB on the 2TB appliance all the way to 500TB on the 20TB appliance.

[Image: StorSimple model line-up]

I think it is great that Microsoft is embracing open standards with StorSimple. It is certified for and supports both Windows and VMware (utilising VSS and the vStorage APIs for Data Protection respectively), and it is still an open system that not only connects to Azure but also supports connectivity to Atmos, OpenStack, HP, AWS and other cloud providers. This is a big part of StorSimple's DNA: prior to acquisition it was not only the Azure go-to solution for Cloud Integrated Storage, it was the AWS go-to solution as well. That said, why would you want to use anything but Azure…but more on that in a future post.