No more data sovereignty blues…Azure coming to Australia

“Rest ‘Azure-d’, we’re coming Down Under!
We’re excited to announce that we are expanding Windows Azure with a new region for Australia. This will allow us to deliver all Windows Azure services from within Australia, to make the cloud work even better for you!
The new Windows Azure Australia Region will be made up of two sub-regions in New South Wales and Victoria. These two locations will be geo-redundant, letting you store and back up data across the two sites. We know you’ve been asking for it, and we look forward to making this available to all Windows Azure users.”

With the launch of Windows Azure regions in Australia today, customers looking at true cloud services have another option where data can reside in Australia; particularly relevant if they have any concerns about where their data resides, or about latency.

In my opinion this is a good thing for Australian consumers: where there is competition, the consumer wins through price and innovation. It also helps me when talking Cloud Integrated Storage (CIS) with customers.

Data Sovereignty

With StorSimple we obfuscate and encrypt any data before it goes into Azure, with the customer having sole knowledge and ownership of the private keys needed to decrypt it. Even so, there are still many instances of customers insisting that data remain on Australian soil. There are different driving factors behind this thought process, some of which I believe are valid and some of which I don't, but either way it is a hurdle we face. With local regions now available, this challenge also goes away.
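StorSimple's key management and encryption pipeline are its own, but the underlying principle is easy to demonstrate. The sketch below (Python, using the cryptography library) shows the general idea of encrypting data with a customer-held AES-256 key before it ever leaves the premises, so the cloud provider only ever stores ciphertext; it is illustrative only, not how StorSimple is actually implemented.

```python
# Illustrative only: StorSimple handles encryption internally. This sketch just
# shows the principle of encrypting data with a customer-held key *before* it
# leaves the premises, so the cloud only ever sees ciphertext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_before_upload(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a block with AES-256-GCM; only the key holder can decrypt."""
    nonce = os.urandom(12)                      # unique per block
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_after_download(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)       # stays with the customer, never uploaded
payload = encrypt_before_upload(b"sensitive block of data", key)
assert decrypt_after_download(payload, key) == b"sensitive block of data"
```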

Network Cost and Performance

Data sovereignty is not the only thing that having local DCs helps with, it also helps with your network performance (latency) and potentially cost. Suddenly it is possible to get below 100ms latency from anywhere in the country and far less if you are located in Sydney or Melbourne. It also brings about the possibility of peering agreements with network service providers, including Telstra, AARNet, Pipe and others. I’m not saying when this will happen, or even that it will happen, as I don’t know, but it is a real possibility with DCs that are onshore.
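If you want to sanity-check the latency claim for your own links, a rough probe like the one below is enough to compare regions; the hostnames are placeholders for whatever storage endpoints you actually use.

```python
# Rough latency probe: time a TCP connect to a storage endpoint in each region.
# The hostnames below are placeholders - substitute your own storage account URLs.
import socket
import time

ENDPOINTS = {
    "Australia (example)": "myaccount-au.blob.core.windows.net",
    "Singapore (example)": "myaccount-sg.blob.core.windows.net",
}

def tcp_connect_ms(host: str, port: int = 443, samples: int = 5) -> float:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return min(timings)   # best case approximates network round trip

for name, host in ENDPOINTS.items():
    print(f"{name}: ~{tcp_connect_ms(host):.0f} ms")
```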

Azure vs competition*

Since starting with Microsoft a little over two months ago I’ve been more impressed with every new feature and functionality that we can provide with Azure. When I look at what we provide with Azure for Australian hosted customers I see some clear value propositions:

1) Cost – other cloud providers with Australian DCs will charge different usage rates depending on which Geo you use (at least when I checked last week). With Azure we charge the same rate around the world, in US dollars, no matter which Geo you select.

2) Protection and redundancy – Azure Regions are always released in pairs…and for a very simple reason. Everything in Windows Azure Storage is replicated twice (to form a total of three copies) on separate hardware within the local Region, with Windows Azure taking care of datacentre redundancy and fault domains. On top of that, a full copy of your data is always maintained in the paired geo-location, where it is again replicated three times. So if you are using the datacentres in Australia you will have three copies of your data in Sydney and three copies of your data in Melbourne by default.
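A trivially simple way to picture the copy count is sketched below; the Australian region names are my assumption of how the Sydney and Melbourne sub-regions will be exposed, and the code is purely illustrative.

```python
# Illustrative model of geo-redundant storage copies: three replicas in the
# primary region plus three more in its paired region. Region names are an
# assumption of how the Australian sub-regions will be exposed.
REGION_PAIRS = {
    "Australia East (Sydney)": "Australia Southeast (Melbourne)",
    "Australia Southeast (Melbourne)": "Australia East (Sydney)",
}
COPIES_PER_REGION = 3

def copy_locations(primary: str, geo_redundant: bool = True) -> dict:
    locations = {primary: COPIES_PER_REGION}
    if geo_redundant:
        locations[REGION_PAIRS[primary]] = COPIES_PER_REGION
    return locations

print(copy_locations("Australia East (Sydney)"))
# {'Australia East (Sydney)': 3, 'Australia Southeast (Melbourne)': 3}  -> six copies in total
```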

3) Breadth – The variety of services we can provide with Azure is huge: PaaS (which supports Python, .NET, Java and many other development languages), Websites, IaaS, Mobile Services, Media, Storage and even Big Data as a Service.

Some of my colleagues have also blogged about this and you can read their blogs here:

http://blogs.msdn.com/b/ausblog/archive/2013/05/16/windows-azure-expands-downunder.aspx

http://blogs.msdn.com/b/davidmcg/

http://blogs.msdn.com/b/rockyh/

* When talking about cloud service providers I work on the assumption that a cloud service provider can provide PaaS, IaaS, Storage and other services which are manageable and accessible via a self-service portal AND a RESTful API stack.


StorSimple Deep Dive 1 – An Enterprise iSCSI SAN

In my last post I did an introduction to Cloud Integrated Storage, the value proposition and how we address this at Microsoft. I figured for the next few posts I'd get into a bit of detail about StorSimple, which is the way we provide Cloud Integrated Storage to customers. StorSimple is a company that Microsoft acquired in October last year, with much of the core team based out of Mountain View, CA. I was lucky enough to get over there in my second week in the role and meet the team; it was great to get in there and hear how the solution came about and some of the success they have already had. This included a soothing ale with Marc Farley, who is currently writing a book on Cloud Integrated Storage.

StorSimple is a highly available, enterprise-class iSCSI SAN. It is flash optimised, with redundant controllers, disks configured in RAID 10 and hot spares….the things you'd expect from your enterprise SAN.

[Figure: StorSimple appliance hardware]

We currently provide from 2TB to 20TB of useable capacity on premise, which is flash optimised, deduplicated and compressed….meaning you will be able to realise much more capacity both on premise and within Azure; generally anything from 3x to 20x space savings. There are soft limits on the amount of Azure capacity we can address, ranging from 100TB on the 2TB appliance all the way to 500TB on the 20TB appliance.

[Figure: StorSimple model range]
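To get a feel for what those reduction ratios mean in practice, here is a quick back-of-the-envelope calculation using only the figures quoted above; it is not a sizing tool, since real savings depend entirely on how well your data dedupes and compresses.

```python
# Back-of-the-envelope effective capacity, using the figures quoted above.
# Not a sizing tool - actual results depend on how well your data reduces.
def effective_capacity_tb(raw_usable_tb: float, reduction_ratio: float) -> float:
    """Raw usable capacity multiplied by the dedupe/compression ratio."""
    return raw_usable_tb * reduction_ratio

for appliance_tb, azure_limit_tb in [(2, 100), (20, 500)]:
    low = effective_capacity_tb(appliance_tb, 3)
    high = effective_capacity_tb(appliance_tb, 20)
    print(f"{appliance_tb}TB appliance: ~{low:.0f}-{high:.0f}TB effective on premise, "
          f"up to {azure_limit_tb}TB addressable in Azure")
```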

I think it is great that Microsoft is embracing open standards with StorSimple. It is certified for and supports both Windows and VMware (utilising VSS and the vStorage APIs for Data Protection respectively) and it is still an open system that not only connects to Azure, but also supports connectivity to Atmos, OpenStack, HP, AWS and other cloud providers. This is a big part of StorSimple's DNA and, prior to acquisition, it was not only the go-to solution for Cloud Integrated Storage with Azure, it was the go-to solution with AWS as well. That said, why would you want to use anything but Azure…but more on that in a future post.

Intro to Cloud Integrated Storage

Chris Mellor from The Register (http://theregister.co.uk) recently wrote an article about cloud storage starting to savage the market share of traditional/legacy storage vendors. Link to the article is here.

It predicts that, due to economies of scale and custom-built hardware made from commodity parts, cloud providers will be able to provide far cheaper storage than any traditional array vendor. An estimated market value graph was included, with cloud intersecting legacy around 2024.

[Figure: estimated market value of cloud storage vs legacy storage over time]

So how do businesses start to embrace and utilise this cost-effective storage, which is protected with multiple copies and redundancies better than any legacy storage array located in a standard data centre? All this while making sure data is obfuscated for compliance and security reasons.

One option is to re-architect your applications and use RESTful APIs to send, fetch and modify data within a cloud provider…and this works extremely well for a lot of businesses (Netflix are a great example). But is this approach for everyone? I'd say probably not at this stage; not many companies can put the development and DevOps effort into an application to ensure its robustness like Netflix can.
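For anyone wondering what "use RESTful APIs" actually looks like against Azure storage, here is a minimal sketch assuming you already hold a shared access signature (SAS) URL for a container; the URL is a placeholder and error handling is stripped out.

```python
# Minimal sketch of the "re-architect and talk REST" approach against Azure
# blob storage. Assumes you already hold a SAS URL for a container; the URL
# below is a placeholder, and error handling is omitted for brevity.
import requests  # pip install requests

SAS_CONTAINER_URL = "https://myaccount.blob.core.windows.net/mycontainer?sv=...&sig=..."

def put_blob(name: str, data: bytes) -> None:
    base, query = SAS_CONTAINER_URL.split("?", 1)
    resp = requests.put(f"{base}/{name}?{query}", data=data,
                        headers={"x-ms-blob-type": "BlockBlob"})
    resp.raise_for_status()

def get_blob(name: str) -> bytes:
    base, query = SAS_CONTAINER_URL.split("?", 1)
    resp = requests.get(f"{base}/{name}?{query}")
    resp.raise_for_status()
    return resp.content

put_blob("hello.txt", b"hello from the app tier")
print(get_blob("hello.txt"))
```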

The other option, which is one of the things I'm focussed on with Microsoft, is a storage array that resides on premise and can integrate with the cloud to provide the best of both worlds: local, enterprise-class storage for your existing servers and applications, plus the ability to utilise the efficiencies of the cloud without having to re-write your applications. The local storage array is StorSimple, which was acquired by Microsoft in October 2012; more info can be found here http://www.storsimple.com/.

Suddenly you have enterprise arrays on premise with the below features:

  • auto tiering
  • flash optimised (as a tier)
  • dedupe of production data
  • compression of production data
  • application integration
  • snapshots without performance impacts
  • restores
  • clones

Then, as blocks of data grow cold, the array encrypts them and migrates them, at a block level, to Azure. This means you could have constantly accessed VM hot blocks on SSD, warm blocks on SAS and the cold blocks residing in Azure. With more than 80% of data generally being cold, this suddenly makes a lot of sense.
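The tiering engine itself is StorSimple's secret sauce, so the toy sketch below is only meant to illustrate the idea of heat-based placement: recently touched blocks on SSD, warm blocks on SAS, and cold blocks pushed out to Azure. The thresholds are made up for the example.

```python
# Toy illustration of heat-based block tiering (not StorSimple's algorithm):
# hottest blocks stay on SSD, warm blocks on SAS, and blocks that have gone
# cold are candidates for encryption and migration to the cloud tier.
import time

SSD_THRESHOLD_S = 3600          # touched within the last hour -> SSD
SAS_THRESHOLD_S = 7 * 86400     # touched within the last week -> SAS

def tier_for_block(last_access_epoch: float, now: float) -> str:
    age = now - last_access_epoch
    if age < SSD_THRESHOLD_S:
        return "SSD"
    if age < SAS_THRESHOLD_S:
        return "SAS"
    return "Azure"              # cold block: encrypt and migrate

now = time.time()
sample_blocks = {
    "vm-boot-block": now - 120,            # constantly accessed
    "last-weeks-report": now - 3 * 86400,  # warm
    "2009-archive": now - 400 * 86400,     # stone cold
}
for name, last_access in sample_blocks.items():
    print(f"{name} -> {tier_for_block(last_access, now)}")
```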

Not only does this remove a lot of cost around storage and complexity around archiving, it also opens another door in how you can manage your backups. Once this data resides on StorSimple it is possible to take a "cloud snapshot". A full copy of the data (encrypted and optimised) is taken to Azure on the first snap…after this only changed data, at an optimised block level, is copied to Azure for each subsequent snapshot. Suddenly you can keep hourly, daily, monthly and even yearly backups/archives within Azure and significantly reduce your backup window, operational expenditure and effort around backup, as well as increase your ability to restore data quickly and efficiently.
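Again, this is not StorSimple's implementation, but the shape of "full copy first, changed blocks only afterwards" is simple to sketch: identify blocks by a content hash and only ship the hashes the cloud hasn't seen before.

```python
# Sketch of the "full copy first, changed blocks only afterwards" idea behind
# cloud snapshots (not StorSimple's implementation). Blocks are identified by
# a content hash; a snapshot only uploads hashes the cloud hasn't seen before.
import hashlib

cloud_blocks = set()                         # hashes already stored in the cloud

def take_cloud_snapshot(volume_blocks: list) -> dict:
    manifest, uploaded = [], 0
    for block in volume_blocks:
        digest = hashlib.sha256(block).hexdigest()
        manifest.append(digest)
        if digest not in cloud_blocks:       # only changed/new blocks leave site
            cloud_blocks.add(digest)         # (the actual upload would happen here)
            uploaded += 1
    return {"manifest": manifest, "blocks_uploaded": uploaded}

v1 = [b"block-A", b"block-B", b"block-C"]
print(take_cloud_snapshot(v1)["blocks_uploaded"])        # 3 - first snapshot is full
v2 = [b"block-A", b"block-B-modified", b"block-C"]
print(take_cloud_snapshot(v2)["blocks_uploaded"])        # 1 - only the changed block
```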

Finally, from a compliance perspective, all this data that sits within Azure is encrypted (256-bit AES encryption), three copies of the data are made in the primary data centre and a fourth copy is made in the paired Azure data centre. This is the case for the cloud snapshots and for any data tiered to Azure.

I think it is pretty cool tech and will be writing more on StorSimple and Azure in the future.

Redmond bound…

So two weeks ago I started a new and exciting step in my career and took a role with Microsoft as a Technical Solutions Professional for Cloud Integrated Storage (more to come in coming blog posts on what the heck this is) covering Australia and New Zealand. For those of you who don't work for, or deeply with, Microsoft, this is a pre-sales role focussed on a specified technology set; in my case Cloud and Storage and, you know, stuff.

I have to admit that if you had asked me a year ago, or even six months ago, whether I pictured myself working for Microsoft, I would not have imagined it happening. However, upon seeing what this role was about and the strategic value it would play with Microsoft customers and within Microsoft itself, my mind was very, very quickly changed.

I started two weeks ago in the Sydney office and, after three days in the role, I hopped over to the Mountain View office for a nice leisurely three days in the US, to really start drinking from the fire hose (for those wondering, the koolaid flavour is blue). Now that I'm back it feels great to be diving into the work, head first, and adding some value. I've found the people and the resources available to me simply astounding since starting, and I can see why Microsoft has been a truly great company over the last few decades.

Obviously there are some things I'm going to have to try and get used to, such as using IE as my primary browser, Bing as my search engine and even swapping out my fruity phone and tablet for a Windows 8 phone and a Surface….maybe some more blog posts to come on those experiences too. I'm sure it will be a good learning curve for me and a chance to get to know and appreciate technologies outside my current comfort zone.

SNIAfest2 – Dimension Data

First cab off the rank for the blogfest was Dimension Data (DD). I was hoping to get this one up sooner but Justin Warren beat me to the punch (http://www.eigenmagic.com/) by a fair bit as I’ve been pretty slack.

We kicked this off with coffee and breakfast downstairs (thanks to SNIA) and then made our way up to see Martin Aungle (Corporate Communications Manager, Marketing); all apart from Rodney that is.

For some reason Rodney was excluded because he works for a competitor, but Chris, Sachin and I were welcome, even though we work for competitors as well. In my opinion the SNIA bloggers day is meant to be an open forum, with no confidential information exposed, to present what is publicly available. The people attending are meant to have their SNIA hat on and be looking at this through impartial, independent eyes. It is just a shame, as this is a blight on what was an interesting presentation/discussion. Now back to the DD presentation.

DD had David Hanrahan (General Manager, Virtual Data Centres and Microsoft Solutions) and John Farrell (Enterprise Architect) presenting from Melbourne via Cisco Telepresence. If you haven’t used a Cisco Telepresence room it is worth checking them out at a Cisco office. Still not as good as being there in person but a huge improvement to existing video conferencing facilities.

This was more of a conversation than a presentation, which I think the bloggers appreciated. DD definitely touched on some common themes and trends that I am seeing out in the customer landscape. Also kudos to DD for avoiding any buzz words until near the end of the session…only after “Big Data” was first raised by Sachin (@sachchahal) did we briefly descend into the depths of marketing buzz words.

Themes

The conversation was quite open and unstructured, without a PowerPoint slide in sight (well done guys). Some of the key themes being discussed were:

  • More discussions with C level staff around entire “stack”
  • Business requiring compressed delivery time frames
  • Convergence of teams, data and technology
  • “Black Magic”

Compressed Time Frames and CxOs

Delivery of entire solutions within a limited amount of time, simple to report on and low risk, is becoming more of a theme currently. For example, one company wanted to be at the operate phase within 30 days of signing the PO (design, hardware delivery, implementation, operational doco). To quote one CIO: "I know I want a unicorn…but I still want it".

Previously this would have been nearly impossible to do, just from a hardware lead time perspective, but vendors are now starting to understand and come to the table.

DD see VCE as their A game for delivering this. "Design guides are great, but not always followed by engineers and therefore still have inherent risk" (John Farrell). With VCE they can have something ready to hit the floor, with minimal integration required, in short time frames. That said, they have other options to best fit the customer if VCE is not right.

This change in the way businesses and IT departments operate is leading to larger budgets, at least for a single project. This is down to customers buying the entire integrated stack, rather than just running isolated projects for one area.

Convergence, in more ways than one

Convergence is something that DD is seeing more and more across different areas. It is commonly happening across the traditional IT silos (network team, security team, storage team, server team etc). These silos are now getting to the stage where they can be quite counterproductive at times, and this has led to CxOs having conversations about full stacks and getting more involved. Converged teams are particularly needed to achieve compressed time frames and de-risk solutions. This also challenges DD, as they need to ensure that their staff are trained up enough to talk to the different teams as well as the business folks.

The other meaning of convergence, it seems, is that data is getting more and more intermingled. The advent of VDI means business and personal data is getting intermingled on data centre storage, where previously the personal data would stay on the user device. "How did my iTunes library end up on the SAN…" was how John Farrell phrased it. This then leads to the challenge of managing the data.

“If you can’t drive the management costs down you aren’t truly making a saving. Can we afford to manage the environments we are creating?” was a question posed. The same thing could happen with storage as has happened with virtualisation (VM sprawl), and this is a risk.

Black Magic and Roadmaps

"Black magic" was the way some solutions used to be sold. You would buy an appliance and it just worked. Customers didn't care or want to know how it worked, as long as it worked. David and John believe that you can no longer sell black magic. Customers are smarter and understand that you still need smart people to put it together and make it work. The customer sees the value in the out-of-the-box solution but also understands the value that integrators or internal staff add, and that they are necessary to make the solution meet expectations.

With customers spending money on staff augmentation, residency services, and training for their own people to get the cloud benefits, a roadmap is key. AWS is now the benchmark for internal IT, and internal IT needs to meet its users' expectations. It is not possible to simply take that jump to "private cloud"; the challenge is showing a roadmap with the individual components/decisions that will take the customer to that end goal. To this end DD are seeing a lot of customers now doing showback (few doing chargeback though), measuring ROI achieved (in excess of 50% of customers) and showing greater awareness around cost and allocation.

And on SNIA and standards

I asked about a few open standards and DD’s opinion on these. First up was Cloud Data Management Interface (CDMI) – a standard that SNIA are currently working on. They believe this is a key standard as you need a common framework across the industry and it shouldn’t/couldn’t come from a specific vendor. If you haven’t read up on CDMI you can have a look here http://www.snia.org/cdmi.
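As a taste of what a vendor-neutral interface buys you, here is roughly what talking CDMI looks like on the wire, based on my reading of the spec; the endpoint is a placeholder and the header/path details may differ between spec versions and implementations.

```python
# Rough sketch of CDMI on the wire, based on my reading of the spec. The
# endpoint is a placeholder for whatever CDMI-capable storage you have, and
# header/path details may vary between versions and implementations.
import requests

CDMI_ENDPOINT = "https://cdmi.example.com"       # placeholder
HEADERS = {"X-CDMI-Specification-Version": "1.0.2"}

# Ask the system what it can do - capabilities are discoverable, vendor-neutral.
caps = requests.get(f"{CDMI_ENDPOINT}/cdmi_capabilities/",
                    headers={**HEADERS, "Accept": "application/cdmi-capability"})
print(caps.json())

# Store a data object in a container using the CDMI JSON representation.
obj = requests.put(f"{CDMI_ENDPOINT}/mycontainer/hello.txt",
                   headers={**HEADERS, "Content-Type": "application/cdmi-object"},
                   json={"value": "hello, CDMI"})
print(obj.status_code)
```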

This then went over to the networking side – what about L2 portability? Several standards are available, but all are vendor specific…I can't remember quite where this trailed off to, but we all agreed that it is important and that any mobility assumes layer 2 portability.

Summary

DD had a very consultative approach and this was more a conversation than a presentation (which is good). They are definitely seeing a lot of the same issues and requirements that a lot of us (as competitors) see out with customers, and they have their own way of approaching them. A key theme I've seen in the integrator space is repeatable, low-risk solutions that are easy to report on and provide a quicker time to market/operate.

Another thing coming out of the day was that the days of the specialist (very technically minded) guys driving the purchasing and strategy discussions are numbered; the generalist, who understands the entire stack and the business, is now the one most likely to get the budget for an uplift.

Only negative of the visit was Rodney not being allowed to join us, as mentioned previously.
Merch: Notebook and pen, which I didn’t take as I already had my own, and some morning tea.

SNIA bloggers day (SNIAFest2) – Intro

SNIA ANZ (www.snia.org.au) organised a bloggers day on the 21st of July 2011. I think it is a great idea and Paul Talbut from SNIA has done a heck of a lot of work to get this set up, just as he did for the previous one last year, which I also attended (the line-up then was EMC, IBM, HDS and NetApp). Even a mini bus and driver were organised to ferry us from the Sydney CBD to Rhodes to Frenchs Forest, finishing up in North Sydney – all on a very tight schedule.

Up this year from the vendors were:

  • Dimension Data
  • HP
  • Dell
  • Symantec

And the bloggers for the day:

  • Rodney Haywood of AlphaWest – @rodos and rodos.haywood.org
  • Justin Warren of PivotNine – @jpwarren and http://www.eigenmagic.com
  • Sachin Chahal of Nexus IT – @sachchahal and the Nexus corporate blog (independent blog expected soon?)
  • Chris Wintle of Data#3 – sees this as an opportunity to start up on Twitter and start a blog

Based on this an interesting day was expected, not just from the vendors but also from the other bloggers who have some very interesting insights to add (and make me realise how little I really know). I have to admit that I was particularly curious what Dimension Data had to say, if I would be allowed to stay and how much they would reveal as I work for one of their competitors in Australia (Data#3).

A bit of disclosure before going into the different vendors: I work as a solution architect/specialist (pre-sales) around Data Centre technology for Data#3. Data#3 is a competitor of Dimension Data, is the largest HP and Symantec partner in Australia, and has no affiliation, that I am aware of, with Dell. That said, the views on this blog are mine and not those of my employer.

NetApp VSC – simplifying vSphere storage configuration

I was setting up a demo in our lab for a customer recently and decided to use it as an opportunity to rebuild the VMware vSphere environment to a different design.

I’ve used the VSC before, but it has been a while since I’ve installed/configured it, and I’m still impressed by its ease of use and ability to simplify tasks. Following the steps within NetApp Technical Report TR-3749, within 30 minutes I had:

  • Created a role with the appropriate permissions I wanted (provisioning, cloning and modifying datastores)
  • Created group, assigned role to group and assigned user to group
  • Installed the VSC
  • Registered the VSC within vCenter
  • VSC discovered the array, including vfilers
  • Entered the newly created account, with customised permissions created earlier, for the VSC to access the NetApp array

After this quick and simple installation I went through and had a look at the datastores I had already set up. At this stage I had configured the igroups (LUN masking) for ALUA on the array, but had not configured the correct settings for ALUA or the HBAs within vSphere, so I decided to use the VSC to do this.

To confirm manually that the paths to the LUNs weren't configured for Round Robin, I checked the path selection configuration of a datastore before utilising VSC. I also confirmed my queue depth settings, and these were not in line with NetApp recommendations.
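The VSC does all of this through the vCenter plug-in, but if you wanted to script the same check yourself, something like the pyVmomi sketch below would list each LUN's path selection policy so you can spot anything not yet on Round Robin; the vCenter address and credentials are placeholders, and the SSL handling may need adjusting for your environment.

```python
# Sketch (not the VSC itself): list each LUN's path-selection policy with
# pyVmomi, the Python vSphere API bindings. Host/credentials are placeholders;
# SSL handling may differ between pyVmomi/vCenter versions.
import ssl
from pyVim.connect import SmartConnect, Disconnect   # pip install pyvmomi
from pyVmomi import vim

ctx = ssl._create_unverified_context()                # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        if host.config is None:                       # skip disconnected hosts
            continue
        for lun in host.config.storageDevice.multipathInfo.lun:
            # e.g. VMW_PSP_RR (Round Robin), VMW_PSP_FIXED, VMW_PSP_MRU
            print(f"{host.name}  {lun.id}  policy={lun.policy.policy}")
finally:
    Disconnect(si)
```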

The process I followed was to select the NetApp icon under “Solutions and Applications”

I was then shown a list of my controllers and vFilers and could see the status of each datastore (sorry, I forgot to take a screenshot, but the MPIO and adapter settings were in red).

Then I selected all datastores with status errors and selected "Set Recommended Values".

I launched it to set the recommended values for NFS, MPIO and HBA/CNA adapters. It then worked through the process, taking a couple of minutes to do this for multiple datastores (both NFS and VMFS) across two clusters and 8 x ESXi servers (each server was booting from the array via FCP).

Upon completion the status came up all green via the VSC but, just to confirm, I went and looked at my path selection policy on the LUNs being utilised and found that it had changed to the correct settings.

I should also point out that most of my VM datastores, with VMs residing on them, were actually being presented via NFS. The information and administration options here are also very useful, as you can see below.

In conclusion, the NetApp VSC not only provides VMware and storage administrators with the ability to monitor, configure and manage storage from within vCenter, but also makes it simple to comply with the vendor-recommended practices for configuration and get the optimal performance and stability out of your VMware and storage environments.