All things Storageous in Storage!


Consumer Drones & CIA Cloud?

I was fascinated to read about Amazon CEO Jeff Bezos mentioning two things in a press appearance this week: "consumer drones" and "private cloud". Amazon as a company continues to impress me; they are always at the bleeding edge and tend to see, or imagine, what is coming long before the rest of the industry catches up.

Firstly, let's take the phrase "private cloud", which Amazon goes to great lengths never to mention. Earlier in the year I blogged about the rumours that Amazon was building a private cloud for the CIA and the implications this could have for competitors; please see the link below.

CIA Cloud

So it seems this was not media hype but in fact a reality. Put two and two together: if Amazon is embarking on a large-scale secure cloud with the CIA, they will surely apply the lessons learnt there and become even more formidable in the marketplace. This is certainly a wake-up call for the old school of the industry who will not move production and enterprise workloads into the cloud. It seems more and more that Amazon is targeting this sector, and with their pricing model and track record you would not bet against them achieving such a feat. Amazon is the dominant player in this arena, but they have mostly focussed on public cloud, and if they can make the CIA cloud work then surely others will follow.

The cloud market seems to be ramping up even more, with VMware announcing vCloud Hybrid Service at VMworld 2013 and drawing a line in the sand to target Amazon. It will be fascinating to see what unfolds over the next year.

Consumer Drones? 


Secondly, Jeff talked about consumer drones, and this fascinated me. Small electrically powered "octocopters" (Amazon's name) would deliver lightweight consumer goods in urban areas within a range of 10 miles, drastically cutting energy costs from fuel and traffic in built-up areas. The whole thing seems like something out of a sci-fi movie: imagine a drone landing in your front garden to deliver a package. I think it would be fantastic. Obviously this is all still in the R&D phase, and there would be endless amounts of testing and red tape to get through, but if anyone can do this it is Amazon!

Food for thought anyway!



Amazon gets (more) serious about government cloud

Another Step towards cloud security becoming acceptable even for the likes of the Government.


Not that you probably needed it, but here's more proof that Amazon Web Services is dead serious about getting more government work.

First, CloudFormation, the company’s tool that lets systems administrators manage sets of related AWS services from simple templates, is now available on Amazon’s GovCloud service. GovCloud was built for government agencies and other entities with special compliance and security needs.

And, on Monday, AWS opened a brand new office in Herndon, Va., near two motherships: Washington D.C. and Amazon's gigantic US-East data center farm. Amazon plans to add 500 jobs in the space, which will be home to engineering, customer support, security and other functions, according to this Washington Business Journal report.

As almost everyone knows by now, Amazon has targeted some mission-critical government cloud applications including the once-secret CIA cloud, where AWS is now challenging an IBM-prompted Government Accountability Office review of its CIA…



Inception of EMC ViPR

You could be forgiven for thinking that if data storage companies were dinosaurs, EMC would be Tyrannosaurus Rex. If they were all fish, EMC would be a shark. Nothing wrong with being a T-Rex or a shark. They’re just large, fearsome and old fashioned.

This might be how EMC is perceived from the outside: a large, cut-throat, arrogant organisation. I can tell you from working here that this is definitely not the case; I have never worked in a more fluid organisation, where change and creativity are embraced.



For me nothing highlights this more than the release of EMC ViPR at EMC World 2013. What EMC is doing is introducing a software layer to control the data centre. True, EMC's play has always been the hardware layer coupled with software controlling it. These two layers can be defined as the "control plane", which is the software, and the "data plane", which is the physical hardware that sits below.

Typically the IT industry has married the control and data planes together, which creates vendor lock-in. As an end user this is frustrating, as the "cloud era" by definition is supposed to be open; while many adopt the philosophy of cloud computing, customers still have silos of compute, network and storage managed by different software, plug-ins and skill sets, and it is too complex. As an example, imagine you have an EMC storage estate and a NetApp storage estate: in reality these are two separate estates. You then also have to think about the compute and network layers, the management complexity of running these two infrastructures, and so on. It is understandable why companies do not put all their eggs in one basket with a single vendor, and also why people choose different technologies, as I will be the first to admit that many technologies excel in different areas.

What is harder to fathom is why this single control plane has not been introduced before. It is not for want of trying; it is only now that the technology is at a level where the concept can be put into practice. When I think of EMC ViPR I think of Facebook, Google and Twitter with their approach to compute, network and storage: they labelled hardware as commodity and placed a very intelligent API on top to ease management of these huge infrastructures. While those APIs are locked down by their owners and do very specific tasks in their environments, EMC ViPR is planning on being much more than that.

The first thing you should know about EMC ViPR is that the source code and APIs are open; EMC is not guarding this close to its chest. It is designed to be industry-wide, not vendor-specific. I think this is huge. EMC has been quietly publishing APIs for some time now, and a lot of it has gone unnoticed, but EMC knows it cannot out-manoeuvre every startup or very bright graduate student with a fantastic idea, and by hiding what you are doing you limit innovation by those very parties. Nothing makes this more evident than the story of how ViPR began: a collaboration of different software companies coming together, each with their own expertise.

So what's the vision? Just another software layer, right? Well, no, and this is why it is so ground-breaking: think if you could aggregate your compute, network and storage layers and configure, manage and report into one control plane regardless of vendor, protocol and so on. Pretty cool, right? EMC will be the first to admit that you need different hardware to do different things; high-transaction workloads require different hardware than archives, for example. But the issue is that all of these create islands.

What ViPR does is aggregate all of these together. Imagine if you could do that with your environment: ViPR is a software control plane that ties into the APIs of the hardware below and controls them through policies, masking the user from the complexities of managing all these separate entities. Think of it like the picture below:



You can see above that a single API can reach into a virtual pool which is actually presented from the storage below. This has been dubbed "Abstract, Pool and Automate". This is truly moving towards a single control plane, with the data plane existing at a layer we do not touch.

I have been fortunate enough to try the labs with EMC ViPR, and what was immediately apparent was that I could create services quickly and easily through a wizard that configured my services to a policy. For example, if I wanted the best possible service, call it Gold or however you dub it, ViPR goes away and creates that service based on policies: a VM gets created, is protected by HA, the network and zones are set up, and the storage layer is protected by VPLEX. All of this can be driven through a wizard and literally took me minutes to do. The most important thing to point out is that not once did I go into vCenter, log in to a storage array to carve out volumes, log in to switches to create zone sets, or configure firewalls and security policies; it was all done for me.
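The wizard flow I describe can be pictured as a policy engine: the user picks a service tier, and the software expands it into the individual provisioning steps that would otherwise be done by hand. Here is a minimal sketch of that idea; every name in it is hypothetical and it is not the actual ViPR API, just an illustration of policy-driven provisioning.

```python
# Hypothetical sketch of policy-driven provisioning: a service tier
# expands into the concrete steps a wizard would otherwise hide.
# None of these names come from the real ViPR API.

POLICIES = {
    "gold":   {"ha": True,  "replication": "VPLEX", "zoning": True},
    "bronze": {"ha": False, "replication": None,    "zoning": False},
}

def provision_service(name, tier):
    """Expand a service tier into an ordered list of provisioning steps."""
    policy = POLICIES[tier]
    steps = [f"create VM '{name}'"]
    if policy["ha"]:
        steps.append("enable HA restart")
    if policy["zoning"]:
        steps.append("configure SAN zones")
    steps.append("carve storage volume")
    if policy["replication"]:
        steps.append(f"protect volume with {policy['replication']}")
    return steps

for step in provision_service("app01", "gold"):
    print(step)
```

The point of the abstraction is that the user only ever chooses the tier; the array, switch and hypervisor operations are derived from the policy, not performed manually.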

Now that for me is a pretty huge deal, as it was all transparent to me, and as scary as it was impressive. I am not saying EMC ViPR is the solution to everything; it is still due to go GA and I imagine it will have a lot of tuning and change to come, but for me this is a huge step. Anyone who thinks EMC is the dinosaur or the shark clearly has a view of EMC that is perhaps ten years out of date, and I would encourage you to look at these latest offerings. What has been achieved is quite incredible, and yet the concept is hugely simple: hardware is hardware, and its intelligence is derived from software. By identifying and separating these two planes (control and data) you gain greater flexibility to innovate, and this is exactly what EMC is doing.

If you would like a very in-depth view of EMC ViPR, please visit this blog, which provides great insight into the inception and delivery of this software.

On a personal note, I have not had much time to dedicate to this blog of late, but from now on I will be doing weekly posts.




The Secret Cloud…


To start this blog I thought a quote from Henry Ford would work quite well: "A business absolutely devoted to service will have only one worry about its profits: that they will be embarrassingly large."

In my previous post I blogged about the fist fight in the enterprise public cloud, in which there are many players, including Amazon, ATOS and, more recently, VMware. The list goes on, but they are all fighting over one space in particular: acquiring the enterprise applications that vendors and suppliers have safeguarded for years.

Amazon is making more and more inroads into this space, and I could not resist blogging about the CIA cloud. That's right, the CIA cloud: it has been reported that the CIA is in talks with Amazon about a private cloud solution. This was first reported by Federal Computer Week, and you have to think about this in context. This is the US government's first real move into cloud computing after acknowledging the benefits of cost savings, flexibility and the ability to keep up with the latest trends in computing such as big data and analytics. So if the current leader in cloud computing can build a highly secure, highly performant, stable environment for the CIA's applications, what is stopping the rest of the enterprise world moving to a similar approach?

One of the cloud's biggest questions is "Is my data safe?" Surely if the CIA are looking at this, they deem the cloud model to be safe. Now let's not get carried away: it's not as if Amazon will place the CIA on a shared infrastructure with every other man and his VM! They are apparently building a private cloud for the CIA.

This falls in line with their recent communications that they are beefing up their VPC (Virtual Private Cloud) capabilities, such as giving you more storage per relational database, up from 1 TB to 3 TB, and up to 30,000 IOPS. Their whole game plan here is to attract the IT old school who will not move enterprise applications to the cloud. It certainly sounds appealing to me!

So put two and two together: if Amazon is embarking on a large-scale secure cloud with the CIA, they will surely apply the lessons learnt there and become even more formidable in the marketplace.

Naturally no one from the CIA or Amazon will comment on this, but I am sure it has got tongues wagging in the industry. To go back to my opening quote: Amazon seems ever more focussed on the service it is delivering, and if they do get this contract it is reportedly worth $600 million over ten years. Just think: if they can then coax the enterprise application crowd over to their services as well, the quote rings true, and they will be embarrassed by their profits!

So, in jest, this blog never happened! "Gentlemen, congratulations. You're everything we've come to expect from years of government training. Now please step this way, as we provide you with our final test: an eye exam…" FLASH (quote from Men in Black).


Acceptable Downtime in our World? Introducing “Chaos Monkeys”

I first read an article about this on GigaOM, and it really got me thinking about the ways companies go about downtime. All lines of business are different: some can accept downtime, others cannot. Think of the services we depend on and use daily, such as Google, Amazon and eBay; we would not accept downtime from them. This is a rapid change of thinking, as a few years ago we might have accepted it, but not now, not in our online 24/7/365 world, and as I always allude to, this is due to the user experience and choice that we as end users now have.

An example of this is Google Mail, which went down between 8:45 AM PT and 9:13 AM PT when Google was upgrading some of its load-balancing software, which turned out to be flawed, so they had to revert to a previous version. Now why would Google deploy code at that time? The answer is simple when you think about it: in the online world we now live in there is no acceptable window of downtime, so companies like this are constantly rolling out code upgrades to give more benefits to end users and the business.

A particular section of the article intrigued me: how Netflix works (the full article can be found here). Netflix employs a service they created called "Chaos Monkey", which is an open invitation to break systems and cause downtime, because their philosophy is that the best defence against major unexpected failures is to fail often. They learn from failures, and by doing this their systems become more resilient.

Netflix were quoted saying “Systems that contain and absorb many small failures without breaking and get more resilient over time are “anti fragile” as described in [Nassim] Taleb’s latest book,” explains Adrian Cockcroft of Netflix. “We run chaos monkeys and actively try to break our systems regularly so we find the weak spots. Most of the time our end users don’t notice the breakage we induce, and as a result we tend to survive large-scale outages better than more fragile services.”

So the Chaos Monkey seeks out and terminates instances of virtual machines (AWS instances in this case) on a schedule, usually within quiet hours. This means Netflix learn where their application is weak and can identify ways to keep the service running despite whatever goes down.
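The seek-and-terminate behaviour is easy to sketch. The following toy simulation is my own illustration, not Netflix's actual Chaos Monkey tool: it picks a random running instance from a fleet and kills it, the way the real tool picks AWS instances during quiet hours, so the team can watch whether the service survives.

```python
import random

# Toy Chaos Monkey: randomly terminate one running instance from a fleet.
# Purely illustrative; not the real Netflix tool or the AWS API.

def chaos_monkey(fleet, rng=random):
    """Terminate one random running instance; return its ID (or None)."""
    running = [inst for inst, up in fleet.items() if up]
    if not running:
        return None  # nothing left to break
    victim = rng.choice(running)
    fleet[victim] = False  # simulate the termination
    return victim

fleet = {"i-001": True, "i-002": True, "i-003": True}
victim = chaos_monkey(fleet)
print(f"terminated {victim}; {sum(fleet.values())} instances still up")
```

The value is not in the killing itself but in what you observe afterwards: if users notice, you have found a weak spot before a real outage did.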

The thing is, failures happen; everyone accepts this. But finding out where your application is weak is critical when, as in the case of Netflix, the videos must keep on streaming!

I really like the philosophy of the Chaos Monkey, and it has intrigued me because it is a different perspective from what I have seen and experienced with scheduled DR testing: what Netflix are essentially doing is constantly trying to bring their own service down.

This got me thinking about EMC VPLEX, which is designed to give you an active-active data center and, more importantly, outage avoidance through such means as a stretched HA cluster spanning geographic distances. When I think of Netflix and in particular their infrastructure, if automated VMware High Availability restarted their services on the other cluster then the outage window would be smaller, as it would only need an application restart, and they could maintain online services while still hunting for errors.

Everything I seem to read these days is about availability, cloud, users and demand. VPLEX addresses this zero-tolerance approach to availability. I will post an article explaining more about VPLEX soon; in the meantime, have a look here.

So, to sign off, enjoy your Xmas and New Year!





The Cloud is Closer than you think

So the cloud is here, but are you moving with the times or are you behind in your thinking? It's a question people will never admit to, but the reality is becoming very apparent that SAN and NAS do not scale to large clouds such as Amazon or AT&T. So how do the big guns do cloud?

Let's take a service such as Amazon, who have one huge infrastructure spanning global data centers with one huge flexible namespace which can grow without complexity and with minimal management costs. Amazon's new offering, Amazon Glacier, released back in September, is now one penny per GB per month! How do you get costs down to that price point and still turn a profit? To give you some perspective on how big the Amazon infrastructure is: this year alone 260 billion objects were added! Imagine trying to manage that with traditional thinking such as silos of SAN and NAS storage. Amazon's pricing has dropped 12 times this year alone, and they undercut the market every time.

So let's look at their thinking. One thing all the big cloud names have in common is that they do not use file systems for their cloud: this includes Facebook, Twitter, eBay, Amazon and YouTube. Why not? The answer is simple: cost and scale. These infrastructures are huge, and when they set about creating their clouds they wanted massive scale, tens of petabytes if not hundreds, with minimal growth disruption and management overhead.

Let's take a different angle for a moment: it was us, the end users, who created this price point, because technology is so readily available now. For example, if you have a credit card you can get a server and some storage from Amazon in a matter of minutes, and they are by no means the only ones doing this. The consumer market is so diverse now that if the price point is not right we just move on; it's as simple as that. How many times have you checked the price of something on the internet while shopping in a store?

So back to the point. Amazon use an object-based API which incorporates security, encryption, billing and a policy engine, talking to commodity x86 servers and commodity storage. Hardware fails; we accept that, as everyone does, but the key is the software at the top layer. Object-based storage does not work like traditional file systems: it spans one single namespace, meaning you can geographically disperse your data centers and have one giant object store which replicates and protects your data according to the policies you set. The simplest way of explaining this is Dropbox. We all love and use Dropbox, and it may surprise you to know that it too follows Amazon's philosophy described above. Policies are the key in this object-based world. Take your free 10 GB subscription with Dropbox: as that is a free service, it is very unlikely that a copy of your data is made, replicated or encrypted, and they do not guarantee it will always be there. But what if you pay a monthly fee? Then you would have a paid policy which would replicate to another data center, encrypt your data and, importantly, bill you on usage metrics such as bandwidth and space used.

Now this is the key component here: objects are subject to policies. An object contains metadata and content; the intelligent APIs look at the metadata and decide what to do with the data according to the policies set. This is key to understanding the management of the cloud. Let's take eBay: how many photos do you think get uploaded to eBay every single minute? I imagine the number is huge, but how do you manage how long those photos stay there? Before policies, eBay had to run jobs to delete these photos every night, but there came a point where they were doing this constantly. With policies in the API, they simply set one up to delete the photos after three months have passed. It is as simple as that: all that management has gone and is automated.
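That expiry rule is just a policy evaluated against each object's metadata. A rough sketch of the idea follows; the structures and names are hypothetical, not the actual Atmos or eBay implementation, but they show how metadata plus a policy replaces a nightly delete job.

```python
from datetime import datetime, timedelta

# Sketch of metadata-driven expiry: each object carries metadata, and a
# retention policy decides its fate -- no nightly delete jobs required.
# Hypothetical structures, not a real object-store API.

RETENTION = timedelta(days=90)  # e.g. drop listing photos after ~3 months

def expired(obj, now):
    """An object expires once its upload time exceeds the retention window."""
    return now - obj["uploaded"] > RETENTION

def apply_policy(store, now):
    """Return only the objects the retention policy keeps."""
    return [obj for obj in store if not expired(obj, now)]

now = datetime(2013, 6, 1)
store = [
    {"key": "photo1.jpg", "uploaded": datetime(2013, 1, 1)},   # old, expired
    {"key": "photo2.jpg", "uploaded": datetime(2013, 5, 20)},  # recent, kept
]
print([obj["key"] for obj in apply_policy(store, now)])
```

The same mechanism generalises: swap the expiry predicate for a replication or billing predicate and you have the policy engine described above.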

The technology that Amazon and eBay use is EMC Atmos. It is the intelligent API with commodity hardware underneath, defining it as a purpose-built cloud platform giving you up to 1.3 PB per floor tile. Atmos allows you to easily scale your cloud over geographic distances, as it acts as one great big storage pool with one namespace. The API abstraction layer takes care of all the storage calls, so developers who are paving the way in browser-based, WAN-friendly applications do not have to care what goes on below the software layer. Atmos takes care of all this. So imagine you have five data centers globally, all connected, with your objects behaving according to your policies and automatically billing your end users based on the policies set (security, replication and so on), and which you don't have to back up. Isn't that the way to do things? Just imagine trying to back up Amazon's cloud… no thanks.

As intelligence and resilience are built into an object store, you can lose multiple drives or nodes and your service does not go down. The likes of Amazon and eBay accept that hardware eventually fails, so they stockpile spares and replace failed drives eventually, as it is not critical. Has eBay ever gone down? The answer is no, and there is good reason for this: if eBay went down it would cost them $3,900 per second!

So EMC Atmos is arming the cloud, and service providers are monetising this platform into services that you and I consume every single day. SAN and NAS ways of thinking are fast becoming limiting in how they scale in comparison to object stores, and in my personal view this is why service providers are switching on to this change. Traditional service providers are offering things such as "back up to the cloud", but what they need to be doing is appealing to the developers who have written so many of their programs for Amazon S3, as their applications could run on Atmos, which understands the S3 API. This would enable them to keep up with the curve in this changing marketplace. And the best part is that the Atmos API is yours: you can edit it, modify it, do whatever you like with it to make it work for your company, give a portal to your end users, and bill them accordingly.

So, to sign off: is Amazon trail-blazing the way ahead? No, they have just done this before anyone else thought of it, and they are now so large that they can simply grow dynamically. Everyone thinks that cloud is slow, but look at Amazon using commodity hardware and servers: the sheer scale means the amount of compute and storage available is all there for the taking!



Embrace the cloud, but be afraid of the cloud

Entering the cloud era: "cloud" is a word which had little significance for me less than three years ago, and now it is all that seems to be pitched by technology companies. Giants such as Amazon and Microsoft Azure are the clear trail-blazers here, but why?

For a start, Amazon's business model is so adaptable that it is the clear choice for start-ups who do not want to invest in compute, network and storage and prefer a pay-as-you-go scheme. The number of customers I visit whose developers use Amazon Web Services is striking, especially when I ask the question in a meeting and the answer turns out to be "yes". You then see minds working overtime: "Cloud, is it safe? How much data do we have there? How much is it costing?" This is the point with cloud: it is great, and the technology allows us to be genuinely elastic. The old days of having to build more infrastructure are gone, and this means that anyone can develop, run and analyse their business using the variety of platforms on offer as a service.

One discussion point I often raise is "how do you keep costs down?" This may not be an issue for large enterprises, but what if you are offering free services, such as Flipboard? You don't want to get into a situation where you have large numbers of machines running, consuming resources, and in turn big bills coming through your door. The cloud is great, but like anything it needs to be planned correctly. We seem to have lost our way a little here, in the sense that because it is all automated and next to no fuss, we feel it requires no planning.

Well, the simple fact is that it does, and people are starting to wake up to this now. There are many ways to manage costs, such as using reserved instances instead of on-demand: the cost model is different, and when used effectively it is much more beneficial.
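The reserved-versus-on-demand trade-off comes down to simple arithmetic: an upfront fee buys a lower hourly rate, so the reservation pays off past a break-even number of hours. A quick sketch with made-up prices (the real AWS rates vary by instance type, region and term, so treat these numbers purely as placeholders):

```python
# Break-even between on-demand and reserved pricing. All prices here are
# invented for illustration; real AWS rates differ by type, region and term.

ON_DEMAND_HOURLY = 0.10   # $/hour, pay as you go
RESERVED_UPFRONT = 300.0  # $ one-off reservation fee
RESERVED_HOURLY = 0.04    # $/hour once reserved

def cost(hours, reserved):
    """Total cost of running one instance for the given number of hours."""
    if reserved:
        return RESERVED_UPFRONT + RESERVED_HOURLY * hours
    return ON_DEMAND_HOURLY * hours

def break_even_hours():
    """Hours of use beyond which the reservation becomes cheaper."""
    return RESERVED_UPFRONT / (ON_DEMAND_HOURLY - RESERVED_HOURLY)

print(f"break-even at {break_even_hours():.0f} hours")
# A VM running 24/7 for a year is 8760 hours:
print(f"on-demand: ${cost(8760, False):.2f}, reserved: ${cost(8760, True):.2f}")
```

The planning point is exactly this: a machine you run around the clock belongs on a reservation, while a short-lived burst workload is cheaper on demand.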

The next big sticking point of the cloud for many people is what they send to the cloud. It is interesting that most enterprises have now begun to lock down sync-and-share applications. Quite simply, people have been in denial about their users relying on the likes of Google Drive and Dropbox for too long, and they are slowly realising that they need to control this. How much sensitive data could leave an organisation this way is quite worrying, especially when people discover that the Dropbox terms and conditions permit them to look at your data and analyse it, and that once it is on their storage it is theirs! Scary! I personally am a little saddened by this, as I really like sync and share: it is such a simple, rather old-school idea that has become a joy to use and has moved people away from relying on email. In every company, email carries a massive share of files that can spiral out of control, and simply sending a cloud drive link is so much neater, simpler and cheaper!

On the plus side, these sync-and-share applications have really been driven by the hand-held market of tablets and smartphones, and this has spawned new companies and opportunities: companies such as Zenprise that can put corporate policies on your tablets and phones, giving control back to IT.

But the root appeal of AWS and Dropbox is that they are so easy to obtain, with very little red tape, and when used correctly they are extremely cost-effective. Compare that with going to your IT department, requesting a service or technology, waiting for endless approvals, and finally getting it weeks if not months later. Any wonder why people use them? It does feel to me like IT restricts everything I do, and it frustrates me beyond belief; I can always find a way around it and get what I need done, but my mind tells me I should not have to!

The one remaining aspect for me is security. There are so many security concerns, but just watch: I liken it to cliff diving, where everyone waits until one person jumps first and proves it is safe, then they all follow! As soon as someone in the financial sector or government decides to fully embrace this, everyone will follow.

To sum up: cloud is here, accept it. We use it every day in our lives, and it is only getting bigger and smarter. Personally I am pleased with this, as it opens technology up further into our lives and brings us racing towards the information-led, screen-led society we will eventually become. Software is becoming smarter and more usable by the day; it will soon interface with our lives seamlessly, and it will probably run in the cloud.