storageous

All things Storageous in Storage!



Twitter Under Attack

What started as a social media tool can now impact stock markets

Never underestimate the determination of someone who is time-rich and cash-poor. Hackers are experimenting with technology and pushing it in ways people never imagined, and the example below brings the issue closer to home.

Let's take Twitter this week, where the Associated Press Twitter account was hacked and the group responsible tweeted that there had been explosions in the White House and the President was injured. This must set off alarm bells at every major company that uses Twitter. The tweet was read by thousands instantly, spreading like a rumour as it was retweeted and reposted across the internet, even causing the Dow Jones to slip 1%. AP announced shortly afterwards that the tweet was false and the account was suspended, but it shows just how much impact a single tweet can have.

This shows how important Twitter has become. Twitter is now used in business for all sorts of purposes; Bloomberg, for example, now includes Twitter feeds in its terminals so traders can monitor market activity and chatter in their fields of expertise. They do not base market decisions solely on this information, but it certainly has an influence. Businesses all over the world see Twitter as a business tool as well as a social one, and as such they must not underestimate the people hacking these accounts; the reputational damage that can be done to a company is now huge.

Stocks fell 1% due to one Tweet

I went to a presentation by Peter Hinssen only 12 months ago and listened to him speak about the importance and value of Twitter as a tool for businesses. True, it is a great tool with a huge audience, but the risk is now also becoming a concern. When you consider what you need to log in, just a username and password, it is not difficult to imagine it being broken. I have had my own social media accounts hacked, but they are personal and hold no key information; the most damage the attackers did was upset my friends with dodgy links on their walls.

Twitter needs to incorporate two-factor authentication, so the username and password entered can be verified with a second step, such as a text message containing a code that proves to the server you are the owner of the account. Microsoft, Google and Apple all offer this now, but it seems the one behind the curve is arguably the most influential.
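
To make the idea concrete, here is a minimal sketch of how a server can verify a time-based one-time code, the kind delivered by text message or an authenticator app. This illustrates the general technique (RFC 6238 style), not Twitter's implementation; the secret, time step and drift window are all assumptions for the example.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, digits: int = 6, timestep: int = 30, drift: int = 0) -> str:
    """Derive a time-based one-time code from a shared secret (RFC 6238 style)."""
    counter = int(time.time()) // timestep + drift
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret: bytes, submitted: str) -> bool:
    """Accept the current code or an immediate neighbour to tolerate clock drift."""
    return any(hmac.compare_digest(totp(secret, drift=d), submitted) for d in (-1, 0, 1))

shared_secret = b"per-account-secret"  # hypothetical; stored when 2FA is enrolled
print(verify(shared_secret, totp(shared_secret)))  # True
```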

Security is a buzzword in the industry at the moment and everyone has different views on what it entails, especially in how they apply security in their own business. I have said this before and I will say it again: if you understand security in this industry and can articulate it in a way that people get, you will be hugely successful. It is almost a grey area that no one wants to get involved in, and why is that? Because it is not easy and is very complex! Twitter are actively recruiting senior security people to revamp their security; a new generation of cyber criminal needs a new generation of cyber police, and Twitter are starting to get this.

If I were responsible for a large company's security I would be looking at the above and giving it some serious thought, along the lines of: where are the lines of my security? The answer is that there are no strict lines of defence anymore. The internet and social media are accessible by all, and the slightest leak in your ship means a flood is imminent. I have been to a lot of RSA sessions and security centres, and the ability they have to monitor, track and analyse security is quite incredible; in my view, if people are not looking at these measures or similar, it is a big risk.

On another note, you also have to consider how quickly this tweet was revoked. Do you think that was just chance, someone casually browsing Twitter who happened to stumble upon it? Think again. Major organisations have social media monitoring and analytics used for a variety of purposes, from customer satisfaction to rebuffing negative comments about their business on the internet. For example, it was made clear through various blogs and posts that the Obama office used data analytics to monitor people's thoughts and posts and to counter-argue on forums.
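
Purely as a toy illustration of how such monitoring might flag a rogue tweet, here is a sketch that scans a feed of posts for alarm keywords. The feed, terms and alerting are invented for the example and are not any vendor's actual product.

```python
ALARM_TERMS = {"explosion", "white house", "injured"}

def scan(posts):
    """Yield any post whose text mentions an alarm term."""
    for post in posts:
        text = post.lower()
        if any(term in text for term in ALARM_TERMS):
            yield post

sample_feed = [  # stand-in for a real streaming feed
    "Markets open flat ahead of earnings",
    "Breaking: Two Explosions in the White House and Barack Obama is injured",
]
for hit in scan(sample_feed):
    print("ALERT:", hit)
```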

That is another story for another day, but my point is this: look at what was done with a single tweet and some invested time, resulting in huge impact. Security may be a subject on most IT professionals' minds, but it extends much further and should be taken very seriously.



Standing on the Shoulders of Giants…

Have VMware realised that Amazon are standing on their Shoulders?

“One who develops future intellectual pursuits by understanding and building on the research and works created by notable thinkers of the past.” I thought this an appropriate opening to the topic of discussion here today. I will explain the metaphor at the end of my ramblings, but in the most basic form, Amazon are the ones standing on the giant VMware's shoulders at the moment…

My thoughts and reading this month have mainly been about VMware's talk of taking down the giant that is Amazon; VMware went as far as to say “Amazon will kill us all”. This sparked my attention and intrigued me enough to investigate a little further.

This all stems from the brand Amazon have built in the industry. In previous posts I have blogged about how Amazon got ahead of the curve and anticipated this era of cloud computing. True, VMware may have enabled it at the hypervisor layer, but Amazon are the true victors here. In my opinion Amazon crept up on everyone in their blind spot; maybe the big corporations were a little too relaxed in what they were doing and just did not see this coming. You could argue that VMware were riding such a growth wave that they did not think this could affect them. It is the typical market pattern: the big innovators grow fast and then become stagnant. Constantly innovating is tough, and I think VMware were right to spin off the platform-as-a-service business, as it brings more focus to the company.

Okay, so on to the point of Amazon killing us all! This refers to the shift in focus towards Amazon. Every customer I go to has some form of services running in AWS; whether that has been approved or not is a different story. As I have stated before, the reason people do this is choice: conventional IT is restrictive and does not give even half the speed people want, especially if they have a credit card handy.

Just lately, though, in my experience companies are starting to look over their shoulders at Amazon and wonder: how much data do I have there? Is it safe? Does my data live on a shared platform with other companies' data? Can I have my data back when I want? In this world, intellectual property is king, and the risk of employees sending data out to the cloud is one that will keep some awake at night.

So what are VMware bringing to the table? They are looking to announce, in the second half of this year, a public cloud offering to compete head to head with Amazon. They are looking to extend their vCloud Director technology from existing customers into the cloud, enabling the mobility of production workloads. And here we get to the real fight: traditional IT vendors are now battling to keep production applications on their infrastructure. AWS has predominantly been used for test and development applications, but the noises made by Amazon at its AWS re:Invent show last year suggest they now have those production applications clearly in their sights!

So it is no wonder the gloves are coming off among the big guns: their comfort zone is at stake.

Do I think VMware can pull this off? One official at VMware was quoted as saying “surely we can beat a company who sells books”. That is a harsh statement. If you follow Formula One and are aware of Red Bull, the big manufacturer teams once dismissed them as just “a drinks manufacturer”, but look at them now: triple world champions. For me, VMware has to build a public cloud service that I want to buy into, and enable its channel to sell those services effectively. Certainly they are making the right noises and, in my view, regaining a focus they seemed to have lost a little. Bringing on someone like Pat Gelsinger is a real message to the industry that they mean business; look what he has done at EMC over the last three years.

For me the jury is still out and VMware have to prove they have what it takes. So, to sign off how I started: Amazon, in my view, are currently standing on the giant VMware's shoulders, as they understood what VMware had created and how to make a service from it. VMware are left looking up at what their intellect has created, and it now seems the red mist has descended and they are focussing on this space too.



Acceptable Downtime in our World? Introducing “Chaos Monkeys”

I first read an article about this on GigaOM, and it really got me thinking about the ways companies approach downtime. All lines of business are different: some can accept downtime, others cannot. Think of the services we depend on and use daily, such as Google, Amazon and eBay; we would not accept downtime from them. This is a rapid change of thinking, as a few years ago we might have accepted it, but not now, not in our online 24/7/365 world. As I always allude to, this is down to the user experience and choice that we as end users now have.

An example of this is Google Mail, which went down between 8:45 AM PT and 9:13 AM PT while Google were upgrading some of their load-balancing software; the new code turned out to be flawed, so they had to revert to a previous version. Now why would Google deploy code at that time? The answer is simple when you think about it: in the online world we now live in there is no acceptable window of downtime, so companies like this are constantly rolling out code upgrades to give more benefits to end users and the business.
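
As a rough sketch of why constant rollouts are survivable, here is the canary-style pattern in miniature: upgrade servers one at a time, health-check each, and roll back the moment something looks flawed. The deploy and healthy hooks are hypothetical stand-ins, not Google's actual tooling.

```python
def rolling_upgrade(servers, new_version, old_version, deploy, healthy):
    """Upgrade servers one at a time; roll everything back on the first bad health check."""
    upgraded = []
    for server in servers:
        deploy(server, new_version)
        upgraded.append(server)
        if not healthy(server):
            for s in upgraded:           # flaw detected: revert every server touched
                deploy(s, old_version)
            return False                 # rollout aborted, service stays on old code
    return True                          # fleet fully upgraded and healthy

# Toy demo: the second server fails its health check, so the rollout reverts.
state = {}
ok = rolling_upgrade(
    ["lb-1", "lb-2", "lb-3"], "v2", "v1",
    deploy=lambda s, v: state.__setitem__(s, v),
    healthy=lambda s: s != "lb-2",
)
print(ok, state)  # False {'lb-1': 'v1', 'lb-2': 'v1'}
```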

A particular section of this article intrigued me, which was how Netflix works; the full article can be found here. Netflix employ a service they created called the “Chaos Monkey”, which is an open invitation to break systems and cause downtime, because their philosophy is that “the best defense against major unexpected failures is to fail often”. They learn from failures, and by doing so their systems become more resilient.

Netflix were quoted saying “Systems that contain and absorb many small failures without breaking and get more resilient over time are “anti fragile” as described in [Nassim] Taleb’s latest book,” explains Adrian Cockcroft of Netflix. “We run chaos monkeys and actively try to break our systems regularly so we find the weak spots. Most of the time our end users don’t notice the breakage we induce, and as a result we tend to survive large-scale outages better than more fragile services.”

So the Chaos Monkey seeks out and terminates instances of virtual machines (AWS instances, in this case) on a schedule, usually within quiet hours. This means Netflix learn where their application is weak and can identify ways to keep the service running regardless of what goes down.
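
For a flavour of the idea, here is a minimal sketch of a chaos-monkey-style job against AWS using boto3. The "chaos-target" opt-in tag and the quiet-hours scheduling are assumptions for illustration; Netflix's actual Chaos Monkey is a far more sophisticated system.

```python
import random
import boto3

ec2 = boto3.client("ec2")

def pick_victim():
    """Choose one running instance that has opted in to chaos testing."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:chaos-target", "Values": ["true"]},       # hypothetical opt-in tag
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    return random.choice(instances) if instances else None

def unleash_the_monkey():
    """Terminate one instance at random; run this on a quiet-hours schedule."""
    victim = pick_victim()
    if victim:
        ec2.terminate_instances(InstanceIds=[victim])
        print(f"Chaos Monkey terminated {victim}")
```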

The thing is, failures happen and everyone accepts this, but how and when you find out about your application's stability is critical; in the case of Netflix, the videos must keep on streaming!

I really like the “Chaos Monkey” philosophy, and it has really intrigued me because it is a different perspective from what I have seen and experienced with scheduled DR testing; what Netflix are essentially doing is constantly trying to bring their own service down.

This got me thinking about EMC VPLEX, which is designed to give you an active-active data center and, more importantly, outage avoidance through mechanisms such as a stretched HA cluster spanning geographic distances. When I think of Netflix and their infrastructure in particular, if automated VMware High Availability restarted their services on the other cluster, the outage window would be smaller, as it would only need an application restart, and they could maintain online services while still hunting for errors.

Everything I seem to read these days is about availability, cloud, users and demand. VPLEX addresses this zero-tolerance approach to availability. I will post an article explaining more about VPLEX soon; in the meantime, have a look here.

So to sign off, enjoy your Xmas and New Year!



Twitter’s Blobstore and libcrunch – how it works

You may have read one of my previous posts, “Arming the cloud”, where I talked about why and how large cloud providers are using commodity hardware with intelligent APIs to separate the dumb data from the intelligent data and give us a better service. Well, in the world of distributed computing and networking you will probably not find anything larger than Twitter.

To me and you, when we upload a photo to the cloud it is simply “in the cloud”; we do not care much for what goes on in the background, all we care about is how long it takes to upload or download. And this has been Twitter's challenge: how do they keep all this data synchronized around the world to meet our immediate demands? It is a common problem for large-scale web and cloud environments: how to let users anywhere in the world use the photo-sharing service while overcoming latency, which ultimately boils down to me and you waiting for the service to work.

So Twitter announced a new photo-sharing platform, but what I am going to look at is how the company manage the software and infrastructure that enable this service. Here is what Twitter released yesterday:

“When a user tweets a photo, we send the photo off to one of a set of Blobstore front-end servers. The front-end understands where a given photo needs to be written, and forwards it on to the servers responsible for actually storing the data. These storage servers, which we call storage nodes, write the photo to a disk and then inform a Metadata store that the image has been written and instruct it to record the information required to retrieve the photo. This Metadata store, which is a non-relational key-value store cluster with automatic multi-DC synchronization capabilities, spans across all of Twitter’s data centers providing a consistent view of the data that is in Blob store.”

Does this sound familiar from my previous posts? Of course it does; this is a classic example of commoditizing storage/compute/network hardware and having the software API intelligently manage the data.
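
To make the quoted write path concrete, here is a toy sketch of the same shape: a front-end hashes the photo ID to pick storage nodes, the nodes persist the bytes, and a key-value metadata store records what is needed to retrieve the photo later. The node names and replication factor are my own illustrative assumptions, not Twitter's code.

```python
import hashlib

STORAGE_NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 2
metadata_store = {}                      # photo_id -> nodes holding a copy
node_disks = {n: {} for n in STORAGE_NODES}

def store_photo(photo_id: str, data: bytes) -> None:
    # Front-end: deterministically choose which storage nodes own this photo
    start = int(hashlib.md5(photo_id.encode()).hexdigest(), 16) % len(STORAGE_NODES)
    owners = [STORAGE_NODES[(start + i) % len(STORAGE_NODES)] for i in range(REPLICAS)]
    for node in owners:                  # storage nodes: write the bytes to "disk"
        node_disks[node][photo_id] = data
    metadata_store[photo_id] = owners    # metadata store: record how to find it

def fetch_photo(photo_id: str) -> bytes:
    node = metadata_store[photo_id][0]   # look up a node that holds the photo
    return node_disks[node][photo_id]

store_photo("tweet-42/photo.jpg", b"...jpeg bytes...")
print(fetch_photo("tweet-42/photo.jpg"))
```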

What you have to consider with a platform like Twitter is speed and cost: they want users to be able to see the tweet with the picture as soon as possible, but they have to be conscious of the cost of delivering the service. Twitter has many data centers with many resources, but the trade-off is always going to be cost.

The next element is reliability. How do Twitter ensure your photos exist in enough locations to be safe, but not in so many that it costs too much? They also have to think about how and where they store the information on servers that indicates where the actual file exists (the metadata). Consider how many photos are uploaded to Twitter each day; that is a lot of metadata to store. What if one of those servers fails? You would lose all its metadata and the service would be unavailable. The traditional remedy is to replicate that data, but replication is costly and time-consuming to keep synchronized, and let's not forget it would use some serious space.

So Twitter introduced a library called “libcrunch”, and here is what they had to say about it:

“Libcrunch understands the various data placement rules such as rack-awareness, understands how to replicate the data in way that minimizes risk of data loss while also maximizing the throughput of data recovery, and attempts to minimize the amount of data that needs to be moved upon any change in the cluster topology (such as when nodes are added or removed).”

Does that sound familiar again? This is the Atmos play from EMC, which uses intelligent APIs to manage all aspects of an element of data. I referred to this last time as an “object store”; the point is that the API itself understands what to do with a particular piece of data in terms of replication, security, encryption and protection. We are no longer administering pools of storage; the API manages the data itself, and in the case of Twitter you have to admit this would be the only way of doing it.
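
As a toy illustration of one placement rule the quote mentions, rack-awareness, here is a sketch that spreads replicas across racks so a single rack failure cannot take out every copy. The topology and hashing scheme are invented for the example and are not libcrunch's actual algorithm.

```python
import hashlib

TOPOLOGY = {
    "rack-1": ["n1", "n2"],
    "rack-2": ["n3", "n4"],
    "rack-3": ["n5", "n6"],
}

def place(object_id: str, replicas: int = 3):
    """Pick one node per rack, walking racks in a hash-determined order."""
    racks = sorted(TOPOLOGY)
    h = int(hashlib.md5(object_id.encode()).hexdigest(), 16)
    chosen = []
    for i in range(replicas):
        rack = racks[(h + i) % len(racks)]    # replicas land on distinct racks
        nodes = TOPOLOGY[rack]
        chosen.append(nodes[h % len(nodes)])  # then a node within the rack
    return chosen

print(place("photo-123"))  # three copies, one per rack
```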

So what does the infrastructure look like? Well, they use cheap hard drives to store the actual files, while the metadata is served from EFD (flash) drives for speed. Think of metadata as a search engine index: it allows you to find articles related to a query very quickly rather than scanning the entire web.

So to sum this up: as we place more and more information into the cloud, which is a blend of distributed compute and network, locating information across it is becoming more difficult and slow. Having APIs control the data according to policies is the right direction to take when building large cloud services.

If you are interested in looking at a cloud solution platform delivering intelligence like this, have a look at EMC Atmos.