Tuesday, 22 January 2013
As is fairly obvious, this blog hasn't been updated for a while. The reason is simple. After many happy years involved with virtualisation, I have now moved into a different area. I now manage channel partners at Blue Coat Systems. No, that's nothing to do with Pontins. Our areas of expertise are security and network optimisation, meaning a blog entitled Virtualisation Tribulations now, sadly, has little relevance.
I enjoyed writing the blog, but it did sometimes get me into trouble. I was outspoken and passionate and am certainly not ashamed of anything I wrote (although, admittedly, I do regret the bit where I predicted the early demise of VMware - somewhat embarrassing now they're a $4bn company!) but I have mellowed with age. Never again do I want to be angrily accused of potentially affecting the stock price of a company for the sake of a premature blog entry. (Although I'm now on very friendly terms with the Director in question!) And my attention span has shortened; 140 characters is about all I can manage these days.
So why not follow me on Twitter for more attempts at acerbic wit and incisive wisdom - and possibly a little less enthusiasm to bite the hand that feeds me: https://twitter.com/rupertcollier
Thanks for reading.
Rupert
Thursday, 1 March 2012
Buttery biscuit base - part I
As published recently (sadly print only, not online) in ComputerScope magazine, Ireland's leading IT magazine. I can only lay official claim to part I; a colleague did part II and I nicked it. But I defy anyone else to get the phrase "buttery biscuit base" into an article on storage virtualisation...
What's stalling virtualisation?
With only around 17-20% of servers worldwide having been virtualised and the hosted desktop model still spluttering into gear, the apparently indisputable benefits of virtualisation don’t appear to be going mainstream. What are the reasons for this and how can we address them?
Depending on which major market analyst firm you listen to, only around one fifth of servers worldwide have been virtualised with a hypervisor. Whilst other reports do paint rosier pictures, widespread adoption, at least in live environments, still seems some way off. Perhaps this is good news for the virtualisation vendors, or at least their shareholders. Viewed from a different perspective, however, if virtualisation is not being deployed as extensively as we presumed it would be, are the supposedly undeniable benefits preached by those vendors really as rock solid as they would have you believe?
The honest answer is probably yes. Those organisations which have deployed virtualisation in whatever guise are unlikely to want to go back to the way we used to do things.
Virtualisation has revolutionised organisations’ IT infrastructures in many ways. It has allowed them to re-utilise and get more out of existing hardware and reduce costs in a variety of areas. Perhaps most importantly it has also allowed them to take a much more pro-active approach to end user IT delivery and therefore productivity.
Although virtualisation is not a new technology, VMware and others have successfully re-invented the wheel. VMware sees virtualisation as a three-stage process:
- Bringing production systems onto virtualised servers
- Tackling the priority one applications like Oracle and Exchange
- Delivering an entire IT infrastructure on demand or “as a service”.
Server virtualisation, however, has brought with it some unforeseen sticking points. The proliferation of virtual machines, spun up for certain tasks or test and development activities and not touched again, can have a major impact. It can affect not only the resources of that hardware but also software asset management and security best practices.
So whilst the three-stage journey mentioned above is a realistic and achievable aim, there are other challenges that key stakeholders are becoming aware of. Whilst servers are certainly a lynchpin in a company’s IT world, to ignore the impact of server virtualisation on other areas would be to cook your lemon cream cheese cake without its buttery biscuit base. Two of these challenges are often the ultimate, decisive factors in the success or failure of a virtualisation project: the management and performance of storage.
Storage and I/O performance are key to the success of any virtualisation project, whether it’s server, application or desktop - particularly the latter. Many applications that, before the advent of virtualisation, enjoyed the luxury of direct attached storage are now having this comfort blanket ripped off them and are being asked to do the same as they always have, if not more, but using shared storage instead.
And many don’t like it. The introduction of just a millisecond of latency is enough to send them into an almighty tailspin. Multi-tenancy also brings considerable challenges with regard to resilience. One big advantage of single-purpose servers was that, in the event of something going wrong, failures didn’t necessarily have an impact on other resources; in a consolidated environment, the robustness and continuity of the SAN infrastructure therefore gain huge importance. Add into the mix the demands of cloud-based applications and virtual desktops, and we can begin to pick out the potential Achilles heel in all of this.
One answer to these demands is the concept of a storage hypervisor. A major enabling factor of any virtualisation product is the de-coupling of software from hardware. Ask VMware what vSphere runs on these days and they will probably give you a slightly quizzical look. Essentially, it doesn’t really matter what the badge on the front of the server says, as long as the chipset is virtualisation-aware (and pretty much all are nowadays), they don’t care who made it. You can mix and match your server manufacturers to suit requirements in exactly the same way as Declan Kidney would pick his rugby team.
So should the same not apply to storage? Are the good old days of antiquated single-vendor storage silos numbered? Well, the trend towards hardware agnosticism is certainly becoming more and more apparent as priorities become performance-based. Just as Kidney would not put Cian Healy at fly half or Tommy Bowe at number eight, so must IT managers be given the flexibility to choose what goes where, not be dictated to by their storage manufacturer.
Buttery biscuit base - part II
One of the magical things about virtualisation is that it’s really a sort of invisibility cloak. Each virtualisation layer hides the details of those beneath it. The result is much more efficient access to lower level resources. Applications don’t need to know about CPU, memory, and other server details to enjoy access to the resources they need.
Unfortunately, this invisibility tends to get a bit patchy when you move down into the storage infrastructure underneath all those virtualised servers, especially when considering performance management. In theory, storage virtualisation ought to be able to hide the details of the media, protocols and paths involved in managing the performance of a virtualised storage infrastructure. In reality, the machinery still tends to clatter away in plain sight.
The problem is not storage virtualisation per se, which can boost storage performance in a number of ways. The problem is a balkanised storage infrastructure, where virtualisation is supplied by hardware controllers associated with each “chunk” of storage (e.g. an array). This means that the top storage virtualisation layer is human: the hard-pressed IT personnel who have to make it all work together.
Many IT departments accept the devil’s bargain of vendor lock-in to try to avoid this. However, even if you do commit your storage fortunes to a single vendor, the pace of innovation guarantees the presence of end-of-life devices that don’t support the latest performance management features. Also, the expense of this approach puts it beyond the reach of most companies, who can’t afford a forklift upgrade to a single-vendor storage infrastructure. They have to deal with the real-world mix of storage devices that result from keeping up with innovation and competitive pressures.
It’s for these reasons that we are seeing many companies turning to storage hypervisors, which, like server hypervisors, are not tied to a particular vendor’s hardware. A storage hypervisor throws the invisibility cloak over the details of all of your storage assets, from the latest high-performance SAN to SATA disks that have been orphaned by the consolidation of a virtualisation initiative. Instead of trying to match a bunch of disparate storage devices to the needs of different applications, you can combine devices with similar performance into easily provisioned and managed virtual storage pools that hide all the unnecessary details. And, since you’re not tied to a single vendor, you can look for the best deals in storage and keep using old storage longer.
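To make the pooling idea a little more concrete, here is a minimal, purely illustrative Python sketch that groups a mixed bag of devices into virtual pools by a rough performance class. The device names, IOPS figures and pool labels are all invented for the example; this is not DataCore code, nor any vendor's API.

```python
# Hypothetical example: pooling heterogeneous storage devices by performance.
# Device details and thresholds are made up; a real storage hypervisor would
# discover these characteristics itself.
from collections import defaultdict

devices = [
    {"name": "array-A-lun0", "media": "ssd",  "iops": 80000},
    {"name": "array-B-lun3", "media": "sas",  "iops": 15000},
    {"name": "old-sata-jbod", "media": "sata", "iops": 900},
]

def classify(device):
    """Assign a device to a rough performance class based on its IOPS."""
    if device["iops"] >= 50000:
        return "gold"      # latency-sensitive workloads
    if device["iops"] >= 5000:
        return "silver"    # general-purpose workloads
    return "bronze"        # archive and rarely accessed data

pools = defaultdict(list)
for dev in devices:
    pools[classify(dev)].append(dev["name"])

for pool, members in sorted(pools.items()):
    print(f"{pool} pool: {members}")
```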
As touched on earlier, storage performance is often the single most critical aspect of any virtualisation project. Or, to turn that on its head, it’s often the reason that rollout pilots fail. Desktop virtualisation is a very particular case in point. The sheer amount (and therefore expense) of storage hardware required to service a truly hosted desktop model, where users can access their complete desktop from any internet-enabled device, can be vast just for a stateful model. Should the requirement be for stateless desktops, where a complete virtual desktop is assembled on demand, the difficulties intensify yet further. Then consider what happens if everyone wants this at the same time. Boot-storming is the bane of every desktop virtualisation vendor’s life. Server virtualisation is, admittedly, not quite as dramatic but right-sizing and adequate performance of storage resources is still a must.
Storage hypervisors provide performance boosts in three main ways:
- Caching
- Automated tiering
- Path management.
Caching.
Within a true software-only storage hypervisor, the RAM on the server that hosts it is used as a cache for all the storage assets it virtualises. Advanced write-coalescing and read pre-fetching algorithms deliver significantly faster I/O response. Cache can also be used to compensate for the widely different traffic levels and peak loads found in virtualised server environments. It smooths out traffic surges and balances workloads more intelligently so that applications and users can work more efficiently. In general, the performance of the underlying storage is easily doubled using a storage hypervisor.
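As a rough illustration of the caching ideas above, here is a toy Python sketch of a RAM block cache with read pre-fetching and write-coalescing. It makes obvious simplifying assumptions (fixed-size blocks, purely sequential pre-fetch, an imaginary backend object) and is not how any particular product implements its cache.

```python
# Toy illustration of read pre-fetching and write-coalescing in a RAM cache.
# "backend" is an assumed object exposing read_block(n) and
# write_blocks(start, list_of_blocks); it stands in for the real storage.
from collections import OrderedDict

class BlockCache:
    def __init__(self, backend, capacity_blocks=1024, prefetch_depth=4):
        self.backend = backend
        self.capacity = capacity_blocks
        self.prefetch_depth = prefetch_depth
        self.cache = OrderedDict()   # LRU order: block number -> data
        self.dirty = {}              # writes waiting to be coalesced

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)          # refresh LRU position
            return self.cache[block]
        if block in self.dirty:                    # dirty but evicted from cache
            return self.dirty[block]
        # Miss: fetch this block plus the next few, betting on sequential I/O
        for b in range(block, block + self.prefetch_depth):
            self._insert(b, self.backend.read_block(b))
        return self.cache[block]

    def write(self, block, data):
        self._insert(block, data)
        self.dirty[block] = data                   # defer the backend write

    def flush(self):
        # Write-coalescing: turn runs of consecutive dirty blocks into a
        # single backend write instead of one I/O per block.
        run = []
        for block in sorted(self.dirty):
            if run and block != run[-1] + 1:
                self.backend.write_blocks(run[0], [self.dirty[b] for b in run])
                run = []
            run.append(block)
        if run:
            self.backend.write_blocks(run[0], [self.dirty[b] for b in run])
        self.dirty.clear()

    def _insert(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)         # evict least recently used
```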
Tiering.
You can also improve performance by tiering the data. All your storage assets are managed as a common pool of storage and continually monitored as to their performance. The auto-tiering technology migrates the most frequently used data, which generally needs higher performance, onto the fastest devices. Likewise, less frequently used data typically gets demoted to higher capacity but lower-performance devices. Auto-tiering uses tiering profiles that dictate both the initial allocation and subsequent migration dynamics. A user can go with the standard set of default profiles or can create custom profiles for specific application access patterns. However you do it, you get the performance and capacity utilisation benefits of tiering from all your devices, regardless of manufacturer.
As new generations of devices appear, such as solid state disks (SSDs), flash memory and very large capacity disks, these faster or larger capacity devices can simply be added to the available pool of storage devices and assigned a tier. In addition, as devices age, they can be reset to a lower tier, often extending their useful life. Many users are interested in deploying SSD technology to gain performance. However, due to their high cost and limited write life cycles, there is a clear need for auto-tiering and caching architectures to maximise their efficiency. A storage hypervisor can absorb a good deal of the write traffic, thereby extending the useful life of SSDs, and, with auto-tiering, only the data that needs the benefits of the high-speed SSD tier is directed there. With the storage hypervisor in place, the system self-tunes and optimises the use of all the storage devices.
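Purely by way of illustration, the sketch below shows the general shape of an auto-tiering pass in Python: promote blocks whose access counts exceed a threshold, demote those that sit idle. The tier names, thresholds and sample data are assumptions made up for the example, not the defaults of any real product.

```python
# Hypothetical auto-tiering pass: hot data moves up a tier, cold data moves
# down. Thresholds and tier names are invented for the example.
TIERS = ["ssd", "fast_sas", "capacity_sata"]   # index 0 = fastest tier

def retier(blocks, hot_threshold=1000, cold_threshold=10):
    """blocks: dicts with 'id', 'tier' (index into TIERS) and 'accesses'
    (I/O count in the last monitoring window). Returns planned migrations."""
    moves = []
    for blk in blocks:
        if blk["accesses"] >= hot_threshold and blk["tier"] > 0:
            moves.append((blk["id"], blk["tier"], blk["tier"] - 1))   # promote
        elif blk["accesses"] <= cold_threshold and blk["tier"] < len(TIERS) - 1:
            moves.append((blk["id"], blk["tier"], blk["tier"] + 1))   # demote
        blk["accesses"] = 0   # reset the counter for the next window
    return moves

# Example: a busy block on SATA is promoted, an idle block on SSD is demoted.
sample = [
    {"id": "blk-17", "tier": 2, "accesses": 5400},
    {"id": "blk-42", "tier": 0, "accesses": 3},
]
for blk_id, src, dst in retier(sample):
    print(f"migrate {blk_id}: {TIERS[src]} -> {TIERS[dst]}")
```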
Path management.
Finally, a storage hypervisor can greatly reduce the complexity of path management. The software auto-discovers the connections between storage devices and the server(s) it’s running on. It then monitors queue depth to detect congestion and routes I/O in a balanced way across all possible routes to the storage in a given virtual pool. With so much reporting and monitoring data available, administrators can achieve a consistent level of performance across disparate devices from different vendors.
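Here is a minimal sketch of that routing decision, again purely illustrative: pick the healthy path with the shallowest queue for the next I/O. The Path structure and the numbers are invented, and real multipathing tracks far more state than this.

```python
# Illustrative least-queue-depth path selection; data structures are invented.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    queue_depth: int        # outstanding I/Os currently queued on this path
    healthy: bool = True

def choose_path(paths):
    """Return the healthy path with the shallowest queue, or None if none."""
    candidates = [p for p in paths if p.healthy]
    return min(candidates, key=lambda p: p.queue_depth, default=None)

paths = [
    Path("fc-port-1", queue_depth=12),
    Path("fc-port-2", queue_depth=3),
    Path("iscsi-1", queue_depth=7, healthy=False),
]
best = choose_path(paths)
print(f"route next I/O via {best.name}")   # -> fc-port-2
```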
To conclude, virtualisation is not a silver bullet, but, to be fair, it was rarely touted as such either. The negative side-effects can often be mitigated; it just depends on how you approach the main one, which is storage. Do you continue to go along with the hardware manufacturers’ three-year life cycle approach, buying overpriced hardware that is deliberately not designed to work alongside anyone else’s kit, or do you take the view that disk is just disk after all, regardless of who made it? The real value is in the software that manages it, and this management can be provided by a storage hypervisor. With the cost of hard drives increasing rapidly due to the floods in Thailand in 2011, never has it been more important to give yourself choice.
Thursday, 15 December 2011
DataCore Customer Conference 2012
If you are a DataCore distributor, channel partner or end user - or even none of the above but would like to take a closer look at what we do - you are cordially invited to our annual Customer Conference on 2nd February in London. Several of our most senior VPs will be in attendance and it promises to be a hugely rewarding day. We would love to see you there - please let me know if you intend to come.
Date: Thursday 2nd February, 2012
Venue: Andaz Hotel, right beside Liverpool Street station, London
Registration: http://tinyurl.com/dxedtgr
Agenda:
08.00 - Registration opens.
09.30 - Keynote 1 – Analyst perspective on the state of the industry.
10.15 - Keynote 2 – DataCore CTO's update on our vision of the industry.
11.15 - A year in the life of DataCore - CEO George Teixeira on 2011.
11.45 - Technical Roadmap – DataCore Chief Engineer's outline on what’s coming.
12.15 - Working alliances – News/updates from selected alliance partners.
14.00 - Technical track (End users and Channel partners). "How to..." sessions:
How to: Hot upgrade from SAN Melody and SAN Symphony to SAN Symphony-V.
How to: Understand and make use of storage allocation units in SAN Symphony-V.
How to: Use Powershell basics / host integration kits.
How to: Avoid the Top 10 support gotchas.
14.00 - Sales track (Channel partners only). "Learn more..." sessions:
Learn more: Taking the market on and winning.
Learn more: Product positioning and sales strategies.
Learn more: Lead management and working with DataCore.
16.30 - Key strategic partnerships – who we like to work with and why.
17.00 - Customer case study.
17.30 - Panel and wrap-up.
18.00 - Drinks reception commences in Masonic Temple (no secret handshake required).
18.30 - Keynote 3: Futurologist David Smith and his crystal ball.
21.00 (ish...) Close
Registration: http://tinyurl.com/dxedtgr
Wednesday, 14 December 2011
HDD = heavy discounts disappearing
Whilst I, as a DataCore employee, certainly don't want to be seen to be taking advantage of other people's misfortune, the fact is that there is a shortage of hard disk supply at the moment due to recent flooding in Thailand. As has been widely reported, Western Digital, Toshiba, Intel and others have been directly affected and, with the situation being as it is, the usual rules of supply and demand are taking over, i.e. the cost of hard drives is going up. This is of course perfectly normal; just ask the hoteliers in Newmarket when there's a big weekend of horse-racing. Room prices double at least. And, just as punters need to be flexible as to where they stay to avoid being fleeced during Guineas weekend, IT consumers also need to find ways around paying through the nose for what are normally very inexpensive hard disks.
A storage hypervisor can provide the answer.
Firstly, what is a storage hypervisor? Well, it's a software layer that sits directly above the storage hardware and, amongst other things, turns whatever type of disk it finds beneath it - be it direct-attached, fibre-attached, iSCSI-attached, SAS, SATA, SSD, Flash, old, middle-aged, new, made by, sorry, bought from HP, bought from IBM, bought from EMC, bought from HDS, bought from whomever - into one completely anonymous pool of resource for the application servers that need it. This of course isn't the official DataCore company description, and a storage hypervisor does a lot more than just unify heterogeneous storage resources, but, for the purposes of this blog entry, I think you get the point.
So how does this help avoid HDD price hikes? There are three main benefits (plus one handy side-effect):
1) Open up the whole HDD supplier market. With a virtualised storage infrastructure, you can buy whatever disk you like - from whomever you like. No longer are you tied in to one manufacturer; DataCore just sees a disk, not the vendor badge on the front. Independence = flexibility = choice = lower cost.
2) Consider SSD. Solid state disks are not as badly affected in terms of supply as "spinning rust", as I've heard HDD described. With previously inexpensive hard drives becoming expensive, the gulf in cost to SSD is rapidly decreasing. So why not try out some solid state memory and use DataCore's auto-tiering functionality to ensure it is not clogged up with "dead" data but hosts just the data your users are accessing most frequently?
3) Sweat the assets. Enabling customers to re-purpose existing or aging kit is one of the main reasons DataCore has done so well, particularly in sensitive financial times. Disk doesn't suddenly stop spinning after 3 years, despite what the tier 1 storage vendors may have you believe when they present you with the support quote for year 4. With a bit of thought, planning and the use of technologies like thin provisioning, existing storage or even server hardware can be re-purposed within your environment and you may not actually need to buy new disk.
And finally, the handy side-effect I mentioned is looking to the CLOUD. The current swear word in the IT industry. It means everything to some, nothing to others. In this scenario, I purely mean off-site storage resource that you can avail of, possibly temporarily, located in a huge data-centre that doesn't belong to you and you'll probably never see. I'll leave the whole security and data retrieval thing to someone else but the fact is, there are plenty of companies out there who will sell you a virtual hotel that your data can live in for a while, or even for ever, perfectly happily, perfectly comfortably and perfectly securely.
How to get it there? Storage virtualisation makes data mobile, dynamic and flexible. With DataCore (and TwinStrata, our cloud gateway partner), you can auto-tier workloads off to your chosen cloud or IaaS ("Infrastructure as a Service") provider at the click of a button. I'll cover this in more detail another time.
So you see, you don't necessarily have to wait another 6-12 months for prices to come back down again; you just have to re-think the way you view (and buy) your disk. We're all doing it with VMware on servers, so what makes you think you can't do the very same thing with storage?
Tuesday, 9 August 2011
Please release me, let me go...
After the announcement last week that the Govt are spending over the odds on IT due to what was termed an "oligopoly" of vendors and suppliers, infrastructure decision-makers may now feel obliged to look at their options. On the topic of hardware, which in many cases is one of the most expensive parts of any project, virtualisation technologies have enabled choice. But not, it would seem, in the storage industry.
Storage seems to be one of the last remaining bastions of quasi-compulsory vendor lock-in – something that is no longer the case with desktops or servers. If we take desktop virtualisation as an example, the main proponents provide the opportunity to access a hosted desktop with whatever device you like, whether that be a desktop, laptop, tablet or smartphone. In fact, diversity is positively welcomed, with Citrix amongst the leaders in the "BYOD" (Bring Your Own Device) initiative and the embracing of what is often described as the consumerisation of IT.
Equally VMware revolutionised the server hardware industry and enabled customers to rationalise their antiquated server purchasing routines, which generally consisted of buying new boxes very cheaply and very often. This was neither green nor cost-efficient and the success of the hypervisor has provided choice of both manufacturer and technology. The old 3-year hardware lifecycle, in this economic climate, has all but disappeared in some verticals and IT admins are being increasingly forced to “sweat their assets” for longer. This is now possible because the hypervisor takes care of software advances during that time.
Storage appears to be different. Companies still seem to grudgingly accept they are locked in to one vendor, regardless of whether that vendor is right for them, both in terms of their products as well as their price. Need a second site for DR? Great – but you’ll pretty much need to buy the same, often very expensive, manufacturer as your primary site, despite the fact your DR site may never even be used! Had enough of vendor A and want to migrate to vendor B? Good luck!
DataCore is the “Switzerland” of storage if you will. We act as a hypervisor for the storage infrastructure, giving customers the choice, both of vendor and of technology. We can often provide zero-day ROI just on hardware savings alone, let alone all the soft cost-savings and increased performance and manageability we offer. When customers have a choice, they can buy what’s right for them, when it is right for them to do so. It also then gives them the option to take ruthless advantage of end of quarter deals throughout the year from whichever hardware vendor or distributor happens to be slightly shy of target.
Cisco recently announced what they see as the top 10 technology trends and number 2 on the list was the unstoppable tsunami of data. Apparently, we're creating a zettabyte of data globally every year and this will only accelerate. All this data needs to be stored and managed so surely performance and enablement of disk choice must be top of the list of critical factors? Not all of that data is needed all of the time but frequently accessed information requires fast disk. Fast disk is not cheap. Added to that, legislation and insurance companies are now demanding secure data archival with realistic accessibility timeframes (i.e. not tape).
With such disparity in data requirements, customers need flexibility to ascertain what’s right for them. There are plenty of smaller storage hardware companies out there that have fantastic solutions at a fraction of the cost of premium label kit. Surely this is therefore now the time for customers to embrace storage virtualisation as they did with server and are now doing with desktop and finally realise some of the cost-savings that can be achieved by avoiding vendor lock-in?
This piece formed the basis of an interview with CRN in the UK, the result of which you can read here.
And, for those of you who still remember the title of this post and are partial to some cracking 80s mullets and moustaches, here's Engelbert for you. Nice.
I'm back...
I know, I know, it's been a while - almost a year in fact - since I last posted any thoughts on here. I never went away, but a few things have certainly changed and, for a while, I had neither the time nor the inclination to continue writing this blog, if I'm honest. All that is different now, though: I have new motivation and energy - a change is as good as a rest, as they say - and a new area of IT to get my teeth into.
In March, I moved on from my role at COMPUTERLINKS, where I was looking after the Citrix franchise for 5 years and, latterly, getting a newly-formed virtualisation division on its feet. COMPUTERLINKS and Citrix parted company at the end of 2010 and, for the few of you who have followed this blog since its inception (for which I am extremely grateful!), you may expect me, at this point, to go into great detail on my thoughts around that decision. I have, though, for better or worse, changed my attitude slightly since then. The controversy that I unleashed in the past (which occasionally got me into scrapes with various figures of authority!) is now all but gone. As a result, whilst continuing to try and air my views honestly and openly, I will refrain from making comments that might get me into trouble! Maybe I have just grown up and the angry little man has disappeared...
My new working life is quite different. I am now in the promised land of "vendor-dom". I work for DataCore Software - still virtualisation (so I can retain the title of this blog for now), just this time it's storage, rather than servers and desktops. Life as a vendor is very different from distribution. You feel you have a lot more control over your own destiny, rather than continually being crunched from both sides - the resellers and the vendors. Having said that, distribution gave me an invaluable grounding in what the channel needs and, more importantly, doesn't need and I hope I can bring some of my experience to bear now.
DataCore makes fantastic technology, the equal of which is pretty much non-existent. We are a hypervisor-type layer that sits above an organisation's storage infrastructure and virtualises their disk arrays, regardless of what type of disk it is and which hardware manufacturer made it. More on our company and technology in later blogs.
Anyway, I'll make some sort of attempt to keep this as up to date as I can. News comes thick and fast in the IT industry and Twitter just doesn't cut it sometimes. I would love to read the odd reaction from time to time too, so please do comment.