Thursday 11 September 2008

On a somewhat more personal note...

There comes a time in most men's lives when they must kiss a fond farewell to youth, freedom and singleton-ism (if that's even a word) and devote their undivided attention to the future mother of their children. That time has now, happily, arrived for the otherwise completely devoted creator of this blog. It is with perhaps a touch of nervous excitement, but every confidence, that, at about 1:00pm on Friday 19th September 2008, I will be standing by the altar of the Catholic church in Newmarket, about to turn round and watch my esteemed marketing colleague at COMPUTERLINKS, Maria Curley, walking up the aisle, preparing to admit she wants to spend the rest of her life with me.

I honestly couldn't be a happier chappy but, ultimately, something must give. Blogs and new wives just don't mix, so the blog will just have to get used to life with its master as a married man and take a break for a couple of weeks.

I shall resume my ramblings upon my return when, I happen to know, there will most certainly be one or two things to discuss. (It's VMworld next week, don't forget, and I can safely say now that Citrix may well out-manoeuvre VMware on the announcement front once again - the first time being the XenSource acquisition the day before VMware's IPO. Sorry, but I am not permitted to say any more than that at this stage.)

Until then, I bid thee goodbye. I promise to think of you all, suffering the early UK winter, whilst I contemplate whether to take a 6 or a 7 iron on the first par 3 of the Old Course at Vilamoura.

My tips for a wife? Find someone who organises events for a living and has previously been called The Incredible Sleeping Woman whilst on holiday. You need do next to nothing for the wedding and you get to play golf on your honeymoon while she sleeps. That, my friends, is the perfect wife!

Adios amigos.

Green IT not a spin-led "greenwash"

Again, just like the "Virtualisation - a bit of a snapshot" article below, this piece was written a while ago for a broader audience than this blog typically addresses, but I would like to post it nonetheless.

Whilst doubters remain, cynically citing vendor marketing spin when it comes to the real impact that technologies such as virtualisation have on energy consumption and carbon footprints, companies such as Citrix do in fact deliver undeniable, measurable improvements.

Getting fewer servers to do the same job, using XenServer for example, is the foundation. Other products, particularly within the Citrix portfolio, such as XenApp Platinum or the Citrix Online services, enable secure remote access and fuel-saving work-from-home environments, and these also contribute substantially.

As most IT managers are now aware, server virtualisation vastly reduces the number of servers required to service the environment. This impacts not only the energy consumed in powering the devices themselves, but also the considerable air conditioning required to keep data-centres at optimum temperature.

Citrix incorporate a product called Provisioning Server into the Platinum version of their XenServer solution. Taking things to the extreme, this could provide another stepping stone towards the Holy Grail of using energy only when it is actually required. Instead of having a fully functional data-centre running all day and all night, whether it's actually in use or not, Provisioning Server, in tandem with one or two other solutions, provides the capability to fire servers up and take them down again according to the time of day and the usage requirements of the organisation.

In an ideal world, an administrator could effectively dismantle the data-centre, or reduce it to a bare minimum, at 10pm and then start it all back up again at 6am the next morning. This could involve the servers running the same workloads as the previous day or completely different ones. Provisioning Server can transform a server from, say, an SAP server into a Navision server in just a few minutes: the workloads are stored on virtual disks at the back-end and streamed out to the hardware as and when required.
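To make that concrete, here is a minimal, purely illustrative sketch of the model just described, where a workload lives in a back-end disk image and a server changes role simply by being pointed at a different image and rebooted. Every name and function below is a hypothetical stand-in, not the real Citrix API.

```python
# Hypothetical sketch only: illustrates the streamed-workload idea, not the
# actual Provisioning Server interface.

VDISK_STORE = {
    "sap-prod": "/vdisks/sap-prod.vhd",
    "navision-prod": "/vdisks/navision-prod.vhd",
}

# Which back-end image each physical box streams at boot.
assignments = {"blade-01": "sap-prod"}

def retarget(server, workload):
    """Point a server at a different virtual disk; on its next boot the new
    workload is streamed to the same hardware in minutes."""
    assignments[server] = workload
    print(f"{server} will boot {VDISK_STORE[workload]} on next start")

# Overnight, blade-01 stops being an SAP server and becomes a Navision one.
retarget("blade-01", "navision-prod")
```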

Taking this one step further, Citrix EdgeSight provides metrics on the end-user experience, and a complementary vendor, RES Software, manages hardware remotely. So, rather than performing these tasks manually, the administrator could set up rules based on the metrics delivered back from the EdgeSight server and configure RES Wisdom to shut down servers in the evening when demand drops below a certain point. The next morning, when demand begins to grow again, RES wakes up the machines and Provisioning Server churns out the workloads. Again, demand could also determine what type of server each turns out to be.
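As a rough sketch of what such a rule might look like, the snippet below scales a farm up and down against a session count. The thresholds, the metric feed and the power actions are my own assumptions standing in for EdgeSight metrics and RES Wisdom actions; they are not real product APIs.

```python
# Illustrative only: the prints stand in for real power-control actions.

SESSIONS_PER_SERVER = 40   # assumed capacity of one server
MIN_SERVERS = 2            # floor, so the farm never disappears entirely

def servers_needed(active_sessions):
    needed = -(-active_sessions // SESSIONS_PER_SERVER)  # ceiling division
    return max(needed, MIN_SERVERS)

def reconcile(active_sessions, powered_on, standby):
    """Power servers down (evening) or up (morning) to match demand."""
    target = servers_needed(active_sessions)
    while len(powered_on) > target:              # demand falling
        server = powered_on.pop()
        standby.append(server)
        print(f"shutting down {server}")         # a RES Wisdom action, say
    while len(powered_on) < target and standby:  # demand rising
        server = standby.pop()
        powered_on.append(server)
        print(f"waking {server}")                # wake it, then stream its workload

# 10pm: only 45 sessions left, so two servers suffice.
on, off = ["srv1", "srv2", "srv3", "srv4"], []
reconcile(45, on, off)
```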

These products will undoubtedly integrate more closely with one another to enable something approaching a 100% dynamic, on-demand data-centre. Citrix’s soon-to-be-released Workflow Studio will then add a further piece to the puzzle, providing a graphical interface through which these processes can be quickly and simply put in place. Throw in remote infrastructure management solutions such as those from Avocent and you create a remarkably flexible, energy-efficient data-centre.

Moving to the desktop: when you consider that a PC uses around 75 watts of power and a Wyse thin client requires about 10 to 15% of that, CFOs will surely demand increasingly sound justification from their CTOs for not choosing server-based computing or application delivery methods over traditional desktop scenarios.
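As a back-of-the-envelope check on those wattages, here is a quick calculation for a hypothetical 250-desktop estate. Only the 75-watt and 10-15% figures come from above; the fleet size, usage hours and tariff are my own assumptions for illustration.

```python
# Rough sums only; tariff, hours and fleet size are assumed.

PC_WATTS = 75
THIN_CLIENT_WATTS = PC_WATTS * 0.125   # mid-point of the 10-15% range, ~9 W
DESKTOPS = 250
HOURS_PER_YEAR = 8 * 250               # an 8-hour day, 250 working days
PENCE_PER_KWH = 10                     # assumed UK tariff

def annual_kwh(watts):
    return watts * DESKTOPS * HOURS_PER_YEAR / 1000

saving = annual_kwh(PC_WATTS) - annual_kwh(THIN_CLIENT_WATTS)
print(f"~{saving:,.0f} kWh saved per year, roughly £{saving * PENCE_PER_KWH / 100:,.0f}")
```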

More dedicated device management also comes into the frame – not only in the data-centre but also on the desktop. RES Software, for example, can enforce a power-down of PCs left on overnight. For a company with 150 employees where just 15% of the PCs are left running overnight, the total cost of the software is offset by the energy savings in just nine months. Such is the effect of high energy prices that the software pays for itself in under a year on that one function alone.
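The same sort of rough arithmetic shows why a payback like that is plausible. Only the 150 staff and 15% figures come from the paragraph above; the overnight hours, tariff and implied software cost below are assumptions for illustration.

```python
# Rough sanity check of the overnight power-down claim; hours and tariff
# are assumed, not from the article.

EMPLOYEES = 150
LEFT_ON = 0.15                   # share of PCs left running overnight
PC_WATTS = 75
OVERNIGHT_HOURS = 14 * 365       # assume ~6pm to 8am, every night
PENCE_PER_KWH = 10               # assumed tariff

wasted_kwh = EMPLOYEES * LEFT_ON * PC_WATTS * OVERNIGHT_HOURS / 1000
annual_saving = wasted_kwh * PENCE_PER_KWH / 100
print(f"~{wasted_kwh:,.0f} kWh wasted per year, about £{annual_saving:,.0f}")
# A nine-month payback would put the software cost at roughly
# three-quarters of that annual figure.
```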

IT departments are now becoming cost centres with their own profit & loss accounts, making energy reduction IT’s problem. Reed Managed Services are Citrix’s flagship case study for green IT and reduced energy usage. Admittedly, other non-IT measures were also implemented, but with 64-bit Citrix virtualisation solutions (and by rolling out thin clients to almost all their users), Reed: a) got 100 users on a blade instead of 29, b) made the company 100% carbon neutral, c) saved 26% of their IT budget and d) won a Green Oscar.

Not bad for vendor marketing spin!

Friday 5 September 2008

The future of the IT industry

Thanks to the market-shaping friends I keep, I have received (on extremely good authority) some information that I think will be of great interest to you. In the very near future, you will see several ground-breaking mergers and acquisitions in this wonderful, crazy IT world that we work in. Please keep these under your hat for now, as I'm pretty sure they are supposed to remain absolutely confidential for the time being, but, just so that I can say: "You heard it here first", here they are:

Kensington will acquire Ipswitch and re-brand as Chelsea Tractor Boys.
Apple will merge with Blackberry and re-brand as Crumble.
Blue Coat will merge with Red Hat and re-brand as Purple Haze.
Gordon Ramsay Enterprises are looking to add Sage and Juniper to the mix.
Extreme Networks, Leostream and Tumbleweed will form a major joint venture, possibly to be re-branded as Exstream Weed.
Riverbed will flow into Seagate (groan).
Computer Associates will buy the boxer shorts division of Calvin Klein, re-branding as CACK.
Adder and Brother will become one and target the lucrative adoption market.
And finally, I have heard that Expand and Palm are approaching the climax of their negotiations with Siemens.

If you happen to get wind of any more interesting developments on the grapevine, feel free to add them.

Thursday 4 September 2008

Virtualisation - a bit of a snapshot

Note: I wrote this for our company's newsletter, so for a very broad audience. It may be somewhat basic for the hardcore amongst you, but I'll post it anyway.

Currently one of the hottest topics in the IT marketplace, virtualisation has opened up new and exciting avenues of expertise (and, ultimately, revenue) for the entire IT channel. Many billions of dollars have been pocketed by the likes of VMware, Citrix, Microsoft and many others off the back of a) something incredibly simple, if we're being honest with ourselves, and b) something incredibly old (in IT years, that is). Virtualisation, as a term, has come to mean many different things these days, but, for the purposes of this article, I am talking about server virtualisation.

Virtualisation is hardly revolutionary, even if it still seems fresh and shiny to a lot of us. You wouldn't manufacture a carriage for each and every person wanting to travel on a train, you wouldn't provide a drawer for each and every piece of cutlery you wanted to store and you wouldn't send an articulated lorry from London to Inverness with one box on it. So why does it appear to have taken so long for these principles to be applied to servers? Whose idea was it to dedicate a whole server to one single application anyway, when that server should supposedly be capable of "changing the way you do business" or whatever the latest vendor marketing catchphrase happens to be?

Well, IT vendors have been their own worst enemies, up to a point. Software being as infuriatingly unreliable as it is useful, administrators found out many years ago, to their cost, that the more you ask a server to do, the more likely it is to fail at one or more of those tasks. Install just one application per server and you give it a fighting chance.

Unlike that piece of cutlery, whose sole function in this world is to reduce your steak and chips to pieces you can eat with some semblance of manners, software has a thousand and one things to do on top of delivering your users the information they want from an application. Added to that, unlike a software application, a fork doesn't have to deal with millions of people all over the world thinking up increasingly ingenious ways to kill it. So, in essence, the more you cut down on what a server is expected to provide, the more likely it is to provide it.

I am being overly simplistic here, of course. But my point is that, despite the fact that virtualisation solutions were invented a long time ago, until the last few years there were two main reasons they never really took off.

Firstly, software applications and operating systems weren't technically capable of ignoring their neighbours on a server; each application was like a spoilt child. It wanted all the hardware resources to itself and if anything else tried to butt in, it complained bitterly, then sulked, then went home and took its football with it.

Secondly, tin got cheaper. So cheap, in fact, that it didn't really make a lot of difference whether you had racks and racks of servers doing very little. Tin is still cheap today, perhaps as cheap as it will ever be, but two things have changed: organisations are coming under increasing pressure to reduce waste and run their departments in a more responsible, ecological fashion, and, in the last couple of years, the costs of running that cheap tin have sky-rocketed.

The development of the modern hypervisor and improvements in general software architecture changed some of this. Although IBM developed virtualisation technology back in the 1960s, long before VMware brought out their ESX product, it is widely accepted that VMware pioneered modern x86 virtualisation. The ESX hypervisor effectively stood as a sort of strict parent in the midst of those spoilt children, making sure none of them got too obnoxious. It sat between the mechanics of the server and the software running on it, in this case an operating system, relaying instructions back and forth between the two.

Then along came Professor Ian Pratt of Cambridge University and his open source Xen project. He and his team decided to make a hypervisor the way it would have been made all along if anybody had actually thought about it properly. Rather than having the hypervisor intercept and translate every privileged instruction (emulation), why not get the component parts to talk to each other directly (paravirtualisation)? The guest operating system knows it is being virtualised (and, with the latest chipsets, so does the hardware), so everything runs quicker and more stably. In fact, this approach was so successful that not only did Citrix agree to shell out 500 million dollars for the project's commercial arm in 2007, but Microsoft, late to the party, also went down this route with their Hyper-V product.
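For the curious, here is a toy sketch (nothing like real hypervisor code, which works at the instruction level) of the distinction: under emulation the hypervisor must trap and decode every privileged operation an unmodified guest attempts, whereas a paravirtualised guest knows about the hypervisor and calls it directly.

```python
# Purely conceptual illustration. The point is the extra trap-and-decode
# hop that emulation pays for and paravirtualisation skips.

class ToyHypervisor:
    def hypercall(self, op, *args):
        # Paravirtualisation: the modified guest asks explicitly; one cheap hop.
        return f"performed {op}{args}"

    def trap_and_emulate(self, raw):
        # Emulation: intercept the instruction, work out what the guest
        # meant, then perform it on the guest's behalf.
        op, args = raw["op"], raw["args"]   # stand-in for costly decoding
        return self.hypercall(op, *args)

hv = ToyHypervisor()
# Unmodified guest: the hypervisor must intercept and interpret.
print(hv.trap_and_emulate({"op": "update_page_table", "args": (0x1000,)}))
# Paravirtualised guest: it simply asks.
print(hv.hypercall("update_page_table", 0x1000))
```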

Software vendors are also starting to play their part. Making applications act less like spoilt children was their duty all along but, increasingly, they are also adapting them to better suit virtualised environments. Add in the soaring cost of electricity and a general malaise in the global economy (I refuse to succumb to sensationalism and mention the R word), and IT directors now owe it to themselves and their businesses to look at virtualisation simply as a cost-saving mechanism, irrespective of the IT management advantages it offers.

Companies such as Citrix (who, arguably, have been "virtualising" applications with their Presentation Server product, now XenApp, for over 15 years) are taking the foundation that is server virtualisation and building on it, to great effect. Citrix call it the Dynamic Data Centre. By this, they mean adapting what is currently little more than a static data repository into a constantly changing, ever-flexible delivery hub for information and data.

Instead of server farms running all day and all night, requiring irritating (albeit scheduled) downtime when maintenance needs carrying out, and always running the same workload (i.e. the same applications), Citrix products enable those servers to be put into production as and when required, and, if you so wish, with a different workload on them each time. Automating all of this according to usage metrics, and applying the same principles to the desktop as well as the data centre, are the next exciting steps on a journey that started almost 50 years ago but is only now becoming a reality.