Tikiri Wanduragala, Lenovo’s EMEA x86 Server Systems Senior Consultant, writes for ThinkFWD on software-defined and hyper-convergence – two of technology’s hottest topics.
I have had lots of great feedback on these pieces so far, including some insightful suggestions for future pieces. One of those was a request for an article looking at software-defined, hyper-convergence, and the relationship between the two.
What do we mean when we say ‘software-defined’?
What we’re trying to get across with this term is virtualisation. When you virtualise a server and run multiple instances on it, you take something that used to be physically tied to a piece of hardware and turn it, in effect, into a binary file. That means you can load multiple copies, enabling you to better utilise your server and gain much greater flexibility.
That’s what’s happening with servers, but variants of this technology can also be used in networking and storage. Once you’ve combined all three elements under one umbrella, you then have the possibility of what’s called ‘software-defined’, meaning the entire environment can be controlled by software.
The benefits of this software-defined approach include getting closer to the application, which is important because, as I have outlined before, that’s where the business logic is currently taking us. It also makes for greater automation and simplicity – because you don’t have to physically move hardware around as much.
So the whole industry is now gearing up to this change as virtualisation spreads across storage, networking and servers.
As I also touched upon in my ‘what’s in store for datacentres in 2016’ piece, this is all to do with silos. Businesses used to be made up of silos of databases, servers, storage, applications, networks and so on, but these all have to merge in the software-defined world.
Essentially, the nature of silos is now changing. To date, you’ve had a network silo, for example, where all your networking experts built the best network they could. The same applies for storage, servers, etc. In the software-defined world, however, these silos will be based around the applications. Again, that’s where the business logic lies.
The hype behind hyper-converged
To understand the ‘hyper’ element, we first have to understand converged infrastructure – primarily blade systems, in other words. What made blades revolutionary was that you had one big box acting like its own little datacentre: it had servers, switches, networking and storage, as well as a management tool to harmonise everything, so you were essentially converging different technologies.
Each of these elements could be chosen. You could choose the storage fabric, server types and networking switches – whatever suited your needs.
Hyper-converged infrastructure locks these choices down – fixed storage, fixed networking switches, fixed server capabilities and so on – and then virtualises the whole thing. What you now have is a building block, which is excellent for deploying things quickly. You can buy one box with the clever software on it, then buy another and scale out that way.
The big thing about hyper-convergence is the linking and clustering of the storage units. Until now, storage boxes sat separately from the server and the two talked to each other through clever protocols. In the hyper-converged world, the storage sits inside the server, making use of the intelligence that’s there – which obviously explains why all this is playing so heavily in the storage market. Because it’s redefining it, in fact.
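To make the building-block idea concrete, here is a minimal sketch – not any vendor’s real API, and with purely assumed per-node figures – of how identical appliances pool their node-local storage into one cluster-wide resource as you add boxes:

```python
# Illustrative sketch only: each hyper-converged node is a fixed, pre-integrated
# building block of compute and storage. The software layer pools the storage
# that sits inside each server into one shared cluster pool, so scaling up is
# simply a matter of adding another identical box.

class Node:
    """One hyper-converged appliance with fixed (assumed) specifications."""
    CORES = 32        # assumed per-node compute
    STORAGE_TB = 20   # assumed node-local storage, contributed to the pool

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self):
        # Deployment is buying another identical building block.
        self.nodes.append(Node())

    @property
    def total_cores(self):
        return sum(n.CORES for n in self.nodes)

    @property
    def pooled_storage_tb(self):
        # Node-local disks are linked and clustered into one logical pool.
        return sum(n.STORAGE_TB for n in self.nodes)

cluster = Cluster()
for _ in range(3):
    cluster.add_node()

print(cluster.total_cores, cluster.pooled_storage_tb)  # 96 cores, 60 TB pooled
```

The point of the sketch is the scaling model: capacity and compute grow together, in fixed increments, simply by adding nodes – there is no separate storage array to size or attach.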
So there’s the link between the two: hyper-converged is a subset of software-defined.
What is Lenovo’s role in all this?
Well, in both cases what you’re moving to is a world where the server becomes very important. Roles that used to belong to dedicated hardware are now controlled by the server, because that’s where everything is; it’s where the applications run. And so Lenovo’s role is to build the servers that will underpin all this – servers of all different specifications that meet different customer needs.
We’ve also partnered with a major software provider for their products to run on our servers – which, in turn, offer the reliability needed in software-defined environments. So, in effect, we’re very well placed – we have the critical hardware needed to run such software, without being tied to any particular one.
And why is all this happening now? Well, flash memory is a big part of it, as it enables you to do far more in the server. Let me explain. If you’re buying rotating drives, you’re doing so for two reasons – performance and capacity. They’re locked together. So, if performance isn’t great, you’re forced to buy lots of capacity, because you can’t get one without the other.
Crucially, flash storage lets you de-couple performance and capacity. That means considerably fewer spinning disks in the datacentre, which improves performance, power and cooling, and reduces the physical space being used. It’s like a ripple effect across the datacentre, with flash storage at the centre.
This article was originally published on Think Progress.