Edge computing and hyperconvergence: the formula for maximum uptime?
Published: Thursday, 29 August 2019 07:33
There is plenty of hype around edge computing and hyperconvergence, but how useful are these technology approaches to business continuity? Alan Conboy explains why the combination can help to reduce downtime to the absolute minimum.
There is no doubt that we live in an increasingly technology- and data-driven world. Because of this, IT downtime is bad news for any business, not least because of the impact it can have on company reputation, capital, and customer satisfaction. Recent high-profile cases, such as the data centre outage that cost British Airways an estimated £100 million after the airline cancelled more than 400 flights and stranded 75,000 passengers in a single day, are prime examples of how much downtime can actually cost a business.
Of course, British Airways is a well-established business and is generally able to recover from the monetary and reputational backlash. But what about smaller businesses? The fact is that they could be so adversely affected by an outage that they never fully recover. What is more, an established business is far more likely to be able to afford the latest technology for detecting sophisticated threats and protecting valuable data, as well as the expert IT staff to manage it all.
However, not all is lost. There is an answer to the challenges facing SMEs and distributed enterprises as they try to keep pace with ever more sophisticated cyber threats and the growing requirement for the business to remain ‘always-on’. Smaller businesses, and distributed enterprises in industries such as retail or financial services, can look to the latest technologies, such as hyperconvergence and edge computing, which offer high availability, lower total cost of ownership (TCO), and easy deployment and management. By investing in technologies that are simple to manage and do not require onsite IT experts, distributed organizations can achieve a sophisticated strategy that mitigates the risk of costly downtime.
No more single point of failure
Edge computing is all about putting computing resources close to where they are needed most. In a traditional architecture, devices at branch locations, such as point-of-sale cash registers in retail stores, all connect to a centralized data centre; that data centre becomes a single point of failure, and an outage there can affect every branch location.
By putting an edge computing platform at each branch location, a failure at the central data centre no longer brings everything down, because each branch can run independently of it. A solid virtualized environment at the branch can run all of the different applications needed to provide customers with the high-tech services they have come to expect.
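To make that ‘local-first’ idea concrete, here is a minimal, hypothetical sketch in Python. All names and endpoints are invented for illustration: checkout depends only on resources inside the branch, and the central data centre is treated as optional, so an outage at the centre never blocks a sale.

```python
# Minimal sketch of a "local-first" branch service (all names are hypothetical).
# Checkout depends only on resources inside the branch; the central data centre
# is treated as optional, so an outage at the centre never blocks a sale.
import socket

CENTRAL_DC = ("analytics.hq.example.com", 443)  # assumed central endpoint

local_ledger = []   # stands in for services running on the local edge cluster
pending_sync = []   # sales waiting to be forwarded to the central data centre

def central_reachable(timeout: float = 1.0) -> bool:
    """Best-effort probe; a failure here must never stop checkout."""
    try:
        with socket.create_connection(CENTRAL_DC, timeout=timeout):
            return True
    except OSError:
        return False

def process_sale(sale: dict) -> None:
    local_ledger.append(sale)         # the branch always completes the sale locally
    if not central_reachable():
        pending_sync.append(sale)     # queue for later; the branch keeps trading

process_sale({"item": "coffee", "amount": 2.50})
print(f"recorded locally: {len(local_ledger)}, awaiting sync: {len(pending_sync)}")
```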
Many might ask why this has not been done before, and there is a simple answer: until very recently, it was cost-prohibitive to implement the kind of infrastructure needed to make this work, namely highly available infrastructure. Creating a highly available virtual infrastructure involved a sizeable investment in a shared storage appliance, multiple host servers, hypervisor licensing, and then a disaster recovery solution on top.
How hyperconvergence makes it work
Hyperconvergence has consolidated all those components into an easy-to-deploy, low-cost solution. However, not all hyperconverged infrastructure (HCI) solutions are suited to edge computing: some are still designed like traditional virtualization architectures and emulate SAN technology to support that legacy design. This storage emulation is inefficient, requiring bigger systems whose cost is hard to justify at the edge.
The answer is HCI with hypervisor-embedded storage, which can deliver smaller, cost-effective, highly available infrastructure that allows each branch location to run independently, even if the central data centre goes down. A small cluster of three HCI appliances can keep running despite drive failures or even the loss of an entire appliance. There is no way to prevent downtime completely, but edge computing, backed by the right highly available infrastructure, can insulate branches so that they continue to operate independently.
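Why three appliances rather than two? One common way to reason about it, assuming a simple majority-quorum model (the exact mechanism varies by HCI product, so this is an illustration rather than any vendor's specific design), is shown in this short sketch:

```python
# Back-of-the-envelope check of why three appliances is the usual minimum for
# high availability, assuming a simple majority-quorum model.

def majority_quorum(nodes: int) -> int:
    """Smallest number of appliances that must stay up to keep a majority."""
    return nodes // 2 + 1

def tolerated_failures(nodes: int) -> int:
    """Appliances that can fail while the cluster keeps running."""
    return nodes - majority_quorum(nodes)

for n in (1, 2, 3, 4, 5):
    print(f"{n} appliance(s): quorum {majority_quorum(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")

# Under this model a 3-node cluster tolerates 1 failure, while 2 nodes tolerate
# none: this is why three small appliances can ride out a drive or appliance loss.
```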
With HCI, the central data centre is still a vital piece of the overall IT infrastructure. It consolidates data from all of the branch locations for the analysis that drives key business decisions, and that does not need to change with edge computing. On-site edge computing platforms can provide local computing while sending key data back to the central data centre for reporting and analytics. By removing the central data centre as a single point of failure, an outage at any one location need not have far-reaching effects across the whole organization.
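As an illustration of that reporting path, here is a hypothetical store-and-forward sketch (the endpoint, record format, and function names are invented, not a specific vendor API): each branch queues key data locally and uploads it to the central data centre whenever the link is available, retrying later if it is not.

```python
# Illustrative store-and-forward sync (names and endpoint are invented; this is
# not a specific vendor API). Each branch queues key data locally and uploads it
# to the central data centre when the link is available, retrying otherwise.
import json
import urllib.request

CENTRAL_URL = "https://analytics.hq.example.com/ingest"     # assumed endpoint
outbox = [{"branch": "store-042", "sales_total": 1824.75}]  # locally queued records

def try_sync(records: list, url: str = CENTRAL_URL, timeout: float = 2.0) -> bool:
    """Attempt one upload; on failure the records simply stay queued locally."""
    if not records:
        return True
    req = urllib.request.Request(url, data=json.dumps(records).encode(),
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            records.clear()        # drop local copies only after a successful upload
            return True
    except OSError:
        return False               # central site unreachable; branch carries on

# A scheduler (cron job, systemd timer, or a simple loop) would call this periodically:
if not try_sync(outbox):
    print("central data centre unreachable; will retry on the next cycle")
```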
A step in the best direction
The importance of high availability grows as these technologies become increasingly commonplace in every aspect of business, industry, and our daily lives. With change happening fast, HCI and edge computing are quickly replacing traditional virtualization infrastructure and putting that necessary high availability within reach of organizations of every size.
Small businesses and those with distributed branches should consider HCI for highly available edge computing, keeping their business ‘always-on’ in this rapidly growing digital age.
The author
Alan Conboy, Office of the CTO, Scale Computing.