
DHS Gives The Latest Mandatory Policy on Medical Data Management in Australia for 2020


The medical data management system in Australia is not where it should be. Dr. Bernard Robertson-Dunn, who chairs the health committee of the Australian Privacy Foundation (APF), says that rather than focusing on improving patient health or reducing the cost of healthcare, the government is simply putting patients’ data at risk.

According to the Notifiable Data Breaches (NDB) scheme report covering April 2018 to March 2019, data breach notifications alone increased by 712 percent.

60 percent of those data breaches were recognized as malicious attacks, with 28 percent of the attacks coming from unknown sources.

Human error accounted for 55 percent of the notifications in the health sector and 41 percent in the financial sector.

Across all industries, 35 percent of data breach notifications were set off by human error, such as the loss of a data storage device or the unintended disclosure of personal information. We wrote about more of these cybersecurity statistics here.

All in all, it’s safe to say that both the data storage systems and infrastructure and the medical community itself have failed patients when it comes to the privacy and security of their information. That is why stronger rules have been put in place to put patients’ worries about privacy at ease.

Mandatory DHS Rules, Requirements and Consequences

The Department of Human Services (DHS) plays the role of ensuring that health providers comply with the requirements of the Medicare Benefits Schedule (MBS) and other programs, including incentive payment programs.

To help maintain the privacy of patients’ personal information, the DHS has adopted new requirements for third-party software providers as part of the Digital Transformation Agency’s (DTA) Secure Cloud Strategy. Under the Secure Cloud Strategy, the DHS requires all applicable Australian software companies to undergo a process of accreditation and compliance for their data management practices.

The new policy applies to any party using cloud-hosted services that connect with the DHS to provide services such as Medicare, PBS, NDIS, DVA, MyHealthRecord, Child Care, and Aged Care.

The accreditation process involves earning certification on the Australian Signals Directorate’s Certified Cloud Services List (CCSL) and maintaining assurance that all data will remain within Australian jurisdiction. Additionally, the policy encourages physical separation of the infrastructure as well as limiting access to patients’ private data to those with Negative Vetting 1 (NV1) security clearance.

Failure to comply with the DHS’s rules and policy under the Secure Cloud Strategy by the April deadline can result in major consequences, including fines, suspended licenses, and ultimately the loss of your practice.

Managing DHS Requirements and Running Your Practice

Under the DHS’s policy, all practices are required to use DHS-certified infrastructure to ensure the privacy of their patients. So, how do you manage that and still run your practice? The answer is managed cloud services, i.e., medical hosting.

What is Medical Cloud Hosting?

Medical cloud hosting is private hosting (or, more specifically, private cloud hosting). When we talk about cloud hosting, we’re referring to hundreds of individual servers that work together as one. With cloud hosting, there’s no need for an on-premise infrastructure that costs money, space, and time in maintenance. With cloud hosting, everything is managed and stored for you via a cloud service provider.

In general, you have the choice of public or private cloud hosting. Of course, medical hosting is private, but for reference, here’s the difference:

Public cloud hosting involves a standard cloud computing framework consisting of files, storage, applications, and services that are available on a public network. (Think Gmail).

Private cloud hosting comprises the same things, only all of those things are protected by a corporate firewall controlled by the corporate IT department. (Think Microsoft Exchange, as it requires authorized users and a secure VPN connection).

In other words, private medical cloud hosting equals privacy and protection. If you’ll recall, the DHS policy applies to all third parties using cloud services that connect with the department to deliver services such as Medicare, PBS, DVA, NDIS, and so on. In practice, that makes private hosting the only viable option.

It’s also a practical necessity, given that it’s DHS compliant, ISO certified, and handled offsite by your service provider while remaining within Australian jurisdiction.

How Much Should I Budget for Cloud Hosting?

Cloud computing and data management in a compliance-bound industry aren’t going to be cheap, but they will prove cost-effective in the long run. Ultimately, your budget will come down to your industry, the data capacity you need, managed services, private versus public cloud hosting, and so on.

Of course, if you stick with your outdated, on-premises hardware, you’re looking at heaps of unnecessary spending on system maintenance, upgrades, and equipment, not to mention paying an IT team to take care of it all for you.

Is your current provider DHS compliant?

If you’re a medical practitioner responsible for running a practice and wondering where to turn for your medical cloud hosting, Greenlight ITC is here to help.

We are one of the few providers of DHS-certified cloud infrastructure for medical hosting, and your ultimate technology solutions partner. Our medical cloud hosting capabilities can make your staff more efficient and, ultimately, your business more profitable. Not to mention, we’ll keep you safe from phishing scams and serious data breaches so that your patients can rest easy knowing their private information is safe while they’re getting the care they need.

Greenlight is also a Tier-1 Microsoft Azure Partner and 2017 Watchguard ANZ Partner of the Year.

If you want to know more about how much switching to private medical hosting is going to cost you and your practice, your best bet is to call Greenlight ITC at 02 8412 000 to get a custom quote today. You’ll get to speak directly with one of our IT experts (aka, Data Doctors) who will walk you through the entire process.

Top 5 Risks When You Stay with Windows 7 this 2020


Windows 7 End of Life

Microsoft is no longer supporting Windows 7: after a ten-year run, extended support officially ended on 14 January 2020.

The purpose of this direction is for the tech giant to pour its ample resources into more lucrative, newer technologies. Essentially, Microsoft has squeezed as much juice as it can out of Windows 7. It doesn’t make sense for them to continue providing technical assistance and software updates that protect PCs using the outdated program.

The Impact on Your Businesses

First and foremost, Windows 7 is still being used on 39% of all PCs.

A year from now, in January 2021, Windows 7 is still projected to be running on 18.7% of all PCs, which accounts for around 281 million machines.

What does that mean, exactly? Even at under 20% of total PC usage, that still leaves over 280 million systems running Windows 7. At such a vast number, it’s fair to assume that small to medium businesses make up a large share of that total, especially since many owners may wish to sidestep the cost of upgrading.

Though, with the removal of all support for the system, keeping Windows 7 installed will end up being more costly than merely paying for the upgrade to Windows 10.

So, as a business owner or stakeholder who may still be using Windows 7, you’re likely interested in knowing what might happen after neglecting to upgrade.

Let’s examine some of the most significant risks involved in continuing to use Windows 7 on your PC and how it can affect your workplace:

The Top 5 Risks of Staying with Windows 7

Risk #1. No More Technical Support

Last year, Microsoft patched 29 Windows 7 vulnerabilities in April alone.

Of those vulnerabilities, 6 were rated critical with the other 23 being deemed important.

Think about it: that’s one month alone, and those fixes only happened because Windows 7 was still in an extended support phase focused on patching flaws. Since it’s an older system, those flaws have continually surged over the years. Now, without support, these issues will be seemingly never-ending.

Risk #2. Heightened Cybersecurity Risk

Studies from 2018 on cybersecurity in small and medium-sized businesses reported that 67% of survey respondents had experienced some form of cyber attack. On top of that, 58% had been through a data breach involving employee or customer information.

These stats show that businesses are already susceptible to such attacks. With Windows 7 no longer receiving the updates needed to deal with these issues, the consequences of not upgrading could be disastrous.

Risk #3. Additional Costs

On a per-system basis, it’s possible to receive extended security support. However, there’s an additional fee paid to Microsoft per computer to obtain the appropriate security updates, and that fee doubles every year, capping out at a maximum of three years.

The price begins at $50 per machine in year one, doubling to $100 in year two and $200 in year three, roughly $350 per device over the full period. If you have around 10 machines, that can prove quite costly.
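
For a rough sense of how those fees add up, here is a minimal Python sketch. The $50-doubling figures mirror the pricing described above, but treat them as assumptions and substitute the prices in your own agreement.

  # Rough Windows 7 Extended Security Update (ESU) cost estimate.
  # Assumes the pricing described above: USD 50 per device in year one,
  # doubling each year, for a maximum of three years. Adjust to your agreement.

  def esu_cost(machines: int, year_one_price: float = 50.0, years: int = 3) -> float:
      """Total ESU spend for `machines` devices over `years` years."""
      total, price = 0.0, year_one_price
      for _ in range(years):
          total += machines * price
          price *= 2  # the per-device fee doubles each year
      return total

  # Example: a small office with 10 machines.
  print(esu_cost(10))  # 10 * (50 + 100 + 200) = 3500.0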

Risk #4. Falling Behind the Competition

As Windows 7 becomes more obsolete, more businesses will be using Windows 10. 

As technology keeps improving, functionality improves with it. The more outdated your system, the slower it will run and the less equipped it will be to handle the state-of-the-art tools that keep you ahead of your competitors.

Risk #5. Frustrating Your Team

When your employees end up with inferior technology, their morale tends to suffer. For many staff members, it’s a bad look when you fail to equip them with systems, programs, and software that are, at the very least, up to date.

If you don’t update to Windows 10 soon, your team might get the idea that you aren’t invested fully in their success. From there, frustration brews, work tends to be negatively impacted, and you’ll have an office full of employees at their wit’s end.

The Very Real Problem of Malware in the Workplace

As technology keeps reaching new heights, so does its potential for malicious use. Businesses far and wide must be eternally vigilant in the face of threats that can damage both their reputation and their bottom line.

For instance, recently, Landry’s, Inc., an American, privately owned, multi-brand dining, hospitality, entertainment, and gaming corporation, identified malware on its payment processing system.

The malware was designed to access payment card data from cards used in person. However, the card data wasn’t readable thanks to the end-to-end encryption technology used at the point of sale, so the malware was prevented from harvesting it.

Although this particular incident didn’t involve Windows 7, attacks like these come from everywhere when sensitive information or money is involved. As Landry’s showed, with state-of-the-art systems in place, these breaches can be nipped in the bud.

So how do you mitigate these risks?

The short answer is to move to Windows 10 to receive full support from Microsoft. This can be done by upgrading the operating system on your existing PCs, assuming your hardware can handle the demands of a modern operating system. If your PC is more than three or four years old, it is probably more cost-effective to simply replace it.


Why choose Office 365 over an on-premises Exchange Server


Everything is in the cloud now, and that has a lot of benefits. If you’re currently on the fence about whether to jump to a cloud-based solution rather than an on-premises one for your email management, there’s a lot to consider.

It can be scary to give up control of your private mail to another corporation, and some companies may be reluctant to commit to a subscription service. However, there are a lot of very good reasons to do so, and it’s almost always cheaper to use a cloud service than to host your own solutions.

In this article, we’re going to talk about the benefits of using Office 365 over an on-premises Exchange server. You’ll learn how it can save you money, along with the added benefits you get for using the service, including tools that can improve your productivity.

Security is easier

While an on-site solution gives you more opportunities for customisation, you should stop to ask yourself whether that’s really worth the hassle of managing it yourself. When you use a cloud-based solution like Office 365, most of your security concerns are taken care of for you.

Your IT department no longer needs to worry about performing updates because everything is handled seamlessly by Microsoft, with no downtime for your office. There are also better options for user-level security, as these applications include familiar, easy-to-use two-factor authentication (2FA) to protect your accounts.

Managing a server room is a full-time job, and if you let somebody else handle that work for you, you won’t need nearly as big a budget for your IT department.

It’s more affordable

An in-house solution sounds great until you realise exactly how much it will cost to run. First, you’ll need your own server equipment, and you’ll need experienced IT specialists to run it.

Don’t forget about the sizeable power bills required to run those machines and any service contracts you might need for them. Servers run hard and they run 24/7, which means constant wear and tear on the hardware and constant power draw.

Don’t forget that technology also ages incredibly fast. If you host your own solution, you might be looking at replacing that expensive equipment every five years. With a cloud solution there’s nothing to replace, which lets you scratch that cost from your expense sheet.

It’s also much easier to budget for a cloud solution because it’s considered opex rather than capex. This allows you to shift your spending while accomplishing the same goal and preserving your cash flow with lower upfront costs.
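
To make the opex-versus-capex point concrete, here is a small, purely illustrative Python sketch comparing cumulative outlay over different time horizons. Every figure in it (hardware cost, refresh cycle, running costs, per-user subscription price, user count) is a hypothetical placeholder rather than a real quote, so plug in your own numbers before drawing any conclusions.

  # Purely illustrative opex-vs-capex comparison for hosting email yourself
  # versus subscribing to a cloud service. All figures are hypothetical.

  def on_prem_cost(years: int,
                   server_capex: float = 15_000.0,   # hypothetical hardware + licences
                   refresh_every: int = 5,           # replace hardware every N years
                   annual_running: float = 6_000.0) -> float:  # power, support, maintenance
      refreshes = -(-years // refresh_every)          # hardware bought at year 0, 5, 10, ...
      return refreshes * server_capex + years * annual_running

  def cloud_cost(years: int,
                 users: int = 25,
                 per_user_per_month: float = 20.0) -> float:  # hypothetical subscription price
      return years * 12 * users * per_user_per_month

  for horizon in (3, 5, 10):
      print(f"{horizon} years: on-prem ${on_prem_cost(horizon):,.0f} "
            f"vs cloud ${cloud_cost(horizon):,.0f}")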

You get even more useful software

Office 365 isn’t just about Outlook and Word. If you go this route then you’ll also get some very useful software which could help your business. This includes some excellent productivity tools that help you to communicate better and be more productive as a team.

The Office 365 software bundle includes 1TB of OneDrive storage, SharePoint for collaborating on projects, Flow for creating automated workflows, and Teams for helping your company’s employees communicate better.

If you don’t already have solutions for some of these issues then subscribing to Office 365 can be a phenomenal deal. Purchasing these software packages from competing providers separately could cost you hundreds per month in subscription fees.

Plus, having a single software bundle for your employees to manage and learn makes things much easier and keeps them more organised. They can get access to their workflows, have video conferences, collaborate on documents and more without ever leaving Office 365.

Unlimited scalability

While an on-site solution might be limited in its capabilities, with Office 365 there are no limits to how far you can scale your business solutions. Your system can be as complicated or as simple as you need it to be.

Cloud solutions are extremely flexible; that’s part of what makes them so great. Depending on who actually needs to use the software at your organisation, you can turn authorisations on and off with flexible licensing options.

Your users also have the ability to activate the software on their own computers if they work from home so that they can have all the tools they need available to them.

If you need to upgrade an on-premises Exchange server, you’ll be looking at massive costs in both hardware and software. You’ll also need to pay IT professionals to handle the upgrade, and updating your server’s software isn’t cheap either.

It’s more reliable than a standalone server

All computer equipment breaks eventually. That’s just a fact of working with technology. If you don’t have a great server setup then this could end up being very bad for your business.

Even if you have people on staff to handle the emergency, it’s likely you’ll be facing some serious downtime. This could mean a lot of money lost when you’re not selling products to customers or acquiring new leads.

A cloud solution managed by a huge corporation like Microsoft has very little downtime. They have so many servers that if one goes down it’s barely a blip on their radar; their system routes that traffic to a new server and customers won’t even notice.

While you could try to build this kind of reliability for yourself the costs would be quite large, and you can get the same level of protection much cheaper by using a cloud-based product instead.

In closing, Office 365 offers excellent value to its subscribers, and it’s worth a look if wasting valuable time and resources managing servers just doesn’t sound like something your company wants to do.

It’s a particularly attractive option if you require the other software products that they offer with the subscription because it saves you from needing to purchase them separately from other providers.


My Microsoft Small Business Server is Nearing End of Life – what are my options?


End of life (EoL) is looming for many essential IT products, Microsoft Small Business Server (SBS) being just one of them.

Companies that depend on this architecture will soon need to modernize or face problematic security and stability issues, as there will be no further updates or support available. Even with a solid disaster recovery plan (DRP) in place, there is no guarantee you will be able to recover should a problem arise.

Planning your infrastructure transformation is a complex undertaking, but having it completed prior to EoL is as important to your business continuity as it is to the stability of your IT operations.

Here are just some of the options you might consider:

1. Migrate to Microsoft Office 365 for Business

While the official replacement for SBS is Windows Server Essentials, this option does not include Exchange or SharePoint. In this scenario, you will ultimately be forced to purchase Exchange separately or integrate with Microsoft Office 365 and Lync Communications if you are committed to an on-premises server solution. For most small businesses, however, Microsoft Office 365 is clearly the best choice. Some of the advantages include:

  • Subscription-based licensing and billing provide predictable IT spend
  • Updates and bug fixes are automatically deployed, saving IT hours for higher-value tasks
  • Ongoing access to the complete line of Office Suite software
  • Anywhere access to your files enables your remote and mobile workforce
  • Centralized configuration for device management and deployment

2. Keep SBS as a virtual machine on new hardware

Virtualizing your SBS can be accomplished with VMware or Hyper-V, essentially software that virtualizes your server either onto a partition of another physical server or into the cloud. These solutions allow you to virtualize and consolidate your servers into a cloud or hybrid cloud environment, maximizing your server capacity and driving value through simplified IT operations. This also gives you anytime access to the essential files and apps you have stored on SBS. Some may feel that complete virtualization is like putting all your eggs in one basket, and there are certainly pros and cons, but the benefits in terms of disaster recovery are compelling. Other advantages of virtualization include:

  • Reduced operating, energy, and IT costs
  • Automated processes for backup and recovery

On the downside, there may be higher up-front costs involved with virtualization, but most will agree that over the long-term it pays dividends.

3. Deploy a new server with Windows Server 2016

Deploying a new server may seem like the most logical move for some, but there are considerations. Server products have a limited lifespan and you will likely have to revisit this process in the future. There may also be compatibility issues with legacy applications, requiring an entirely new solution for such tools. On the plus side, since it is a Microsoft product, it may represent less of a learning curve for IT and other stakeholders.

Ultimately, all of these solutions require a degree of thought into where this is all headed in the future. Consider your IT goals for the future carefully, and choose a route that will best support your needs in terms of cost, ease of management, security, and other features. For many businesses, a migration to Office 365 may be the best bet as a future-proof option.

Greenlight ITC is Melbourne and Sydney’s favourite IT consultancy

If you have any lingering questions about how to transition from your Microsoft SBS environment or would like to schedule a free consultation, call Greenlight today. Our technicians are well-versed in all issues related to upgrading your IT infrastructure and can help you decide which solution is right for your needs.

Windows Server 2016 Licensing


With the arrival of Windows Server 2016 in mid-October comes Microsoft’s transition to core licensing. For those familiar with SQL Server Standard and Enterprise licensing, the core licensing model for Windows Server is much the same, albeit with different minimum requirements. In short, Microsoft has deprecated processor (socket) licenses, and Windows Server Standard and Datacenter can now only be licensed under the new core model.

What you need to know:

  • Windows Server 2016 Standard and Datacenter are licensed per physical CPU core.
  • Core licenses are sold in 2-core packs, with all physical cores (not sockets) required to be licensed.
  • The minimum licensing requirement is 8 cores per physical processor and 16 cores (8x 2-core packs) per server.
  • If all physical cores are licensed with Standard cores, the entitlement is 2x virtual OSEs under OEM/VL and 1x virtual OSE under SPLA. Under SPLA, each subsequent virtual OSE requires re-licensing all physical cores.
  • If all physical cores are licensed with Datacenter cores, the entitlement is unlimited virtual OSEs under OEM, VL or SPLA.
  • The core licensing model applies retrospectively to all Windows Server operating systems once a Windows Server 2016 instance is deployed or your licensing agreement is renewed.

When this new model applies to your business will depend on your licensing arrangement with Microsoft and the agreement type it falls under.

If you are reporting license consumption via an MSP under SPLA, the transition will occur either as soon as Windows Server 2016 is in use anywhere or when the license provider’s agreement is renewed, whichever occurs first.

If you have your own volume licensing as either OVL or OVS then the licensing model and pricing changes will not apply until the end of the agreement term. If you have bought perpetual licenses, then the core licensing model will not apply to you unless at some point you decide to renew Software Assurance.

Microsoft is notorious for being unclear and ambiguous about the details of its products, especially when it comes to license compliance. For that reason, we have compiled a few example scenarios showing how to license various environments correctly.

Windows Server 2016 Licensing model

In the previous licensing model, only the number of processors (sockets) and the number of operating system environments (OSEs) determined the licensing requirements. Applying the new licensing model to the same environment, the 3 Windows Server OSEs now need to be licensed individually, with the number of Standard 2-core license packs equal to the number of physical cores across all sockets. Since there are 20 physical cores in the host, 10x Standard 2-core license packs are required for every two OSEs under OEM or Volume Licensing, and the same amount for every single OSE under SPLA. So under an OEM/VL licensing model the environment requires 20x Standard 2-core packs for an entitlement of up to 4x virtual OSEs, which covers the current 3 virtual servers. Because under SPLA the entitlement is only one virtual OSE per full set of licensed physical cores, 30x Standard 2-core packs are required: 10x for each virtual server.


In this scenario the environment has been upgraded with two new physical servers, both running Server 2016 workloads. The virtual host has a total of 16 physical CPU cores. Each of the 4 virtual OSEs needs to be core licensed under SPLA (8x 2-core packs per OSE), or every 2 OSEs need to be core licensed under OEM or Volume Licensing (8x 2-core packs per 2 OSEs). The bare-metal server also needs to be licensed under core licensing, with 8x 2-core packs for its single physical OSE. Even though it only has 4 physical cores, Microsoft’s licensing minimums dictate that at least 16 cores must be licensed per server (and at least 8 per processor) regardless. Since only one OSE needs to be licensed, the requirement is the same under both OEM/VL and SPLA.
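
As a sanity check on the arithmetic in the two scenarios above, here is a minimal Python sketch that works out the number of Standard 2-core packs a host needs under the rules summarised in this article: all physical cores licensed, minimums of 8 cores per processor and 16 per server, and two virtual OSEs per full licensing pass under OEM/VL versus one under SPLA. The socket counts in the examples are assumptions, and none of this substitutes for Microsoft’s actual licensing terms.

  # Windows Server 2016 Standard 2-core pack calculator, following the rules
  # summarised in this article. Illustrative only - confirm against Microsoft's
  # current licensing terms before purchasing.
  import math

  def standard_packs(physical_cores: int, sockets: int, oses: int, spla: bool) -> int:
      """2-core packs required to cover `oses` OSEs on one physical host."""
      billable = max(physical_cores, 8 * sockets, 16)   # per-processor and per-server minimums
      per_pass = 1 if spla else 2                       # virtual OSEs granted per full licensing pass
      passes = max(1, math.ceil(oses / per_pass))       # at least one pass for the physical OSE
      return passes * math.ceil(billable / 2)

  # First scenario: 20-core host (2 sockets assumed) running 3 virtual servers.
  print(standard_packs(20, sockets=2, oses=3, spla=False))  # 20 packs under OEM/VL
  print(standard_packs(20, sockets=2, oses=3, spla=True))   # 30 packs under SPLA
  # Second scenario: 16-core host (2 sockets assumed) running 4 virtual servers,
  # plus the 4-core single-socket bare-metal server with one physical OSE.
  print(standard_packs(16, sockets=2, oses=4, spla=False))  # 16 packs under OEM/VL
  print(standard_packs(16, sockets=2, oses=4, spla=True))   # 32 packs under SPLA
  print(standard_packs(4, sockets=1, oses=1, spla=False))   # 8 packs (16-core server minimum)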

How to speed up Virtual Servers



Cost savings from virtualisation are driving today’s large and enterprise-scale IT infrastructure deployments. With this, however, comes the need for more careful planning and provisioning than the relatively simple bare-metal configurations of old required.


There is still a small group within the IT community that maintains a hostile stance towards virtualisation as a whole. This almost always stems from uninformed decisions made during an ultimately unsuccessful venture into virtualisation at some point in the past, which has left a bad taste in the mouth.

Virtualisation is obviously a more complex combination of software and hardware than a bare-metal deployment, but the benefits are enormous and can be summarised in one word: density. The ability to run scores of servers in a virtual environment on a single physical host is far more efficient and massively increases cost-effectiveness.

However, with these benefits comes the requirement of sound planning and knowledge of how hypervisor hosts and their guests operate in order to create a consolidated yet high-performing virtual environment.

One of the primary considerations is virtual CPU allocation, which affects CPU wait time; if carelessly configured, it can have a drastic performance impact on every guest on the host.

Conventional logic dictates that if you want a virtual machine’s processing performance to increase, you simply assign it more cores. This is somewhat misunderstood, because a machine with a higher number of cores will potentially have to wait longer for that many processing threads to be simultaneously available on the physical CPU.

There are, of course, other factors that determine wait time as well, such as how core assignments are distributed between all the guests on the host, the number of physical sockets on the host, and the number of logical cores available per socket if hyperthreading is supported on the particular CPU model.

Consider the following example:

  • You have a single-socket host running the vSphere, Hyper-V or XenServer hypervisor.
  • The host has an Intel 10-Core CPU with hyperthreading enabled for a total of 20 logical CPU cores.
  • 9 running virtual machines reside on this host. 8 with 4 vCPUs and 1 with 8 vCPUs. Total provisioned CPU cores = 40 = oversubscription ratio of 2:1.
  • When the virtual machines with 4 cores need to access the physical CPU they must wait for 4 logical cores to become simultaneously available for processing to occur.
  • When the virtual machine with 8 cores needs to access the physical CPU, it must wait for 8 logical cores to become simultaneously available before its processing can take place, which, depending on the workloads of the other guests, could mean twice the wait time the 4-core machines face (see the sketch after this list).
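
To put numbers on that, here is a small Python sketch that computes the oversubscription ratio for the host above and flags any guest whose vCPU count is well above the host average, which is where co-scheduling wait time tends to bite first. The 2:1 ratio and the 1.5x-average threshold are illustrative assumptions, not vendor guidance.

  # Quick vCPU-allocation sanity check for a single hypervisor host.
  # The thresholds below are illustrative assumptions, not vendor guidance.

  def check_host(logical_cores: int, guest_vcpus: list, max_ratio: float = 2.0) -> None:
      total = sum(guest_vcpus)
      ratio = total / logical_cores
      print(f"{total} vCPUs provisioned on {logical_cores} logical cores "
            f"-> oversubscription {ratio:.1f}:1")
      if ratio > max_ratio:
          print(f"Warning: exceeds the assumed {max_ratio}:1 comfort level")
      average = total / len(guest_vcpus)
      for idx, vcpus in enumerate(guest_vcpus, start=1):
          if vcpus > 1.5 * average:   # needs far more cores free at once than its neighbours
              print(f"Guest {idx}: {vcpus} vCPUs vs host average of {average:.1f} "
                    f"- likely to see longer CPU wait times")

  # The example above: 10-core CPU with hyperthreading (20 logical cores),
  # eight guests with 4 vCPUs each and one guest with 8 vCPUs.
  check_host(20, [4] * 8 + [8])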

So, from experience, the lesson to be learnt here is to allocate fewer CPU cores to a virtual machine to start with and increase the allocation as and when required. This is also stated within the best-practice guidelines for virtual machine deployment from VMware, Microsoft and Citrix.

  • Plan the vCPU assignments wisely for all VMs on a given host to ensure the best performance.
  • Be conservative, not carefree, with allocations. Don’t assign a large number of CPU cores to a virtual machine that does not need them.
  • Check the average CPU allocation across the guests and keep it as uniform as possible.
  • If there is a large number of guests with few cores and a handful of performance-intensive guests with many cores on a moderately to highly oversubscribed host, there is a good chance you will encounter CPU wait time issues.


Server Virtualisation explained


This is the first in a 10-part series on Server Virtualisation.

You’d probably be surprised to learn that the concept of virtualisation actually goes all the way back to the infancy of computing when in the 1960s virtual partitioning was used to allow segregated space on the same storage medium. In much more recent years, many businesses have begun to embrace the continuously growing benefits of server virtualisation. Whether it be on-premise virtual infrastructure or cloud-based hosting, the expanding number of available platforms and competition between technologies has made virtualisation options increasingly attractive for businesses of any size.

So what is virtualisation? For those new to the concept, virtualisation is a rapidly developing technology, combining hardware and software engineering, that is fundamentally changing the way people make use of their IT infrastructure. In simple terms, hardware virtualisation is the ability to run your Windows or Linux server infrastructure as operating systems running on a software layer, known as a hypervisor, on top of the underlying physical hardware. Previously, the only option was to run physical hardware for each server you required; now you have the ability to run many servers in software containers on a single piece of hardware.

Example: You are looking at refreshing your current IT infrastructure, which consists of three physical servers: a domain controller, an Exchange server and an SQL server. Instead of purchasing new hardware for each, you decide to purchase a single piece of hardware and run all three, still as individual servers but now running as software instances known as “guests”, on that one physical device or “host”.
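
To make that example slightly more concrete, here is a purely illustrative Python sketch of the kind of capacity check you would run before consolidating those three servers onto a single host. The guest and host resource figures, and the 80% headroom rule, are hypothetical placeholders, not sizing recommendations.

  # Hypothetical sizing check for consolidating three physical servers onto one host.
  # All resource figures are illustrative placeholders - use your own measurements.

  GUESTS = {                      # name: (vCPUs, RAM in GB, storage in GB)
      "domain-controller": (2, 8, 120),
      "exchange-server":   (4, 16, 500),
      "sql-server":        (4, 32, 750),
  }
  HOST = {"logical_cores": 16, "ram_gb": 96, "storage_gb": 2000}   # hypothetical host
  HEADROOM = 0.8                  # only commit 80% of RAM/storage to guests

  def fits(host: dict, guests: dict) -> bool:
      vcpus = sum(g[0] for g in guests.values())
      ram = sum(g[1] for g in guests.values())
      disk = sum(g[2] for g in guests.values())
      print(f"Guests need {vcpus} vCPUs, {ram} GB RAM, {disk} GB storage")
      # vCPUs can be oversubscribed; RAM and storage generally should not be.
      return ram <= host["ram_gb"] * HEADROOM and disk <= host["storage_gb"] * HEADROOM

  print("Single host can run all three guests:", fits(HOST, GUESTS))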

There are a number of methods for performing bare-metal (physical-to-virtual) conversions, each varying depending on the hypervisor you decide to deploy, but the benefits of virtualising your servers in this way apply to all:

  • Consolidation of servers into virtual machines greatly reduces the amount of physical hardware required as well as the associated costs of space, power, cooling and maintenance requirements.
  • Virtual machine density (i.e. the number of guests capable of running and performing as required on any given host) can be increased by refining required virtual resources across the virtual machines allowing you to potentially run more guests on the same hardware.
  • Servers running as virtual machines will start/restart faster and can dynamically adjust their memory and storage resources with no downtime.
  • Management and monitoring complexities of IT infrastructure are greatly reduced, and everything can be controlled from a single pane of glass.
  • Virtual infrastructure deployments are created with expansion in mind providing fast new server provisioning and scalability as needed.
  • Extending the life of older/custom applications that can run alongside newer operating systems on the same hardware.

By now you are probably thinking, “Doesn’t grouping all my servers onto one physical piece of hardware create a single point of failure for all of them?”. This is where clustering comes in. Most virtualisation deployments will include at least two hypervisor nodes, containing either local storage or connections to one or more SANs, creating redundancy at both the virtual machine and storage level. A deployment of this type can, and often does, include considerations for high availability as well as redundancy. The exact technologies available vary between hypervisors and the level of licensing, but common HA functions include:

  • The use of virtual machine replicas (offline continuously updated copies) residing on multiple hosts able to be automatically failed over in the event of one or more hosts being lost.
  • The shifting of running virtual machine workloads including storage from one host to another without disruption.
  • Hybrid deployments incorporating both on-premise and cloud virtualisation providing geographical redundancy for mission-critical systems.

A clustered infrastructure implementation can be suitable for any sized business but is more common in medium to enterprise-scale deployments. Having said that, it is understandable that most smaller businesses may only want to budget for a single virtual host when planning a hardware refresh; whilst this does present a single point of failure, it can and should be complemented by a robust backup and disaster recovery solution. Ultimately, the final configuration must be reviewed on a case-by-case basis to determine whether the inherent risks are acceptable for the business.

Next week: Virtual Storage.