Marcin Bednarz
on 6 June 2019
Need to set up servers in remote locations?
Use bare metal provisioning with a top-of-the-rack switch
When deploying a small footprint environment such as an edge computing site, 5G low-latency services, a site support cabinet or a baseband unit, it's critical to establish the optimal number of physical servers needed. While several approaches exist, bare metal provisioning run from the top-of-the-rack switch (for example as a KVM guest) can often be the most reliable option. Here's why.
For every physical server in such a constrained physical environment, there is an associated cost.
In the case of an edge deployment, this cost can be measured in (among other properties):
- Capital and operational expenses
- Power usage
- Dissipated heat
- The actual real estate it occupies
Ways to set up servers in remote locations
One approach would be to ship a dedicated server to every remote location to act as an infrastructure node. Typically this requires an additional node (or committed shared resources), which might not align with the footprint constraints of the remote site.
Another option is stretching the provisioning and management network across the WAN and provisioning all the servers from a central location. However, this approach might introduce unnecessary latency and delays in server provisioning. It also requires fairly sophisticated network configuration to account for the security, reliability and scale of remote site deployments.
So what other options exist? What common infrastructure component is always present in every remote location? The answer is quite straightforward: every single site needs basic network connectivity, provided through a top-of-the-rack/site switch. It's this critical component that enables servers to communicate with the rest of the network and to deliver required functions such as application servers, VNFs, and container and virtualisation platforms.
How do I re-purpose nodes to provision different operating systems?
Modern switches can run Linux as their underlying operating system, enabling infrastructure operators to run applications directly on these top-of-the-rack devices, either as KVM guests or through snap support.
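To make this concrete, here is a minimal, hypothetical sketch of how an operator could start such a workload as a KVM guest on a switch that exposes libvirt, using the libvirt Python bindings. The VM name, image path, bridge name and sizing below are illustrative assumptions rather than values from this post.

```python
# Hypothetical sketch: define and boot a small KVM guest on a Linux-based
# switch via the libvirt Python bindings. Image path, bridge name and
# sizing are placeholders, not values from the article.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>provisioning-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/provisioning-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <!-- bridge assumed to face the switch's management network -->
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local KVM hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the guest persistently
dom.create()                           # boot the guest
conn.close()
```

A snap-based deployment is simpler still, since the application installs directly into the switch's Linux userspace rather than into a guest.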
A great example of a workload that can run on a top-of-the-rack switch is a bare metal provisioning solution such as MAAS. By deploying MAAS we can solve the system provisioning challenge without unnecessary complexity. Running a lightweight version of MAAS on a top-of-the-rack switch reduces friction in small footprint environments while providing an open, API-driven way to provision and repurpose nodes in every remote location. This enables fast and efficient server provisioning and avoids the drawbacks of the alternatives described above.
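As a rough illustration of that API-driven workflow (not taken from this post), the sketch below uses the python-libmaas client against a MAAS endpoint assumed to be running on the switch; the URL, API key and distro series are placeholders.

```python
# A minimal sketch of API-driven provisioning with MAAS, using the
# python-libmaas client (assumed to be installed). Endpoint, API key
# and distro series are placeholders.
from maas.client import connect

client = connect(
    "http://tor-switch.example.com:5240/MAAS/",  # MAAS running on the switch
    apikey="<api-key>",
)

# List the machines MAAS knows about and their current status.
for machine in client.machines.list():
    print(machine.hostname, machine.status)

# Allocate a free node and deploy an operating system on it.
machine = client.machines.allocate()
machine.deploy(distro_series="bionic")

# ...run workloads, then release the node so it can be repurposed later.
machine.release()
```

Because allocating, deploying and releasing are all plain API calls, nodes at the remote site can be repurposed from anywhere without shipping additional infrastructure to the location.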