Application deployments at the edge come with unique challenges that set them apart from cloud environments. From managing secrets and configurations to building deployment pipelines, edge developers must adopt tailored approaches to ensure security, efficiency, and scalability, automating almost everything along the way.
Deployment versus orchestration: Defining the difference
First, we need to draw a distinction between two ideas that are often conflated: deployment and orchestration.
When considering an edge environment, we define deployment as:
Deployment is the process of releasing, installing, and configuring software into a specific target environment (e.g., production, development, or testing). This process typically involves packaging the application, ensuring necessary configurations are applied, validating dependencies, and running the application in a way that is compatible with the target environment.
This differs from orchestration, which is concerned with the coordination and management of applications, services, and resources across a distributed edge environment, providing for functions such as provisioning, scaling, deployment, monitoring, and lifecycle management of applications across multiple edge nodes or clusters.
Containers are the default edge artifact for greenfield deployments
Since ~2018, containers have become the de facto standard for greenfield edge deployments. This shift is driven by several key factors: developers’ deep familiarity with containerization technologies, the portability containers offer across diverse hardware environments, and their ability to create consistent development-to-production workflows. The lightweight nature of containers, combined with orchestration tools like Kubernetes, has made it easier to manage distributed edge applications efficiently, supporting scalability, resource optimization, and iteration.
We have one strong recommendation: if you can, stick to a single artifact type for your edge deployment. “Just because you can doesn’t mean you should” applies here. Cost and management complexity are likely to scale beyond the business value that is unlocked.
What about environments that aren’t greenfield?
Juggling multiple artifacts may be a necessity
Are containers the only answer for the edge? While we would recommend that you start with containerization whenever possible, it is still viable to unlock business value at the edge with a number of other approaches, such as virtual machines (in their various forms) or WASM. Many enterprises are faced with this reality, and finding a path forward is a better choice than kicking the proverbial can down the road.
There are a few different approaches that could be taken in this scenario:
- Parallel Integration – in this approach, a new “greenfield” edge environment is deployed and the legacy environment is left in place. Integrations can be created that center on the new environment’s technology stack, unlocking data and capability in the legacy environment.
- Heterogeneous Edge – in this approach, the new environment supports the legacy artifact profile and business applications as well as the greenfield, resulting in a heterogeneous artifact environment (VMs, containers, etc.). If you are going this route, we would suggest finding a vendor partner that has already solved this particular problem at scale, as rolling your own creates a lot of technical complexity and duplicative problem-solving.
How aggressive should you be on migrating to a modern technology stack and retiring the legacy footprint? This will depend on a lot of unique factors, but consider the compounding effects of aging technologies over time.
There is nothing wrong with running virtual machines, containers, or the emerging WASM, but if you can, pick just one. If you must run many, be aware of the costs of doing so.
How will you deploy your artifacts?
Edge Monsters have had significant success with GitOps as a means for deploying artifacts to the Edge, and thus we recommend it as a best practice. GitOps offers a number of benefits: it is declarative, it enables reproducible configurations and clean rollbacks, and it provides native change tracking, developer-friendly tools, and a familiar, reasonable API.
Some sort of eventual consistency model is critical for edge environments, since nodes may or may not be addressable at any given time due to network or other constraints. This allows edge app teams to initiate an app or configuration deployment pipeline, provide instructions, and then await a report back later. The process of reporting on deployments must be automated.
Another advantage to this approach is the pull-based model. Edge Monsters recommends disallowing inbound traffic from the internet to edge environments and driving all actions that occur by pulling from the edge. The GitOps model works great in this paradigm.
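To make the pull-based, eventually consistent model concrete, here is a minimal Python sketch of an edge agent’s reconcile loop. It is an illustration, not a recommendation to roll your own: the checkout path, polling interval, and the apply/report helpers are hypothetical stand-ins for what a GitOps agent such as Flux or Argo CD would handle for you.

```python
import subprocess
import time

# Hypothetical values for illustration; a real GitOps agent manages these.
CHECKOUT = "/var/lib/edge-agent/desired-state"  # repo cloned at provisioning time
BRANCH = "origin/main"
POLL_INTERVAL_S = 300  # tolerate long offline windows between attempts


def current_commit() -> str:
    return subprocess.check_output(
        ["git", "-C", CHECKOUT, "rev-parse", "HEAD"], text=True
    ).strip()


def sync_repo() -> None:
    # Outbound-only: the node pulls desired state; nothing connects inbound.
    subprocess.check_call(["git", "-C", CHECKOUT, "fetch", "origin"])
    subprocess.check_call(["git", "-C", CHECKOUT, "reset", "--hard", BRANCH])


def apply_manifests() -> bool:
    # Hypothetical stand-in: hand the checked-out manifests to the local
    # runtime (e.g., a container engine) and return whether it succeeded.
    return True


def report_status(commit: str, ok: bool) -> None:
    # Hypothetical stand-in: queue a status report to send when connectivity
    # allows -- reporting must be automated, never manual.
    print(f"commit={commit} applied_ok={ok}")


applied = None
while True:
    try:
        sync_repo()
        desired = current_commit()
        if desired != applied:
            ok = apply_manifests()
            report_status(desired, ok)
            if ok:
                applied = desired
    except Exception:
        # Offline or mid-failure: keep the last good state and retry later.
        pass
    time.sleep(POLL_INTERVAL_S)
```

Note the shape of the loop: the node converges on desired state whenever it can reach the repo, and it reports back asynchronously rather than requiring anything to reach in.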
Secrets management and configuration at scale
Configuration and secrets management is a challenge at the edge, especially as security requirements have ramped up due to the array of cyber threats in today’s operating environment. Gone are the days when the same password was hard-coded into hundreds or thousands of devices without much concern. Secrets and configuration are almost always unique per location.
Teams need robust templating systems to generate location-based configurations and manage secrets securely. Without off-the-shelf tools to fully address this need, many teams rely on custom-built toolchains, integrating systems like Vault for secret management and configuration maps for templating.
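As an illustration of what such a toolchain might look like, here is a minimal Python sketch that renders a per-location configuration, pulling secrets from Vault via the community hvac client. The secret path layout (`edge/<site_id>/app`), the template fields, and the example values are assumptions for this sketch, not a prescribed design.

```python
from string import Template

import hvac  # community Vault client: pip install hvac

# Hypothetical template; real configs are usually richer and app-specific.
APP_CONFIG = Template(
    "site_id = $site_id\n"
    "broker_url = mqtts://$broker_host:8883\n"
    "api_token = $api_token\n"
)


def render_config(site_id: str, vault_addr: str, vault_token: str) -> str:
    client = hvac.Client(url=vault_addr, token=vault_token)
    # One secret path per location, so no credential is shared across sites.
    secret = client.secrets.kv.v2.read_secret_version(path=f"edge/{site_id}/app")
    values = secret["data"]["data"]
    return APP_CONFIG.substitute(
        site_id=site_id,
        broker_host=values["broker_host"],
        api_token=values["api_token"],
    )


if __name__ == "__main__":
    print(render_config("store-0042", "https://vault.example.com", "s.example"))
```

The key property is that each location resolves its own secret path at render time, so rotating or revoking one site’s credentials never touches the others.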
Open-source projects, such as Open Horizon and EVE at LF Edge, are addressing some of these challenges by providing standardized mechanisms for distributing secrets to edge nodes in an automated manner. However, teams should still be prepared to customize their solutions to fit specific deployment needs.
What happens when it goes wrong?
When edge deployments fail, teams have to take action. Before we examine the scenarios, let’s cover two things.
- Failure is defined as a “successful application of configuration that did not result in the expected outcome”, not a disconnected node that is late checking in.
- This is a bad situation, since it often requires the application developers and/or platform team to get involved. It is one of the most expensive scenarios we can find ourselves in, so make every effort to have bulletproof pipelines with circuit breakers (such as canary deployments, sketched at the end of this section) that prevent bad changes from reaching many edge locations.
There are a few models that we recommend choosing from when managing this situation:
- Rollback: If a team is using GitOps, they can revert to a previous commit and allow the GitOps pipeline to repair the problem.
- Fail Forward: Address the application issue with code and deploy a new version (using canary or similar, of course).
Regardless of what you choose, do not fix the issue in place in the environment. Edge environments should never be subject to direct human change.
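To illustrate the circuit-breaker idea, here is a hedged Python sketch of canary gating: ship the new version to a small slice of locations, wait for their automated reports, and only then fan out or revert. The `deploy_to` and `healthy` helpers are hypothetical; in practice your GitOps controller or a progressive-delivery tool would play these roles.

```python
import random
import time

CANARY_FRACTION = 0.02  # e.g., start with 2% of locations
SOAK_SECONDS = 1800     # let the canary run before judging it


def deploy_to(locations: list[str], version: str) -> None:
    # Hypothetical stand-in: update the desired state (e.g., a Git commit)
    # so that these locations pull the given version.
    ...


def healthy(locations: list[str]) -> bool:
    # Hypothetical stand-in: inspect the automated status reports coming
    # back from these locations.
    return True


def canary_rollout(all_locations: list[str], version: str) -> None:
    count = max(1, int(len(all_locations) * CANARY_FRACTION))
    canary = random.sample(all_locations, count)
    deploy_to(canary, version)
    time.sleep(SOAK_SECONDS)
    if not healthy(canary):
        # Circuit breaker trips: revert desired state for the canary slice
        # (the GitOps rollback above) and halt the rollout here.
        deploy_to(canary, "previous-known-good")
        raise RuntimeError(f"canary failed for {version}; rollout halted")
    deploy_to([loc for loc in all_locations if loc not in canary], version)
```

Note that even the remediation path goes through desired state: the canary slice is reverted by changing what those locations pull, never by a human reaching into the environment.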
Keeping deployments on the rails
By following these practices, you can keep your edge deployments on the rails and stay focused on delivering business value instead of troubleshooting app deployment issues.
In our next post, we will share the best practices for Edge AI. Be sure to subscribe for updates and follow us on LinkedIn.
The Edge Monsters: Brian Chambers, Erik Nordmark, Joe Pearson, Alex Pham, Jim Teal, Dillon TenBrink, Tilly Gilbert, Anna Boyle & Michael Maxey
Prepared by Tilly Gilbert, STL Partners