August 15th, 2024

The Edge is not the Cloud: Stop pretending they are the same thing

Edge computing exists to enable business outcomes by putting an application as close to the users as is necessary to achieve the desired user experience. It is increasingly easy to write an application for the cloud… but what does a good edge application look like?

Edge computing brings computation and data storage closer to where the data is being generated. This architectural shift offers several benefits, including reduced latency and improved bandwidth efficiency.

There are several important considerations when building applications intentionally for the edge. We can begin with some anti-patterns (the opposite of best practice!) like…

  • Duplication of services: Application stacks bringing their own ingress controllers, message brokers, and caching or persistence layers are standard patterns in the cloud that create chaos in resource-constrained deployments at the edge.
  • Cloud copy-paste: “Write once, run anywhere” works in the cloud, but it is not currently a best practice for edge-bound applications because the environments are radically different.
  • A/B testing: Duplicating infrastructure stacks with advanced load-balancing rules is a best practice in the cloud, not on an edge device or cluster.
  • Autoscaling: There is no “autoscale” of infrastructure at the edge. It seems obvious, but it often still shows up in edge application requirements. 

The common theme in these anti-patterns stems from the inherent constraints of edge environments.

Key constraints in edge environments

Most application developers are familiar with cloud technologies. The cloud is elastic, it scales, it is pay-on-demand, and it has virtually unlimited resources. While edge deployments strive to mimic as many cloud-native paradigms as possible, cloud and edge applications are not exactly the same. This requires application developers to adapt their strategies:

  • Limited resources: Unlike the cloud, edge environments do not offer virtually unlimited resources. This means common cloud practices, such as autoscaling, A/B testing, and expansive supporting frameworks, should not be assumed by application developers to be available at the edge. Because edge sites are self-contained regions of value generation, they tend not to experience the types of traffic spikes associated with cloud-based, centralized applications.
  • Connectivity: Edge devices often operate in environments with intermittent connectivity. Applications must handle scenarios where the edge device cannot always connect to the cloud.
  • Diversity of environment: Edge environments also differ significantly from each other, ranging from applications deployed on very small footprints like IoT gateways, through to more typical data center environments. Instead of the cloud model of “write once, run anywhere”, the big variations in edge environments mean that applications must be written to support different microarchitectures and devices with different constraints. 
  • Multi-tenant workloads in constrained environments: If your edge deployment isn’t multi-tenant today, it’s likely something you want to keep an eye on for the future. When edge deployments are multi-tenant, they face unique challenges because of the resource-constrained environments. A true “noisy neighbor” consuming a large number of resources, for instance, is much more likely and more damaging at the edge, where there is less elasticity of resources to ensure other applications are not impacted by the issue.
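The connectivity constraint above often translates into a store-and-forward pattern: buffer outbound messages locally while the cloud uplink is down and flush them, in order, once it returns. A minimal Python sketch under assumptions not in the post (the `send` callable and the drop-oldest buffer policy are hypothetical choices):

```python
import collections


class StoreAndForward:
    """Buffer outbound messages locally while the cloud uplink is down,
    then flush them in order once connectivity returns."""

    def __init__(self, send, max_buffer=1000):
        self.send = send  # callable that raises ConnectionError on network failure
        self.buffer = collections.deque(maxlen=max_buffer)  # drop-oldest when full

    def publish(self, message):
        # Always enqueue first so ordering is preserved across outages.
        self.buffer.append(message)
        self.flush()

    def flush(self):
        while self.buffer:
            try:
                self.send(self.buffer[0])
            except ConnectionError:
                return  # uplink still down; retry on the next publish
            self.buffer.popleft()  # discard only after a confirmed send
```

The key design choice is that a message is removed from the buffer only after a confirmed send, so a failure mid-flush loses nothing.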

“Shared resources are easy when you have complete control in the environment. But you go multi-tenant, things get challenging”

While there are many valid approaches to deploying edge applications, we will focus on enterprise edge applications deployed in a multi-tenant environment at a site owned by the enterprise customer. This type of deployment is popular in many industry verticals, including manufacturing, oil and gas, retail, and construction. 

Practical considerations for edge application design and deployment

There are five key recommendations for building successful edge applications: 

1. 12-Factor Applications at the edge: Edge Monsters are big fans of the 12-Factor Application pattern as a blueprint for developing edge applications. These development principles have generally been applied to modern data center and cloud applications, but we also endorse them as great principles for application development targeting constrained edge environments. We also highlight that a “fail-fast” paradigm that lets clients deal with error conditions is a best practice, particularly given the scale and remote nature of edge environments.
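As one concrete illustration, the 12-Factor pattern stores configuration in the environment, and fail-fast means refusing to start at all when required settings are missing, rather than limping along at a remote, hard-to-reach site. A minimal Python sketch; the variable names are hypothetical, not from the post:

```python
import os
import sys

REQUIRED_VARS = ("MQTT_BROKER_URL", "SITE_ID")  # hypothetical settings


def load_config(environ=os.environ):
    """Read configuration from the environment (12-Factor, factor III)
    and fail fast at startup if anything required is missing."""
    missing = [v for v in REQUIRED_VARS if v not in environ]
    if missing:
        # Exit immediately with a clear error instead of starting
        # half-configured in an environment nobody can easily reach.
        sys.exit("fatal: missing required configuration: " + ", ".join(missing))
    return {v: environ[v] for v in REQUIRED_VARS}
```

Crashing at startup with an explicit message is far cheaper to diagnose across thousands of sites than a service that comes up and misbehaves later.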

“Some apps will need to run 30+ different containers – that just isn’t feasible in an edge environment”

2. Only deploy functionality to the edge that truly needs to be there: Application developers should default to deploying their applications to the cloud and only use edge deployments when truly necessary. This reduces the infrastructure costs and operational complexities of the edge environment. When an application can benefit from better local availability, lower latency, or special compliance handling, and can tolerate disconnected states and limited scale, it may be suitable for an edge deployment. However, good edge environments will have a “must be this tall to ride” sign that asks application developers to comply with certain principles, practices, and implementation decisions optimized for the unique challenges of the edge. Building edge applications that utilize cloud capabilities, when available, is a good pattern for minimizing the footprint at the edge. Finally, a good edge application will consider artifact size, minimizing the need for large data transfers on deployment (since bandwidth is often a key constraint) and for high resource utilization (CPU, RAM, SSD) at the edge. In short, deploy “just enough” at the edge.

3. Leverage shared edge services where possible: Edge deployments should provide a set of foundational shared services, and edge applications should seek to reuse them instead of bringing their own competing approach. This minimizes resource needs in the edge environment and promotes a default to sharing data and interoperability within the application portfolio at the edge. In a resource-constrained edge environment, sharing resources is a valuable strategy for reducing unnecessary overhead. Services for application build and deploy, messaging (such as an MQTT broker), persistence stores or caches, and system observability services are ideal candidates for reuse. We encourage application developers to build around widely adopted industry protocols (such as MQTT or SQL-compliant databases) to make this possible, while also acknowledging that edge clusters will vary in their implementation of these standards from company to company (one may use HiveMQ, another Eclipse Mosquitto).
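One way to stay broker-agnostic is to code against the MQTT specification itself rather than any vendor's extensions. As a small protocol-level sketch, MQTT topic filters use `+` to match exactly one topic level and `#` (which must be the final level) to match the remainder, and this matching behavior is the same whether the shared broker happens to be HiveMQ or Mosquitto:

```python
def topic_matches(pattern, topic):
    """Return True if an MQTT topic filter matches a concrete topic.

    Per the MQTT spec, '+' matches exactly one topic level and '#'
    (allowed only as the last level) matches all remaining levels.
    """
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            return i == len(p_levels) - 1  # '#' must be the final level
        if i >= len(t_levels):
            return False
        if p != "+" and p != t_levels[i]:
            return False
    return len(p_levels) == len(t_levels)
```

Application code written against topic semantics like these, rather than broker-specific management APIs, ports cleanly between edge clusters that standardized on different MQTT implementations.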

4. Build stateless applications: Most edge applications will likely benefit from the simplicity of a stateless architecture (the application does not retain any information about previous interactions) and a default to ephemerality. Persistence at the edge is achievable, but the initial cost to achieve a high persistence SLA and the resulting complexities in management across thousands of locations are generally not paid back in the business value obtained. Edge developers should expect that their application could be terminated at any time. Robust fail-forward or rapid-recovery models should be in place.
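Because the platform may terminate an application at any time, a stateless service can simply catch SIGTERM, stop accepting new work, and exit with nothing to save. A minimal Python sketch of that posture; the work-queue shape is hypothetical:

```python
import signal


class TerminationGuard:
    """Track a SIGTERM from the platform so in-flight work can finish
    and the process can exit cleanly (no local state to persist)."""

    def __init__(self):
        self.stopping = False
        signal.signal(signal.SIGTERM, self._on_sigterm)

    def _on_sigterm(self, signum, frame):
        # Flip a flag instead of exiting mid-item; the main loop
        # checks it and stops pulling new work.
        self.stopping = True


def process(queue, guard):
    """Drain work items until the queue is empty or termination is requested."""
    done = []
    while queue and not guard.stopping:
        done.append(queue.pop(0))
    return done
```

The loop finishes the item it is on and then exits, which is exactly the rapid-recovery behavior a stateless design makes cheap: the replacement instance starts fresh with nothing to reconcile.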

“The first app is never the problem, it’s the second, third, fourth, fifth…”

5. Engage with vendors and encourage standardization: Most enterprises we have observed are currently building and deploying their own applications. While this qualifies as multi-tenant, the true challenges of multi-tenant edge become most acute when there are different organizations building applications for the same edge environment. All of the Edge Monsters desire to see these challenges alleviated and would like to see more “blueprints” and standards deployed. We hope to produce more of that content in the near future. To begin, we recommend the edge vendor ecosystem adopt the above paradigm shifts.

Conclusion and what’s next? 

Building a great edge application requires a deep understanding of the unique constraints and operational realities of edge environments. By focusing on configurability, efficient resource management, and robust security measures, developers can create applications that are not only effective but also resilient and adaptable to the diverse scenarios encountered at the edge. As edge computing continues to evolve, these best practices will be essential in harnessing its full potential.

In our next post, we will discuss the challenge of observability and telemetry collection in edge environments and share more best practices. Be sure to subscribe for updates and follow us on LinkedIn.

The Edge Monsters: Tilly Gilbert, Brian Chambers, Dillon TenBrink, Erik Nordmark, Jim Teal, Joe Pearson, Michael Maxey.

Prepared by Tilly Gilbert, Director and Edge Practice Lead, STL Partners
