KubeCon + CloudNativeCon North America 2025 drew thousands of cloud-native practitioners to Atlanta this November. For the Edge Monsters in attendance, the event was less about flashy new announcements and more about a maturing ecosystem coming into its own after a decade of Kubernetes. In this post, we recap the highlights from an edge computing perspective and what they mean for edge architects.
Kubernetes at 10: Stability Over Hype
A recurring theme was the stability and maturity of the Kubernetes ecosystem. As one Edge Monster put it during our post-show huddle, “I don’t think there was any explosive thing… It’s reaching a level of maturity, and that’s a good thing after ten years or so.” The buzz may have quieted down, but Kubernetes has undeniably become the de facto standard for modern infrastructure, including at the edge. Rather than searching for the next big thing, many attendees were focused on renovating what they have: making Kubernetes work better at scale, in production, and for new workloads like AI. The general sentiment was that a stable foundation isn’t “boring”; it’s what lets us deploy innovation with confidence.
Flip That Stack
The Flip That Stack: Renovating Edge Infrastructure at the Home Depot session was one of the standout moments of the week. The Edge Monsters’ own Dillon TenBrink walked through the transformation of a 2,300-site retail footprint from a store-centric, batch-driven architecture into a modern, Kubernetes-powered edge platform designed around customer expectations, developer velocity, and real-world constraints. His core message was clear: building an edge platform is less about Kubernetes itself and more about creating the conditions for Kubernetes to succeed at the edge. That means acknowledging harsh realities such as limited power, no data closets, and unreliable connectivity, and designing around them with resilience, composability, and eventual consistency at the foundation. The shift from authoritative data in each store to cloud-hosted truth drove the creation of services like a local data cache, guaranteed-delivery messaging, replication tracking, and fully self-service test environments that can scale to thousands of store replicas in minutes.
What stood out for us in the Edge Monsters’ context is how much of Home Depot’s success relied on packaged, reusable platform services rather than bespoke one-offs. Their path mirrors the patterns we see as customers adopt edge solutions: templated deployments eliminate YAML sprawl, pull-based configuration solves fleet-scale rollout challenges, and integrated observability turns a distributed edge estate into something that can actually be operated. They built a set of modular capabilities, such as cache, messaging, testing, and observability, that developers can consume without touching infrastructure. This is exactly the model edge architects are adopting today: provide composable services that reduce toil, meet edge constraints, and unlock enterprise workflows. The talk reinforced a clear industry signal from KubeCon Atlanta that the leaders in edge computing will be the teams delivering integrated, self-service building blocks that solve real operational problems at scale.
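To make the pull-based pattern concrete, here is a minimal sketch (our own illustration, not Home Depot’s implementation) of an edge site polling for its desired state and applying it locally, so nothing ever has to be pushed to thousands of stores at once. The manifest endpoint, site ID, and polling interval are placeholders.

```python
#!/usr/bin/env python3
"""Minimal sketch of pull-based edge configuration: each site polls for its
desired state and applies it locally. Endpoint, site ID, and interval are
illustrative assumptions."""
import hashlib
import subprocess
import time
import urllib.request

MANIFEST_URL = "https://config.example.com/sites/store-1234/manifests.yaml"  # hypothetical
POLL_SECONDS = 300

last_applied = None

while True:
    try:
        with urllib.request.urlopen(MANIFEST_URL, timeout=30) as resp:
            manifests = resp.read()
        digest = hashlib.sha256(manifests).hexdigest()
        if digest != last_applied:
            # Apply only when the desired state actually changed.
            subprocess.run(["kubectl", "apply", "-f", "-"], input=manifests, check=True)
            last_applied = digest
    except Exception as exc:  # keep polling through flaky links
        print(f"pull failed, will retry: {exc}")
    time.sleep(POLL_SECONDS)
```

In practice you would reach for GitOps tooling such as Flux or Argo CD rather than hand-rolling this loop, but the operational win is the same: each site converges on declared state on its own schedule, and a flaky WAN link just delays convergence instead of breaking a push.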
Taming The Complexity Beast
The Taming the Complexity Beast: How Organizations are Rethinking Software Architecture and Deployment panel felt like group therapy for anyone who has lived through the microservices-for-everything era. The speakers were direct: a lot of us copied Netflix-scale architectures for very non-Netflix problems, and we are now paying for it in cloud spend, operational drag, and security sprawl. Observability has turned into a treasure hunt across ten or twenty services, each with its own traces and dashboards, and that is only when teams remember to instrument things correctly. Every microservice became its own security perimeter, with its own secrets and certificates, so we effectively multiplied our attack surface by the number of pods we shipped. On top of that, we keep buying new layers of abstraction and tooling, hoping each one will finally simplify the system, when in reality we are just adding more moving parts that someone has to patch, secure, and understand.
The thread running through the conversation was a call for complexity-aware architecture, a message especially relevant for teams working at the edge. Instead of starting from “Kubernetes plus eighteen microservices plus three meshes,” the panel pushed a more practical question: what are you actually trying to do, and what is the simplest architecture that can do it safely and reliably? Sometimes that really is deploying a single virtual machine or a small cluster with a few well-understood services. Containers and Kubernetes are not going away, but the message was clear: minimize layers, package only what you need, treat operational complexity and security as first-class design constraints, and stop shifting raw problems left without delivering real solutions.
Observability Overload: Many Solutions, No One-Size-Fits-All
Walking the expo floor, it was clear that observability and fleet management tools were indeed plentiful. From monitoring startups to AIOps platforms, everyone claims to have the answer for managing cloud-native applications and Kubernetes (and by extension, edge deployments). The sheer number of vendors present underscores that this is not a solved problem, at least not in a universally agreed-upon way. Some focus on logs and metrics, others on tracing, event-driven automation, or configuration drift management.
The takeaway for us is that observability at the edge remains a choose-your-own-adventure. Your strategy may involve a combination of open-source tools (Prometheus, Grafana, Loki, OpenTelemetry, etc.) or proprietary services, depending on what fits your scale and skillset. The conference reinforced that there’s no silver bullet here; the industry hasn’t converged on one approach, which leaves plenty of room for innovation. For edge practitioners, the key is to ensure whatever solution you pick can handle highly distributed, intermittent-connectivity environments, as we discussed in this post. It was encouraging to see edge use-cases discussed in observability sessions, but it was also a reminder to double down on simplicity: more moving parts in your monitoring stack can mean more things to break in the field. Stay tuned for our next blog post, which covers our latest observability discussion with a well-known cloud-native developer.
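As one illustration of what “fits your scale and skillset” can look like, here is a sketch of tuning the OpenTelemetry Python SDK’s batching exporter for a site with an unreliable uplink. The endpoint, service name, and queue sizes are our own assumptions, not recommendations.

```python
"""Sketch: tuning OpenTelemetry's batching exporter for an edge site with a
flaky uplink. Endpoint, service name, and sizes are illustrative assumptions."""
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "edge-app", "site.id": "store-1234"})
)
provider.add_span_processor(
    BatchSpanProcessor(
        # Export to a local collector that buffers and forwards when the WAN is up.
        OTLPSpanExporter(endpoint="otel-collector.observability:4317", insecure=True),
        max_queue_size=8192,           # tolerate longer outages before dropping spans
        schedule_delay_millis=30_000,  # batch less often to conserve bandwidth
        max_export_batch_size=512,
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("edge-app")
with tracer.start_as_current_span("price-lookup"):
    pass  # application work happens here
```

In many edge deployments the heavier buffering lives in a local OpenTelemetry Collector (or equivalent) that forwards data when connectivity returns; the SDK knobs above just keep the application itself resilient in the meantime.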
The Edge is Real (and Getting Boring in a Good Way)
A couple of years ago, edge Kubernetes was seen as niche. At KubeCon 2025, it’s clear that the edge has moved from hype to practical reality. The dedicated Edge Day and edge-focused breakouts drew attendance on par with previous years, but the conversations had matured noticeably. As one Edge Monster observed, “Two years ago, a lot of the edge talk was theoretical. This year, the hallway track was full of people from large enterprises with real deployments, asking mature questions.” In other words, the market is growing up: companies are no longer just experimenting; they’re running significant production workloads outside the cloud data center and dealing with the hard questions of scale, reliability, and management at the edge.
Our team also dug into hallway conversations about taming edge complexity. One golden nugget came from chatting with folks working on bare-metal provisioning: what if an edge node could boot straight from a container image, like a GRUB boot loader container? In other words, treat the entire edge node OS as an immutable container. This approach could give teams tight control over edge environments and let them deploy or roll back edge systems as easily as container apps. It’s an intriguing idea: containerizing the boot process would remove a lot of today’s bootstrapping headaches and simplify lifecycle management. We’ll be watching that space closely as it evolves.
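To show roughly what that could look like in practice, here is a hypothetical sketch that borrows the bootable-containers (bootc) tooling as one possible shape of the idea; the image reference and the rollout flow are our own assumptions, not something presented at the show.

```python
"""Sketch of the "OS as an immutable container" hallway-track idea, using the
bootable-containers (bootc) CLI as one possible shape. The image reference and
this rollout flow are illustrative assumptions."""
import subprocess

DESIRED_IMAGE = "registry.example.com/edge/store-os:2025.11"  # hypothetical OS image

# Point the node at a new OS image; like a container rollout, the change is
# staged as a whole image and only takes effect on the next boot.
subprocess.run(["bootc", "switch", DESIRED_IMAGE], check=True)

# Reboot into the new image during the site's maintenance window.
subprocess.run(["systemctl", "reboot"], check=True)
```

The appeal is the symmetry: the OS rolls forward, or back to the previous image, with the same motions you already use for container apps.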
Projects with Momentum
A technology that got several nods in edge discussions was KubeVirt, the Kubernetes virtualization API that lets you run VMs on your cluster. KubeVirt isn’t new or shiny at this point (it’s an Incubating CNCF project quietly nearing version 1.7), but it’s steadily improving and proving its value. For edge computing, KubeVirt can be a game-changer because many edge use-cases still involve virtual machines (think: legacy apps, VNFs in telco, or specialized workloads that can’t be easily containerized). Running those side-by-side with containers on one platform simplifies operations. The KubeVirt community has been hard at work; adoption is rising, and it’s backed by major players from Arm to Cloudflare. We heard about upcoming features and better tooling (e.g. improved VM lifecycle management and UI) that will make KubeVirt easier to use at scale. This is great news for edge architects. It means one less stack to manage.
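For readers who haven’t tried it, here is a minimal sketch of what running a VM next to your containers looks like with KubeVirt’s VirtualMachine resource, created via the Kubernetes Python client. It assumes KubeVirt is already installed in the cluster; the VM name and sizing are illustrative.

```python
"""Sketch: declaring a KubeVirt VirtualMachine alongside ordinary pods, using
the Kubernetes Python client. Assumes KubeVirt is installed in the cluster."""
from kubernetes import client, config

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # KubeVirt's public demo disk image; swap in your own.
                        "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                    }
                ],
            }
        },
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm,
)
```

The point is that the VM is just another Kubernetes object: the same kubectl, GitOps, and observability workflows you use for containers apply to it as well.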
If one technology seemed to be on everyone’s lips this year, it was Crossplane, the CNCF project for building cloud-agnostic control planes. Indeed, internal developer platforms (IDPs) and control-plane abstractions were front and center at KubeCon, underscoring a shift back toward developer productivity. After years of developers drowning in YAML and Kubernetes internals, the pendulum is swinging back to higher-level platforms. The Edge Monsters are intrigued by Crossplane’s rise and whether it could play a role at the edge. On one hand, Crossplane could help manage heterogeneous edge infrastructure through familiar K8s APIs. On the other hand, it’s new enough that best practices for established edge environments aren’t yet clear.
Final Thoughts
KubeCon + CloudNativeCon NA 2025 felt like a turning point where the cloud-native world and edge computing world came closer together. Rather than treating edge as a totally distinct realm, many are now treating it as another environment under the cloud-native umbrella. The conference reinforced a few things for us Edge Monsters:
- Maturity brings opportunity: A stable Kubernetes core lets us tackle higher-level problems with confidence. It might not grab headlines, but boring is good when you need your infrastructure to just work.
- Developer experience is king: Whether in a cloud data center or a closet in a retail store, developers want platforms that remove friction. Expect to see more platform engineering efforts ensuring that going cloud-native doesn’t mean drowning in complexity at the edge.
- The edge journey continues: It was refreshing to see the edge community pushing forward. Projects like K3s for lightweight Kubernetes, KubeVirt for VMs, and new provisioning options, such as containers as boot media, are all pieces of the puzzle. The edge is real and here to stay, and each KubeCon makes that more apparent.
In the end, the Edge Monsters left Atlanta inspired – not by hype, but by the hard work and ideas percolating across the community.
Be sure to subscribe to updates and follow us on LinkedIn.
The Edge Monsters: Jim Beyers, Colin Breck, Brian Chambers, Tilly Gilbert, Michael Henry, Michael Maxey, Chris Milliet, Erik Nordmark, Joe Pearson, Steve Savage, Jim Teal, & Dillon TenBrink.