The Open Edge Architecture Imperative


The debate between open and closed architecture has raged throughout the computer age. First, there was PC versus Mac. More recently, we have debated iOS versus Android. While many think taking a side in this argument is a matter of brand fandom and personal preference, the real underlying question is open versus closed architecture.

On the iOS side, every aspect of the ecosystem is under Apple's careful control, from the hardware to the software. On the Android side, the system is far more open: a variety of manufacturers build devices and apps against a consensus-driven set of requirements rather than a single company's mandates. In the end, Android came to dominate iOS worldwide, with nearly 80 percent of mobile OS shipment market share in 2018.

In other words, open architecture wins, particularly in a world of diverse hardware. Now we are about to watch the same debate play out for edge computing, and we are willing to wager that open architecture will again win out in growth and innovation. But first, current innovation must break free of closed architectures to realize the full potential of edge computing.

Early Days of Edge Computing

Right now, early versions of edge computing are progressing because of their closed, vertically integrated nature, much like the early days of iOS. The reason is simple: when you control all aspects of an ecosystem, it is easy to ensure conformity and compatibility. Building new applications is simple because you can assume that all hardware meets expectations and is compatible. Real-time applications, like the AI in a self-driving car, are driving innovation at the edge.

Looking deeper, the self-driving car is in fact a rolling data center. Today, however, it is treated as a predefined piece of hardware in a highly controlled, vertical architecture: part of a walled garden rather than a cloud-native, open architecture. The servers in this rolling data center can be virtualized, networked and orchestrated locally in the same way Amazon Web Services or Google Cloud orchestrates its centralized, back-end data centers. This closed architecture means app developers can take for granted that the infrastructure is there and that the orchestration layer between the apps and the physical car is already provided. What ensues is a highly secure and resilient environment in which AI and apps can be developed for the car without worrying about assembling the car's entire infrastructure.
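
To make this concrete, here is a minimal sketch of what such an orchestration layer does. It is illustrative only: the names (EdgeNode, schedule, the capability strings) are hypothetical, not any real orchestrator's API. The point is that an app declares what it needs, and the platform places it on whatever node satisfies those needs, regardless of who built the hardware.

from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    name: str
    capabilities: set[str] = field(default_factory=set)

def schedule(requirements: set[str], nodes: list[EdgeNode]) -> EdgeNode:
    """Place an app on the first node advertising every required capability."""
    for node in nodes:
        if requirements <= node.capabilities:
            return node
    raise RuntimeError("no edge node satisfies the app's requirements")

nodes = [
    EdgeNode("cabin-gateway", {"arm64", "can-bus"}),
    EdgeNode("drive-computer", {"arm64", "gpu", "lidar", "can-bus"}),
]

# The AI-driver app never names a specific box; it only declares its needs.
print(schedule({"gpu", "lidar"}, nodes).name)  # -> drive-computer

In a closed architecture this scheduling logic is owned by the car maker; in an open one, it is a commodity platform layer any vendor can target.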

But this type of hyper-growth cannot and will not last forever. In a closed ecosystem, growth and innovation can only go so far before the limits of an architecture controlled by a single entity shift from accelerating innovation to stifling it.

Cloud Computing Origins

We can also examine this issue in terms of the evolution of cloud computing. Had cloud computing's pioneers not realized that the best data center is one where commodity hardware is disaggregated from the application by a platform layer, the cloud would not exist as we know it. An environment that abstracts the underlying infrastructure away from the application in this way is called cloud-native. These cloud-native environments are what allow companies like Amazon and Netflix to outcompete the rest of the industry, reaching a velocity of application innovation that would be unthinkable in a more traditional IT environment.
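
As a toy illustration of that abstraction, consider an application written against a platform interface rather than a specific backend. The BlobStore interface below is invented for this sketch, not a real cloud API; the same application code would run unchanged whether the interface were backed by S3, Google Cloud Storage or flash storage on an edge device.

from abc import ABC, abstractmethod

class BlobStore(ABC):
    """The platform layer: apps code against this, never against hardware."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    # One possible backend; a cloud bucket or local flash could stand in here.
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data
    def get(self, key: str) -> bytes:
        return self._data[key]

def application(store: BlobStore) -> bytes:
    # The app never knows, or cares, which infrastructure it landed on.
    store.put("trip/42", b"telemetry")
    return store.get("trip/42")

print(application(InMemoryStore()))  # b'telemetry'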

Around the Monolith, Towards an Open Imperative

In similar fashion, the edge will ultimately resemble an economy of abundant edge compute “servers,” app developers and asset owners, not monolithic, fully integrated systems. The current tendency to view the edge as a fully integrated system looks through the embedded-computing lens of the past, a lens that will inevitably fall away as we move to a cloud-native future. It ignores the innovations that made the cloud possible and spawned operators who don't own any assets, such as Dropbox, Netflix, Airbnb, Uber and Snap. This same style of innovation will happen at the edge.

In an open architecture, the self-driving car becomes a set of generic compute devices that can host AI drivers. It is an artificial and limiting constraint to require that the car's manufacturer (Ford, for example) and the software operating the car come from the same company. Uber could put an AI driver in a generic-compute (cloud-native) car, complete a route, then relinquish operation to the next service that needs to provide a rider with a taxi, say Lyft or Waymo for argument's sake, with payments distributed among the driver, the operator and the asset owner.
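
What might that economy look like in code? The sketch below is speculative: every name and payment split is invented purely for illustration. But it shows the shape of the handoff, with one operator at a time leasing the generic car and each completed route splitting its fare among the AI-driver vendor, the operator and the asset owner.

from dataclasses import dataclass

@dataclass
class Lease:
    car_id: str
    operator: str  # whoever currently operates the car

def complete_route(lease: Lease, fare: float) -> dict[str, float]:
    """Distribute one fare; the percentages here are invented for illustration."""
    return {
        "ai_driver_vendor": round(fare * 0.50, 2),
        lease.operator: round(fare * 0.30, 2),
        "asset_owner": round(fare * 0.20, 2),
    }

car = "vin-1234"
for operator in ("Uber", "Lyft", "Waymo"):
    lease = Lease(car, operator)                  # operator takes over the car...
    print(operator, complete_route(lease, 20.0))  # ...runs a route, then releases it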

This is the true path forward for edge computing: a world where, regardless of hardware, developers can create applications that run uniformly on the edge. Constricting the architecture to keep control works only in the short term.

About the Author

Roman Shaposhnik is Co-Founder and VP of Product & Strategy at ZEDEDA. He is also a member of the board of directors at The Apache Software Foundation and has been involved in open source software for more than a decade. He has hacked on projects ranging from the Linux kernel to the flagship multimedia library FFmpeg. He grew up professionally at Sun Microsystems, where he had the opportunity to learn from some of the best software engineers in the industry.

 
