Cloud Optimization 2.0: Beyond Better Performance


It’s no secret that the adoption of cloud computing is on the rise. Experts predict that 67% of enterprise infrastructure will have migrated to the cloud by the end of 2020, and soon, 82% of workloads will be cloud-based. These numbers are likely even greater now that the pandemic has forced digital transformation into overdrive.

As enterprises accelerate their migration to the cloud, the age-old question of optimization has become relevant once again. Given the flexibility of the cloud and cloud-native code, and the rapid code delivery that CI/CD practices have enabled, the central question enterprises face is what to optimize and how to optimize it. That is often easier said than done.

The Flaw of Cloud Optimization 1.0

Way back in 1974, Donald Knuth, American computer scientist and professor emeritus at Stanford University, said the following: “Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs… We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.”

Premature optimization is one of the defining errors of traditional optimization methods, and it is typical of the dated approach to cloud optimization, in which optimization is static, reactive, and manual.

Traditionally, optimization has occurred only when cloud applications became bogged down, experienced downtime, and/or failed to meet service-level agreements. Most application parameters have remained unconfigured altogether, and organizations have generally turned to over-provisioning to buy peace of mind. When something does go wrong, the typical response has been to throw more people and more resources at the problem.

With this approach, premature optimization wastes everyone's time: effort goes into tuning noncritical parts of the system, while real optimization doesn't begin until faults have already surfaced. On top of that, the whole process is expensive and wasteful.

But worst of all, optimization based on human expertise and intelligence alone simply won't cut it anymore. As cloud applications and environments become more mature and complex, it is essentially impossible for humans to keep up. A simple five-container application, for example, can have 255 trillion permutations of resource and basic parameter settings.
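To get a rough sense of where a number that large comes from, consider how quickly even a modest grid of per-container settings multiplies out. The step counts in this sketch are illustrative assumptions, not the article's actual parameter set, and even coarser grids than these reach hundreds of trillions of combinations.

```python
# Back-of-the-envelope illustration of combinatorial explosion in
# container tuning. All step counts below are assumed for the
# example, not taken from the article.

CPU_STEPS = 80        # e.g., CPU limits from 0.1 to 8.0 vCPU in 0.1-vCPU steps
MEM_STEPS = 126       # e.g., memory from 128 MiB to 8 GiB in 64-MiB steps
REPLICA_STEPS = 10    # e.g., 1 to 10 replicas per service

per_container = CPU_STEPS * MEM_STEPS * REPLICA_STEPS
total = per_container ** 5   # five containers tuned jointly

print(f"Settings per container: {per_container:,}")
print(f"Joint permutations for 5 containers: {total:.2e}")
```

The point is not the exact figure but the exponent: each additional container multiplies the search space by the full set of per-container options, which is why manual tuning cannot scale.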

Cloud Optimization 2.0: Humans+AI+ML

If cloud optimization 1.0 is no longer up to the task, what does cloud optimization 2.0 look like?

Combining human expertise with artificial intelligence and machine learning enables interventions that are not possible when enterprises rely on people alone. Even the best engineers and programmers second-guess themselves, and that doubt compounds across decisions. Machines, in contrast, have no doubt: even in situations riddled with uncertainty, AI-powered optimization engines and tools can tirelessly experiment with thousands to millions of combinations until they find a solution.
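The structure of such an engine can be sketched simply. The loop below uses random search for brevity; production engines typically use smarter strategies such as Bayesian optimization, and the deploy_and_measure() helper here is a hypothetical stand-in for actually deploying a candidate configuration and observing real metrics.

```python
import random

SLO_P99_MS = 200          # assumed latency objective
SEARCH_BUDGET = 10_000    # number of candidate configs to try

def random_config():
    # Sample one candidate configuration from an assumed parameter space.
    return {
        "cpu": round(random.uniform(0.1, 8.0), 1),   # vCPUs
        "mem_mib": random.randrange(128, 8192, 64),  # memory in MiB
        "replicas": random.randint(1, 10),
    }

def deploy_and_measure(cfg):
    # Placeholder: in practice this would apply cfg to a staging
    # environment and observe real metrics. Here we fabricate a toy
    # cost/latency model so the sketch runs end to end.
    cost = cfg["cpu"] * cfg["replicas"] + cfg["mem_mib"] / 1024
    p99_latency_ms = 400 / (cfg["cpu"] * cfg["replicas"])
    return cost, p99_latency_ms

best_cfg, best_cost = None, float("inf")
for _ in range(SEARCH_BUDGET):
    cfg = random_config()
    cost, p99 = deploy_and_measure(cfg)
    # Keep the cheapest configuration that still meets the SLO.
    if p99 <= SLO_P99_MS and cost < best_cost:
        best_cfg, best_cost = cfg, cost

print(f"Best config found: {best_cfg} at cost {best_cost:.2f}")
```

Unlike a human operator, the loop never tires, never anchors on a hunch, and evaluates every candidate against the same objective.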

When optimization 2.0 technology is integrated into the application deployment platform, the optimization process is automated at enterprise scale. For businesses, the benefits include achieving the lowest possible cost for the services they run, whether deployed internally or offered as SaaS.

Optimization 2.0 provides enterprises with the best of all worlds: the flexibility of the cloud, code modularized through microservices, rapid delivery of new features and products, and higher margins and lower costs by virtue of autonomous, continuous optimization of the application delivery platform. This holds regardless of the deployment platform an enterprise chooses, whether a self-managed Kubernetes deployment or a managed Kubernetes service from a cloud provider.
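As a minimal sketch of what "integrated into the deployment platform" can mean in a Kubernetes setting, the snippet below pushes an optimizer's recommendation into a Deployment's resource requests using the official Kubernetes Python client. The recommendation values and the "web" Deployment and container names are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
apps = client.AppsV1Api()

# Hypothetical output of an optimization run.
recommendation = {"cpu": "500m", "memory": "768Mi", "replicas": 3}

patch = {
    "spec": {
        "replicas": recommendation["replicas"],
        "template": {
            "spec": {
                "containers": [{
                    "name": "web",  # must match the container name in the Deployment
                    "resources": {
                        "requests": {
                            "cpu": recommendation["cpu"],
                            "memory": recommendation["memory"],
                        }
                    },
                }]
            }
        },
    }
}

# Apply the recommended settings; Kubernetes rolls out the change.
apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)
```

Because the same API works against self-managed and provider-managed clusters alike, the optimization step can run continuously without caring where the cluster lives.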

A Continuous Challenge

CI/CD – continuous integration and delivery – works best on the cloud. For many enterprises, cloud-native code, namely architectures built on event-triggered microservices, has made product delivery and upgrades more flexible. "Continuous" is the operative word, and it deserves to be applied to cloud optimization as well, so that efficiency and respecting the budget are not sacrificed in pursuit of speed and agility.

About the Author

Ross Schibler is co-founder and CEO of Opsani. He comes to work every day excited to face a new challenge and a new opportunity. Ross is no stranger to starting a business – he has launched two successful companies before – but he believes Opsani will be the one to fundamentally change the world.
