The Future Starts Now – Achieving Successful Operation of ML & AI-Driven Applications

Operationalizing AI and ML has become a business necessity, as industries across the board rely on large volumes of real-time data to drive automated decision-making. Experience in the data science field has shown that ML models and AI deliver few tangible business benefits until they are operationalized. In this e-book, our friends over at MemSQL show us how to successfully deploy model-driven applications into production.

Spell MLOps Platform Launches ‘Spell for Private Machines’ to Streamline DevOps and Foster Deeper Team Collaboration for Enterprises

Spell – a leading end-to-end machine learning platform that empowers businesses to get started with machine learning projects and make better use of their data – announced its new Spell for Private Machines integration. With Spell for Private Machines, enterprise teams spearheading machine learning projects can use their privately owned GPUs and CPUs alongside cloud resources for experimentation and collaboration, reducing the time, money, and effort usually spent managing that infrastructure in-house.

Dotscience Enables Simplest Method for Building, Deploying and Monitoring ML Models in Production on Kubernetes Clusters to Accelerate the Delivery of Business Value from AI

Dotscience, a pioneer in DevOps for Machine Learning (MLOps), announced new platform advancements that offer the easiest way to deploy and monitor ML models on Kubernetes clusters, making Kubernetes simple and accessible to data scientists.
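For context, deploying a model to Kubernetes without such tooling typically means wrapping it in a web service, containerizing it, and writing deployment manifests by hand. The sketch below is a hypothetical minimal serving endpoint of that kind; Flask, a joblib-serialized model, and the file path are illustrative assumptions, not part of the Dotscience platform.

```python
# Hypothetical minimal model-serving app (not Dotscience's API): a Flask
# endpoint wrapping a serialized scikit-learn model, the kind of service
# that would be containerized and deployed to a Kubernetes cluster.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # assumed path to a trained model artifact

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"features": [[5.1, 3.5, 1.4, 0.2]]}
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    # On Kubernetes this would run behind a production WSGI server in a container.
    app.run(host="0.0.0.0", port=8080)
```

Platforms like the one described above aim to automate the packaging, rollout, and monitoring steps around a service like this, so data scientists are not hand-writing Dockerfiles and Kubernetes manifests themselves.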

Help! My Data Scientists Can’t Write (Production) Code!

In this contributed article, Nisha Talagala, Co-founder and CTO/VP of Engineering at ParallelM, takes a hard look at productionizing machine learning code and how integrating SDLC practices with MLOps (production ML) practices ensures that all code, ML or not, is managed, tracked, and executed safely.