Executive Spotlight: Oliver Schabenberger, COO & CTO at SAS


The insideBIGDATA 2019 Executive Round Up features insights from industry executives with lengthy experience in the big data industry. Here’s a look at the insights from Oliver Schabenberger, COO & CTO at SAS.

As COO and CTO, Oliver Schabenberger sets the technology direction for SAS and executes the company’s strategic direction and business priorities. He oversees multiple divisions within SAS, including R&D, Sales, Marketing, Information Technology and Customer Support, as well as divisions focused on solutions for IoT, financial risk management and cloud. Schabenberger joined the SAS R&D Division in 2002 and was named CTO in 2016. Prior to SAS, he served as Associate Professor of Statistics at Virginia Tech, where he earned his PhD in 1995. He frequently writes on emerging technology for publications such as Forbes.com and holds several patents on software design and algorithms.

The full text of Oliver Schabenberger’s insights from our Executive Round Up is provided below.

Daniel D. Gutierrez, Managing Editor & Resident Data Scientist – insideBIGDATA.com

insideBIGDATA: AI and its adaptability come with a significant barrier to deployment, particularly in regulated industries like drug discovery – “explainability” as to how AI reached a decision and gave its predictions. How will 2019 mark a new era in coming up with solutions to this very real problem?

Oliver Schabenberger: Explainability of AI is part of a larger effort toward fair, accountable and transparent AI systems. The issues surrounding algorithmic decision making are not new, but the conversation has ramped up in recent years. AI is bringing automated decisioning to new domains such as medical diagnostics and autonomous driving, and it is building systems that are more complex, less transparent and highly scalable. That combination makes us uneasy.

2019 will be the year that transparency in AI comes front and center. We need to shine a light on the black box. Right now, it is way too dark.

Explainability and transparency are not achieved by handing over thousands of lines of computer code or by listing the millions of parameters of an artificial neural network. The inputs and outputs of an AI system must be communicated in an easily consumable form: what type of information is the system relying on, and how does that affect its predictions and decisions?

Methods to explain the impact of features on predictions already exist; these enable us to shine a light on the workings of the black box by manipulating it.
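One widely used family of such methods is permutation importance: shuffle a single input feature and measure how much the model’s accuracy degrades. Below is a minimal sketch in Python; the dataset, model, and scikit-learn usage are illustrative assumptions, not part of any SAS product.

```python
# A minimal sketch of permutation importance, assuming scikit-learn.
# The model is treated as a black box: we never read its internals,
# we only perturb inputs and watch the outputs. Dataset and model
# choices here are purely illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Any opaque model could stand in here.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features whose shuffling hurts the model most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: "
          f"{result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

The appeal of this style of probing is that it works for any predictive model, no matter how complex its internals.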

Automation and autonomy are not the same thing. Monitoring the performance of systems built on algorithms is key, and it is the responsibility of those developing and those deploying the algorithms.

We have lots of algorithms that we understand pretty well in isolation, but we are not quite sure what happens when we put hundreds or even thousands of these algorithms together.

One of AI’s most important and useful features is its ability to make connections and inferences that are not obvious or may even be counter-intuitive. As AI takes on increasingly important and diverse tasks, data scientists need to be able to explain clearly and simply what their models are doing and why. This will build confidence and trust in AI and the decisions it supports.

insideBIGDATA: What industries do you feel will make the best competitive use of AI, machine learning, and deep learning in the next year? Pick one industry and describe how it will benefit from embracing or extending its embrace of these technologies.

Oliver Schabenberger: AI has benefited, and will continue to benefit, all industries, but strong examples in healthcare tend to highlight the capabilities in an accessible way. In fact, I’ve seen it save the life of one of my employees.

One day in early 2017, I received an email informing me that Jared Peterson, the young and talented software manager who runs our cognitive computing team, was in the hospital with what appeared to be a stroke. Medical imaging, and the use of predictive analytics, optimization and machine learning to process and analyze Jared’s MRI images, saved his life. It was serendipitous that his team at SAS had been working on adding medical image analysis capabilities to our own products.

Through embracing the big data that hospitals gather and applying AI, we can continue to glean new insights for the diagnosis and treatment of diseases and save lives like Jared’s.

It is of course particularly important within the healthcare space that we strike a balance between innovation and transparency in AI.

And it is also worth noting that AI and machine learning will complement people, especially knowledge workers, augmenting rather than replacing many jobs. For example, AI algorithms can read diagnostic scans with great accuracy, freeing doctors from repetitive tasks so they can help patients by applying their most valuable training and skills.

insideBIGDATA: As deep learning drives businesses to innovate and improve their AI and machine learning offerings, more specialized tooling and infrastructure will need to be hosted on the cloud. What’s your view of enterprises seeking to improve their technological infrastructure and cloud hosting processes to support their AI, machine learning, and deep learning efforts?

Oliver Schabenberger: The rise of AI during the last decade is due to the convergence of big data and big compute power with decades-old neural network technology. AI systems based on deep neural networks require large amounts of data to train well.

Improving technological infrastructure and cloud processes is a requirement for any business involved in digital transformation, whether business problems are solved through AI, machine learning, or other technologies. We will see continued investment in cloud infrastructure as a result.

Ultimately, however, the question of whether AI-driven insights are created via cloud-based or on-premises systems is secondary. What is central is that smarter and more effective data analysis by AI systems addresses real-world challenges for businesses and governments.

insideBIGDATA: How will AI-optimized hardware solve important compute and storage requirements for AI, machine learning, and deep learning?

Oliver Schabenberger: This is an exciting area. For many years, analytics and data processing followed behind advances in computing. The importance of AI has now changed the equation: hardware is being designed and optimized for AI workloads. One avenue is to increase the performance and throughput of systems to speed up training and to enable the training of more complex models. Graphics Processing Units (GPUs) are playing an important role in accelerating training and inference because of their high degree of parallelism and because they can be optimized for neural network operations. So are FPGAs, ASICs, and chip designs optimized for tensor operations. New persistent memory technology moves the data closer to the processor.
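As a concrete illustration of why this parallelism matters, the sketch below times the dense matrix products at the heart of neural network training. It assumes PyTorch and, optionally, a CUDA-capable GPU; it is an illustrative micro-benchmark, not a SAS implementation.

```python
# Time the workhorse operation of a dense neural network layer:
# multiplying a batch of activations by a weight matrix.
import time
import torch

# Pick the GPU if one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(4096, 4096, device=device)
w = torch.randn(4096, 4096, device=device)

_ = x @ w                      # warm-up run (kernel setup, caching)
if device.type == "cuda":
    torch.cuda.synchronize()   # GPU kernels launch asynchronously

start = time.perf_counter()
for _ in range(100):
    _ = x @ w
if device.type == "cuda":
    torch.cuda.synchronize()   # wait for all queued GPU work to finish
print(f"100 matrix products on {device}: "
      f"{time.perf_counter() - start:.3f}s")
```

On most hardware the GPU run completes many times faster than the CPU run, and that throughput gain is precisely what shortens training cycles and enables more complex models.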

A second exciting route is the development of computing architectures that enable constant training with low power consumption, for example neuromorphic chips. This is an important step to bring learning and adaptability to edge devices.

insideBIGDATA: What’s the most important role AI plays for your company’s mission statement? How will you cultivate that role in 2019?

Oliver Schabenberger: SAS’ mission is to transform a world of data into a world of intelligence. We help customers solve their most critical business issues, and we tackle humanitarian issues related to natural disasters, opioid abuse, child welfare and more. Our investment in honing AI technologies takes that mission to the next level. By making AI more transparent and accessible, we’re able to lead more organizations from data to discovery, equipping decision makers with powerful advanced analytics to automate as many of their decisions as possible.

Our approach to AI is multi-fold. By embedding artificial intelligence and machine learning into our products (tools and solutions), we empower others to benefit from AI without having to develop AI. By providing AI tooling such as deep learning, natural language processing and computer vision tools, we enable others to build powerful AI applications. By providing tools that govern, monitor, and explain AI models, we add transparency. Finally, we provide services to our customers to solve business problems; sometimes those solutions involve AI.

In my role as SAS COO and CTO, I am tasked with the execution of this vision from both a business and technology standpoint. As COO, I am responsible for ensuring SAS data scientists, marketers and salespeople remain curious about what’s next for our business, and how we can continue to help our customers solve new and emerging challenges. And as CTO, I am responsible for making sure these customers have access to innovative SAS technologies, including a host of AI and machine learning capabilities.

 
