OpsClarity, the intelligent monitoring company for web-scale applications, announced that its Intelligent Monitoring solution now provides monitoring for the growing and popular suite of open source data processing frameworks. OpsClarity understands the extremely complex and distributed runtime characteristics of modern data processing frameworks like Apache Kafka, Apache Storm, and Apache Spark, as well as datastores such as Elasticsearch, Cassandra, and MongoDB that act as sinks to these data processing frameworks. The solution enables DevOps teams to gain visibility into how these technologies depend on each other and to troubleshoot performance issues.
“Open source data processing frameworks have rapidly matured and gained enterprise adoption to provide immediate business value, whether it be to identify customer preferences on the fly, detect online fraud or IoT-enable the next electronic device in our homes,” said Amit Sasturkar, Co-Founder and CTO of OpsClarity. “OpsClarity has deep domain understanding of these distributed and complex data processing frameworks and how they work together and has built an intelligent assistant that visualizes the entire environment, detects and correlates failures, and provides guided troubleshooting.”
Enterprises use big-data frameworks to process and understand large-scale data. Technologies like Apache Kafka, Apache Spark and Apache Storm are constantly expanding the scope of what is possible. However, most of these data processing frameworks are themselves a complex collection of several distributed and dynamic components, such as producers/consumers and masters/slaves. Monitoring and managing these frameworks and their dependencies on each other is a non-trivial undertaking, one that usually requires an extremely experienced operations expert to manually identify the individual metrics, chart them, and then correlate events across them.
“Unresponsive applications, system failures and operational issues adversely impact customer satisfaction, revenue and brand loyalty for virtually any enterprise today,” said Holger Mueller, VP & Principal Analyst at Constellation Research. “The distributed and complex characteristics of modern data-first applications can add to these issues and make it harder than ever to troubleshoot problems. It is good to see vendors addressing this critical area with approaches that include analytics, data science, and proactive automation of key processes to keep up with the changes being driven by DevOps and web-scale architectures.”
OpsClarity leverages an advanced data-science and real-time streaming analytics approach to ingest huge amounts of metric and event data from a disparate set of open source frameworks and intelligently correlate metrics and events across them. OpsClarity synthesizes the various metrics, alerts and signals into an intuitive visual service topology with overlaid health status. This radically simplifies the effort required by DevOps teams to set up and troubleshoot these modern data frameworks.
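To make the idea of cross-framework correlation concrete, the toy sketch below flags anomalous points in each metric stream with a simple z-score test and then groups anomalies from different streams that occur close together in time. This is purely illustrative: the metric names, thresholds, and sample values are invented, and OpsClarity's actual analytics are far more sophisticated than this.

```python
# Toy sketch of cross-metric anomaly correlation. Not OpsClarity's
# implementation; metric names and data below are invented examples.
from statistics import mean, stdev

def zscore_anomalies(series, threshold=2.0):
    """Return indices whose value deviates from the series mean by
    more than `threshold` standard deviations."""
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(series) if abs(v - mu) / sigma > threshold]

def correlated_anomalies(metrics, window=2):
    """Group anomalies from different metrics that occur within
    `window` time steps of each other."""
    flagged = {name: zscore_anomalies(vals) for name, vals in metrics.items()}
    names = list(flagged)
    groups = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for ta in flagged[a]:
                for tb in flagged[b]:
                    if abs(ta - tb) <= window:
                        groups.append((a, b, ta, tb))
    return groups

# Hypothetical per-timestep samples: a Kafka throughput spike at t=4
# is followed by a Storm latency spike at t=5.
metrics = {
    "kafka.broker.msgs_in": [100, 101, 99, 100, 500, 100],
    "storm.worker.latency": [10, 11, 10, 9, 10, 60],
}
print(correlated_anomalies(metrics))
# → [('kafka.broker.msgs_in', 'storm.worker.latency', 4, 5)]
```

A real system would of course work over streaming windows and account for seasonality and noise; the point is only that pairing anomalies across streams turns two isolated alerts into one correlated incident.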
The OpsClarity Intelligent Monitoring solution provides the following for data processing frameworks:
- Auto-Discover: Automatically discover all the components of various data processing frameworks and automatically configure a deep and specific collection of metrics, events, alerts, process and network data. For example, Kafka brokers, Spark masters/slaves, Storm supervisors/workers are auto-discovered and auto-configured.
- Visual Topology: Automatically discover the service connections and dependencies to generate a logical visual topology for these data processing frameworks.
- Health Analysis: Enables immediate understanding of data processing framework component health, prioritized anomalies, and service-level metrics – all within the context of the topology.
- Troubleshooting: Provides highly specific and actionable anomaly detection and event correlation that enables rapid root cause analysis.
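The auto-discovery and topology features above can be sketched in miniature: classify running processes by known command-line signatures, then lift per-process network connections to service-level dependency edges. The signature strings and sample data here are an invented subset for illustration; this is not how OpsClarity's discovery is actually implemented.

```python
# Hypothetical sketch of service auto-discovery and topology inference.
# SIGNATURES is a made-up subset of process fingerprints; real main
# classes vary by framework version and deployment.
SIGNATURES = {
    "kafka.Kafka": "kafka-broker",
    "org.apache.spark.deploy.worker.Worker": "spark-worker",
    "org.apache.storm.daemon.supervisor": "storm-supervisor",
}

def classify(cmdline):
    """Return the service type for a process command line, if known."""
    for needle, service in SIGNATURES.items():
        if needle in cmdline:
            return service
    return None

def build_topology(processes, connections):
    """processes: {pid: cmdline}; connections: [(src_pid, dst_pid)].
    Returns service-level edges inferred from per-process connections."""
    services = {pid: classify(cmd) for pid, cmd in processes.items()}
    edges = set()
    for src, dst in connections:
        a, b = services.get(src), services.get(dst)
        if a and b and a != b:
            edges.add((a, b))
    return sorted(edges)

# Invented example hosts: a Storm supervisor and a Spark worker
# both talking to a Kafka broker.
processes = {
    101: "java -Xmx4g kafka.Kafka config/server.properties",
    202: "java -cp storm.jar org.apache.storm.daemon.supervisor",
    303: "java -cp spark.jar org.apache.spark.deploy.worker.Worker",
}
connections = [(202, 101), (303, 101)]
print(build_topology(processes, connections))
# → [('spark-worker', 'kafka-broker'), ('storm-supervisor', 'kafka-broker')]
```

Collapsing per-process connections into service-level edges is what turns raw host data into the kind of logical topology the bullet points describe, with health status then overlaid per node and edge.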