New Syncsort Trillium Software Delivers Data Quality at Scale

Syncsort, a leader in Big Iron to Big Data software, unveiled Trillium DQ for Big Data, which provides best-in-class data profiling and data quality capabilities in a single solution designed to work natively with distributed architectures. Using Trillium DQ for Big Data, organizations can apply data quality processes to large volumes of enterprise data on-premises or in the cloud, delivering trusted data for business insights, realizing the full potential of emerging technologies, and meeting their data governance and compliance requirements.

“Recent Syncsort research revealed that more than 72 percent of respondents reported sub-optimal data quality negatively impacting business decisions,” said Dr. Tendü Yoğurtçu, CTO, Syncsort. “Almost half also found that untrustworthy results or inaccurate insights from analytics were due to a lack of quality in the data fed into downstream applications such as AI and machine learning. By providing integrated data profiling, cleansing, standardization and matching on distributed and cloud platforms, we are empowering organizations to resolve data quality issues and drive significant business value from their data.”

Trillium DQ for Big Data is part of the market-leading Syncsort Trillium family of data quality products, designed to ensure enterprises understand and trust the quality of their data for effective use and accurate insights at the speed of business. It is an integrated solution that delivers profiling, cleansing, standardization and matching, including strong entity resolution, on distributed architectures, both on-premises and in the cloud.
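
To make these concepts concrete, the following is a minimal, generic sketch of standardization and matching on a distributed DataFrame using PySpark. It does not use Trillium's actual API; the sample records, column names and email-based matching rule are illustrative assumptions only.

```python
# Generic sketch of cleansing, standardization and matching on Spark.
# Not Trillium DQ's API; records, columns and the matching rule are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-matching-sketch").getOrCreate()

# Hypothetical customer records with inconsistent formatting.
customers = spark.createDataFrame(
    [
        (1, "  jane DOE ", "Jane.Doe@Example.com"),
        (2, "Jane Doe", "jane.doe@example.com"),
        (3, "John Smith", "john.smith@example.com"),
    ],
    ["id", "name", "email"],
)

# Standardization: trim whitespace and normalize case so equivalent values compare equal.
standardized = (
    customers
    .withColumn("name", F.initcap(F.trim(F.col("name"))))
    .withColumn("email", F.lower(F.trim(F.col("email"))))
)

# Naive matching / entity resolution: records sharing a standardized email
# are collapsed into one entity with the set of contributing source ids.
entities = standardized.groupBy("email").agg(
    F.collect_set("id").alias("matched_ids"),
    F.first("name").alias("canonical_name"),
)
entities.show(truncate=False)
```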

“Analysis of data quality process flow results is essential for data analysts to support continuous data quality improvement and deliver optimal results within targeted time windows,” continued Yoğurtçu. “With the new Syncsort Trillium product, we are enabling them to easily profile larger, more diverse data sources in a few simple steps, explore the profiling results from a business-friendly user interface to discover new insights and issues, and monitor the quality of their data so that reports and findings can be readily delivered to business leaders. Data analysts and data quality specialists can also design, develop and deploy highly scalable data quality solutions in their data pipelines or in real time to cleanse, standardize, match and resolve entities without technical expertise in Big Data and distributed architectures.”
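
As a rough illustration of what column-level profiling involves (again a generic PySpark sketch, not Trillium's profiling engine), the snippet below computes a row count plus null and distinct counts per column — the kind of summary a data analyst would review before defining data quality rules. The sample table and column names are assumptions for demonstration.

```python
# Generic column-profiling sketch on Spark; not Trillium DQ's profiling engine.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-profiling-sketch").getOrCreate()

# Hypothetical input; in practice this would be a table read from the data lake.
df = spark.createDataFrame(
    [("Jane Doe", "NY", 34), ("John Smith", None, 41), ("Jane Doe", "ny", None)],
    ["name", "state", "age"],
)

# One pass over the data: total rows, plus null and distinct counts per column.
profile = df.select(
    [F.count(F.lit(1)).alias("row_count")]
    + [F.sum(F.col(c).isNull().cast("int")).alias(f"{c}_nulls") for c in df.columns]
    + [F.countDistinct(c).alias(f"{c}_distinct") for c in df.columns]
)
profile.show(truncate=False)
```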

Highlights of data profiling and data quality capabilities provided by Trillium DQ for Big Data, on-premises and in the cloud, include:

  • Single solution supporting a variety of Big Data quality use cases: understanding the issues, challenges and value of the data in the data lake, including third-party data; requirements to utilize, govern and trust key business information; and advanced analytics that build data quality solutions providing a 360-degree view in support of critical analytical and compliance requirements (e.g. fraud detection, anti-money laundering, omni-channel marketing, predictive analytics, data science and machine learning)
  • Robust, scalable data profiling of high volumes of data and comprehensive matching of complex entities to help ensure data quality and build a 360-degree view of key entities (e.g. customers) for business-critical applications and AI pipelines within the data lake
  • Business user-friendly interface with hundreds of built-in business rules and drill-down capabilities to quickly get insight into issues and anomalies
  • “Design once, deploy anywhere” approach for deploying data quality jobs natively to Big Data execution frameworks, including Hadoop MapReduce and Spark, or through real-time services, dynamically optimizing processing with no coding or tuning required (see the illustrative sketch after this list)
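
The “design once, deploy anywhere” idea can be sketched generically as follows. This is not Trillium's deployment mechanism, only an assumed illustration in which a single cleansing rule is written once and then run either as a Spark UDF in a batch pipeline or directly on one record in a real-time service.

```python
# Generic "design once, deploy anywhere" sketch; not Trillium DQ's mechanism.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

def standardize_phone(raw):
    """Stand-in cleansing rule: keep digits only."""
    return "".join(ch for ch in raw if ch.isdigit()) if raw else None

# Batch deployment: the same rule runs across a distributed DataFrame as a UDF.
spark = SparkSession.builder.appName("dq-deploy-sketch").getOrCreate()
standardize_udf = F.udf(standardize_phone, StringType())
calls = spark.createDataFrame([("(555) 123-4567",), ("555.987.6543",)], ["phone"])
calls.withColumn("phone_std", standardize_udf("phone")).show()

# Real-time deployment: the identical rule is invoked on a single incoming record,
# e.g. from inside a web service handling one request at a time.
print(standardize_phone("+1 555 123 4567"))
```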
