No more sampling, no cutting corners on DQ rules. The Acceldata platform dynamically scales to measure the quality of all your critical data assets against every rule and business policy needed to ensure data trust.
Yes, we monitor data both in the cloud and on-premises. For on-premises data, our data plane can be deployed within your environment. The data plane sends only metadata back to the control plane; none of your actual data ever leaves your premises. Read more about the platform architecture here.
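To make the metadata-only boundary concrete, here is a minimal sketch of the principle, not Acceldata's actual implementation: the data plane profiles a table locally and ships only aggregate statistics. The function names, payload shape, and endpoint are illustrative assumptions.

```python
import json
from urllib import request

def profile_table(rows: list[dict]) -> dict:
    """Compute column-level metadata (counts, null rates) from local rows.
    Only aggregates leave this function; raw rows stay on-premises."""
    columns = rows[0].keys() if rows else []
    profile = {"row_count": len(rows), "columns": {}}
    for col in columns:
        values = [r.get(col) for r in rows]
        profile["columns"][col] = {
            "null_rate": sum(v is None for v in values) / len(values),
            "distinct_count": len({v for v in values if v is not None}),
        }
    return profile

def send_to_control_plane(profile: dict, url: str) -> None:
    """Ship the metadata payload to a hypothetical control-plane endpoint."""
    req = request.Request(url, data=json.dumps(profile).encode(),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)
```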
In addition to traditional Data Quality, we monitor data drift, schema drift, data freshness, and reconciliation of data across data hops. These and other monitors give you a comprehensive view of the health of your data.
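As a rough illustration of two of these monitor types, consider the sketches below; the function names, thresholds, and tolerance are our assumptions rather than Acceldata's logic. A freshness check flags an asset that missed its refresh window, and a reconciliation check compares row counts across a hop.

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_updated: datetime, expected_cadence: timedelta) -> bool:
    """Freshness check: True when the asset missed its expected refresh window."""
    return datetime.now(timezone.utc) - last_updated > expected_cadence

def reconciles(source_count: int, target_count: int, tolerance: float = 0.0) -> bool:
    """Reconciliation check: row counts match across a data hop within a tolerance."""
    if source_count == 0:
        return target_count == 0
    return abs(source_count - target_count) / source_count <= tolerance

# A table expected hourly but last refreshed three hours ago is stale.
assert is_stale(datetime.now(timezone.utc) - timedelta(hours=3), timedelta(hours=1))
# A silent drop of 1,000 rows between hops fails reconciliation.
assert not reconciles(1_000_000, 999_000)
```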
Acceldata’s anomaly detection algorithm is highly sophisticated and built entirely on your own data set, so the accuracy of alerts is very high. In addition, anomaly detection offers sensitivity levels (low, medium, and high), which you can use to control how many anomalies are surfaced.
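The actual algorithm is proprietary, but a minimal sketch of how sensitivity levels could work, assuming a simple z-score detector trained on the asset's own history, looks like this. The threshold values are purely illustrative.

```python
from statistics import mean, stdev

# Illustrative thresholds: higher sensitivity lowers the bar, surfacing subtler anomalies.
SENSITIVITY_THRESHOLDS = {"low": 4.0, "medium": 3.0, "high": 2.0}

def is_anomalous(history: list[float], value: float, sensitivity: str = "medium") -> bool:
    """Flag a new metric value that deviates too far from its own history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > SENSITIVITY_THRESHOLDS[sensitivity]

# The same spike can be an anomaly at high sensitivity but not at low.
row_counts = [100.0, 102.0, 98.0, 101.0, 99.0]
print(is_anomalous(row_counts, 104.0, "high"))  # True
print(is_anomalous(row_counts, 104.0, "low"))   # False
```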
We take a three-tier approach. First, based on AI-driven detection of asset type and field, a basic set of Data Quality policies is automatically applied; these policies can be modified or edited. Second, we apply anomaly detection to automatically detect drift in quality and other metrics, using AI models built solely on your data. Third, we allow you to write highly specific custom rules through a no-code interface. For highly complex, business-specific rules, we also enable low-code rules that can be written in languages such as SQL, Python, and JavaScript.
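A low-code rule of this kind might look like the following Python sketch. The rule, record shape, and field names are invented for illustration; the point is that an arbitrary business condition can be expressed in a few lines and evaluated against your data.

```python
def valid_order(record: dict) -> bool:
    """Hypothetical business rule: shipped orders must have a ship date
    on or after the order date."""
    if record["status"] != "shipped":
        return True
    # ISO-format date strings compare correctly in lexicographic order.
    return record["ship_date"] >= record["order_date"]

orders = [
    {"status": "shipped", "order_date": "2024-01-05", "ship_date": "2024-01-04"},
    {"status": "shipped", "order_date": "2024-01-05", "ship_date": "2024-01-06"},
]
failures = [r for r in orders if not valid_order(r)]  # the first order fails
```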
Critical data assets can be tagged so that any issues arising from them are immediately prioritized. You can also prioritize highly used assets for faster remediation, minimizing the impact to your business.