The desire to transform how you manage data quality.

Ever questioned why something is done the way it is, when the outcome is always the same and always unsatisfactory? After many years of experience at different financial institutions, the founders of DataSynthesis felt there had to be a better approach to the way enterprise data quality is managed. They saw too many firms constrained by their data, held back by it, and a long way from becoming data-driven businesses. Instead of data becoming the foundation for better decision-making, the difficulty of accessing high-quality data was limiting what the business could do and how quickly it could adapt to the future.

They saw institutions where data governance was just a passive process of documentation, rather than something that controlled data operations and encouraged the business to innovate with data. Where there was no common definition or consistent use of data across the institution, leading to confusion, complexity and an ever-increasing need for data reconciliation. Where structured and unstructured data were segregated rather than related. Where data errors were identified only after they had occurred, rather than being prevented at source before they could spread. Where every data expert needed an IT expert to make data available, and where every new system created yet another data silo to be managed. Where the cost and effort of maintaining a spaghetti-like data architecture were approaching breaking point.

One of the major themes to emerge from these observations was that the legacy tools in use each dealt with only a limited segment of the data management process. Data governance tools captured the definition and flow of data, but presented a passive and often outdated snapshot of it. Data quality tools were difficult to use and produced batch-based reports of issues long after the data errors had already flowed into downstream systems. Data integration tools contained embedded validation and transformation rules, scattered across the organisation and not designed for any kind of centralised control or management. The founders believed that ensuring end-to-end data quality and consistency required a far more integrated, top-down approach to data quality management.

The founders concluded that what was needed was a platform that enabled data policy to be directly and automatically transformed into data operations – one where data governance, data integration and data quality were brought into direct synchronisation. Given the operational constraints of existing architectures, such a platform needed to be implementable incrementally, leaving operational data in place while legacy tools were gradually retired. It needed to operate at enterprise scale, with performance that scaled automatically to meet any change in business needs. And it should allow data experts to take control of the whole data manufacturing process, whilst freeing technologists from the maintenance burden of a complex, costly, spaghetti-like data architecture. These are the design principles upon which DataSynthesis has built its platform, using modern internet-scale data technology to help clients simplify the process of defining, creating and maintaining high-quality enterprise data.