The lack of scalability in many legacy systems promotes a defensive approach to data operations. Processes take longer because they are batched into separate sequential jobs rather than run in parallel. Data quality is checked only after a process has finished, allowing bad data to enter the data ecosystem and delaying the point at which problems can be fixed. Business users are denied direct access to operational data for fear that their queries would slow production processes. New departments and new data needs spawn additional instances of data management software, increasing licensing costs and the complexity of reconciling between systems.
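The contrast between the defensive pattern and its alternative can be sketched in a few lines. The example below is illustrative only, not drawn from any particular product: it validates each record at ingestion (rather than auditing quality after the job has finished) and processes independent partitions concurrently (rather than as sequential batch jobs). The record schema and the `validate` rule are assumptions made up for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def validate(record):
    # Inline quality check: reject bad data before it enters the
    # ecosystem, instead of auditing after the batch has finished.
    # (The "non-negative amount" rule is a made-up example.)
    return record.get("amount") is not None and record["amount"] >= 0

def process_partition(records):
    # Load only the records that pass validation; failures surface
    # immediately, per partition, rather than in a later audit.
    valid = [r for r in records if validate(r)]
    return {"loaded": len(valid), "rejected": len(records) - len(valid)}

def run_parallel(partitions, max_workers=4):
    # Independent partitions run concurrently instead of as
    # separate sequential jobs; results come back in input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_partition, partitions))

partitions = [
    [{"amount": 10}, {"amount": -5}],   # contains one bad record
    [{"amount": 3}, {"amount": 7}],
]
print(run_parallel(partitions))
# [{'loaded': 1, 'rejected': 1}, {'loaded': 2, 'rejected': 0}]
```

Nothing here depends on a scalable platform, of course; the point is that when scale is not a constraint, validation can move into the pipeline and jobs can fan out, rather than being deferred and serialized defensively.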
Even within the field of data management, lack of scalability naturally biases vendor solutions towards segmentation by data type and data operation, whether that be data governance, data integration, data quality, data mastering or data collaboration. If the constraint of scalability were effectively removed, data management processes could be designed to be enterprise-wide in scope, to operate in real time, to be simpler, and to encourage much greater interaction and collaboration around data from all departments and all users, regardless of technical knowledge and ability.