Validating new releases is a vital cog in the mainframe SDLC. However, the test data involved makes this a complex task. Our new blog explores how mitigating risk requires a robust data management approach.
When it comes to IT change, mainframe applications remain mission-critical. The importance of these incumbent systems means that any change must be absolutely correct across all aspects. As a result, considerable resources are devoted to ensuring accuracy: skilled developers, testers, release managers, system administrators, and the mainframe technologies that support them.
Moreover, organizations must operate within stringent regulatory frameworks and be able to demonstrate that changes comply with a range of industrial, regional, and governmental regulations. A prime example is ISO 27701, specifically clause 6.11.3.1 – Protection of Test Data, which states:
“Organisations should carefully select test data to ensure that testing activity is both reliable and secure. Organisations should pay extra attention to ensuring that personally identifiable information (PII) is not copied into the development and testing environments.”
Clearly, orchestrating mainframe change—particularly from a risk and compliance perspective—is no small task.
Knowledge is Power – and Data is its Source
While the mainframe ecosystem is a secure, stable platform for controlled change, it’s not known for being easy to work with. Consider a typical change request—for example, updating an insurance application to modify pricing in an online quote system. This requires testing against valid datasets, simulating customer requests, validating parameter changes, and comparing outputs across various scenarios.
The test data alone demands meticulous planning. Extracting data that looks like production data—but isn’t—is a complex, resource-intensive process when done manually. Application teams and data specialists must collaborate to scope, identify, extract, redact, and obfuscate real data into fictitious but valid equivalents.
Multiply that by each test case and by each application change request, and a manual approach to building test data becomes a substantial bottleneck on the path to regulatory compliance.
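To make the obfuscation step concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the record layout, field names, and salt are illustrative, not from any real system): it replaces values flagged as PII with deterministic fictitious tokens while leaving the non-sensitive values a pricing test actually needs untouched.

```python
import hashlib

# Hypothetical production-like record; field names are illustrative only.
record = {"customer_id": "C-10481", "name": "Jane Smith",
          "postcode": "SW1A 1AA", "premium": 342.50}

def obfuscate(value: str, salt: str = "release-2024") -> str:
    """Deterministically replace a sensitive value with a fictitious token.
    The same input always yields the same token, so joins across extracted
    files still line up after masking."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
    return f"X{digest.upper()}"

masked = dict(record)
for field in ("customer_id", "name", "postcode"):  # fields flagged as PII
    masked[field] = obfuscate(str(record[field]))

# Non-sensitive values (e.g. the premium exercised by pricing tests) survive.
assert masked["premium"] == record["premium"]
assert masked["name"] != record["name"]
```

Determinism is the point of the design: because the mapping is repeatable, the same customer masks to the same token in every extract, which is what keeps multi-file test datasets consistent.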
Mainframe data expert Craig Mullins explained the challenge surrounding test data management in a recent article:
“One of the biggest challenges faced by developers is creating realistic test data that mimics the actual data the application will encounter in production. This is particularly important for applications that handle sensitive or personal data. If the test data is not representative of the actual data, the test results may not accurately reflect the application’s behaviour in production.”
Stuart Ashby, PopUp Mainframe Practice Lead, agrees. His perspective on the types of testing needed—and the data each requires—is an important consideration:
“You must remember that different phases of the software delivery life cycle (SDLC) have different data requirements. Unit testing requires just enough data to exercise all the code paths, including the freshly changed logic. These tests must execute as quickly as possible, meaning minimum viable data is important, and maybe with synthesised data because there is no representative data for the newly built feature in production, yet.
Elsewhere, system testing needs a more robust approach that is larger in scale and must cover functionality, scalability, and volume. Then, add in User Acceptance Testing (UAT) and performance testing data requirements, and the dependencies on data increase significantly. Each needs its own test data plan.”
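Ashby’s “minimum viable data” for unit testing can be sketched as follows. This is an assumption-laden illustration, not any product’s method: the parameter names and boundary values are invented for an insurance pricing change, and the generator produces one small row per boundary value rather than the full Cartesian product reserved for volume testing.

```python
# Hypothetical pricing parameters for a new quote feature; since no
# representative production data exists for it yet, we synthesise a
# minimum viable set of test rows instead.
BOUNDARY_VALUES = {
    "driver_age": [17, 18, 25, 70, 99],   # edges around rating bands
    "no_claims_years": [0, 1, 5],
    "vehicle_group": ["A", "M", "Z"],
}

def minimum_viable_rows(params: dict) -> list:
    """Cover every boundary value at least once by cycling the shorter
    value lists, instead of enumerating the full Cartesian product."""
    keys = list(params)
    longest = max(len(values) for values in params.values())
    rows = []
    for i in range(longest):
        rows.append({k: params[k][i % len(params[k])] for k in keys})
    return rows

rows = minimum_viable_rows(BOUNDARY_VALUES)
print(len(rows))  # 5 rows, versus 5 * 3 * 3 = 45 for the full product
```

The trade-off matches the quote: five rows execute fast enough for unit tests, while the 45-row exhaustive combination belongs to the larger-scale system and volume testing phases.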
Modernizing the Mainframe Testing Process
As both the velocity of change and regulatory pressure increase, mainframe leaders must modernize traditionally manual tasks like test data management and data masking.
Gary Hallam, of data masking experts Perforce Delphix, concurs:
“Masking enterprise production data for use in non-production environments is no longer a ‘tick-box’ exercise. With recent regulatory pressures (e.g., GDPR and DORA) and the speed of change demanded by the digital economy, enterprises need automated sensitive data discovery and masking to ensure compliance and speed. Repeatability, auditability, referential integrity, and application integrity are essential ingredients for secure, compliant test data delivery.
One of the most difficult challenges of masking is ensuring that application integrity is maintained, which may mean ensuring the correct sequencing of related dates and retaining relationships that are not necessarily coded within the database systems themselves. Examples include postcodes with addresses, nationalities with national ID numbers, and even values like email addresses (e.g., first name, last name, domain name). Scalable, adaptable masking algorithms, such as those provided by Perforce Delphix masking, simplify complex masking scenarios.
Effective masking tools must support all data sources — mainframe, cloud, hybrid — to ensure comprehensive, continuous compliance and secure non-production environments. Equally, they must offer a dynamic, flexible approach to avoid having to create new datasets from scratch each time; tools that automate the process are vital to its efficiency. Perforce Delphix is delighted to partner with mainframe experts PopUp Mainframe to offer this capability to our customers.”
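The integrity constraints Hallam describes can be illustrated with a small sketch (the name pools, date fields, and `example.com` domain are all hypothetical, and this is not Delphix’s algorithm): masked first and last names are chosen deterministically, the email address is rebuilt from the masked names so the relationship survives, and related dates are shifted by the same per-record offset so their ordering is preserved.

```python
import hashlib
from datetime import date, timedelta

FAKE_FIRST = ["Alex", "Sam", "Jo", "Chris", "Pat"]
FAKE_LAST = ["Taylor", "Morgan", "Reed", "Brooks", "Hayes"]

def pick(value: str, pool: list) -> str:
    """Deterministic lookup: the same real value always maps to the same
    fictitious one, preserving joins across tables."""
    h = int(hashlib.sha256(value.encode()).hexdigest(), 16)
    return pool[h % len(pool)]

def mask_customer(row: dict) -> dict:
    first = pick(row["first_name"], FAKE_FIRST)
    last = pick(row["last_name"], FAKE_LAST)
    # One offset per record, derived from its key, applied to all dates.
    shift = timedelta(days=int(hashlib.sha256(
        row["policy_id"].encode()).hexdigest(), 16) % 90)
    return {
        "policy_id": row["policy_id"],
        "first_name": first,
        "last_name": last,
        # Email rebuilt from the masked names: application integrity holds.
        "email": f"{first.lower()}.{last.lower()}@example.com",
        # Both dates move by the same offset: start still precedes renewal.
        "start_date": row["start_date"] + shift,
        "renewal_date": row["renewal_date"] + shift,
    }

row = {"policy_id": "P-991", "first_name": "Jane", "last_name": "Smith",
       "email": "jane.smith@realmail.com",
       "start_date": date(2023, 4, 1), "renewal_date": date(2024, 4, 1)}
masked = mask_customer(row)
```

A real tool must also scale this across millions of rows and keep the mapping consistent between databases; the sketch only shows why sequencing related dates and deriving dependent fields from masked values matter.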
Conclusion: Mainframe Modernization Means Automating Risk Management
Mainframe delivery has come a long way, but today’s demand for speed and volume of change poses new challenges. Manual processes, legacy tools, and increasing governance requirements create friction in already complex pipelines.
Providing on-demand development and test environments that integrate with modern DevOps workflows—and that deliver compliant, masked test data—represents a meaningful step forward. It meets both the need for agility and the imperative for compliance.
PopUp Mainframe’s comprehensive solution enables continuous delivery and continuous compliance without compromise.
Talk to us to find out how we can help.