Testing
How did ValidMind develop the tests that are currently in the library?
All the existing tests were developed using open-source Python and R libraries.
The developer framework's test interface is a lightweight wrapper: it defines utility functions for interacting with different dataset and model backends in a backend-agnostic way, plus functions that collect results and post them to the ValidMind backend using a generic results schema.
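To make the idea concrete, here is a minimal sketch of what such a wrapper could look like. This is a hypothetical illustration, not the actual ValidMind interface: the function names (`run_test`, `min_accuracy`), the schema fields, and the toy dataset are all assumptions made for the example.

```python
# Hypothetical sketch (not the actual ValidMind interface): a thin wrapper
# that runs a test against any dataset/model backend and packages the
# output in a generic results schema for posting to a results backend.
def run_test(test_fn, dataset, model, params=None):
    """Run `test_fn` and wrap its output in a generic results schema."""
    value = test_fn(dataset, model, **(params or {}))
    return {
        "test_name": test_fn.__name__,
        "params": params or {},
        "value": value,
        "passed": bool(value.get("passed", True)),
    }

# Toy test: check the model scores above a minimum accuracy on the dataset.
def min_accuracy(dataset, model, threshold=0.7):
    correct = sum(model(x) == y for x, y in dataset)
    acc = correct / len(dataset)
    return {"accuracy": acc, "passed": acc >= threshold}

# Toy backend: (input, label) pairs and a callable model.
dataset = [(0, 0), (1, 1), (2, 0), (3, 1)]
model = lambda x: x % 2  # predicts parity; correct on all four rows
result = run_test(min_accuracy, dataset, model, {"threshold": 0.9})
assert result["passed"] is True
```

Because the wrapper only assumes a callable test and a generic result dictionary, the same collection-and-posting logic can serve tests written against any dataset or model backend.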
Can tests be configured or customized, and can we add our own tests?
ValidMind allows tests to be configured at several levels:
- Administrators can configure which tests are required to run programmatically, depending on the model's use case.
- You can change the thresholds and parameters of tests already available in the developer framework (for instance, changing the threshold parameter for the class imbalance flag).
- In addition, ValidMind is implementing a feature that lets you add your own custom tests and connect them to the developer framework. These custom tests will be configurable and will run programmatically, just like the built-in framework tests (roadmap item, Q3 2023).
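As an illustration of the threshold configuration described above, here is a minimal sketch of a parameterized class imbalance check. This is a hypothetical example, not ValidMind's implementation: the function name `class_imbalance_flag` and its `min_class_ratio` parameter are assumptions standing in for a configurable test threshold.

```python
# Hypothetical sketch of a parameterized test (not the ValidMind API):
# a class imbalance check whose flag threshold is configurable, mirroring
# how thresholds of built-in framework tests can be adjusted.
from collections import Counter

def class_imbalance_flag(labels, min_class_ratio=0.2):
    """Flag the dataset if any class falls below `min_class_ratio` of rows."""
    counts = Counter(labels)
    total = sum(counts.values())
    ratios = {cls: n / total for cls, n in counts.items()}
    return {
        "ratios": ratios,
        "flagged": any(r < min_class_ratio for r in ratios.values()),
    }

# A 9:1 split is flagged under the default threshold but passes a looser one.
labels = [1] * 90 + [0] * 10
assert class_imbalance_flag(labels)["flagged"] is True
assert class_imbalance_flag(labels, min_class_ratio=0.05)["flagged"] is False
```

Exposing the threshold as a parameter is what makes the same test reusable across use cases with different risk tolerances.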
Is there a use case for synthetic data on the platform?
ValidMind’s developer framework supports bringing your own datasets, including synthetic datasets, for testing and benchmarking purposes, such as fair lending and bias testing.
We are happy to explore specific use cases for synthetic data generation with you.