Load dataset predictions
To enable tests to make use of predictions, you can load predictions into ValidMind dataset objects in several ways.
This interactive notebook includes the code required to load the demo dataset, preprocess the raw dataset, train a model for testing, and initialize ValidMind objects. It also covers the options for loading predictions with the assign_predictions() function: loading predictions from a file, linking an existing prediction column in the dataset with a model, or letting the developer framework run and link predictions to a model.
Contents
- About ValidMind
- Install the client library
- Initialize the client library
- Load the sample dataset
- Preprocess the raw dataset
- Train models for testing
- Initialize ValidMind objects
- Options to load predictions using the developer framework
- Next steps
About ValidMind
ValidMind is a platform for managing model risk, including risk associated with AI and statistical models.
You use the ValidMind Developer Framework to automate documentation and validation tests, and then use the ValidMind AI Risk Platform UI to collaborate on model documentation. Together, these products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between yourself and model validators.
Before you begin
This notebook assumes you have basic familiarity with Python, including an understanding of how functions work. If you are new to Python, you can still run the notebook but we recommend further familiarizing yourself with the language.
If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.
New to ValidMind?
If you haven’t already seen our Get started with the ValidMind Developer Framework, we recommend you explore the available resources for developers at some point. There, you can learn more about documenting models, find code samples, or read our developer reference.
For access to all features available in this notebook, create a free ValidMind account.
Signing up is FREE — Sign up now
Key concepts
Model documentation: A structured and detailed record pertaining to a model, encompassing key components such as its underlying assumptions, methodologies, data sources, inputs, performance metrics, evaluations, limitations, and intended uses. It serves to ensure transparency, adherence to regulatory requirements, and a clear understanding of potential risks associated with the model’s application.
Documentation template: Functions as a test suite and lays out the structure of model documentation, segmented into various sections and sub-sections. Documentation templates define the structure of your model documentation, specifying the tests that should be run, and how the results should be displayed.
Tests: A function contained in the ValidMind Developer Framework, designed to run a specific quantitative test on the dataset or model. Tests are the building blocks of ValidMind, used to evaluate and document models and datasets, and can be run individually or as part of a suite defined by your model documentation template.
Custom tests: Custom tests are functions that you define to evaluate your model or dataset. These functions can be registered with ValidMind to be used in the platform.
Inputs: Objects to be evaluated and documented in the ValidMind framework. They can be any of the following:
- model: A single model that has been initialized in ValidMind with vm.init_model().
- dataset: A single dataset that has been initialized in ValidMind with vm.init_dataset().
- models: A list of ValidMind models - usually this is used when you want to compare multiple models in your custom test.
- datasets: A list of ValidMind datasets - usually this is used when you want to compare multiple datasets in your custom test. See this example for more information.
Parameters: Additional arguments that can be passed when running a ValidMind test, used to pass additional information to a test, customize its behavior, or provide additional context.
Outputs: Custom tests can return elements like tables or plots. Tables may be a list of dictionaries (each representing a row) or a pandas DataFrame. Plots may be matplotlib or plotly figures.
Test suites: Collections of tests designed to run together to automate and generate model documentation end-to-end for specific use-cases.
Example: the classifier_full_suite test suite runs tests from the tabular_dataset and classifier test suites to fully document the data and model sections for binary classification model use-cases.
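To make the concepts above concrete, here is a minimal sketch of a custom test that ties together inputs, parameters, and outputs. It is illustrative only: the test name my_tests.ConfusionSummary, its logic, and the threshold parameter are hypothetical, and it uses the vm_train_ds and vm_model_xgb objects created later in this notebook:
import validmind as vm

# Hypothetical custom test: name, logic, and threshold are illustrative
# assumptions, not part of the ValidMind library itself
@vm.test("my_tests.ConfusionSummary")
def confusion_summary(dataset, model, threshold=0.5):
    """Summarize the accuracy of a model's linked predictions on a dataset."""
    y_true = dataset.y  # assumes the dataset object exposes targets as .y
    y_pred = dataset.y_pred(model)  # assumes predictions are linked to this model
    accuracy = float((y_true == y_pred).mean())
    # Tables may be returned as a list of dictionaries, one per row
    return [{"accuracy": accuracy, "passed": accuracy >= threshold}]

# Run it like any built-in test, passing inputs and params
vm.tests.run_test(
    "my_tests.ConfusionSummary",
    inputs={"dataset": vm_train_ds, "model": vm_model_xgb},
    params={"threshold": 0.5},
)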
Install the client library
The client library provides Python support for the ValidMind Developer Framework. To install it:
%pip install -q validmind
Initialize the client library
ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the client library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.
Get your code snippet:
In a browser, log into the Platform UI.
In the left sidebar, navigate to Model Inventory and click + Register new model.
Enter the model details and click Continue. (Need more help?)
For example, to register a model for use with this notebook, select:
- Documentation template: Binary classification
- Use case: Marketing/Sales - Attrition/Churn Management
You can fill in other options according to your preference.
Go to Getting Started and click Copy snippet to clipboard.
Next, replace this placeholder with your own code snippet:
# Replace with your code snippet
import validmind as vm

vm.init(
    api_host="https://api.prod.validmind.ai/api/v1/tracking",
    api_key="...",
    api_secret="...",
    project="...",
)
Preview the documentation template
A template predefines sections for your documentation project and provides a general outline to follow, making the documentation process much easier.
You will upload documentation and test results into this template later on. For now, take a look at the structure that the template provides with the vm.preview_template() function from the ValidMind library and note the empty sections:
vm.preview_template()
Load the sample dataset
The sample dataset used here is provided by the ValidMind library. To be able to use it, you need to import the dataset and load it into a pandas DataFrame, a two-dimensional tabular data structure that makes use of rows and columns:
# Import the sample dataset from the library
from validmind.datasets.classification import customer_churn as demo_dataset

print(
    f"Loaded demo dataset with: \n\n\t• Target column: '{demo_dataset.target_column}' \n\t• Class labels: {demo_dataset.class_labels}"
)

raw_df = demo_dataset.load_data()
raw_df.head()
Preprocess the raw dataset
Preprocessing performs a number of operations to get ready for the subsequent steps:
- Preprocess the data: Splits the DataFrame (raw_df) into multiple datasets (train_df, validation_df, and test_df) using demo_dataset.preprocess to simplify preprocessing.
- Separate features and targets: Drops the target column to create feature sets (x_train, x_val) and target sets (y_train, y_val).
train_df, validation_df, test_df = demo_dataset.preprocess(raw_df)
x_train = train_df.drop(demo_dataset.target_column, axis=1)
y_train = train_df[demo_dataset.target_column]
x_val = validation_df.drop(demo_dataset.target_column, axis=1)
y_val = validation_df[demo_dataset.target_column]
Train models for testing
Initialize the XGBoost and Logistic Regression classifiers:
from sklearn.linear_model import LogisticRegression
import xgboost

%matplotlib inline

xgb = xgboost.XGBClassifier(early_stopping_rounds=10)
xgb.set_params(
    eval_metric=["error", "logloss", "auc"],
)
xgb.fit(
    x_train,
    y_train,
    eval_set=[(x_val, y_val)],
    verbose=False,
)

lr = LogisticRegression(random_state=0)
lr.fit(
    x_train,
    y_train,
)
Initialize ValidMind objects
Initialize the ValidMind models
vm_model_xgb = vm.init_model(
    xgb,
    input_id="xgb",
)
vm_model_lr = vm.init_model(
    lr,
    input_id="lr",
)
Initialize the ValidMind datasets
Before you can run tests, you must first initialize a ValidMind dataset object using the init_dataset function from the ValidMind (vm) module.
This function takes a number of arguments:
- dataset — the raw dataset that you want to provide as input to tests
- input_id — a unique identifier that allows tracking what inputs are used when running each individual test
- target_column — a required argument if tests require access to true values. This is the name of the target column in the dataset
- class_labels — an optional value to map predicted classes to class labels
With all datasets ready, you can now initialize the raw, training, and test datasets (raw_df, train_df, and test_df) created earlier into their own dataset objects using vm.init_dataset():
vm_raw_ds = vm.init_dataset(
    input_id="raw_dataset",
    dataset=raw_df,
    target_column=demo_dataset.target_column,
)

vm_train_ds = vm.init_dataset(
    input_id="train_dataset",
    dataset=train_df,
    target_column=demo_dataset.target_column,
)

vm_test_ds = vm.init_dataset(
    input_id="test_dataset", dataset=test_df, target_column=demo_dataset.target_column
)
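Note that the optional class_labels argument was omitted above. If a test needs to display human-readable class names, you could pass it as well. A minimal sketch, assuming the mapping exposed as demo_dataset.class_labels (printed when the dataset was loaded) is in the format init_dataset expects:
vm_test_ds = vm.init_dataset(
    input_id="test_dataset",
    dataset=test_df,
    target_column=demo_dataset.target_column,
    class_labels=demo_dataset.class_labels,  # optional: maps predicted classes to labels
)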
Options to load predictions using the developer framework
Load predictions from a file
This creates a new column called <model_id>_prediction in the dataset and assigns metadata to track that the <model_id>_prediction column is linked to the model <model_id>.
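In practice, predictions loaded from a file are simply values you read into memory before assigning them. A minimal sketch, assuming a hypothetical predictions.csv file containing an xgb_prediction column:
import pandas as pd

# Hypothetical file: replace with the path to your own saved predictions
saved_predictions = pd.read_csv("predictions.csv")
vm_train_ds.assign_predictions(
    model=vm_model_xgb, prediction_values=saved_predictions["xgb_prediction"].values
)
For this demo, the equivalent prediction values are computed in-memory below.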
Predictions calculated outside of VM
import pandas as pd

train_xgb_prediction = pd.DataFrame(xgb.predict(x_train), columns=["xgb_prediction"])
test_xgb_prediction = pd.DataFrame(xgb.predict(x_val), columns=["xgb_prediction"])

train_lr_prediction = pd.DataFrame(lr.predict(x_train), columns=["lr_prediction"])
test_lr_prediction = pd.DataFrame(lr.predict(x_val), columns=["lr_prediction"])
Assign predictions to the training dataset
We can now use the assign_predictions() method from the Dataset object to link existing predictions to any model:
vm_train_ds.assign_predictions(
    model=vm_model_xgb, prediction_values=train_xgb_prediction.xgb_prediction.values
)
vm_train_ds.assign_predictions(
    model=vm_model_lr, prediction_values=train_lr_prediction.lr_prediction.values
)
Run an example test
Now, let’s run an example test such as MinimumAccuracy twice to show how we’re able to load the correct model predictions by using the model input parameter, even though we’re passing the same vm_train_ds dataset instance to the test:
full_suite = vm.tests.run_test(
    "validmind.model_validation.sklearn.MinimumAccuracy",
    inputs={"dataset": vm_train_ds, "model": vm_model_xgb},
)
full_suite = vm.tests.run_test(
    "validmind.model_validation.sklearn.MinimumAccuracy",
    inputs={
        "dataset": vm_train_ds,
        "model": vm_model_lr,
    },
)
Link an existing prediction column in the dataset with a model
This approach allows loading datasets that already have prediction columns in addition to feature and target columns. The developer framework assigns metadata to track the prediction columns that are linked to a given <vm_model> model.
train_df2 = train_df.copy()
train_df2["xgb_prediction"] = train_xgb_prediction.xgb_prediction.values
train_df2["lr_prediction"] = train_lr_prediction.lr_prediction.values
train_df2.head(5)
feature_columns = [
    "CreditScore",
    "Gender",
    "Age",
    "Tenure",
    "Balance",
    "NumOfProducts",
    "HasCrCard",
    "IsActiveMember",
    "EstimatedSalary",
    "Geography_France",
    "Geography_Germany",
    "Geography_Spain",
]

vm_train_ds = vm.init_dataset(
    dataset=train_df2,
    input_id="train_dataset",
    target_column=demo_dataset.target_column,
    feature_columns=feature_columns,
)
Link prediction column to a specific model
The prediction_column parameter tells the Dataset object which column should be linked to a given model:
vm_train_ds.assign_predictions(model=vm_model_xgb, prediction_column="xgb_prediction")
vm_train_ds.assign_predictions(model=vm_model_lr, prediction_column="lr_prediction")
full_suite = vm.tests.run_test(
    "validmind.model_validation.sklearn.MinimumAccuracy",
    inputs={"dataset": vm_train_ds, "model": vm_model_xgb},
)
full_suite = vm.tests.run_test(
    "validmind.model_validation.sklearn.MinimumAccuracy",
    inputs={"dataset": vm_train_ds, "model": vm_model_lr},
)
Let the developer framework run and link predictions to a model
This lets the developer framework run model predictions, create a new column called <model_id>_prediction, and assign metadata to track that the <model_id>_prediction column is linked to the <vm_model> model.
There are two ways to run and assign model predictions with the developer framework:
- When initializing a Dataset with init_dataset(). This is the most straightforward method to assign predictions for a single model.
- Using dataset.assign_predictions(). This allows assigning predictions to a dataset for one or more models.
Pass <vm_model> in the dataset interface
feature_columns = [
    "CreditScore",
    "Gender",
    "Age",
    "Tenure",
    "Balance",
    "NumOfProducts",
    "HasCrCard",
    "IsActiveMember",
    "EstimatedSalary",
    "Geography_France",
    "Geography_Germany",
    "Geography_Spain",
]

vm_train_ds = vm.init_dataset(
    model=vm_model_xgb,
    dataset=train_df,
    input_id="train_dataset",
    target_column=demo_dataset.target_column,
    feature_columns=feature_columns,
)
Through the assign_predictions interface
vm_train_ds = vm.init_dataset(
    dataset=train_df,
    input_id="train_dataset",
    target_column=demo_dataset.target_column,
    feature_columns=feature_columns,
)
Perform predictions using the same assign_predictions interface:
vm_train_ds.assign_predictions(model=vm_model_xgb)
vm_train_ds.assign_predictions(model=vm_model_lr)
Run an example test
Now, let’s run an example test such as MinimumAccuracy twice to show how we’re able to load the correct model predictions by using the model input parameter, even though we’re passing the same vm_train_ds dataset instance to the test:
full_suite = vm.tests.run_test(
    "validmind.model_validation.sklearn.MinimumAccuracy",
    inputs={"dataset": vm_train_ds, "model": vm_model_xgb},
)
full_suite = vm.tests.run_test(
    "validmind.model_validation.sklearn.MinimumAccuracy",
    inputs={
        "dataset": vm_train_ds,
        "model": vm_model_lr,
    },
)
Next steps
You can look at the results of these tests right in the notebook where you ran the code, as you would expect. But there is a better way — use the ValidMind platform to work with your model documentation.
Work with your model documentation
From the Model Inventory in the ValidMind Platform UI, go to the model you registered earlier.
Click and expand the Model Development section.
What you see is the full draft of your model documentation in a more easily consumable version. From here, you can make qualitative edits to model documentation, view guidelines, collaborate with validators, and submit your model documentation for approval when it’s ready. Learn more …
Discover more learning resources
We offer many interactive notebooks to help you document models:
Or, visit our documentation to learn more about ValidMind.