Quickstart for ongoing monitoring of models with ValidMind

Welcome! In this quickstart guide, you’ll learn how to seamlessly monitor your production models using the ValidMind platform.

We’ll walk you through the process of initializing the ValidMind Developer Framework, loading a sample dataset and model, and running a monitoring test suite to quickly generate documentation about your new data and model.

This notebook utilizes the Bank Customer Churn Prediction dataset from Kaggle to train a simple classification model for demonstration purposes.

About ValidMind

ValidMind is a platform for managing model risk, including risk associated with AI and statistical models.

You use the ValidMind Developer Framework to automate documentation, validation, and monitoring tests, and then use the ValidMind AI Risk Platform UI to collaborate on model documentation. Together, these products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between you and model validators.

Before you begin

This notebook assumes you have basic familiarity with Python, including an understanding of how functions work. If you are new to Python, you can still run the notebook, but we recommend further familiarizing yourself with the language.

If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.
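
For example, if the xgboost package used later in this notebook is not installed:

%pip install -q xgboost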

New to ValidMind?

If you haven’t already seen our Get started with the ValidMind Developer Framework guide, we recommend exploring the available resources for developers at some point. There, you can learn more about documenting models, find code samples, or read our developer reference.

For access to all features available in this notebook, create a free ValidMind account.

Signing up is FREE — Sign up now

Key concepts

Model documentation: A structured and detailed record pertaining to a model, encompassing key components such as its underlying assumptions, methodologies, data sources, inputs, performance metrics, evaluations, limitations, and intended uses. It serves to ensure transparency, adherence to regulatory requirements, and a clear understanding of potential risks associated with the model’s application.

Documentation template: Functions as a test suite and lays out the structure of model documentation, segmented into various sections and sub-sections. Documentation templates define the structure of your model documentation, specifying the tests that should be run, and how the results should be displayed.

Model monitoring documentation: A comprehensive and structured record of a production model, including key elements such as data sources, inputs, performance metrics, and periodic evaluations. This documentation ensures transparency and visibility of the model’s performance in the production environment.

Monitoring documentation template: Similar to a documentation template, the monitoring documentation template functions as a test suite and lays out the structure of model monitoring documentation, segmented into various sections and sub-sections. Monitoring documentation templates define the structure of your model monitoring documentation, specifying the tests that should be run, and how the results should be displayed.

Tests: A function contained in the ValidMind Developer Framework, designed to run a specific quantitative test on the dataset or model. Tests are the building blocks of ValidMind, used to evaluate and document models and datasets, and can be run individually or as part of a suite defined by your model documentation template.

Custom tests: Custom tests are functions that you define to evaluate your model or dataset. These functions can be registered with ValidMind to be used in the platform.

Inputs: Objects to be evaluated and documented in the ValidMind framework. They can be any of the following:

  • model: A single model that has been initialized in ValidMind with vm.init_model().
  • dataset: Single dataset that has been initialized in ValidMind with vm.init_dataset().
  • models: A list of ValidMind models - usually this is used when you want to compare multiple models in your custom test.
  • datasets: A list of ValidMind datasets - usually this is used when you want to compare multiple datasets in your custom test. See this example for more information.

Parameters: Additional arguments that can be passed when running a ValidMind test, used to pass additional information to a test, customize its behavior, or provide additional context.

Outputs: Custom tests can return elements like tables or plots. Tables may be a list of dictionaries (each representing a row) or a pandas DataFrame. Plots may be matplotlib or plotly figures.
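
For example, here is a minimal sketch of a custom test that returns a table, assuming the @vm.test decorator and the dataset.df accessor behave as in the ValidMind documentation (the test ID my_custom_tests.MissingValuesSummary and the max_missing_pct parameter are hypothetical):

import pandas as pd
import validmind as vm

@vm.test("my_custom_tests.MissingValuesSummary")  # hypothetical test ID
def missing_values_summary(dataset, max_missing_pct: float = 5.0):
    """Summarize missing values per column and flag columns above a threshold."""
    df = dataset.df  # underlying pandas DataFrame of a ValidMind dataset
    missing_pct = df.isna().mean() * 100
    # Tables can be returned as a pandas DataFrame (or a list of dictionaries)
    return pd.DataFrame(
        {
            "Column": missing_pct.index,
            "Missing (%)": missing_pct.values,
            "Exceeds Threshold": missing_pct.values > max_missing_pct,
        }
    )

Here, max_missing_pct illustrates a test parameter: when running the test, you could override the default by passing params={"max_missing_pct": 10.0}.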

Install the client library

The client library provides Python support for the ValidMind Developer Framework. To install it:

%pip install -q validmind

Initialize the client library

ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the client library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.

Get your code snippet

We’re going to use the code snippet from the ValidMind platform and ensure the monitoring parameter is set to True.

  1. In a browser, log into the Platform UI.

  2. In the left sidebar, navigate to Model Inventory and click + Register Model.

  3. Go to Getting Started and click Copy snippet to clipboard.

  4. Add the monitoring=True parameter to the vm.init() method.

Next, replace this placeholder with your own code snippet:

import validmind as vm

vm.init(
    api_host="https://api.prod.validmind.ai/api/v1/tracking",
    api_key="...",
    api_secret="...",
    project="...",
    monitoring=True,
)

Initialize the Python environment

Next, let’s import the necessary libraries and set up your Python environment for data analysis:

import xgboost as xgb
import validmind as vm
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from validmind.tests import run_test

%matplotlib inline

Preview the monitoring template

A template predefines sections for your model monitoring documentation and provides a general outline to follow, making the documentation process much easier.

You will upload documentation and test results into this template later on. For now, take a look at the structure that the template provides with the vm.preview_template() function from the ValidMind library and note the empty sections:

vm.preview_template()

Load the reference and monitoring datasets

The sample dataset used here is provided by the ValidMind library. For demonstration purposes, we’ll use the training, test, and validation splits as the training, reference, and monitoring datasets.

from validmind.datasets.classification import customer_churn

raw_df = customer_churn.load_data()

train_df, reference_df, monitor_df = customer_churn.preprocess(raw_df)

Load the production model

We will also load a pre-trained model for demonstration purposes. This is a simple XGBoost model trained on the Bank Customer Churn Prediction dataset.

import xgboost as xgb

# Load the saved model
model = xgb.XGBClassifier()
model.load_model("xgboost_model.model")
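
If you don’t have a saved model file on hand, here is a minimal sketch that trains and saves a compatible model first, so the load_model() call above has a file to read (the hyperparameters are illustrative):

# Train a simple classifier on the training split and save it to the
# filename expected by the load_model() call above (illustrative only)
X_train = train_df.drop(columns=[customer_churn.target_column])
y_train = train_df[customer_churn.target_column]

model = xgb.XGBClassifier(n_estimators=100)
model.fit(X_train, y_train)
model.save_model("xgboost_model.model")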

Initialize the ValidMind datasets

Before you can run tests, you must first initialize a ValidMind dataset object using the init_dataset function from the ValidMind (vm) module.

This function takes a number of arguments:

  • dataset — the raw dataset that you want to provide as input to tests
  • input_id — a unique identifier that allows tracking what inputs are used when running each individual test
  • target_column — a required argument if tests require access to true values. This is the name of the target column in the dataset
  • class_labels — an optional value to map predicted classes to class labels (see the example below)
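
For example, a raw dataset could be initialized with human-readable class labels, assuming the demo module provides a class_labels mapping as in the ValidMind quickstart:

vm_raw_ds = vm.init_dataset(
    dataset=raw_df,
    input_id="raw_dataset",
    target_column=customer_churn.target_column,
    class_labels=customer_churn.class_labels,  # assumed mapping provided by the demo module
)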

With all datasets ready, you can now initialize the training, reference (test), and monitoring datasets (train_df, reference_df, and monitor_df) created earlier as their own dataset objects using vm.init_dataset():

vm_train_ds = vm.init_dataset(
    dataset=train_df,
    input_id="train_df",
    target_column=customer_churn.target_column,
)

vm_reference_ds = vm.init_dataset(
    dataset=reference_df,
    input_id="reference_df",
    target_column=customer_churn.target_column,
)

vm_monitor_ds = vm.init_dataset(
    dataset=monitor_df,
    input_id="monitor_dataset",
    target_column=customer_churn.target_column,
)

Initialize a model object

Additionally, you need to initialize a ValidMind model object (vm_model) that can be passed to other functions for analysis and tests on the data. You simply initialize this model object with vm.init_model():

vm_model = vm.init_model(
    model,
    input_id="model",
)

Assign predictions to the datasets

We can now use the assign_predictions() method from the Dataset object to link existing predictions to any model. If no prediction values are passed, the method will compute predictions automatically:

vm_train_ds.assign_predictions(
    model=vm_model,
)

vm_reference_ds.assign_predictions(
    model=vm_model,
)

vm_monitor_ds.assign_predictions(
    model=vm_model,
)
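
If you already have prediction values computed elsewhere, for example scores exported from your production system, you could pass them explicitly instead of relying on the automatic computation above (a hedged sketch; the prediction_values parameter is assumed per the ValidMind dataset API):

# Alternative to the automatic call above: link precomputed predictions
# (prediction_values is assumed to accept an array-like of model outputs)
precomputed = model.predict(monitor_df.drop(columns=[customer_churn.target_column]))

vm_monitor_ds.assign_predictions(
    model=vm_model,
    prediction_values=precomputed,
)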

Run the ongoing monitoring tests

Before we start the testing procedure, let’s take a look at the tests that are pre-configured:

test_list = vm.get_test_suite().get_default_config()
for l in test_list:
    print(l)

Let’s run the first test in the list. Note that you can use vm.tests.describe_test() to get information about the inputs required for the test:

vm.tests.describe_test("validmind.model_validation.ModelMetadata")

As you can see, the ModelMetadata test requires only a model input. Let’s run the test and log the results into the monitoring document with the .log() method:

test_result = vm.tests.run_test(
    "validmind.model_validation.ModelMetadata",
    model=vm_model,
).log()

Let’s run the tests needed to assess the data quality of the monitoring dataset:

data_qual = vm.get_test_suite(
    section="prediction_data_description"
).get_default_config()

# Run all of the necessary data quality checks where the monitoring dataset is the basis
for l in data_qual:
    vm.tests.run_test(
        l,
        inputs={"dataset": vm_monitor_ds},
        show=False,
    ).log()
    print("Completed test: {0}".format(l))

To view the results of the model metadata and data quality tests, navigate to “Ongoing Monitoring” for the model in the ValidMind platform and go to the following sections:

  • Model Monitoring Overview > Model Details
  • Data Quality & Drift Assessment > Prediction Data Description

Next, let’s run comparison tests, which let us compare the training dataset against the monitoring dataset. To run a test in comparison mode, you only need to pass an input_grid parameter to the run_test() method instead of inputs.

For more information about comparison tests, see this notebook.
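
Conceptually, input_grid expands into the cross-product of the supplied lists, so a grid with two datasets and one model yields two test runs whose results are displayed side by side (a minimal illustration of the assumed expansion):

input_grid = {
    "dataset": [vm_train_ds, vm_monitor_ds],
    "model": [vm_model],
}
# Expands to the following input combinations:
#   {"dataset": vm_train_ds,   "model": vm_model}
#   {"dataset": vm_monitor_ds, "model": vm_model}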

correlation_tests = [
    "validmind.data_validation.PearsonCorrelationMatrix:train_vs_test",
    "validmind.data_validation.HighPearsonCorrelation:train_vs_test",
]

for test in correlation_tests:
    vm.tests.run_test(
        test,
        input_grid={
            "dataset": [vm_train_ds, vm_monitor_ds],
            "model": [vm_model],
        },
        show=False,
    ).log()
    print("Completed test {0}".format(test))

You can view these results in the ValidMind platform under the following section:

  • Data Quality & Drift Assessment > Prediction Data Correlations and Interactions

Conduct target and feature drift testing

Next, the goal is to investigate the distributional characteristics of predictions and features to determine if the underlying data has changed. These tests are crucial for assessing the expected accuracy of the model.

  1. Target Drift: We compare the dataset used for testing (reference data) with the monitoring data. This helps to identify any shifts in the target variable distribution.
  2. Feature Drift: We compare the training dataset with the monitoring data. Since features were used to train the model, any drift in these features could indicate potential issues, as the underlying patterns that the model was trained on may have changed.

In the Data Quality & Drift Assessment > Target Drift section, we can confirm that there is only one pre-configured test:

for l in vm.get_test_suite(section="comparison_data_target").get_default_config():
    print(l)

For the remaining tests, we will log the results directly to a specific section by passing a section_id to the .log() method.

First, let’s run the Population Stability Index (PSI) for predictions. In this case, we want to compare the reference (test) data with the monitoring data. (Note: For predictions, the training data is irrelevant.)

vm.tests.run_test(
    "validmind.model_validation.sklearn.PopulationStabilityIndex",
    inputs={
        "datasets": [vm_reference_ds, vm_monitor_ds],
        "model": vm_model,
    },
    show=False,
).log()

Next, we can compare the distribution of the model’s predictions between the reference and monitoring datasets. A significant shift in this distribution may trigger a deeper assessment.

vm.tests.run_test(
    "validmind.ongoing_monitoring.TargetPredictionDistributionPlot",
    inputs={
        "datasets": [vm_reference_ds, vm_monitor_ds],
        "model": vm_model,
    },
    show=False,
).log(section_id="comparison_data_target")

Now let’s look at differences in the correlation pairs between the model predictions and the features:

vm.tests.run_test(
    "validmind.ongoing_monitoring.PredictionCorrelation",
    inputs={
        "datasets": [vm_reference_ds, vm_monitor_ds],
        "model": vm_model,
    },
    show=False,
).log(section_id="comparison_data_target")

Finally, for target drift, let’s plot the predictions against each feature, side by side:

vm.tests.run_test(
    "validmind.ongoing_monitoring.PredictionAcrossEachFeature",
    inputs={
        "datasets": [vm_reference_ds, vm_monitor_ds],
        "model": vm_model,
    },
    show=False,
).log(section_id="comparison_data_target")

Feature drift tests

Next, let’s run a test to investigate whether, and how, the features have drifted. In this instance, we want to compare the training data with the monitoring data. These results will be logged in the Data Quality & Drift Assessment > Feature Drift section.

vm.tests.run_test(
    "validmind.ongoing_monitoring.FeatureDrift",
    inputs={
        # Compare the training data against the monitoring data, matching
        # the feature drift rationale described above
        "datasets": [vm_train_ds, vm_monitor_ds],
        "model": vm_model,
    },
    show=False,
).log(section_id="comparison_data_feature")

Model performance monitoring tests

Let’s wrap up by monitoring the model’s performance. Keep in mind that in some cases, it may not be possible to determine accuracy if the ground truth is unavailable. If this is the case, you can skip these tests and instead focus on target and feature drift to inform the model owners.

The pre-configured tests for model performance are:

for l in vm.get_test_suite(section="model_performance_monitoring").get_default_config():
    print(l)

The code below runs each test and logs the results into the monitoring document. Note the use of input_grid again, which is required for comparison tests:

# Use the reference dataset vs monitoring dataset - the true comparison of accuracy
for test in vm.get_test_suite(
    section="model_performance_monitoring"
).get_default_config():
    if test == "validmind.model_validation.statsmodels.GINITable":
        vm.tests.run_test(
            "validmind.model_validation.statsmodels.GINITable",
            inputs={
                "datasets": [vm_reference_ds, vm_monitor_ds],
                "model": vm_model,
            },
            show=False,
        ).log()
    else:
        vm.tests.run_test(
            test,
            input_grid={
                "dataset": [vm_reference_ds, vm_monitor_ds],
                "model": [vm_model],
            },
            show=False,
        ).log()
    print("Completed test: {0}".format(test))

Next steps

You can now review all the ongoing monitoring results in the ValidMind platform.

  1. From the Model Inventory in the ValidMind Platform UI, go to the model you registered earlier.

  2. Click on the Ongoing Monitoring section.

What you see is the full draft of your model monitoring documentation in a more easily consumable format. From here, you can make qualitative edits to model monitoring documentation, view guidelines, collaborate with validators, and submit your model monitoring documentation for approval when it’s ready.

Discover more learning resources

We offer many interactive notebooks to help you document models.

Or, visit our documentation to learn more about ValidMind.