Automate news summarization using LLMs

Document an LLM-based news summarization model using the CNN/DailyMail sample dataset from HuggingFace with the ValidMind Developer Framework.

As part of the notebook, you will learn how to develop a text summarization model while exploring how the documentation process works.

Use case

The purpose of this notebook is to showcase how to document an automated news summarization system built on a Large Language Model (LLM). The system uses the LLM to process web-based news articles and condense them into concise summaries.

Data Sources

The CNN/DailyMail Dataset is a collection tailored for text summarization, containing over 300,000 news articles from CNN (US) and the Daily Mail (UK). Each row comprises an article, a highlight section, and a unique ID. The highlights are summaries written by the original journalists. The CNN articles were written between April 2007 and April 2015; the Daily Mail articles were written between June 2010 and April 2015.
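
If you want to explore the raw dataset outside of this notebook, it can be pulled directly from HuggingFace with the datasets library. This is optional and only shown for orientation; the notebook itself loads a pre-packaged sample through the ValidMind library instead:

# Optional: pull the raw CNN/DailyMail dataset directly from HuggingFace
from datasets import load_dataset

raw = load_dataset("cnn_dailymail", "3.0.0", split="test")

# Each record has an article, the journalist-written highlights, and a unique id
print(raw.column_names)
print(raw[0]["highlights"])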

The original dataset includes pre-divided splits: train, validation, and test. In this demo, we are not training an LLM in the traditional machine learning sense but rather using prompt engineering to guide the LLM to function as a text summarizer, so we do not adhere to the conventional distinction between training and test datasets. Instead, we exclusively use the test split, treating it as a validation or “gold” standard against which to evaluate the effectiveness of our prompt-engineered summarization.

Workflow

The workflow comprises four primary stages, starting with article selection, where articles from the test dataset are chosen. This is followed by prompt engineering, where a prompt is crafted to communicate the summarization task to the LLM. In the summarization stage, the prompt is input into the LLM, which then produces summaries based on the article content. The final stage involves LLM response evaluation, where the summaries generated by the LLM are measured against the original journalist-authored highlights to evaluate the summarization quality.

About ValidMind

ValidMind’s platform enables organizations to identify, document, and manage model risks for all types of models, including AI/ML models, LLMs, and statistical models. As a model developer, you use the ValidMind Developer Framework to automate documentation and validation tests, and then use the ValidMind AI Risk Platform UI to collaborate on model documentation. Together, these products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between yourself and model validators.

If this is your first time trying out ValidMind, you may want to work through Get started with the ValidMind Developer Framework alongside this notebook.

Before you begin

For access to all features available in this notebook, create a free ValidMind account.

Signing up is FREE — Sign up now

If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.
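
For example, besides validmind itself, this notebook imports a handful of third-party packages later on; if any of them are missing from your environment, you can install them up front (the package list below simply mirrors the imports used in this notebook):

%pip install -q openai python-dotenv datasets transformers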

Install the client library

The client library provides Python support for the ValidMind Developer Framework. To install it:

%pip install -q validmind

Initialize the client library

ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the client library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.

Get your code snippet:

  1. In a browser, log into the Platform UI.

  2. In the left sidebar, navigate to Model Inventory and click + Register new model.

  3. Enter the model details and click Continue. (Need more help?)

    For example, to register a model for use with this notebook, select:

    • Documentation template: LLM-based Text Summarization
    • Use case: Marketing/Sales - Sales/Prospecting

    You can fill in other options according to your preference.

  4. Go to Getting Started and click Copy snippet to clipboard.

Next, replace this placeholder with your own code snippet:

# Replace with your code snippet

import validmind as vm

vm.init(
    api_host="https://api.prod.validmind.ai/api/v1/tracking",
    api_key="...",
    api_secret="...",
    project="...",
)
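
If you prefer not to paste credentials directly into the notebook, you can keep them in a local .env file and read them at runtime, the same approach this notebook uses later for the OpenAI key. The environment variable names below are illustrative only:

# Alternative: load the connection details from a local .env file
# (the variable names here are illustrative -- use whatever names you store them under)
import os

import dotenv
import validmind as vm

dotenv.load_dotenv()

vm.init(
    api_host=os.getenv("VALIDMIND_API_HOST", "https://api.prod.validmind.ai/api/v1/tracking"),
    api_key=os.getenv("VALIDMIND_API_KEY"),
    api_secret=os.getenv("VALIDMIND_API_SECRET"),
    project=os.getenv("VALIDMIND_PROJECT"),
)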

Initialize the Python environment

Next, let’s install the additional dependencies and set up your Python environment for data analysis:

# Install the `datasets` library from huggingface
%pip install -q datasets
%matplotlib inline

Preview the documentation template

A template predefines sections for your model documentation and provides a general outline to follow, making the documentation process much easier.

You will upload documentation and test results into this template later on. For now, take a look at the structure that the template provides with the vm.preview_template() function from the ValidMind library and note the empty sections:

vm.preview_template()

Load the sample dataset

The sample dataset used here is provided by the ValidMind library. To use it, import the dataset module and load the data into a pandas DataFrame, a two-dimensional tabular data structure of rows and columns:

# Import the sample dataset from the library
from validmind.datasets.nlp import cnn_dailymail

print(
    f"Loaded demo dataset with: \n\n\t• Target column: '{cnn_dailymail.target_column}' "
    f"\n\t• Input text column: '{cnn_dailymail.text_column}' "
    f"\n\t• Prediction columns: '{cnn_dailymail.t5_prediction}', '{cnn_dailymail.gpt_35_prediction_column}'"
)


train_df, test_df = cnn_dailymail.load_data(source="offline", dataset_size="100")

# Display the first few rows of the dataframe to check the loaded data.
cnn_dailymail.display_nice(train_df.head())
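
If you want to confirm what was loaded before moving on, a quick inspection shows the size and columns of each DataFrame:

# Optional: check the shape and columns of the loaded sample
print("train_df:", train_df.shape, list(train_df.columns))
print("test_df:", test_df.shape, list(test_df.columns))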

Document the model

As part of documenting the model with the ValidMind Developer Framework, you need to set up the LLM and its prompt, initialize a ValidMind dataset and model object for testing, assign model predictions, and then run a series of data, prompt, and model validation tests.

Set up the Large Language Model (LLM)

This section prepares our environment to use OpenAI’s Large Language Model by setting up the API key and defining a function to call the model.

import os

import dotenv
from openai import OpenAI

dotenv.load_dotenv()

if os.getenv("OPENAI_API_KEY") is None:
    raise Exception("OPENAI_API_KEY not found")

model = OpenAI()


def call_model(prompt):
    return (
        model.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "user", "content": prompt},
            ],
        )
        .choices[0]
        .message.content
    )
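
As a quick smoke test, you can call this function directly on a toy prompt before wiring it into ValidMind. Note that this makes a live API call, so it assumes a valid OPENAI_API_KEY and incurs a small cost:

# Optional smoke test: a single ad-hoc call to the LLM
print(call_model("Summarize in one sentence: The quick brown fox jumps over the lazy dog."))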

Set up the prompt

In this section, we construct a structured prompt template designed to guide the LLM in summarizing the CNN/DailyMail news articles. The template frames the model as an expert at parsing and condensing news, and instructs it to focus on the article’s core content, avoiding assumptions or external information.

prompt_template = """
You are an AI with expertise in summarizing financial news.
Your task is to provide a concise summary of the specific news article provided below.
Before proceeding, take a moment to understand the context and nuances of the financial terminology used in the article.

Article to Summarize:

```
{article}
```

Please respond with a concise summary of the article's main points.
Ensure that your summary is based on the content of the article and not on external information or assumptions.
""".strip()

prompt_variables = ["article"]
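
To see exactly what the LLM will receive, you can render the template for a single article yourself. This is roughly what the FoundationModel wrapper set up below does for each row when generating predictions:

# Render the prompt for the first test article to inspect what the LLM will see
example_prompt = prompt_template.format(article=test_df["article"].iloc[0])
print(example_prompt[:500])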

Initialize the ValidMind datasets

Before you can run tests, you must first initialize a ValidMind dataset object using the init_dataset function from the ValidMind (vm) module.

This function takes a number of arguments:

  • dataset: the raw dataset that you want to provide as input to tests
  • input_id: a unique identifier that allows tracking what inputs are used when running each individual test
  • text_column: the name of the column that contains the text input for NLP tests
  • target_column: the name of the target column in the dataset; required if tests need access to true values

With the data ready, you can now initialize a ValidMind dataset object from the test_df DataFrame created earlier using vm.init_dataset(). The same cell also wraps the LLM and its prompt into a FoundationModel object via vm.init_model(), so that the model can be passed as an input to tests:

from validmind.models import FoundationModel, Prompt

vm_test_ds = vm.init_dataset(
    dataset=test_df,
    input_id="test_dataset",
    text_column="article",
    target_column="highlights",
)

vm_model = vm.init_model(
    model=FoundationModel(
        predict_fn=call_model,
        prompt=Prompt(
            template=prompt_template,
            variables=prompt_variables,
        ),
    ),
    input_id="gpt_35",
)

Assign predictions to the dataset

We can now use the assign_predictions() method from the Dataset object to link existing predictions to any model. If no prediction values are passed, the method will compute predictions automatically:

# Assign pre-computed model predictions to the test dataset
vm_test_ds.assign_predictions(vm_model, prediction_column="gpt_35_prediction")

print(vm_test_ds)
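
If you had not pre-computed summaries, the same method could generate them on the fly by calling the model for every row. For this dataset that means one OpenAI API call per article, so the sketch below is guarded and disabled by default:

# Alternative: generate predictions live instead of linking the pre-computed column.
# Set GENERATE_LIVE to True only if you are happy to make one API call per article.
GENERATE_LIVE = False

if GENERATE_LIVE:
    vm_test_ds.assign_predictions(vm_model)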

Data validation

This section focuses on performing a series of data description tests to gain insights into the basic characteristics of our text data. The goal of data description in this use case is to verify that the data meets certain standards and criteria before it is used for text summarization. We conduct the following NLP data quality tests:

  • Duplicates: Check for duplicate articles in the dataset.
  • Text Description: Assess the general context and provide a summary of the dataset.
  • Common Words: Determine the most frequently occurring words that could indicate key themes.
  • Punctuations: Analyze punctuation patterns to understand sentence structures and emphases.
  • Stop Words: Identify the prevalence of common stopwords so that the significant textual elements stand out.
  • Language Detection: Verify the language of the dataset to ensure it is consistent.
  • Toxicity: Evaluate the presence of toxic language in the dataset.
  • Polarity and Subjectivity: Measure how positive or negative and how subjective or objective the text is.
  • Sentiment: Analyze the distribution of sentiment scores to understand the overall mood of the dataset.

test = vm.tests.run_test(
    "validmind.data_validation.Duplicates",
    inputs={
        "dataset": vm_test_ds,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.data_validation.nlp.TextDescription",
    inputs={
        "dataset": vm_test_ds,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.data_validation.nlp.CommonWords",
    inputs={
        "dataset": vm_test_ds,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.data_validation.nlp.Punctuations",
    inputs={
        "dataset": vm_test_ds,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.data_validation.nlp.StopWords",
    inputs={
        "dataset": vm_test_ds,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.data_validation.nlp.LanguageDetection",
    inputs={
        "dataset": vm_test_ds,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.data_validation.nlp.Toxicity",
    inputs={
        "dataset": vm_test_ds,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.data_validation.nlp.PolarityAndSubjectivity",
    inputs={
        "dataset": vm_test_ds,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.data_validation.nlp.Sentiment",
    inputs={
        "dataset": vm_test_ds,
    },
)
test.log()
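
The calls above all follow the same pattern, so if you prefer a more compact cell you can run the same battery of tests in a loop. This is purely a stylistic alternative to the cells above; each result is still logged to your documentation, although the in-notebook display differs from running one test per cell:

# Compact alternative: run and log the same data validation tests in a loop
data_validation_tests = [
    "validmind.data_validation.Duplicates",
    "validmind.data_validation.nlp.TextDescription",
    "validmind.data_validation.nlp.CommonWords",
    "validmind.data_validation.nlp.Punctuations",
    "validmind.data_validation.nlp.StopWords",
    "validmind.data_validation.nlp.LanguageDetection",
    "validmind.data_validation.nlp.Toxicity",
    "validmind.data_validation.nlp.PolarityAndSubjectivity",
    "validmind.data_validation.nlp.Sentiment",
]

for test_id in data_validation_tests:
    result = vm.tests.run_test(test_id, inputs={"dataset": vm_test_ds})
    result.log()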

Prompt Validation

This section conducts a critical analysis of prompts to ensure their effectiveness when interacting with AI models. It involves systematic checks across several dimensions to enhance the quality of the interaction between the user and the AI:

  • Bias: Evaluate prompts for impartiality.
  • Clarity: Confirm the prompts are clearly understood.
  • Conciseness: Verify that the prompts are brief and to the point.
  • Delimitation: Check that input content in the prompt is clearly delimited from the instructions.
  • Negative Instruction: Check whether the prompt relies on negative phrasing ("do not ...") where positive instructions would be clearer.
  • Specificity: Assess prompts for detailed and precise instructions.

test = vm.tests.run_test(
    "validmind.prompt_validation.Bias",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.prompt_validation.Clarity",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.prompt_validation.Conciseness",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.prompt_validation.Delimitation",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.prompt_validation.NegativeInstruction",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.prompt_validation.Specificity",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

Model Validation

This section is dedicated to the assessment of the AI model’s understanding and processing of language data. It involves validating the model through various embedding and performance tests, ensuring the model’s output is as expected and reliable.

Run embeddings tests

This subsection involves conducting tests to examine the semantic space where words or phrases from the vocabulary are mapped. We check for:

  • Cosine Similarity Distribution: Analyze the degree of similarity between vectors.
  • Cluster Distribution: Observe how embeddings group together, potentially indicating similarities in meaning.
  • Descriptive Analytics: Provide statistical descriptions of the embedding space.
  • Stability Analysis Keyword: Test embedding stability against keyword substitutions.
  • Stability Analysis Random Noise: Assess how random noise affects the stability of embeddings.
  • Stability Analysis Synonyms: Evaluate the consistency of embeddings for synonymous words.

from transformers import pipeline

embedding_model = pipeline(
    "feature-extraction",
    model="bert-base-uncased",
    tokenizer="bert-base-uncased",
    truncation=True,
)
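
Before wiring the embedding model into ValidMind, you can optionally sanity-check its raw output on a short snippet of text. The feature-extraction pipeline returns one vector per token, with 768 dimensions per vector for bert-base-uncased:

# Optional sanity check: embed a short snippet and inspect the output shape
sample_text = test_df["article"].iloc[0][:200]
features = embedding_model(sample_text)

print(f"{len(features[0])} tokens x {len(features[0][0])} dimensions per token")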

vm_embedding_model = vm.init_model(
    model=embedding_model,
    input_id="bert_embedding_model",
)

vm_test_ds.assign_predictions(
    model=vm_embedding_model,
)

test = vm.tests.run_test(
    "validmind.model_validation.embeddings.CosineSimilarityDistribution",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_embedding_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.model_validation.embeddings.ClusterDistribution",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_embedding_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.model_validation.embeddings.DescriptiveAnalytics",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_embedding_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.model_validation.embeddings.StabilityAnalysisKeyword",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_embedding_model,
    },
    params={
        "text_column": "article",
        "keyword_dict": {"finance": "financial"},
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.model_validation.embeddings.StabilityAnalysisRandomNoise",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_embedding_model,
    },
    params={
        "text_column": "article",
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.model_validation.embeddings.StabilityAnalysisSynonyms",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_embedding_model,
    },
    params={
        "text_column": "article",
        "probability": 0.1,
    },
)
test.log()

Run model performance tests

Here we measure the model’s linguistic performance across various metrics, including:

  • Token Disparity: Examine the distribution of token usage.
  • Rouge Metrics: Use Recall-Oriented Understudy for Gisting Evaluation to assess the summary quality.
  • Bert Score: Implement BERT-based evaluations of token similarity.
  • Contextual Recall: Test the model’s ability to recall contextual information.
  • Bleu Score: Measure n-gram overlap between the generated summaries and the reference highlights (a metric originally developed for machine translation).
  • Meteor Score: Score the generated summaries against the reference highlights, accounting for synonyms and word order.

test = vm.tests.run_test(
    "validmind.model_validation.TokenDisparity",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.model_validation.RougeScore",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
    params={
        "metric": "rouge-1",
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.model_validation.BertScore",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.model_validation.ContextualRecall",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.model_validation.BleuScore",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.model_validation.MeteorScore",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()
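
For intuition, ROUGE-1 boils down to unigram overlap between a generated summary and the reference highlight. The toy function below is for illustration only and is not the scorer that the RougeScore test uses:

# Toy ROUGE-1 style precision/recall/F1, for illustration purposes only.
# Unlike the real metric, this ignores repeated unigrams.
def toy_rouge_1(candidate: str, reference: str) -> dict:
    cand_unigrams = set(candidate.lower().split())
    ref_unigrams = set(reference.lower().split())
    overlap = len(cand_unigrams & ref_unigrams)
    precision = overlap / len(cand_unigrams) if cand_unigrams else 0.0
    recall = overlap / len(ref_unigrams) if ref_unigrams else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}


print(toy_rouge_1("the cat sat on the mat", "a cat was sitting on the mat"))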

Run bias and toxicity tests

The focus of this subsection is on identifying any potential bias or toxicity in the model’s language processing. We conduct:

  • Toxicity Score: Quantify the degree of toxicity in content generated by the model and examine how the scores are distributed.
  • Regard Score: Assess the model's language for indications of positive, negative, or neutral regard and examine how those scores are distributed.

test = vm.tests.run_test(
    "validmind.model_validation.ToxicityScore",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

test = vm.tests.run_test(
    "validmind.model_validation.RegardScore",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
test.log()

Next steps

You can look at the results of this test suite right in the notebook where you ran the code, as you would expect. But there is a better way: view the test results as part of your model documentation right in the ValidMind Platform UI:

  1. In the Platform UI, go to the Documentation page for the model you registered earlier.

  2. Expand 2. Data Preparation or 3. Model Development to review all test results.

What you can see now is a more easily consumable version of the testing you just performed, along with other parts of your model documentation that still need to be completed.

If you want to learn more about where you are in the model documentation process, take a look at Get started with the ValidMind Developer Framework.