Sentiment analysis of financial data using a large language model (LLM)
Document a large language model (LLM) specialized in sentiment analysis for financial news using the ValidMind Developer Framework.
This interactive notebook shows you how to set up the ValidMind Developer Framework, initialize the client library, and use a specific prompt template for analyzing the sentiment of sentences in a dataset. The notebook also includes example data to test the model’s ability to correctly identify sentiment as positive, negative, or neutral.
About ValidMind
ValidMind’s platform enables organizations to identify, document, and manage model risks for all types of models, including AI/ML models, LLMs, and statistical models. As a model developer, you use the ValidMind Developer Framework to automate documentation and validation tests, and then use the ValidMind AI Risk Platform UI to collaborate on documentation projects. Together, these products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between yourself and model validators.
If this is your first time trying out ValidMind, we recommend going through the following resources first:
- Get started — The basics, including key concepts, and how our products work
- Get started with the ValidMind Developer Framework — The path for developers, more code samples, and our developer reference
Before you begin
For access to all features available in this notebook, create a free ValidMind account.
Signing up is FREE — Sign up now

This notebook requires an OpenAI API secret key to run. If you don’t have one, visit API keys on OpenAI’s site to create a new key for yourself. Note that API usage charges may apply.
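One convenient way to make the key available to this notebook is a local .env file, which the dotenv.load_dotenv() call later in this notebook reads into the environment. A minimal sketch, using a placeholder value rather than a real key:

# Minimal sketch: create a file named .env next to this notebook with one line,
#   OPENAI_API_KEY=sk-...
# (placeholder shown, not a real key). dotenv.load_dotenv(), called later in
# this notebook, loads it into the environment. Alternatively, set the
# variable for this session only:
import os

os.environ.setdefault("OPENAI_API_KEY", "sk-...")  # placeholder only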
If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.
Install the client library
The client library provides Python support for the ValidMind Developer Framework. To install it:

%pip install -q validmind
Initialize the client library
ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the client library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.
Get your code snippet:
In a browser, log into the Platform UI.
In the left sidebar, navigate to Model Inventory and click + Register new model.
Enter the model details, making sure to select LLM-based Text Classification as the template and Marketing/Sales - Analytics as the use case, and click Continue. (Need more help?)
Go to Getting Started and click Copy snippet to clipboard.
Next, replace this placeholder with your own code snippet:
# Replace with your code snippet
import validmind as vm
vm.init(
    api_host="https://api.prod.validmind.ai/api/v1/tracking",
    api_key="...",
    api_secret="...",
    project="...",
)
Preview the documentation template
A template predefines sections for your model documentation and provides a general outline to follow, making the documentation process much easier.
You will upload documentation and test results into this template later on. For now, take a look at the structure that the template provides with the vm.preview_template() function from the ValidMind library and note the empty sections:
vm.preview_template()
Get ready to run the analysis
Import the ValidMind FoundationModel and Prompt classes needed for the sentiment analysis later on:
from validmind.models import FoundationModel, Prompt
Check your access to the OpenAI API:
import os
import dotenv

dotenv.load_dotenv()

if os.getenv("OPENAI_API_KEY") is None:
    raise Exception("OPENAI_API_KEY not found")

from openai import OpenAI

model = OpenAI()
def call_model(prompt):
    return (
        model.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "user", "content": prompt},
            ],
        )
        .choices[0]
        .message.content
    )
Set the prompt guidelines for the sentiment analysis:
= """
prompt_template You are an AI with expertise in sentiment analysis, particularly in the context of financial news.
Your task is to analyze the sentiment of a specific sentence provided below.
Before proceeding, take a moment to understand the context and nuances of the financial terminology used in the sentence.
Sentence to Analyze:
```
{Sentence}
```
Please respond with the sentiment of the sentence denoted by one of either 'positive', 'negative', or 'neutral'.
Please respond only with the sentiment enum value. Do not include any other text in your response.
Note: Ensure that your analysis is based on the content of the sentence and not on external information or assumptions.
""".strip()
= ["Sentence"] prompt_variables
Get your sample dataset ready for analysis
To perform the sentiment analysis for financial news, we’re going to load a local copy of this dataset: https://www.kaggle.com/datasets/ankurzing/sentiment-analysis-for-financial-news.
This dataset contains two columns, Sentiment and Sentence. The sentiment can be negative, neutral, or positive.
import pandas as pd

df = pd.read_csv("./datasets/sentiments_with_predictions.csv")
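Before initializing the ValidMind dataset object, it can be useful to glance at the data. A brief optional check of the first rows and the sentiment class balance, using the column names described above:

# Optional: inspect the first rows and the sentiment class balance
print(df.head())
print(df["Sentiment"].value_counts())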
Run the model documentation tests
First, use the ValidMind Developer Framework to initialize the dataset and model objects necessary for documentation. The predict_fn function that you pass to ValidMind’s FoundationModel allows the model to be tested and evaluated in a standardized manner:
vm_test_ds = vm.init_dataset(
    dataset=df,
    input_id="test_dataset",
    text_column="Sentence",
    target_column="Sentiment",
)
vm_model = vm.init_model(
    model=FoundationModel(
        predict_fn=call_model,
        prompt=Prompt(
            template=prompt_template,
            variables=prompt_variables,
        ),
    ),
    input_id="gpt_35_model",
)
# Assign model predictions to the test dataset
vm_test_ds.assign_predictions(vm_model, prediction_column="gpt_35_prediction")
Next, use the ValidMind Developer Framework to run validation tests on the model. The vm.run_documentation_tests function analyzes the current project’s documentation template and collects all the tests associated with it into a test suite. The function then runs the test suite, logs the results to the ValidMind API, and displays them to you.
test_suite = vm.run_documentation_tests(
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    }
)
Next steps
You can look at the results of this test suite right in the notebook where you ran the code, as you would expect. But there is a better way: view the prompt validation test results as part of your model documentation in the ValidMind Platform UI:
In the Platform UI, go to the Documentation page for the model you registered earlier.
Expand 2. Data Preparation or 3. Model Development to review all test results.
What you can see now is a more easily consumable version of the prompt validation testing you just performed, along with other parts of your model documentation that still need to be completed.
If you want to learn more about where you are in the model documentation process, take a look at Get started with the ValidMind Developer Framework.