Build a Sentiment Analysis App with Hugging Face and Streamlit
Sentiment analysis helps computers understand if text expresses positive, negative, or neutral emotions. In this tutorial, we’ll build a web app that analyzes emotions in text using Hugging Face transformers and Streamlit. You’ll learn how to use pre-trained AI models to create a professional sentiment analyzer that can process any text input and provide confidence scores for its predictions.
Let’s get started.
Step 1: Setting up the environment
Before we start coding, we need to set up our development environment with all the necessary tools.
Create a project directory
First, create a new folder for the project and navigate into it:
```bash
mkdir sentiment-analysis-app
cd sentiment-analysis-app
```
Set up a Python virtual environment
A virtual environment keeps the project dependencies isolated from other Python projects on your computer:
```bash
python -m venv sentiment_env

# Activate on Windows:
sentiment_env\Scripts\activate

# Activate on Mac/Linux:
source sentiment_env/bin/activate
```
Install required libraries
Install all the libraries we’ll need for sentiment analysis:
```bash
pip install transformers torch streamlit pandas plotly
```
Here’s what each library does:
- `transformers`: Provides access to Hugging Face pre-trained models
- `torch`: The deep learning framework that runs the models
- `streamlit`: Creates web interfaces with Python
- `pandas`: Handles data in table format
- `plotly`: Creates interactive charts (installed here for optional extensions; the core app doesn't import it)
Create project files
Create two Python files in your project directory:
- `sentiment_utils.py`: Contains helper functions
- `app.py`: Main application file
This will be our final project structure:
```
sentiment-analysis-app/
├── app.py
└── sentiment_utils.py
```
Step 2: Building a sentiment analyzer with Hugging Face Transformers
Loading pre-trained models from Hugging Face
Let’s start by creating functions to load and use sentiment analysis models. Open sentiment_utils.py and add this code:
```python
from transformers import pipeline
import streamlit as st

@st.cache_resource
def load_model(model_name):
    """Load and cache the sentiment analysis model."""
    try:
        return pipeline("sentiment-analysis", model=model_name, return_all_scores=True)
    except Exception as e:
        st.error(f"Error loading model {model_name}: {str(e)}")
        return None
```
The @st.cache_resource decorator saves the model in memory so it doesn’t reload every time someone uses the app. The pipeline function creates a ready-to-use sentiment analyzer that can process text and return sentiment predictions.
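You can see the raw pipeline output before wiring it into Streamlit by calling it directly in a plain Python shell. This is a quick sketch that bypasses the cached `load_model()` helper, using the same model the app loads later:

```python
from transformers import pipeline

# Same kind of analyzer the app builds; return_all_scores=True asks for
# the score of every label, not just the top prediction.
classifier = pipeline(
    "sentiment-analysis",
    model="siebert/sentiment-roberta-large-english",
    return_all_scores=True,
)

print(classifier("The tutorial was easy to follow."))
# The output is a list with one entry per input text, e.g.
# [[{'label': 'POSITIVE', 'score': ...}, {'label': 'NEGATIVE', 'score': ...}]]
```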
Creating a function to analyze text sentiment
Add this function to analyze any text:
```python
def analyze_sentiment(text, classifier):
    """Analyze sentiment of the given text."""
    if not text.strip():
        return None
    try:
        result = classifier(text)
        return result[0]
    except Exception as e:
        st.error(f"Error analyzing sentiment: {str(e)}")
        return None
```
The analyze_sentiment() function takes text input, sends it to the classifier, and returns results showing whether the text is positive or negative, along with confidence scores.
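As a quick sanity check, here is how the two helpers fit together. This is a sketch; because `load_model()` uses `st.cache_resource`, Streamlit may print warnings about a missing runtime if you run it outside `streamlit run`, but the call still works:

```python
from sentiment_utils import load_model, analyze_sentiment

classifier = load_model("siebert/sentiment-roberta-large-english")

# result[0] inside analyze_sentiment() unwraps the outer list, so this
# returns the list of label/score dictionaries for the single input text.
results = analyze_sentiment("I love this tutorial!", classifier)
print(results)
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}, {'label': 'NEGATIVE', 'score': 0.00...}]
```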
Adding visual elements with emojis
Make the results more user-friendly with emojis:
```python
def get_sentiment_emoji(label):
    """Map sentiment labels to emojis."""
    emoji_map = {
        'POSITIVE': '😊',
        'NEGATIVE': '😞',
        'NEUTRAL': '😐',
    }
    return emoji_map.get(label.upper(), '🤔')
```
When the sentiment analysis model processes text, it returns results in the following format:
```python
[
    {'label': 'POSITIVE', 'score': 0.9998},
    {'label': 'NEGATIVE', 'score': 0.0002}
]
```
The scores represent confidence levels on a scale from 0 to 1. A score of 0.9998 means the model is 99.98% confident the text is positive, while 0.0002 indicates only 0.02% confidence for negative sentiment.
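Given results in that format, picking the predicted sentiment is just a matter of taking the entry with the highest score, which is exactly what the display function later in this tutorial does:

```python
results = [
    {'label': 'POSITIVE', 'score': 0.9998},
    {'label': 'NEGATIVE', 'score': 0.0002},
]

# The prediction is the label with the highest confidence score.
best = max(results, key=lambda item: item['score'])
print(f"{best['label']} ({best['score']:.2%})")  # POSITIVE (99.98%)
```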
Step 3: Building a Streamlit interface
Setting up the Streamlit app structure
Create your main application in app.py:
```python
import streamlit as st
import pandas as pd
from sentiment_utils import load_model, analyze_sentiment, display_single_result

# Set page configuration
st.set_page_config(
    page_title="Sentiment Analysis",
    page_icon="💬",
    layout="centered"
)

def main():
    st.title("💬 Sentiment Analysis with Transformers")
    st.markdown("Enter text below to analyze sentiment using a state-of-the-art model.")
```
Implementing model loading with progress indicators
Add the model loading section inside the main() function:
```python
    # Load the best model only
    model_name = "siebert/sentiment-roberta-large-english"

    # Initialize model with loading message
    with st.spinner("Loading model... This may take a moment on first run."):
        classifier = load_model(model_name)

    if classifier is None:
        st.error("Failed to load sentiment analysis model. Please refresh the page.")
        return
```
The model loading section handles three essential tasks:
- Loads the `siebert/sentiment-roberta-large-english` model for accurate predictions
- Shows a loading spinner so users know the model is being loaded (especially useful the first time)
- Displays an error message and asks users to refresh the page if the model fails to load
Creating an interactive text input interface
Still inside main(), add the text input area and analyze button:
```python
    # Text input area
    text_input = st.text_area(
        label="Enter your text:",
        placeholder="Type or paste your text here... (e.g., 'I love this new product! It works perfectly.')",
        height=150,
        max_chars=1000
    )

    # Analyze button
    if st.button("Analyze Sentiment", type="primary", use_container_width=True):
        if text_input.strip():
            with st.spinner("Analyzing sentiment..."):
                results = analyze_sentiment(text_input, classifier)
                if results:
                    display_single_result(results, text_input)
        else:
            st.warning("⚠️ Please enter some text to analyze.")

if __name__ == "__main__":
    main()
```
Displaying results with metrics and tables
Back in sentiment_utils.py, add `import pandas as pd` alongside the other imports at the top of the file, then add the display function:
```python
import pandas as pd  # add this with the other imports at the top of sentiment_utils.py

def display_single_result(results, text_input=None):
    """Display sentiment analysis results with metrics and a table."""
    if not results:
        return

    st.subheader("Analysis Results")

    # Get the sentiment with highest confidence
    best_result = max(results, key=lambda x: x['score'])

    # Display metrics
    col1, col2 = st.columns(2)
    with col1:
        st.metric(
            "Predicted Sentiment",
            f"{get_sentiment_emoji(best_result['label'])} {best_result['label']}",
            f"{best_result['score']:.2%}"
        )
    with col2:
        st.metric("Confidence Score", f"{best_result['score']:.2%}")

    # Display all scores in a table
    st.subheader("Detailed Scores")
    df = pd.DataFrame(results)
    df['score'] = df['score'].apply(lambda x: f"{x:.2%}")
    df['emoji'] = df['label'].apply(get_sentiment_emoji)
    df = df[['emoji', 'label', 'score']]
    df.columns = ['', 'Sentiment', 'Confidence']
    st.dataframe(df, use_container_width=True, hide_index=True)
```
The display_single_result() function accomplishes three key objectives:

- Shows the main predicted sentiment clearly with an emoji and confidence score.
- Displays all sentiment results in a neat table with labels, scores, and emojis.
- Makes the output easy to understand and visually appealing for users.
Adding model information display
Enhance the app by adding model details inside main():
```python
    # Display model info
    with st.expander("ℹ️ Model Information"):
        st.write(f"**Model:** {model_name}")
        st.write("This model is fine-tuned on 15 datasets and achieves state-of-the-art performance.")
        st.write("It classifies text as either POSITIVE or NEGATIVE sentiment.")
```
Step 4: Exploring different Hugging Face models
Now, let’s compare popular Hugging Face sentiment analysis models:
| Model name | Speed | Accuracy | Best use case | Languages |
|---|---|---|---|---|
| distilbert-base-uncased-finetuned-sst-2-english | Fast | Good | Quick analysis, real-time apps | English |
| siebert/sentiment-roberta-large-english | Medium | Excellent | General purpose, business reviews | English |
| cardiffnlp/twitter-roberta-base-sentiment | Medium | Very Good | Social media posts, tweets | English |
| nlptown/bert-base-multilingual-uncased-sentiment | Slow | Good | International applications | 100+ languages |
| finiteautomata/bertweet-base-sentiment-analysis | Fast | Good | Informal text, social media | English |
How to switch between models
To use a different model, change the model_name value in your code:
```python
# For Twitter analysis
model_name = "cardiffnlp/twitter-roberta-base-sentiment"

# For multilingual support
model_name = "nlptown/bert-base-multilingual-uncased-sentiment"

# For fast performance
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
```
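Keep in mind that different models name their labels differently (some return generic labels such as `LABEL_0`/`LABEL_1`/`LABEL_2`), which is why `get_sentiment_emoji()` falls back to 🤔 for anything it doesn't recognize. If you'd rather let users switch models at runtime instead of editing the code, one possible variation (not part of the app built above) is a Streamlit sidebar selectbox:

```python
# Hypothetical variation: pick the model from the Streamlit sidebar.
# The names come from the comparison table above; st.cache_resource in
# load_model() keeps one cached pipeline per model name.
MODEL_OPTIONS = {
    "General purpose (RoBERTa large)": "siebert/sentiment-roberta-large-english",
    "Fast (DistilBERT)": "distilbert-base-uncased-finetuned-sst-2-english",
    "Social media (Twitter RoBERTa)": "cardiffnlp/twitter-roberta-base-sentiment",
    "Multilingual (BERT)": "nlptown/bert-base-multilingual-uncased-sentiment",
}

choice = st.sidebar.selectbox("Choose a model", list(MODEL_OPTIONS.keys()))
model_name = MODEL_OPTIONS[choice]
```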
Step 5: Running and testing your application
Now that you’ve built the sentiment analysis app, let’s run it and test its functionality.
Running the application
To run the application, navigate to the project directory and make sure the virtual environment is activated, then start the Streamlit app:
```bash
streamlit run app.py
```
This command will start the app, and you should see output similar to this:
```
Local URL: http://localhost:8501
Network URL: http://192.168.1.100:8501
```
To view the app, open your web browser and go to http://localhost:8501.

Let's now test the application.
Testing the application
Try these sample texts to see how well your sentiment analyzer works:
- Positive examples:
- “I absolutely love this new smartphone! The camera quality is amazing, and the battery lasts all day.”
- “Thank you for the excellent customer service. You exceeded my expectations!”
- Negative examples:
- “This product is terrible. It broke after just one week of use.”
- “I’m really disappointed with the slow delivery and poor packaging.”
- Mixed/neutral examples:
- “The hotel was okay. Good location, but the room was small.”
- “The meeting has been scheduled for next Tuesday at 3 PM.”
When we test the app with the text "I absolutely love this new smartphone! The camera quality is amazing, and the battery lasts all day.", it labels the text as POSITIVE with a high confidence score and lists the detailed scores in the table below the metrics.
The first time you run the app, it may take a moment to download and load the model. Subsequent analyses will be much faster thanks to Streamlit’s caching.
Conclusion
In this tutorial, we built a complete sentiment analysis web application. We:
- Set up a Python environment with Hugging Face transformers and Streamlit
- Created functions to load pre-trained models and analyze text sentiment
- Built an interactive web interface with text input and visual results display
- Learned about different sentiment analysis models available on Hugging Face
You now have a working sentiment analysis app that can classify any text as positive or negative with confidence scores. If you want to learn more about building apps using Hugging Face and Streamlit, you can check out this free course on Introduction to Hugging Face on Codecademy.
Frequently asked questions
1. Which Hugging Face model is best for sentiment analysis?
For most English text, siebert/sentiment-roberta-large-english offers the best balance of accuracy and speed. It’s trained on 15 different datasets and handles various text types well. However, if you’re analyzing tweets, use cardiffnlp/twitter-roberta-base-sentiment, and for multiple languages, try nlptown/bert-base-multilingual-uncased-sentiment.
2. Can I use Hugging Face models for other languages?
Yes! Many models support multiple languages. The nlptown/bert-base-multilingual-uncased-sentiment model works with over 100 languages. You can also find language-specific models by searching Hugging Face’s model hub for your target language.
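For example, the multilingual model drops into the same pipeline call as before. One caveat to check before using it in the app: it reports sentiment as star ratings (labels like "1 star" through "5 stars") rather than POSITIVE/NEGATIVE, so the emoji mapping and display logic would need a small adjustment:

```python
from transformers import pipeline

# Multilingual sentiment model covering 100+ languages.
multilingual = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

# A Spanish review; the label comes back as a star rating, e.g. "5 stars".
print(multilingual("Me encanta este producto, funciona perfectamente."))
```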
3. What do the confidence scores mean?
Confidence scores show how sure the model is about its prediction. A score of 0.95 (95%) means the model is very confident, while 0.60 (60%) suggests uncertainty. The scores for all possible sentiments always add up to 1.0 (100%). Higher confidence usually means more reliable results.
4. How accurate are Hugging Face sentiment models?
The best models achieve 90-95% accuracy on standard test sets. However, accuracy can vary based on your specific text type. Models perform best on text similar to their training data. Always test with your own examples to verify accuracy for your use case.
5. Can I fine-tune these models for my specific use case?
Absolutely! If the pre-trained models don’t perfectly fit your needs, you can fine-tune them with your own labelled data. This process teaches the model to better understand your specific domain, whether it’s medical reviews, legal documents, or customer feedback. Hugging Face provides excellent tutorials on fine-tuning.
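As a rough illustration only, here is what a minimal fine-tuning run can look like with the Hugging Face `Trainer` API (you would also need `pip install datasets`). The dataset (`imdb`), base model (`distilbert-base-uncased`), and hyperparameters below are placeholder choices, not recommendations from this tutorial; substitute your own labelled data:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Placeholder dataset with 'text' and 'label' columns; swap in your own data.
dataset = load_dataset("imdb")
base_model = "distilbert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

# Small subsets keep this sketch quick; use the full splits for real training.
train_ds = tokenized["train"].shuffle(seed=42).select(range(2000))
eval_ds = tokenized["test"].shuffle(seed=42).select(range(500))

training_args = TrainingArguments(
    output_dir="./finetuned-sentiment",
    num_train_epochs=1,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)

trainer.train()
trainer.save_model("./finetuned-sentiment")

# The saved folder can then be loaded like any hub model:
# pipeline("sentiment-analysis", model="./finetuned-sentiment")
```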