Welcome to ONLC Training Centers

Large Language Models (Using Huggingface)

Fee: $1495

Savings options:

Learning Credits

Need a class for a group?

We can deliver this class for a private group at your location. Contact ONLC to request more information.

Attend from your office or home

If you have high-speed internet and two computers, you can likely take this class from your office or home.


Large Language Models (Using Huggingface) Course Outline

Overview
This two-day, hands-on course introduces participants to the Hugging Face ecosystem, equipping them with practical skills to find, run, fine-tune, and deploy pre-trained models for real-world applications. Through guided exercises, attendees will explore NLP, vision, and audio models, learn the fundamentals of fine-tuning, and deploy their own models using Hugging Face Spaces.

The course covers the full workflow—from navigating the Hugging Face Hub to preparing datasets, customizing models, and optimizing deployments—ensuring participants gain both conceptual understanding and practical coding experience. By the end of the program, learners will have built and deployed functional AI applications they can adapt for their own projects.

Prerequisites
Basic Python knowledge is required. Familiarity with Jupyter or Google Colab is recommended but not required.

COURSE OUTLINE

Welcome & Course Orientation
Instructor and participant introductions
Course goals and objectives
Hugging Face at a glance: the “GitHub of AI”
Overview of the Hugging Face ecosystem: Hub, Transformers, Datasets, Spaces
Software setup check and Colab/Jupyter introduction

Hugging Face Hub Tour
Navigating the Hub: search, filters, and model cards
Popular tasks: NLP, vision, audio, multimodal
Understanding model tags, licenses, and intended use
Cloning repositories and downloading models locally
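
A minimal sketch of downloading a model repository locally with the huggingface_hub client library; the repo id below is just an example checkpoint and any public repo from the Hub can be substituted.

from huggingface_hub import snapshot_download

# Downloads the repo's config, weights, and tokenizer files to the local
# cache and returns the path to them.
local_path = snapshot_download(repo_id="distilbert-base-uncased")
print(local_path)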

Getting Started with Pipelines
What is a pipeline?
Running a sentiment analysis model in 5 lines of code
Other built-in pipelines: summarization, translation, question answering
Parameters and customization options
Performance considerations
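
The classic "five lines of code" example covered in this module, shown here as a sketch; the pipeline picks a default sentiment model unless you pin one explicitly, and the input sentence is only illustrative.

from transformers import pipeline

# Build a ready-to-use sentiment pipeline and run it on a sample sentence.
classifier = pipeline("sentiment-analysis")
result = classifier("Hugging Face pipelines make inference simple.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]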

Practical Text-Based Use Cases
Zero-shot classification
Summarization and translation
Question answering over custom text
Hands-on: building a small “document query” notebook
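
A sketch of zero-shot classification, one of the use cases above; the example text and candidate labels are placeholders you would replace with your own.

from transformers import pipeline

classifier = pipeline("zero-shot-classification")
result = classifier(
    "The quarterly report shows revenue grew 12% year over year.",
    candidate_labels=["finance", "sports", "politics"],
)
# Labels come back ranked by score; the first one is the predicted class.
print(result["labels"][0], round(result["scores"][0], 3))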

Beyond Text: Vision & Audio Models
Image classification with ViT
Image generation with diffusers (Stable Diffusion Lite)
Speech-to-text with Whisper models
Hands-on: choose and run one vision or audio task
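
As one example of the vision track, an image-classification sketch using a commonly used ViT checkpoint; the model id and image path are illustrative.

from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
# Accepts a local file path, a URL, or a PIL image; returns labels with scores.
predictions = classifier("path/to/your_image.jpg")
print(predictions[:3])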

Customizing Pre-Trained Models
Changing model configurations
Tokenizer tweaks and preprocessing techniques
Using pipelines vs. model/tokenizer API directly
Exporting and reusing code for automation
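
A sketch of the model/tokenizer API used directly instead of a pipeline, which exposes the preprocessing and forward pass for customization; the checkpoint is chosen only for illustration.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Tokenize explicitly, run the forward pass, and map the top logit to a label.
inputs = tokenizer("Direct API access gives full control.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])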

Building & Sharing a Demo with Hugging Face Spaces
Spaces overview: Gradio and Streamlit
Creating a simple interface for a pre-trained model
Uploading to Spaces for public or private sharing
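
A minimal Gradio interface of the kind built in this module; saved as app.py, the same script runs locally and as a Hugging Face Space. The model and output format are illustrative.

import gradio as gr
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

def predict(text):
    result = classifier(text)[0]
    return f"{result['label']} ({result['score']:.2f})"

# launch() serves the demo locally; on Spaces the app file is picked up automatically.
gr.Interface(fn=predict, inputs="text", outputs="text").launch()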

Fine-Tuning Fundamentals
Why fine-tune? Benefits over training from scratch
Parameter-efficient fine-tuning (LoRA, QLoRA)
Overview of the Trainer API
Using the `peft` library for LoRA-based tuning
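
A sketch of attaching LoRA adapters with the peft library; the rank, alpha, dropout, and target modules below are illustrative values for a DistilBERT-style model.

from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# LoRA trains small adapter matrices while the base weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],  # DistilBERT attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model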

Preparing Your Dataset
Using the datasets library
Loading public datasets from the Hub
Cleaning and tokenizing text
Train/test splits and evaluation metrics
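
A sketch of the dataset workflow with the datasets library; the IMDB dataset and DistilBERT tokenizer stand in for whatever data and model you actually use.

from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("imdb", split="train[:2000]")  # small slice for a quick run
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

# Tokenize in batches, then carve out a held-out test split for evaluation.
tokenized = dataset.map(tokenize, batched=True)
splits = tokenized.train_test_split(test_size=0.2, seed=42)
print(splits)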

Hands-On Fine-Tuning (Text Model)
Selecting a small model (e.g., DistilBERT)
Setting training arguments in Trainer
Running fine-tuning in Colab
Monitoring training progress
Saving and evaluating the model
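
A compact, self-contained Trainer sketch in the spirit of this lab (small model, small IMDB slice, one epoch); all hyperparameters and directory names are illustrative.

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

data = load_dataset("imdb", split="train[:2000]").train_test_split(test_size=0.2, seed=42)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

data = data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="distilbert-imdb",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    logging_steps=50,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=data["train"], eval_dataset=data["test"])
trainer.train()
print(trainer.evaluate())             # evaluation loss and any configured metrics
trainer.save_model("distilbert-imdb")
tokenizer.save_pretrained("distilbert-imdb")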

Publishing to the Hugging Face Hub
Creating a model card
Uploading model weights and metadata
Versioning and setting permissions
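
A sketch of pushing the saved model and tokenizer to the Hub; login() prompts for an access token with write permission, and the repo id is a placeholder for your own namespace.

from huggingface_hub import login
from transformers import AutoModelForSequenceClassification, AutoTokenizer

login()  # paste a write-enabled access token when prompted

# Load the locally saved fine-tuned model from the previous step, then push it.
model = AutoModelForSequenceClassification.from_pretrained("distilbert-imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-imdb")
model.push_to_hub("your-username/distilbert-imdb")
tokenizer.push_to_hub("your-username/distilbert-imdb")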

Deployment Pathways
Inference API basics
Using Spaces for deployment
Integrating models into Python or JavaScript applications
Example: deploy fine-tuned sentiment classifier to Spaces
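
A sketch of calling a hosted model through the Inference API via huggingface_hub; the model id is a public sentiment checkpoint standing in for your own fine-tuned model, and some models may require an access token.

from huggingface_hub import InferenceClient

client = InferenceClient(model="distilbert-base-uncased-finetuned-sst-2-english")
# Runs remotely on Hugging Face's infrastructure; no local GPU or weights needed.
print(client.text_classification("The deployment module ties everything together."))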

Optimization & Best Practices
Reducing model size for faster inference
Using quantization and pruning techniques
Keeping models updated
Managing costs in production environments
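
One optimization technique as a sketch: post-training dynamic quantization with PyTorch, which converts Linear layers to int8 for smaller, faster CPU inference; the model id is illustrative.

import os
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m, path="tmp.pt"):
    # Rough on-disk size comparison of the state dicts.
    torch.save(m.state_dict(), path)
    return os.path.getsize(path) / 1e6

print(f"original: {size_mb(model):.0f} MB, quantized: {size_mb(quantized):.0f} MB")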

LDLMH2

Attend hands-on, instructor-led Large Language Models (Using Huggingface) training classes at ONLC's more than 300 locations. Not near one of our locations? Attend these same live classes from your home/office PC via our Remote Classroom Instruction (RCI) technology.

For additional training options, check out our list of ML & AI Courses and select the one that's right for you.

ONLC TRAINING CENTERS
www.onlc.com