Cutting-Edge Curriculum

Training for future GenAI experts poised to transform the world

As Generative AI continues to evolve, computer scientists will need the most sophisticated and cutting-edge skills to enhance the capabilities of AI for organizations. In the Generative AI & Large Language Models graduate certificate, courses cover three distinct areas of knowledge:

  • Theory - in our Large Language Models & Applications course, you will learn the practical applications of LLMs: what they are, what they can do, and how they work.
  • Data Representation - in our Multimodal Machine Learning course, you will learn how different modalities are used in various prediction problems.
  • Scalability - in our Large Language Model Systems course, you will learn how to apply and scale LLMs for your organization.

With the ability to understand how large language models are used, train with massive multimodal datasets, and implement scalable systems, you will be ready to maximize the potential of generative AI on your own. See more details about our coursework below.

Curriculum Overview

When you enroll in the Generative AI & Large Language Models graduate certificate, you will take three graduate-level, credit-bearing courses. Each course will appear on your Carnegie Mellon transcript with the grade earned.

To earn the certificate, you must successfully complete all courses in the program. If you are interested in only one course, however, you may complete that course alone, and it will appear on your transcript with the grade earned.

The certificate includes the following courses taught by CMU faculty:

Large Language Models & Applications

Course Number: 11-667

Number of Units: 12 units

This course provides a broad foundation for understanding, working with, and adapting existing tools and technologies in the area of Large Language Models (LLMs) such as BERT, T5, and GPT.

Throughout this course, you will learn:

  • A range of topics including systems, data, data filtering, training objectives, RLHF/instruction tuning, ethics, policy, evaluation, and other human-facing issues.
  • How transformer architectures work and why they outperform LSTM-based seq2seq models, along with decoding strategies and related techniques. Through readings and hands-on assignments, you will explore pretraining, attention, prompting, and more.
  • How to apply the skills you've learned in a semester-long course project, making use of locally hosted model instances that offer the opportunity to look behind the curtain of commercial APIs.
  • How to compare and contrast different models in the LLM ecosystem in order to determine the best model for a given task.
  • How to implement and train a neural language model from scratch in PyTorch (see the sketch after this list).
  • How to utilize open-source libraries to fine-tune and run inference with popular pre-trained language models.
  • How to apply LLMs in downstream applications, and how decisions made during pre-training affect suitability for particular tasks.
  • How to design new methodologies to leverage existing large scale language models in novel ways.
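
To make the "from scratch" item above concrete, here is a minimal sketch of a tiny transformer language model in PyTorch. The hyperparameters (vocabulary size, model width, layer count) are hypothetical placeholders, and the random-token batch stands in for a real data pipeline; this illustrates the general technique, not the course's actual assignment code.

```python
# Minimal sketch of a neural language model in PyTorch.
# All hyperparameters below are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTransformerLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_heads=4, n_layers=2, max_len=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        # idx: (batch, seq_len) integer token ids
        _, t = idx.shape
        pos = torch.arange(t, device=idx.device)
        x = self.tok_emb(idx) + self.pos_emb(pos)
        # Additive causal mask: each position attends only to earlier positions.
        mask = torch.triu(torch.full((t, t), float("-inf"), device=idx.device), diagonal=1)
        x = self.blocks(x, mask=mask)
        return self.lm_head(x)  # (batch, seq_len, vocab_size)

# One next-token-prediction training step on random tokens, for illustration.
model = TinyTransformerLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
tokens = torch.randint(0, 1000, (8, 65))          # stand-in for a tokenized corpus
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # shift by one position
logits = model(inputs)
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
loss.backward()
optimizer.step()
```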

Multimodal Machine Learning

Course Number: 11-777

Number of Units: 12 units

In this course, you will learn the fundamental mathematical concepts in machine learning and deep learning that are relevant to the five main challenges in multimodal machine learning: 

  1. Multimodal representation learning
  2. Translation and mapping
  3. Modality alignment
  4. Multimodal fusion
  5. Co-learning 

The mathematical concepts you will learn include, but are not limited to, multimodal autoencoders, deep canonical correlation analysis, multiple kernel learning, attention models, and multimodal recurrent neural networks.
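
As one concrete illustration, the sketch below shows a minimal multimodal autoencoder in PyTorch: each modality is encoded separately, fused into a shared latent representation, and decoded back into both modalities. The input dimensions, layer sizes, and random feature vectors are hypothetical stand-ins for real text and image features, not course material.

```python
# Minimal sketch of a multimodal autoencoder (hypothetical dimensions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalAutoencoder(nn.Module):
    def __init__(self, text_dim=300, image_dim=2048, latent_dim=64):
        super().__init__()
        # Modality-specific encoders map each input into a common hidden space.
        self.text_enc = nn.Sequential(nn.Linear(text_dim, 128), nn.ReLU())
        self.image_enc = nn.Sequential(nn.Linear(image_dim, 128), nn.ReLU())
        # Fusion layer produces the shared latent representation.
        self.fuse = nn.Linear(128 * 2, latent_dim)
        # Modality-specific decoders reconstruct each input from the shared code.
        self.text_dec = nn.Linear(latent_dim, text_dim)
        self.image_dec = nn.Linear(latent_dim, image_dim)

    def forward(self, text, image):
        z = self.fuse(torch.cat([self.text_enc(text), self.image_enc(image)], dim=-1))
        return self.text_dec(z), self.image_dec(z)

# Train on reconstruction loss over both modalities (random features shown).
model = MultimodalAutoencoder()
text = torch.randn(16, 300)     # e.g. averaged word embeddings
image = torch.randn(16, 2048)   # e.g. CNN image features
text_hat, image_hat = model(text, image)
loss = F.mse_loss(text_hat, text) + F.mse_loss(image_hat, image)
loss.backward()
```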

You will also review recent papers describing state-of-the-art probabilistic models and computational algorithms for multimodal machine learning and discuss current and upcoming challenges. Finally, you will study recent applications of multimodal machine learning, including multimodal affect recognition, image and video captioning, and cross-modal multimedia retrieval.

Large Language Model Systems

Course Number: NA

Number of Units: 12 units

LLMs are often very large and require increasingly large datasets to train, which makes developing scalable systems critical for advancing AI. In this course, you will learn the essential skills for designing and implementing scalable LLM systems.

Throughout the course, you will:

  • Learn the approaches for training, serving, fine-tuning, and evaluating LLMs from the systems perspective.
  • Gain familiarity with the sophisticated engineering of modern hardware and software stacks needed to operate at this scale.
  • Acquire essential skills for designing and implementing LLM systems, including:
    • Algorithms and system techniques to efficiently train LLMs on massive datasets (a brief sketch follows this list)
    • Efficient embedding storage and retrieval
    • Data efficient fine-tuning
    • Communication efficient algorithms
    • Efficient implementation of reinforcement learning from human feedback (RLHF)
    • Acceleration on GPU and other hardware
    • Model compression for deployment
    • Online maintenance
  • Learn about the latest advances in LLM systems across machine learning, natural language processing, and systems research.
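
As a brief taste of that systems perspective, the sketch below combines two widely used scaling techniques, mixed-precision arithmetic and gradient accumulation, in a PyTorch training loop. It assumes a CUDA GPU, and the model, batch size, and loss are hypothetical placeholders rather than course material.

```python
# Sketch: mixed-precision training with gradient accumulation in PyTorch.
# Assumes a CUDA GPU; model and data are hypothetical placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()
accum_steps = 4  # emulate a 4x larger batch without 4x the activation memory

optimizer.zero_grad()
for step in range(8):
    x = torch.randn(32, 1024, device="cuda")         # stand-in for a data loader
    with torch.cuda.amp.autocast():                  # run the forward pass in float16
        loss = model(x).pow(2).mean() / accum_steps  # average over accumulated steps
    scaler.scale(loss).backward()                    # scaled backward; grads accumulate
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)                       # unscale gradients, then step
        scaler.update()
        optimizer.zero_grad()
```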

Meet Our World-Class Faculty

Dr. Carolyn Rosé

Professor, Language Technologies Institute & Human-Computer Interaction Institute

Education: Ph.D., Carnegie Mellon University

Research Focus: Dr. Rosé bridges deep theoretical insights from theories of language and interaction with computational modeling paradigms such as deep learning and LLMs. She applies this understanding of language and interaction to the design and orchestration of ensembles of data representations with the needed affordances, together with architectural elements that introduce inductive biases at the algorithmic level.

Dr. Daphne Ippolito

Assistant Professor, Language Technologies Institute

Education: Ph.D., University of Pennsylvania

Research Focus: Dr. Ippolito's research group explores the properties of language model memorization, natural language generation systems and how creative writers might interact with and perceive these tools, and how factors like genre, decoding strategy and annotator training impact the detectability of machine-generated text. Before starting her role at Carnegie Mellon, Dr. Ippolito worked as a Research Scientist at Google Brain.

Dr. Lei Li

Assistant Professor, Language Technologies Institute

Education: Ph.D., Carnegie Mellon University

Research Focus: Dr. Li's research interest lies in natural language processing, machine learning, and drug discovery. He explores topics associated with large language models (e.g. efficient large language model systems), multilingual natural language processing (e.g. speech translation), and AI for science. Before joining CMU, Dr. Li worked as a principal researcher at Baidu's Institute of Deep Learning in Silicon Valley and as the founding director of ByteDance's AI Lab.

Dr. Louis-Philippe Morency

Associate Professor, Language Technologies Institute

Education: Ph.D., MIT Computer Science and Artificial Intelligence Laboratory

Research Focus: Dr. Morency leads the Multimodal Communication and Machine Learning Laboratory which focuses on building the computational foundations to help computers analyze, recognize and predict subtle human communicative behaviors during social interactions. This multi-disciplinary research topic overlaps the fields of multimodal interaction, social psychology, computer vision, machine learning and artificial intelligence.

Dr. Yonatan Bisk

Assistant Professor, Language Technologies Institute

Education: Ph.D., University of Illinois at Urbana-Champaign

Research Focus: At CMU, Dr. Bisk leads the CLAW Lab, which includes members from the Language Technologies Institute, Machine Learning Department, and Robotics Institute. The lab's research assumes that perception, embodiment, and language cannot exist without one another. Overall, they work to uncover the latent structures of natural language, model the semantics of the physical world, and connect language to perception and control.

Dr. Daniel Fried

Assistant Professor, Language Technologies Institute

Education: Ph.D., UC Berkeley

Research Focus: Dr. Fried's lab focuses on building language interfaces that can help people with real-world tasks. They aim to make programming more communicative by creating models, methods, and datasets for producing code from language. Much of their work also takes a multi-agent system perspective on communication, showing that natural language processing agents can be improved by modeling the intents and interpretations people have when they use language.


The Graduate Certificate in Generative AI & Large Language Models is offered by the Language Technologies Institute (LTI) at CMU, which is housed within the highly ranked School of Computer Science (SCS). SCS faculty are esteemed in their field, and many of them have collaborated on critical projects that have paved the way for future discoveries in artificial intelligence. Check out some of their work below:


Researchers from CMU’s Robotics Institute completed a long-distance autonomous driving test in 1995 called No Hands Across America.


In 2001, SCS Founders University Professor Takeo Kanade and his team created a video replay system called EyeVision for Super Bowl XXXV.


In 2007, Faculty Emeritus William “Red” Whittaker led CMU’s Tartan Racing team to victory in the DARPA Urban Challenge.


Assistant Research Professor László Jeni used computer vision technology to create a facial recognition tool that can help people with visual impairment.

The Building Blocks of Our Curriculum

Practical Problem Solving

As a student in CMU’s Generative AI online graduate certificate, you will not only master the fundamentals of large language models but also learn how to practically apply this technology in the workplace. Understanding large language models, how they work, and how to build them is vital for success, but knowing how to leverage this knowledge while thinking critically about real-world problems is equally important.

Real-World, Industry-Focused Classes

In this program, you will learn how to approach problems from experts who have been there, done that. You will learn to think strategically about challenges you encounter on the job by considering questions like, ‘What resources are available to me?’ or ‘What limitations do I have?’ Your critical thinking skills, combined with your technical prowess, will empower you to design the most cutting-edge solutions for your organization.

Thoughtfully Designed Coursework

The coursework for this certificate is deliberately designed to highlight Generative AI and large language models from different angles. Each course focuses on one of three distinct concepts: theory, data representations and their affordances, and system scalability. By completing three complementary and complex courses, you will be ready to develop new and exciting designs that will take your organization to the next level.