Generative AI Short Courses by DeepLearning.AI


Introduction

In my previous couple of posts (post1, post2), I shared my detailed notes on the Generative AI Learning Path in Google Cloud Skills Boost. It’s a great collection of courses to get started with GenAI, especially the theory underpinning it.

Since then, I discovered another great resource to learn more about GenAI: Learn Generative AI Short Courses by DeepLearning.AI from Andrew Ng.

In this post, I summarize what each course teaches to help you decide which ones to take. I highly recommend taking all 4 courses. They’re full of useful information and short enough that even if a topic isn’t your main interest, you can still get a good overview of it quickly.

Overview: Learn Generative AI

Learn Generative AI Short Courses

Learn Generative AI Short Courses consists of 4 short courses on GenAI. Each course takes about 1-2 hours to complete and is marked as “free for a limited time”.

Each course starts with some theory but quickly dives into hands-on labs walked through by the instructor. I like this hands-on approach: it makes the material more concrete and shows how GenAI can help with application development.

I completed all 4 courses and took detailed notes for each, but since the courses are “free for a limited time”, I didn’t want to publish detailed notes for content that might become paid at some point. Instead, I’ll tell you what each course teaches.

The four short courses are (ranked from my favorite to my least favorite):

  1. LangChain for LLM Application Development
  2. ChatGPT Prompt Engineering for Developers
  3. Building Systems with the ChatGPT API
  4. How Diffusion Models Work

Let’s dive into each course in more detail.

LangChain for LLM Application Development

This was my favorite course by far. I had heard a lot about LangChain, and this course gave me all the basic knowledge and hands-on experience I needed to understand what it is and why it’s useful. If you only have time for 1 course, I’d choose this one, as it’s packed with information and practical hands-on code samples.

The course focuses on LangChain, an open-source development framework for LLM apps, and walks through its main components:

  * Models: LLMs, chat models, text embedding models
  * Prompts: templates, output parsers, example selectors
  * Indexes: document loaders, text splitters, vector stores, retrievers
  * Chains: prompt + LLM + output parsing
  * Memory: conversation buffer, token buffer, summary buffer
  * Question answering over your documents using embedding models and vector stores
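
To give a flavor of the labs, here is a minimal sketch of a prompt template and chat model wired into a chain. It follows the pre-1.0 LangChain API used in the course at the time; the library has evolved since, so module paths and class names may differ in current releases.

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

# Chat model wrapper around the OpenAI API (assumes OPENAI_API_KEY is set).
llm = ChatOpenAI(temperature=0.0)

# Prompt template with two input variables: style and text.
prompt = ChatPromptTemplate.from_template(
    "Rewrite the text delimited by triple quotes in a {style} tone. \"\"\"{text}\"\"\""
)

# A chain ties the prompt and the model together; run() fills the template and calls the LLM.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(style="calm and polite", text="Arrr, me blender lid flew off and made a mess!"))
```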

Additionally, the course introduces agents, which let an LLM decide when to call external tools to extend what it can do. You also learn how to evaluate how well your LLM application is performing.
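
The agents chapter follows the same hands-on pattern. A rough sketch with the older agent API from the course era (again, newer LangChain releases expose this differently):

```python
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0.0)

# Built-in tools the agent can decide to call; "llm-math" wraps a calculator chain.
tools = load_tools(["llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # ReAct-style reason-and-act loop
    verbose=True,  # print the agent's intermediate reasoning steps
)
agent.run("What is 25% of 312?")
```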

ChatGPT Prompt Engineering for Developers

This course made me realize how badly I was prompting LLMs. It opened my eyes to how much better the answers get when you prompt clearly and in detail.

The course focuses on best practices for prompting LLMs, built around two principles: write clear and specific instructions, and give the model time to “think” (for example, by asking it to reason through the steps before giving a final answer). It introduces tactics for effective prompting, including the use of delimiters, requesting structured output, and few-shot prompting.
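
As a rough illustration of those tactics, here is a sketch using the pre-1.0 openai Python client that the course labs relied on (the current client looks slightly different); the helper function and the review text are made up for the example:

```python
import openai  # pre-1.0 openai client, as used in the course labs

def get_completion(prompt, model="gpt-3.5-turbo"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # low temperature for more predictable output
    )
    return response.choices[0].message["content"]

review = "Needed a nice lamp for my bedroom; this one arrived quickly and works great."

# Tactic 1: delimiters make it unambiguous which part of the prompt is data, not instructions.
# Tactic 2: asking for structured output makes the result easy to parse downstream.
prompt = f"""
Identify the sentiment of the product review delimited by <review> tags and
summarize it in one sentence. Respond as JSON with keys "sentiment" and "summary".

<review>{review}</review>
"""
print(get_completion(prompt))
```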

Additionally, the course explores diverse applications of LLMs, such as iterative prompt development and refinement, summarization, sentiment analysis, text transformation tasks (e.g., translation, tone adjustment), text expansion, and building a customized chatbot.

It emphasizes the need to provide comprehensive context in conversations with LLMs and offers practical techniques for constructing prompts in a conversational format.
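
Concretely, that conversational format is just a growing list of role-tagged messages passed to the chat API on every turn. A minimal sketch (pre-1.0 openai client; the pizza-ordering theme mirrors the course’s chatbot example):

```python
import openai

# The model is stateless: the full conversation is re-sent on every call,
# so each new turn is appended to this list to preserve context.
messages = [
    {"role": "system", "content": "You are OrderBot, a friendly assistant for a pizza shop."},
    {"role": "user", "content": "Hi, what's on the menu today?"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=1,  # a bit of randomness for a more natural conversational tone
)
reply = response.choices[0].message["content"]
messages.append({"role": "assistant", "content": reply})  # keep context for the next turn
```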

Building Systems with the ChatGPT API

This course builds on the previous one and teaches you how to productionize a system built with LLMs. I was especially impressed to learn how you can use an LLM to evaluate its own outputs and see whether your changes are actually improving the system!

The course focuses on best practices for building applications on top of large language models (LLMs), using an end-to-end customer assistance system as the running example. It covers topics such as processing steps that are hidden from users, communication with external systems, and system evaluation and improvement.

It explains the concepts of base LLMs and instruction-tuned LLMs, along with the process of transforming a base LLM into an instruction-tuned one through fine-tuning and reinforcement learning.

The course covers tokenization and the system/user/assistant message format, classification techniques, and content moderation using OpenAI’s Moderation API. It also looks at preventing prompt injection with delimiters, chain-of-thought reasoning, chaining prompts for complex tasks, and checking outputs for safety and quality. Finally, it teaches methods for evaluating LLM outputs, including the use of rubrics and comparing responses to ideal answers.
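
For example, the content-moderation step comes down to a single call to OpenAI’s Moderation endpoint before the user input ever reaches the main model. A sketch with the pre-1.0 client (the example input is made up):

```python
import openai

user_input = "Tell me how to make something dangerous."

# The Moderation endpoint scores the input against categories such as hate,
# violence, and self-harm, and flags it if any category crosses a threshold.
response = openai.Moderation.create(input=user_input)
result = response["results"][0]

if result["flagged"]:
    print("Input rejected by moderation:", result["categories"])
else:
    print("Input passed moderation; continue with the normal pipeline.")
```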

How Diffusion Models Work

This was my least favorite course, mainly because the hands-on sections contain a lot of code but little explanation of what the code does and why certain parts are needed. Lots of details are glossed over. Also, running the code takes a while due to training, so it’s not as interactive as the other courses.

The course gives you a detailed look at the technical side of image generation with diffusion models. The goal of a diffusion model is to generate new images that resemble the training images, capturing both the general outlines and the fine details.

During training, noise is added to the training images at different levels, and the neural network learns to predict the noise at each level. During sampling, the network starts from pure noise and progressively refines it, step by step, into new sprite images.
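
A minimal sketch of that idea in PyTorch, using my own simplified notation (the course notebooks use their own helper code and a small sprite dataset; `model` below stands in for a UNet that takes the noisy image and the timestep):

```python
import torch

# Noise schedule: beta_t grows with t; ab_t is the cumulative product of (1 - beta).
T = 500
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
ab = torch.cumprod(alphas, dim=0)

def add_noise(x0, t):
    """Forward process: mix the clean image x0 with Gaussian noise at level t."""
    noise = torch.randn_like(x0)
    xt = ab[t].sqrt() * x0 + (1.0 - ab[t]).sqrt() * noise
    return xt, noise  # the network is trained to predict `noise` given (xt, t)

@torch.no_grad()
def denoise_step(model, xt, t):
    """One reverse (sampling) step: subtract the predicted noise, add back a little fresh noise."""
    eps = model(xt, t)  # the UNet's noise prediction
    z = torch.randn_like(xt) if t > 0 else torch.zeros_like(xt)
    mean = (xt - betas[t] * eps / (1.0 - ab[t]).sqrt()) / alphas[t].sqrt()
    return mean + betas[t].sqrt() * z  # at t == 0, no extra noise is added
```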

The course covers topics such as the neural network architecture (a UNet), training procedures, controlling the model’s outputs using embeddings, and faster sampling techniques such as DDIM that speed up the sampling process.


In this post, I aimed to provide an overview of the Learn Generative AI Short Courses series by DeepLearning.AI.

From LangChain to prompt engineering to building LLM systems and exploring the inner workings of diffusion models, these courses provide a lot of value for GenAI learners like myself.

As always, if you have questions or feedback or if you come across other good GenAI courses, feel free to let me know on Twitter @meteatamel.
