EP48: 9-hour crash course curriculum for learning to build LLM applications.
🫡💪📚 Here's a 9-hour crash course curriculum for learning to build LLM applications. All resources are free.
𝗣𝗔𝗥𝗧 𝟭 - 𝗔 𝗚𝗘𝗡𝗧𝗟𝗘 𝗜𝗡𝗧𝗥𝗢 𝗧𝗢 𝗟𝗟𝗠𝗦 (1h 30m)
Intro & overview of language models, next-word prediction, embeddings, cosine similarity, semantic search.
Resources:
📺 30m: Simple Introduction to LLMs (Matthew Berman) 🔗https://bit.ly/3KmQxPW
📄 40m: A Very Gentle Introduction to LLMs without the Hype (Mark Riedl) 🔗https://bit.ly/3yEJzmO
📺 10m: How do LLMs work? Next Word Prediction with the Transformer Architecture Explained (Louis-François Bouchard) 🔗https://bit.ly/4bERFub
📺 10m: What is Semantic Search? (Luis Serrano) 🔗https://bit.ly/3yBOKDU
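The semantic-search idea above fits in a few lines: embed texts as vectors, then rank by cosine similarity. A minimal sketch in plain Python (the 3-d vectors and function names are illustrative; a real app gets embeddings from a model):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # 1.0 = same direction (similar meaning), 0.0 = orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, doc_vecs, top_k=2):
    # Rank document vectors by similarity to the query vector
    # and return the indices of the top_k best matches.
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)[:top_k]

# Toy 3-d "embeddings"; note the second doc points almost the same way as the first.
docs = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0]]
print(semantic_search([1.0, 0.05, 0.0], docs))  # → [0, 1]
```

Same mechanism, just higher-dimensional vectors and a vector database, once you reach Part 4.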
---
𝗣𝗔𝗥𝗧 𝟮 - 𝗧𝗥𝗔𝗡𝗦𝗙𝗢𝗥𝗠𝗘𝗥 𝗔𝗥𝗖𝗛𝗜𝗧𝗘𝗖𝗧𝗨𝗥𝗘 (1h 30m)
Encoder-decoder architecture, masking, attention, transformers, GPTs.
Resources:
📺 30m: But what is a GPT? Visual intro to transformers (3Blue1Brown) 🔗https://bit.ly/45c2aTp
📺 20m: Sequence-to-Sequence (seq2seq) Encoder-Decoder Neural Networks, Clearly Explained! (Josh Starmer) 🔗https://bit.ly/3KuI90F
📺 10m: The Attention Mechanism (Luis Serrano) 🔗https://bit.ly/3yIvMvD
📺 15m: Transformer Models (Luis Serrano) 🔗https://bit.ly/4aD62Ox
📺 15m: Generative Pre-trained Transformer (GPT) (Databricks) 🔗https://bit.ly/458I7oP
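The attention mechanism these videos explain is, at its core, a weighted average: score each key against the query, softmax the scores, blend the values. A toy single-query sketch (shapes and numbers are illustrative, no batching or learned projections):

```python
import math

def softmax(xs):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention for one query vector:
    # score each key against the query, softmax the scores into
    # weights, then return the weighted average of the value vectors.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# The query lines up with the first key, so the output is pulled toward values[0].
out = attention([1.0, 0.0], [[10.0, 0.0], [0.0, 10.0]], [[1.0, 0.0], [0.0, 1.0]])
```

Real transformers run this for every token against every other token, across many heads in parallel, with learned query/key/value projections.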
---
𝗣𝗔𝗥𝗧 𝟯 - 𝗣𝗥𝗢𝗠𝗣𝗧 𝗘𝗡𝗚𝗜𝗡𝗘𝗘𝗥𝗜𝗡𝗚 (2h 0m)
Zero/one/few-shot, chain-of-thought, self-consistency, generated knowledge, prompt chaining, ReAct.
Resources:
📺 60m: Prompt Engineering Overview (Elvis Saravia) 🔗https://bit.ly/4bUGX37
📄 90m: Prompt Engineering Guide (DAIR) 🔗https://bit.ly/3V7eb87
📄 30m: Brex's Prompt Engineering Guide (Brex) 🔗https://bit.ly/3yGt9ul
⭐ 0m: LangChain Hub 🔗https://bit.ly/3VmP2HJ
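Mechanically, zero-shot, few-shot, and chain-of-thought prompting are just different ways of assembling the prompt string. A hedged sketch (the template wording and function name are illustrative, not a canonical format):

```python
def few_shot_prompt(instruction, examples, query, chain_of_thought=False):
    # Build a few-shot prompt: task instruction, worked input/output
    # examples, then the new query. With an empty examples list this
    # degrades gracefully to a zero-shot prompt.
    parts = [instruction]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    # chain_of_thought adds the classic cue that elicits step-by-step reasoning.
    cue = "Let's think step by step.\n" if chain_of_thought else ""
    parts.append(f"Input: {query}\n{cue}Output:")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("I loved it", "positive"), ("Terrible service", "negative")],
    "Not bad at all",
    chain_of_thought=True,
)
```

Techniques like self-consistency and prompt chaining build on this: sample the same prompt several times and vote, or feed one prompt's output into the next.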
---
𝗣𝗔𝗥𝗧 𝟰 - 𝗟𝗔𝗡𝗚𝗖𝗛𝗔𝗜𝗡, 𝗥𝗔𝗚, 𝗩𝗘𝗖𝗧𝗢𝗥 𝗗𝗕𝗦, 𝗔𝗚𝗘𝗡𝗧𝗦 (2h 30m)
LangChain, RAG, vector databases, LlamaIndex, OpenAI functions.
Resources:
📺 30m: The LangChain Cookbook - Beginner Guide To 7 Essential Concepts (Greg Kamradt) 🔗https://bit.ly/3wXCu02
📺 30m: The LangChain Cookbook Part 2 - Beginner Guide To 9 Use Cases (Greg Kamradt) 🔗https://bit.ly/3KnIQZL
⭐ 0m: Learn LangChain (Greg Kamradt) 🔗https://bit.ly/3wNr5jB
📄 30m: Advanced RAG (Hugging Face) 🔗https://bit.ly/3yS0ouo
📺 5m: Vector databases are so hot right now. WTF are they? (Fireship) 🔗
📺 10m: Question A 300 Page Book (w/ OpenAI + Pinecone) (Greg Kamradt) 🔗https://bit.ly/3VmHEw2
📺 20m: Talk to Your Documents, Powered by Llama-Index (Prompt Engineering) 🔗https://bit.ly/4bWJEAA
📺 30m: From OpenAI Function Calling to LangChain Agents (Automata Learning Lab) 🔗https://bit.ly/4bZpDcJ
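Stripped of the frameworks, RAG is retrieve-then-prompt. A toy sketch using word overlap in place of embeddings (scoring, names, and template are illustrative; the videos above do the real version with an embedding model plus Pinecone or LlamaIndex):

```python
def retrieve(query, docs, top_k=1):
    # Toy retriever: score each document by word overlap with the query.
    # A real pipeline embeds the query and searches a vector database.
    q_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def build_rag_prompt(query, docs):
    # Stuff the retrieved context into the prompt so the LLM answers
    # from your documents instead of only its training data.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The refund policy allows returns within 30 days.",
    "Our office is open Monday to Friday.",
]
prompt = build_rag_prompt("What is the refund policy?", docs)
```

Everything else in this part — chunking, embeddings, vector stores, agents calling tools — is machinery around these two steps.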
---
𝗣𝗔𝗥𝗧 𝟱 - 𝗙𝗜𝗡𝗘𝗧𝗨𝗡𝗜𝗡𝗚 (1h 30m)
Feature-based finetuning, LoRA, RLHF.
Resources:
📄 15m: Finetuning LLMs (Sebastian Raschka) 🔗https://bit.ly/3Kmhxz9
📄 15m: Fine Tuning vs. Prompt Engineering LLMs (Niels Bantilan) 🔗https://bit.ly/4aDJf4V
📺 30m: Reinforcement Learning from Human Feedback: From Zero to chatGPT (Hugging Face) 🔗https://bit.ly/3KJ0UxT
📄 15m: Complete Guide On Fine-Tuning LLMs using RLHF (Labellerr) 🔗https://bit.ly/3R9I8TH
📄 15m: LoRA (Hugging Face) 🔗https://bit.ly/4aD5S9T
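The core LoRA trick covered in the Hugging Face write-up: freeze the pretrained weight matrix W and train only a low-rank update ΔW = (α/r)·B·A. A plain-Python sketch of the arithmetic (shapes, α, and names are illustrative; real training uses a library like PEFT):

```python
def lora_delta(B, A, alpha):
    # Low-rank update: delta_W = (alpha / r) * B @ A, where
    # A has shape (r, d_in) and B has shape (d_out, r).
    # Only A and B are trained; r << min(d_out, d_in) keeps them tiny.
    r = len(A)
    scale = alpha / r
    d_out, d_in = len(B), len(A[0])
    return [[scale * sum(B[i][k] * A[k][j] for k in range(r))
             for j in range(d_in)] for i in range(d_out)]

def apply_lora(W, A, B, alpha=1.0):
    # Effective weight used at inference: W + delta_W (W itself is frozen).
    dW = lora_delta(B, A, alpha)
    return [[W[i][j] + dW[i][j] for j in range(len(W[0]))] for i in range(len(W))]

# Rank-1 adapter on a 2x2 frozen weight matrix.
W = [[0.0, 0.0], [0.0, 0.0]]
A = [[1.0, 0.0]]      # (r=1, d_in=2)
B = [[1.0], [0.0]]    # (d_out=2, r=1)
W_eff = apply_lora(W, A, B)
```

The payoff: for a d_out × d_in layer you train r·(d_out + d_in) parameters instead of d_out·d_in, which is why LoRA finetuning fits on consumer GPUs.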
#AI #MachineLearning #DeepLearning #InnovationTechnology #ArtificialIntelligence #GenerativeAI #LLM