Natural Language Processing with Transformers in Python
Transformer models are the de facto standard in modern NLP. They have proven themselves to be the most expressive and powerful models for language by a large margin, beating all major language benchmarks time and time again.
In this course, you'll learn everything you need to get started building cutting-edge NLP applications using transformer models like Google AI's BERT and Facebook AI's DPR.
We cover several key NLP frameworks including:
- HuggingFace's Transformers
- TensorFlow 2
- PyTorch
- spaCy
- NLTK
- Flair
And we learn how to apply transformers to some of the most popular NLP use cases:
- Language classification/sentiment analysis
- Named entity recognition (NER)
- Question answering (Q&A)
- Similarity/comparative learning
Throughout each of these use cases we work through a variety of examples to ensure we understand what transformers are, how they work, and why they are so important. Alongside these sections we also work through two full-size NLP projects: one on sentiment analysis of financial Reddit data, and another building a fully fledged open-domain question-answering application.
All of this is supported by several other sections that teach us how to better design, implement, and measure the performance of our models, covering:
- The history of NLP and where transformers come from
- Common preprocessing techniques for NLP
- The theory behind transformers
- How to fine-tune transformers
We cover all of this and more. I look forward to seeing you in the course!
- 1. Introduction (Video lesson)
  A brief introduction to the course, and how to get the most out of it.
- 2. Course Overview (Video lesson)
  An overview of everything we'll be covering in this course.
- 3. Environment Setup (Video lesson)
  How to set up a local Python environment that matches the one used throughout the course.
- 4. CUDA Setup (Video lesson)
  How to set up CUDA for CUDA-enabled GPUs.
- 5. The Three Eras of AI (Video lesson)
- 6. Pros and Cons of Neural AI (Video lesson)
- 7. Word Vectors (Video lesson)
- 8. Recurrent Neural Networks (Video lesson)
- 9. Long Short-Term Memory (Video lesson)
- 10. Encoder-Decoder Attention (Video lesson)
- 11. Self-Attention (Video lesson)
- 12. Multi-head Attention (Video lesson)
- 13. Positional Encoding (Video lesson)
- 14. Transformer Heads (Video lesson)
- 15. Stopwords (Video lesson)
  Here we'll start with our first NLP preprocessing technique: stopword removal.
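As a quick taste of the technique, stopword removal can be sketched in a few lines of pure Python. The stopword set below is a small hand-picked illustration; libraries like NLTK and spaCy ship much fuller lists.

```python
# Minimal stopword-removal sketch (illustrative stopword set only).
STOPWORDS = {"i", "the", "a", "an", "is", "to", "and", "of", "in"}

def remove_stopwords(text: str) -> list[str]:
    """Lowercase, split on whitespace, and drop stopwords."""
    return [tok for tok in text.lower().split() if tok not in STOPWORDS]

print(remove_stopwords("I went to the store and bought a book"))
# ['went', 'store', 'bought', 'book']
```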
- 16. Tokens Introduction (Video lesson)
  In the first part of our exploration of tokens in NLP, we'll look at word, character, punctuation, and part-of-word tokens, among others.
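To illustrate a few of these token types, here is a small sketch contrasting word-level, character-level, and word-plus-punctuation tokenization (real tokenizers such as BERT's WordPiece are considerably more involved):

```python
import re

def word_tokens(text: str) -> list[str]:
    # Naive word tokenization: split on whitespace.
    return text.split()

def char_tokens(text: str) -> list[str]:
    # Character-level tokenization: every character is a token.
    return list(text)

def word_punct_tokens(text: str) -> list[str]:
    # Split words and punctuation into separate tokens.
    return re.findall(r"\w+|[^\w\s]", text)

print(word_punct_tokens("Don't stop!"))
# ['Don', "'", 't', 'stop', '!']
```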
- 17. Model-Specific Special Tokens (Video lesson)
  In the second part of tokens in NLP, we'll look at model-specific special tokens.
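For a flavour of what these look like, BERT's special tokens include `[CLS]` (sequence start), `[SEP]` (sequence separator/end), `[PAD]` (batch padding), and `[MASK]` (pretraining). Hugging Face tokenizers add these automatically; the sketch below just wraps a token list by hand for illustration:

```python
def add_special_tokens(tokens: list[str]) -> list[str]:
    # BERT-style: [CLS] marks the start of the sequence, [SEP] the end.
    return ["[CLS]"] + tokens + ["[SEP]"]

print(add_special_tokens(["hello", "world"]))
# ['[CLS]', 'hello', 'world', '[SEP]']
```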
- 18. Stemming (Video lesson)
  We take a look at the Porter and Lancaster stemmers.
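The real Porter and Lancaster algorithms apply many ordered rewrite rules; the deliberately simplified suffix-stripping sketch below (not either actual algorithm) shows only the core idea of stemming:

```python
# Toy stemmer: strip one common suffix if enough of the word remains.
SUFFIXES = ["ing", "ly", "ed", "es", "s"]

def naive_stem(word: str) -> str:
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print([naive_stem(w) for w in ["running", "jumped", "happily", "cats"]])
# ['runn', 'jump', 'happi', 'cat']  -- crude stems, as stemmers produce
```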
- 19. Lemmatization (Video lesson)
  Here we take a look at reducing words to their lemma roots.
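Real lemmatizers (e.g. NLTK's WordNetLemmatizer, or spaCy) rely on large dictionaries plus part-of-speech information; this toy lookup table just shows the idea of mapping inflected forms back to a lemma:

```python
# Illustrative lemma dictionary only -- real lemmatizers use full lexicons.
LEMMAS = {"was": "be", "is": "be", "mice": "mouse", "better": "good", "running": "run"}

def lemmatize(word: str) -> str:
    return LEMMAS.get(word.lower(), word.lower())

print([lemmatize(w) for w in ["Mice", "was", "running"]])
# ['mouse', 'be', 'run']
```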
- 20. Unicode Normalization - Canonical and Compatibility Equivalence (Video lesson)
  Here we will introduce Unicode normalization and the two forms of equivalence, canonical and compatibility.
- 21. Unicode Normalization - Composition and Decomposition (Video lesson)
  Here we take a look at the two directions of Unicode normalization: composition and decomposition.
- 22. Unicode Normalization - NFD and NFC (Video lesson)
  We'll move on to applying Unicode normalization in Python with both NFD and NFC forms.
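In Python this is a one-liner with the standard library's `unicodedata` module: NFD decomposes characters into a base character plus combining marks, while NFC composes them back into single code points where possible.

```python
import unicodedata

e_composed = "\u00e9"  # "é" as one precomposed code point
e_decomposed = unicodedata.normalize("NFD", e_composed)  # "e" + combining acute

print(len(e_composed), len(e_decomposed))
# 1 2
print(unicodedata.normalize("NFC", e_decomposed) == e_composed)
# True -- NFC recomposes the pair back into U+00E9
```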
- 23. Unicode Normalization - NFKD and NFKC (Video lesson)
  In the final Unicode normalization session, we'll learn about and implement NFKD and NFKC forms.
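The K (compatibility) forms go further than NFD/NFC: they also replace compatibility characters, such as ligatures and full-width letters, with their plain equivalents, which the canonical forms leave untouched:

```python
import unicodedata

ligature = "\ufb01"  # the single-character ligature "fi"

print(unicodedata.normalize("NFKC", ligature))
# fi -- compatibility form splits the ligature into two letters
print(unicodedata.normalize("NFC", ligature) == ligature)
# True -- canonical form leaves the ligature alone
```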
- 44. Introduction to spaCy (Video lesson)
- 45. Extracting Entities (Video lesson)
- 46. NER Walkthrough (Text lesson)
- 47. Authenticating With the Reddit API (Video lesson)
- 48. Pulling Data With the Reddit API (Video lesson)
- 49. Extracting ORGs From Reddit Data (Video lesson)
- 50. Getting Entity Frequency (Video lesson)
- 51. Entity Blacklist (Video lesson)
- 52. NER With Sentiment (Video lesson)
- 53. NER With RoBERTa (Video lesson)
- 54. Open Domain and Reading Comprehension (Video lesson)
  An introduction to the two modes of Q&A, open domain (OD) and reading comprehension (RC).
- 55. Retrievers, Readers, and Generators (Video lesson)
  An introduction to the three key model types we will be using in Q&A: retrievers, readers, and generators.
- 56. Intro to SQuAD 2.0 (Video lesson)
  We introduce the SQuAD Q&A dataset.
- 57. Processing SQuAD Training Data (Video lesson)
  How we process the SQuAD data into a format friendlier for our use case.
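As a hedged sketch of what this processing involves (not the course's exact code), SQuAD's published JSON nests answers under `data -> paragraphs -> qas`, which we can flatten into (context, question, answer) triples:

```python
def flatten_squad(squad: dict) -> list[tuple[str, str, str]]:
    """Flatten SQuAD-style JSON into (context, question, answer) triples."""
    triples = []
    for article in squad["data"]:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                # SQuAD 2.0 flags unanswerable questions with is_impossible.
                if qa.get("is_impossible"):
                    continue
                for answer in qa["answers"]:
                    triples.append((context, qa["question"], answer["text"]))
    return triples

sample = {"data": [{"paragraphs": [{"context": "BERT was released in 2018.",
    "qas": [{"question": "When was BERT released?",
             "is_impossible": False,
             "answers": [{"text": "2018", "answer_start": 21}]}]}]}]}
print(flatten_squad(sample))
# [('BERT was released in 2018.', 'When was BERT released?', '2018')]
```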
- 58. (Optional) Processing SQuAD Training Data with Match-Case (Video lesson)
  We take a look at refactoring our SQuAD processing code using the match-case syntax introduced in Python 3.10.
- 59. Processing SQuAD Dev Data (Text lesson)
- 60. Our First Q&A Model (Video lesson)
  We put together our first Q&A model.
- 61. Q&A Performance With Exact Match (EM) (Video lesson)
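Exact match is simple enough to sketch directly. A common SQuAD-style implementation normalizes both strings (lowercase, strip punctuation and articles, collapse whitespace) before comparing; the version below follows that recipe:

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, truth: str) -> int:
    return int(normalize(prediction) == normalize(truth))

print(exact_match("The Eiffel Tower!", "eiffel tower"))
# 1
print(exact_match("Paris", "eiffel tower"))
# 0
```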
- 62. ROUGE in Python (Video lesson)
  Learn how to implement ROUGE scores using Python.
- 63. Applying ROUGE to Q&A (Video lesson)
  We take a look at applying ROUGE performance metrics to our first Q&A model.
- 64. Recall, Precision and F1 (Video lesson)
  We work through the intuition and mathematics behind ROUGE-N.
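The intuition compresses into very little code. ROUGE-N recall is the overlapping n-gram count divided by the reference's n-gram count, precision divides by the candidate's count, and F1 is their harmonic mean; a from-scratch ROUGE-1 sketch:

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1):
    """Return (recall, precision, f1) over overlapping n-grams."""
    def ngrams(text: str) -> Counter:
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())  # clipped overlap counts
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * recall * precision / max(recall + precision, 1e-9)
    return recall, precision, f1

r, p, f1 = rouge_n("the cat sat", "the cat sat on the mat")
print(round(r, 2), round(p, 2), round(f1, 2))
# 0.5 1.0 0.67
```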
- 65. Longest Common Subsequence (LCS) (Video lesson)
  We work through the intuition and mathematics behind ROUGE-L.
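At the heart of ROUGE-L is the longest common subsequence, which the standard dynamic-programming recurrence computes over token sequences:

```python
def lcs_length(a: list[str], b: list[str]) -> int:
    """Length of the longest common subsequence of two token lists."""
    # dp[i][j] holds the LCS length of a[:i] and b[:j].
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

print(lcs_length("the cat sat on the mat".split(),
                 "the cat ran to the mat".split()))
# 4  -- "the cat the mat"
```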
- 66. Q&A Performance With ROUGE (Video lesson)
  We introduce the ROUGE metrics.
- 67. Intro to Retriever-Reader and Haystack (Video lesson)
  A quick introduction to the retriever-reader architecture and Haystack.
- 68. What is Elasticsearch? (Video lesson)
  A quick overview of how Elasticsearch works, and why we use it.
- 69. Elasticsearch Setup (Windows) (Video lesson)
- 70. Elasticsearch Setup (Linux) (Video lesson)
- 71. Elasticsearch in Haystack (Video lesson)
  We introduce Elasticsearch via Haystack.
- 72. Sparse Retrievers (Video lesson)
- 73. Cleaning the Index (Video lesson)
- 74. Implementing a BM25 Retriever (Video lesson)
- 75. What is FAISS? (Video lesson)
- 76. FAISS in Haystack (Video lesson)
- 77. What is DPR? (Video lesson)
- 78. The DPR Architecture (Video lesson)
- 79. Retriever-Reader Stack (Video lesson)