Transformers in Computer Vision
Transformer networks are the current trend in deep learning. Transformer models have taken the world of NLP by storm since 2017, and have since become the mainstream model for almost all NLP tasks. Transformers in CV lagged behind, but they have been taking over since 2020.
We will start by introducing attention and transformer networks. Since transformers were first introduced in NLP, they are easiest to describe with an NLP example first. From there, we will examine the pros and cons of this architecture. We will also discuss the importance of unsupervised or semi-supervised pre-training for transformer architectures, briefly covering Large Language Models (LLMs) such as BERT and GPT.
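To make the NLP attention idea concrete before the matrix form, here is a minimal NumPy sketch: one query word attends over the words of a toy sentence via dot-product similarity and a softmax. The embeddings are made-up numbers for illustration only, not from any trained model.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy 4-dimensional embeddings for the sentence "the cat sat down"
# (illustrative values only).
embeddings = {
    "the":  np.array([0.1, 0.0, 0.2, 0.1]),
    "cat":  np.array([0.9, 0.1, 0.0, 0.3]),
    "sat":  np.array([0.0, 0.8, 0.1, 0.2]),
    "down": np.array([0.1, 0.2, 0.7, 0.0]),
}

query = embeddings["cat"]                   # the word doing the attending
keys = np.stack(list(embeddings.values()))  # one key vector per word

scores = keys @ query       # dot-product similarity with each word
weights = softmax(scores)   # attention distribution (sums to 1)
context = weights @ keys    # weighted sum = context vector for "cat"

print(dict(zip(embeddings, weights.round(3))))
print(context.round(3))
```

The weights form a probability distribution over the sentence, and the context vector is the attention-weighted mixture of the word vectors, which is exactly the quantity the transformer layers build on.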
This will pave the way to introduce transformers in CV. Here we will extend the attention idea into the 2D spatial domain of the image. We will discuss how convolution can be generalized using self-attention within the encoder-decoder meta-architecture, and see how this generic architecture is almost the same for images as for text and NLP, which makes the transformer a generic function approximator. We will also discuss channel and spatial attention, and local vs. global attention, among other topics.
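The point that the same equations cover both domains can be sketched with scaled dot-product self-attention, Attention(Q, K, V) = softmax(QKᵀ/√d)·V, applied to image patches instead of word tokens. This is a minimal NumPy sketch with random weights; the patching scheme and dimensions are illustrative assumptions, not the course's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: softmax(Q K^T / sqrt(d)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (n, n) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)            # each row is a distribution
    return A @ V, A

# An 8x8 "image" split into 16 patches of 2x2 = 4 pixels each, so the
# patches play exactly the role that word tokens play in NLP.
n_patches, patch_dim, d_model = 16, 4, 8
X = rng.standard_normal((n_patches, patch_dim))
Wq = rng.standard_normal((patch_dim, d_model))
Wk = rng.standard_normal((patch_dim, d_model))
Wv = rng.standard_normal((patch_dim, d_model))

out, A = self_attention(X, Wq, Wk, Wv)
print(out.shape)       # (16, 8): one updated representation per patch
print(A.sum(axis=-1))  # every row of the attention matrix sums to 1
```

Nothing in `self_attention` knows whether a row of `X` is a word or a patch, which is precisely why the architecture transfers between NLP and CV.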
In the next three modules, we will discuss the specific networks that solve the big problems in CV: classification, object detection, and segmentation. We will discuss the Vision Transformer (ViT) from Google, the Shifted Window Transformer (Swin) from Microsoft, the Detection Transformer (DETR) from Facebook research, the Segmentation Transformer (SETR), and many others. Then we will discuss the application of transformers to video processing, through spatio-temporal transformers with an application to moving object detection, along with a multi-task learning setup.
Finally, we will show how those pre-trained architectures can be easily applied in practice using the famous Hugging Face library via its Pipeline interface.
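As a taste of how little code the Pipeline interface needs, here is a sketch of image classification with a pre-trained ViT. The model name is one common public checkpoint and `cat.jpg` is a placeholder path, both assumptions for illustration; running it downloads the model weights on first use.

```python
# Requires: pip install transformers torch pillow
from transformers import pipeline

# "image-classification" is a standard pipeline task name; the checkpoint
# below is a commonly used public ViT model, named explicitly for clarity.
classifier = pipeline("image-classification",
                      model="google/vit-base-patch16-224")

# Path (or URL) to any test image -- placeholder name here.
results = classifier("cat.jpg")
for r in results:
    print(f"{r['label']}: {r['score']:.3f}")
```

Each result is a dict with a `label` and a confidence `score`, so swapping in DETR for detection or another checkpoint for segmentation follows the same pattern with a different task string.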
- 2. The Rise of Transformers (Video lesson)
- 3. Inductive Bias in Deep Neural Network Models (Video lesson)
- 4. Attention is a General DL Idea (Video lesson)
- 5. Attention in NLP (Video lesson)
- 6. Attention is All You Need (Video lesson)
- 7. Self-Attention Mechanisms (Video lesson)
- 8. Self-Attention Matrix Equations (Video lesson)
- 9. Multi-Head Attention (Video lesson)
- 10. Encoder-Decoder Attention (Video lesson)
- 11. Transformers Pros and Cons (Video lesson)
- 12. Unsupervised Pre-training (Video lesson)
- 13. Module Roadmap (Video lesson)
- 14. Encoder-Decoder Design Pattern (Video lesson)
- 15. Convolutional Encoders (Video lesson)
- 16. Self-Attention vs. Convolution (Video lesson)
- 17. Spatial vs. Channel vs. Temporal Attention (Video lesson)
- 18. Generalization of Self-Attention Equations (Video lesson)
- 19. Local vs. Global Attention (Video lesson)
- 20. Pros and Cons of Attention in CV (Video lesson)