Master LangChain with No-Code tools: Flowise and LangFlow
Welcome to the comprehensive LangChain course on Udemy, an immersive learning experience designed to transform you into an AI app wizard! Whether you’re a complete beginner or have some programming background, this course will equip you with the skills to build AI-powered apps using LangChain, complemented by the ease of no-code tools like Flowise and LangFlow.
What You’ll Learn:
- Master the art of building ChatGPT clones and Chat with PDF apps without writing a single line of code. Discover the seamless power of Flowise and LangFlow to make your app dreams a reality.
- Unlock the potential of Autonomous Agent apps and automate tasks like a pro with over 5,000 integrations on Zapier's robust platform, regardless of your coding proficiency.
- Harness the magic of API Chains, eliminating the need for coding while effectively calling APIs and unleashing endless possibilities for your AI apps.
- Engage users in captivating conversations by chatting with documents in various formats, including PDFs, DOCX, TXT, and websites, making your apps more interactive and user-friendly.
- Acquire in-depth knowledge of LangChain's core elements (chains, agents, tools, and memory) and harness this expertise to create sophisticated and intelligent applications.
- Seamlessly integrate APIs from Flowise and LangFlow with no-code website builders like Bubble, enabling you to deploy your AI apps with unparalleled ease and efficiency.
Why Choose This Course:
- Perfect for absolute beginners with no technical background: we provide comprehensive guidance at every step, making the learning journey smooth and enjoyable.
- Impress your colleagues, friends, and managers with your GenAI skills, gaining a competitive edge in the evolving world of AI app development.
- Learn from an expert LangChain developer, certified by the Founder of LangChain as "a langchain expert," ensuring you receive top-tier instruction and industry insights.
- Engaging explanations and animations make learning LangChain a breeze, helping you grasp complex concepts effortlessly.
Special Focus on Retrieval:
- Unlock the secrets of multiple LLM, chat model, and embedding providers, including OpenAI, Cohere, and HuggingFace, elevating your AI apps' capabilities to new heights.
- Work with various data types, such as PDFs, DOCX, TXT, and GitHub repos, enabling you to cater to diverse user needs and deliver exceptional user experiences.
- Utilize multiple vector stores, including in-memory, Chroma, Qdrant, and Pinecone, optimizing your app's performance and ensuring seamless data management.
- Master the art of using various text splitters, refining your AI app's accuracy and taking user interactions to a whole new level.
Requirements:
- No prior coding experience or AI knowledge is required for this course. It is designed to cater to beginners and those with little technical background, making it accessible to everyone.
- A passion for learning and a desire to dive into the exciting world of AI app development using LangChain and no-code tools are all you need to get started.
Who this course is for:
- Aspiring AI enthusiasts and beginners who want to venture into AI app development without the need for extensive coding knowledge.
- Entrepreneurs, founders, and business professionals looking to build AI-powered consumer apps for their organizations, regardless of their technical background.
- Developers and tech enthusiasts who wish to expand their skill set and explore the power of LangChain and no-code tools in the realm of AI app development.
Benefits of becoming an AI Engineer:
- With the rapid advancement of AI technology, becoming an AI Engineer opens up exciting career opportunities in various industries. AI-powered apps are transforming businesses and consumer experiences, making AI Engineers highly sought-after professionals.
- As an AI Engineer, you gain the ability to build intelligent applications that can perform complex tasks, process vast amounts of data, and provide valuable insights, contributing to groundbreaking innovations.
- Acquiring AI app-building skills sets you apart as an innovative problem solver, giving you a competitive edge in the job market and enhancing your career prospects.
- By mastering LangChain and no-code tools, you can rapidly prototype and develop AI apps, saving time and resources, and bringing your ideas to life quicker than ever before.
You’ll also get:
- Access to the course community via the Discord server, where you can ask questions, brainstorm ideas, and find other motivated learners.
- Access to course content updates and improvements.
Enroll in this transformative LangChain course today and unlock the door to a world of AI app innovation and career opportunities!
Curriculum:
1. Welcome to the LangChain Masterclass! (Video lesson)
2. Introduction to LangChain (Video lesson)
Overview:
The video provides an overview of LangChain, an AI framework that allows developers to easily build chatbot and conversational AI applications. It explains key concepts like prompt engineering using prompt templates, linking components together with chains, question answering using retrieval QA chains, and building more advanced assistants with tools like memories and output parsers. Overall, the video covers how LangChain provides an easy way to leverage large language models to create customized and capable conversational AI.
Topics covered in the video:
- Prompt engineering with prompt templates
- Chains for linking components
- LLM chains for calling large language models
- Retrieval QA chains for question answering
- Vector stores and embedding models for indexing text
- Using chains as tools for agents
- Memories for remembering conversations
- Output parsers for formatting responses
- Leveraging large language models with LangChain
- Building chatbots and conversational AI with LangChain
- Customizing conversational AI applications with LangChain
3. Flowise Installation and Basics (Video lesson)
Overview:
The video provides a step-by-step tutorial on installing Flowise, a user interface for building conversational AI apps with LangChain. It covers installation options like using npm, Docker, and cloud providers. It then walks through deploying Flowise on Railway and Render, showing how to fork the GitHub repo, connect it to these services, and configure options like disk mounting for persistence. The video gives a tour of the Flowise interface for building chat flows with drag-and-drop components. It also covers exporting and loading flows, securing API access, embedding chat widgets, and more. Overall, it's a comprehensive guide to getting started with Flowise for leveraging LangChain through a visual interface.
Topics covered in the video:
- Installation options
- Railway deployment walkthrough
- Render deployment walkthrough
- Forking GitHub repo
- Configuring environment variables
- Adding username and password
- Persistent disk mounting
- Overview of the Flowise UI
- Building chat flows
- Drag and drop components
- Embedding chat widgets
- Exporting and loading flows
- Securing API access
- Zooming and locking view
- Duplicating flows
- Calling flows from external apps
- Flowise Marketplace examples
- Tools configuration
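To illustrate the "calling flows from external apps" topic above, here is a minimal Python sketch of hitting a deployed chatflow's prediction endpoint over HTTP. The host URL, chatflow ID, and API key are placeholders, and the exact response shape depends on your flow, so treat this as an assumption-laden example rather than the course's own code:

```python
import requests

# Placeholder values: substitute your deployed Flowise URL, chatflow ID, and API key.
FLOWISE_URL = "https://your-flowise-instance.up.railway.app"
CHATFLOW_ID = "your-chatflow-id"
API_KEY = "your-flowise-api-key"

def ask_flow(question: str) -> dict:
    """Send a question to a Flowise chatflow's prediction endpoint and return the JSON reply."""
    response = requests.post(
        f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}",
        headers={"Authorization": f"Bearer {API_KEY}"},  # only needed if API access is secured
        json={"question": question},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()  # typically contains the generated answer, e.g. under a "text" field

print(ask_flow("Hello, what can you do?"))
```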
4. LangFlow Installation and Basics (Video lesson)
Overview:
The video provides a tutorial on installing and using Langflow, another UI for building conversational AI apps with LangChain. It covers deployment options like Railway and Render, walking through forking the GitHub repo and connecting it to these services. The video gives an overview of the Langflow interface including drag-and-drop blocks, importing/exporting flows, accessing documentation, and using the API. It highlights unique features like exporting Python code and an active community on Discord. Overall, it demonstrates how to get started with Langflow as an alternative low-code way to build LangChain apps visually.
Topics covered in the video:
- Langflow overview as UI for LangChain
- Installation options including Railway and Render
- Forking Langflow GitHub repo
- Langflow interface walkthrough
- Drag and drop blocks
- Importing/exporting flows
- Accessing documentation
- Using the API
- Exporting Python code from flows
- Langflow Discord community
- Comparison to Flowise UI
- Building conversational apps visually
- Community examples in Langflow
- Parameters and settings
5. A Simple ChatGPT Clone (Video lesson)
Overview:
The video explains how large language models like ChatGPT work and how we can leverage them through frameworks like LangChain. It covers the core concepts of taking a user input, combining it with a prompt template, and sending it to an AI system to generate a response. The video then walks through a practical tutorial of building a ChatGPT clone in Langflow and Flowise using just a few blocks - a conversation chain linked to an OpenAI chat model. The blocks are connected together based on matching input/output parameters. Overall, the video demonstrates how easy it is to harness large language models like ChatGPT with just a couple of drag-and-drop blocks in low-code platforms like Langflow and Flowise.
Topics covered in the video:
- How ChatGPT works by processing the user prompt and generating a response
- Leveraging large language models through LangChain
- Sending user input to the AI system to get a response
- Using prompt templates
- Chaining together components like user input + template + AI call
- Building a ChatGPT clone in Langflow with just 2 blocks
- Connecting a conversation chain to an OpenAI chat model
- Matching input and output parameters
- Getting an OpenAI API key for authentication
- Testing the ChatGPT clone
- Building the same thing in Flowise using similar blocks
- Connecting Flowise or Langflow to external tools like Bubble
- Easily harnessing large language models through low-code
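As a rough code-level counterpart to the two-block clone built in this lesson, the sketch below sends a user message straight to an OpenAI chat model with Python LangChain. It assumes a classic (0.0.x-era) langchain install and an OPENAI_API_KEY environment variable:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage

# The conversation-chain + chat-model blocks boil down to: pass messages to the model.
chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Explain LangChain in one sentence."),
]
print(chat(messages).content)
```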
6. Prompt Templates (Video lesson)
Overview:
The video covers using prompt templates in LangChain frameworks like Flowise and Langflow. It explains how a template combines with user input to create the full prompt sent to a large language model. This allows customizing the style and behavior of the response. The video walks through adding a Shakespeare-style prompt template in Flowise by linking a prompt template block to an OpenAI chat model. It also shows using a Steve Jobs personality prompt to mimic his speaking style. Additional topics include exploring different models like GPT-3 vs chat models, prompt engineering resources, and key parameters like temperature and max tokens. Overall, the video demonstrates how to leverage prompt templates to easily customize large language model responses.
Topics covered in the video:
- Using prompt templates to customize LLM response style
- Templates combine with user input as a full prompt
- Adding Shakespeare-style template in Flowise
- Connecting prompt template to OpenAI chat model
- Steve Jobs personality prompt template
- Testing different LLM models like GPT-3 vs chat
- Prompt engineering resources
- Temperature parameter affects creativity
- Max tokens limit length
- Presence and frequency penalties
- Adding template in Langflow with the text parameter
- Customizing LLM responses with templates
- Combining user input and template for full prompt
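For readers curious how this looks in code, a prompt template is simply a string with placeholders that gets filled with the user's input before reaching the model. A minimal sketch in Python LangChain (the Shakespeare wording is illustrative, not the course's exact prompt):

```python
from langchain.prompts import PromptTemplate

template = (
    "You are William Shakespeare. Answer the question below in Shakespearean English.\n"
    "Question: {question}"
)
prompt = PromptTemplate(input_variables=["question"], template=template)

# The formatted string is what actually gets sent to the language model.
print(prompt.format(question="What is the weather like today?"))
```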
7. Multi-Input Prompt Templates (Video lesson)
Overview:
The video demonstrates using multiple user inputs in prompt templates with LangChain frameworks. It shows an example translation app that takes input language, output language, and text as inputs. These are formatted in the prompt template to create the full prompt for the large language model. The video walks through an example in Flowise taking English and German languages and sample text as inputs. It also shows an example with character names and story themes as inputs to generate AI-written stories. The video explains how multiple inputs can be provided through APIs and user interfaces. Overall, it covers leveraging multiple user inputs within prompt templates to customize applications like translation tools, story generators, and more.
Topics covered in the video:
- Using multiple user inputs in prompt templates
- Translation app example with input language, output language, text
- Formatting multiple inputs in the Flowise prompt template
- Providing English, German languages, and "how are you" text
- Generating translated response
- Story generator example with character names and themes
- Creating a story based on provided character inputs
- Passing inputs through APIs and user interfaces
- Customizing applications with multiple input templates
- Prompt engineering with dynamic user inputs
- Building translation tools, story generators, and more
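In code, a multi-input template is the same idea with several placeholders. A minimal sketch mirroring the translation example (the wording is illustrative):

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["input_language", "output_language", "text"],
    template="Translate the following text from {input_language} to {output_language}:\n{text}",
)

# All three user inputs are merged into one prompt before the LLM call.
print(prompt.format(input_language="English", output_language="German", text="How are you?"))
```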
8. Chat Prompt Template (Video lesson)
Overview:
The video covers using chat prompt templates in Flowise to customize conversational AI responses. It explains the chat prompt template block, which allows separating system and human messages. The system message can set expectations, such as noting expertise areas, while the human message takes inputs like text. An example shows translating text by providing the system template details and the human text input. The video recreates a previous translation app flow using the chat prompt format. Overall, it demonstrates how to leverage chat prompt templates in Flowise to customize system messages and process human inputs through a familiar conversational format.
Topics covered in the video:
- Chat prompt templates in Flowise
- Separate system and human messages
- Setting expectations in the system message
- Taking inputs in the human message
- Translation app example
- Providing the system template with details
- Inputting human text to translate
- Recreating the previous translation flow with a chat template
- Customizing system messages for conversation
- Processing human inputs through a chat interface
- Using chat templates for conversational AI
- Flowise-specific chat prompt format
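The chat prompt template block corresponds to LangChain's ChatPromptTemplate, which keeps the system and human messages separate. A minimal sketch, assuming a classic langchain install (the message wording is illustrative):

```python
from langchain.prompts import ChatPromptTemplate

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a professional translator who translates {input_language} to {output_language}."),
    ("human", "{text}"),  # the user's input fills the human message
])

messages = chat_prompt.format_messages(
    input_language="English", output_language="German", text="How are you?"
)
print(messages)  # a system message plus a human message, ready to send to a chat model
```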
9. Few Shot Prompt Templates (Video lesson)
Overview:
The video covers using few-shot prompt templates in Flowise to provide examples that customize the language model response. It shows an example of translating to pirate language by giving sample English and pirate phrase translations. These are formatted as a template that the model uses to respond to new inputs. The video walks through the format with prefix, suffix, separator, and input value slots. It explains providing multiple examples trains the model on the desired response style. Overall, the video demonstrates leveraging few-shot learning in prompts to easily customize model behavior for applications like translation, writing, social media generation, and more.
Topics covered in the video:
- Few-shot prompt templates in Flowise
- Providing example phrases to customize the response
- Translation to pirate language example
- Giving sample English and pirate translations
- Formatting as a template for model
- Prefix, suffix, separator, input value slots
- Multiple examples train models on style
- Customizing model behavior with examples
- Applications like translation, writing, social media
- Mimicking previous writing style with examples
- Easy customization through few-shot learning
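In Python LangChain the same idea is expressed with FewShotPromptTemplate: example pairs plus a prefix, suffix, and separator are assembled into one prompt. A minimal sketch with made-up pirate examples:

```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

examples = [
    {"english": "Hello, friend!", "pirate": "Ahoy, matey!"},
    {"english": "Where is the treasure?", "pirate": "Where be the booty?"},
]
example_prompt = PromptTemplate(
    input_variables=["english", "pirate"],
    template="English: {english}\nPirate: {pirate}",
)

few_shot = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Translate English into pirate speak, following the examples.",
    suffix="English: {text}\nPirate:",
    input_variables=["text"],
    example_separator="\n\n",
)
print(few_shot.format(text="Good morning, captain."))
```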
10. Simple LLM Chain (Video lesson)
Overview:
The video explains chains as a core component of LangChain that links together different steps like combining a prompt template and user input. It shows how a chain bundles multiple commands into a single block, demonstrated through a simple LLM chain that takes a prompt and sends it to a large language model. The video walks through building this in Flowise by connecting an LLM chain block to an OpenAI model and a prompt template. The same is shown in Langflow by linking an LLM chain to OpenAI and a prompt block. Overall, the video covers how chains allow packaging together different components like templates and AI calls into reusable blocks for building conversational AI apps.
Topics covered in the video:
- Chains as the core of LangChain
- Linking steps like template and input
- Bundling commands into a single block
- Example LLM chain taking a prompt to the LLM
- Building in Flowise with LLM chain + OpenAI + prompt
- Same in Langflow with LLM chain + OpenAI + prompt
- Packaging components into reusable blocks
- Chains combine different steps
- Creating conversational AI apps with chains
- Simple LLM chain for calling an LLM with a prompt
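The LLM chain block bundles a prompt template and a model into one reusable unit; in code that is LangChain's LLMChain. A minimal sketch, assuming an OPENAI_API_KEY is set:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a two-sentence summary about {topic}.",
)
chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)  # prompt + model in one block

print(chain.run(topic="vector databases"))
```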
11. Conversation Chain (Video lesson)
Overview:
The video covers conversational chains in LangChain frameworks like Flowise and Langflow. It explains how these build on simple LLM chains by adding chat models like ChatGPT and memory to enable conversations. The video demonstrates creating a conversation chain in Flowise by connecting blocks for a chat model, memory, and an optional system message. The same is shown in Langflow by linking the chat model and memory blocks. Additional details are provided on how the system message can customize the conversational behavior. Overall, the video shows how conversational chains combine components to easily build chatbot-style conversational AI apps.
Topics covered in the video:
- Conversational chains in LangChain
- Build on LLM chains with chat models and memory
- Enable back-and-forth conversations
- Creating in Flowise with chat model, memory, and system message
- Building in Langflow with chat model and memory
- System message customizes conversation behavior
- Combining components into a conversational chain
- Easily building chatbot-style apps
- Memory enables contextual responses
- Customizing conversations with the system message
- LangChain conversational chains overview
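In code, the conversation chain built here is LangChain's ConversationChain with a memory attached, which is what lets follow-up questions refer back to earlier turns. A minimal sketch:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=ChatOpenAI(temperature=0.7),
    memory=ConversationBufferMemory(),  # stores the chat history between calls
)

print(conversation.predict(input="Hi, my name is Sam."))
print(conversation.predict(input="What is my name?"))  # answered from the stored history
```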
12. API Chain (Video lesson)
Overview:
The video covers API chains in LangChain frameworks like Flowise for calling external APIs. It explains GET vs POST APIs for retrieving vs sending data. Examples are shown of using documentation to call a weather API and an activity suggestion API. The video walks through adding the API chain block, pasting API documentation, and making sample calls. It mentions other provider APIs for content generation, images, and models that can be leveraged via API chains. Overall, it demonstrates the power of API chains to extend conversational AI apps by connecting them to external data sources and services.
Topics covered in the video:
- API chains for calling external APIs
- GET for retrieving data, POST for sending data
- Example of calling weather API with documentation
- Calling activity suggestion API from documentation
- Adding API chain block in Flowise
- Pasting API documentation into a block
- Making GET and POST calls from the API chain
- Other provider APIs for content, images, models
- Extending apps by connecting to external APIs
- Power of API chains for conversational AI
- Using documentation to call any API
- Data sources and services via API chains
- Retrieving and sending data with GET and POST
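The API chain block corresponds to LangChain's APIChain, which reads the pasted API documentation and writes the HTTP call for you. A minimal sketch using the Open-Meteo weather docs bundled with classic LangChain (newer releases may also require a limit_to_domains argument):

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import APIChain
from langchain.chains.api import open_meteo_docs  # bundled documentation for a free weather API

llm = ChatOpenAI(temperature=0)
chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)

# The chain constructs the GET request from the docs, calls the API, and summarizes the result.
print(chain.run("What is the current temperature in Munich, Germany, in Celsius?"))
```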
13. Sequential Chain (Video lesson)
Overview:
The video explains sequential chains in LangChain frameworks like Flowise to chain multiple language model calls. It shows an example flow that takes a user question, generates a response, and uses that to create follow-up tasks. The video walks through configuring this with two LLM chain blocks, sending the first prediction as input to the next prompt template. It covers chaining prompts for story generation, copywriting, social media post creation, and more. Overall, the video demonstrates how sequential chains allow chaining multiple AI calls to take initial user input and progressively generate desired outputs.
Topics covered in the video:
- Sequential chains to chain LLM calls
- Example flow taking questions and generating responses and tasks
- Configuring with two LLM chain blocks
- Sending the first prediction as input to the next prompt
- Chaining prompts for story and content generation
- Copywriting use case generating blog and social posts
- Progressively generating outputs from the initial input
- Leveraging multiple chained AI calls
- Flowise sequential chain overview
- Starting with user questions and chaining responses
- Chaining LLM calls for desired outputs
- Generating stories, copy, and social media posts
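In code, a sequential chain feeds one chain's output into the next. A minimal sketch with two illustrative prompts (a title generator feeding an outline generator):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = ChatOpenAI(temperature=0.7)

title_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a catchy blog title about {topic}."),
)
outline_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a three-point outline for a blog post titled: {title}"),
)

# The first chain's prediction becomes the second chain's input.
pipeline = SimpleSequentialChain(chains=[title_chain, outline_chain], verbose=True)
print(pipeline.run("retrieval-augmented generation"))
```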
14. Router Chain (Video lesson)
Overview:
The video explains router chains in LangChain frameworks like Flowise to route user requests to appropriate chains. It shows an example using a MultiPrompt chain connected to multiple prompt retrievers. Based on the query, it routes to the right prompt and LLM chain. The video walks through configuring distinct system messages for domains like physics, math, and history. It mentions upgrading to router chains attached to full chains versus just prompts. Overall, the video demonstrates how router chains allow efficient routing of user queries to the optimal chain for generating the response.
Topics covered in the video:
- Router chains route requests to appropriate chains
- Example with MultiPrompt chain and prompt retrievers
- Routing query to the right LLM chain based on the prompt
- Distinct system messages for domains
- Physics, math, and history prompts
- Efficiently routing queries to optimal chain
- Generating responses from the right chain
- Flowise router chain overview
- MultiPrompt chain limitations
- Routing user requests with router chains
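The MultiPrompt chain shown here maps to LangChain's MultiPromptChain, which picks the best prompt for each query based on its description. A minimal sketch with two illustrative domains:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.router import MultiPromptChain

prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering physics questions",
        "prompt_template": "You are a physics professor. Answer concisely:\n{input}",
    },
    {
        "name": "history",
        "description": "Good for answering history questions",
        "prompt_template": "You are a historian. Answer concisely:\n{input}",
    },
]

router = MultiPromptChain.from_prompts(ChatOpenAI(temperature=0), prompt_infos, verbose=True)
print(router.run("Why is the sky blue?"))  # should be routed to the physics prompt
```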
15. Retrieval Chain (Video lesson)
Overview:
The video explains retrieval chains in LangChain frameworks like Flowise and Langflow to load documents, embed text, and find answers. It shows setting up a PDF loader, text splitter, embedding model, and vector store to index text. Then using a retrieval QA chain to find answers from the vector store. Comparisons are made between the document loaders and chains in Flowise versus Langflow. The video demonstrates vector stores for storing embedded text and using similarity search to find answers. Overall, it shows how to leverage retrieval chains to build question-answering systems by loading, splitting, embedding, and querying documents.
Topics covered in the video:
- Retrieval chains for loading, embedding, and querying text
- Setting up PDF loader, splitter, embeddings, vector store
- Using retrieval QA chain to find answers
- Comparing Flowise and Langflow chains
- Vector stores for embedded text storage
- Similarity search to find answers in vector space
- Building question-answering systems
- Document loaders to ingest text
- Text splitters to chunk documents
- Embedding models to vectorize text
- Querying the vector store for answers
- LangChain retrieval chain overview
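The full load, split, embed, store, and query pipeline from this lesson looks roughly like this in Python LangChain. The sketch assumes a local PDF file name, the pypdf and chromadb packages, and an OPENAI_API_KEY:

```python
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# 1. Load and split the document into chunks.
docs = PyPDFLoader("example.pdf").load()  # hypothetical file name
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks and store the vectors.
vectorstore = Chroma.from_documents(chunks, OpenAIEmbeddings())

# 3. Answer questions with a retrieval QA chain over the vector store.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)
print(qa.run("What is the document about?"))
```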
16. LangChain Memory Types (Video lesson)
Overview:
The video explains different memory options in LangChain frameworks like Flowise and Langflow. It covers conversation buffer memory which stores all chat history, buffer window memory to limit history size, and conversation summary memory to store a summary. Comparisons are made between the built-in memories in Flowise versus additional options like knowledge graph and entity memory in Langflow. The video also mentions external memory providers that integrate with LangChain. It emphasizes selecting the right memory based on the use case such as avoiding token limits. Overall, the video provides an overview of memory types to maintain conversational context in chatbots.
Topics covered in the video:
- Memory options: ConversationBufferMemory, ConversationBufferWindowMemory, ConversationSummaryMemory, Entity Memory, Conversation Knowledge Graph Memory
- Comparing Flowise built-in vs. Langflow additional memories
- External memory providers integrating with LangChain
- Selecting the right memory based on the use case
- Avoiding token limits with summary memory
- Maintaining conversational context
- Storing chat history for chatbots
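As a small code illustration of choosing a memory type, the sketch below swaps the default buffer for a window memory that only keeps the last few exchanges, one way to stay under token limits:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last 3 exchanges; ConversationSummaryMemory would instead store a running summary.
conversation = ConversationChain(
    llm=ChatOpenAI(temperature=0),
    memory=ConversationBufferWindowMemory(k=3),
)
conversation.predict(input="Let's talk about vector stores.")
```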
17. Summarization Use Case (Video lesson)
Overview:
The video explains different techniques for document summarization using LangChain. It covers stuff, refine, map-reduce, and map-rerank chains available in Python LangChain. Examples demonstrate summarizing a Constitution document using Chroma vector store and retrieval QA chain in Langflow. The video also mentions the Cohere summarize API as an easy alternative for long summaries. Overall, it provides an overview of current options for generating summaries from lengthy documents with LangChain frameworks.
Topics covered in the video:
- Summarization techniques: stuff, refine, map-reduce, map-rerank
- Examples with Constitution document using Chroma and QA chain
- Stuff and map-reduce work for full document summarization
- Cohere API as an easy option for long summaries
- Current LangChain summarization options and Python LangChain summarization chains
- Testing different techniques based on the use case
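In Python LangChain these techniques are exposed through load_summarize_chain. A minimal map-reduce sketch, assuming a local text file name as a stand-in for the Constitution document used in the lesson:

```python
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain

docs = TextLoader("constitution.txt").load()  # hypothetical file name
chunks = RecursiveCharacterTextSplitter(chunk_size=2000).split_documents(docs)

# "map_reduce" summarizes each chunk, then summarizes the summaries; try "stuff" or "refine" too.
chain = load_summarize_chain(ChatOpenAI(temperature=0), chain_type="map_reduce")
print(chain.run(chunks))
```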
18. Agents Intro (Video lesson)
Overview:
The video introduces agents in LangChain as an exciting capability for task automation. Agents can perform retrieval, run chains, search the internet, do math, and more based on the tools provided, without hardcoded logic. Comparisons are made between configuring agents in Langflow and Flowise, and a video by Harrison Chase is recommended for a deeper explanation of agents. The next step is to gain an understanding from that video, then follow along building agent apps in Langflow and Flowise, which is covered in this module. Overall, the video sets up agents as a powerful concept for flexible task automation and points to additional resources on capabilities and best practices.
Topics covered in the video:
- Introducing agents for task automation
- Performing retrieval, chains, search, and math without hardcoded logic
- Configuring agents in Langflow and Flowise
- Recommended video by Harrison Chase on agents
- Understanding agents from the video first
19. Agent Types and Usage in Flowise (Video lesson)
Overview:
The video explains different agent architectures in LangChain like ReAct, Conversational Agent, AutoGPT, and Baby AGI. It shows configuring the ReAct-based MRKL agent in Flowise with tools like a calculator and web browser. The conversational agent is demonstrated with the addition of memory for context. Examples are provided of agents built with tools and chains. AutoGPT uses a vector store for long-term memory while Baby AGI relies on task engines connected to chat models. Overall, the video covers the progression of agents from reasoning and acting to adding conversation and memory capabilities.
Topics covered in the video:
- Agent architectures: ReAct, conversational, AutoGPT, Baby AGI
- Configuring the MRKL agent in Flowise
- Adding calculator and web browser tools
- Example agents using tools and chains
- AutoGPT with vector store for long-term memory
- Baby AGI with task engines and chat models
- Progression from reasoning and acting
- Adding conversation and memory capabilities
- Memory provides conversational context
- Building capable assistants with agents
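A code-level sketch of the MRKL-style agent built in this lesson: a zero-shot ReAct agent that reasons about which tool to use before acting. It assumes an OPENAI_API_KEY plus the wikipedia package for the second tool:

```python
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = ChatOpenAI(temperature=0)
tools = load_tools(["llm-math", "wikipedia"], llm=llm)  # calculator-style math tool + Wikipedia search

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # reason about the tools, then act
    verbose=True,  # prints the thought process, similar to the agent traces shown in the course
)
agent.run("What is 17 raised to the power of 0.43?")
```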
20. Agent Tools and Toolkits in LangFlow (Video lesson)
Overview:
The video covers agent tools and toolkits in Langflow, including the calculator, JSON, vector store, and Python function agents. It explains the agent initializer block with zero-shot ReAct vs conversational ReAct options. Tools like calculators and toolkits for specific tasks are demonstrated. The thought process showing which tools were selected is a benefit of Langflow agents. Comparisons are made between Flowise and Langflow agents, which work similarly but with some configuration differences. Overall, the video shows the power of agents for task automation while noting they may be slower than chains because of the extra tool-reasoning step.
Topics covered in the video:
- Langflow agent tools and toolkits like calculator, JSON, vector store, Python function
- Agent initializer block with zero shot or conversational ReAct
- Showing the tools selected in the thought process by the agent
- Comparing Flowise and Langflow agents
- Power of agents for task automation
- Various example agents in Langflow
- Reasoning through tools before acting
21. Zapier Tool (Video lesson)
Overview:
In this lesson, you'll discover how to seamlessly combine Flowise and Zapier, enabling you to automate a multitude of tasks across various platforms. We'll take you through the step-by-step process of setting up Zapier, configuring actions, and connecting with external providers to automate actions like never before.
Topics covered in the video:
- Integrate Flowise and Zapier to amplify workflow automation.
- Set up your account at Zapier NLA for seamless automation.
- Configure actions in Zapier by selecting providers, inputs, and terms.
- Explore diverse external providers for effortless automated actions.
- Automate email sending with providers like SendGrid for efficient communication.
- Utilize Zapier templates for real-world data retrieval and email automation.
- Envision automation possibilities, from trending topics to diverse platform integration.
- Leverage APIs for streamlined data exchange and automation expansion.
- Create revenue streams with small-scale automated applications.
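For reference, classic Python LangChain exposed the same Zapier NLA integration as a toolkit for agents; the sketch below follows that pattern. It assumes a ZAPIER_NLA_API_KEY with actions already enabled in the Zapier dashboard, and note that Zapier's NLA offering may have changed since the course was recorded:

```python
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.utilities.zapier import ZapierNLAWrapper

zapier = ZapierNLAWrapper()  # reads ZAPIER_NLA_API_KEY from the environment
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)

agent = initialize_agent(
    toolkit.get_tools(),  # one tool per action enabled in your Zapier NLA account
    ChatOpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("Send an email to myself summarizing today's top AI news headline.")
```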
22. Exploring Retrieval Nodes in Flowise (Video lesson)
Overview:
The video provides an overview of LangChain integrations by walking through a document retrieval example with Pinecone vector store in Flowise. It loads a PDF document, splits text, embeds vectors, and upserts to Pinecone for indexing. Then a separate flow loads vectors from Pinecone to find answers with a retrieval QA chain. Details are covered like configuring index parameters, changing document return options, and fixing language issues. The video emphasizes the vast integrations as a benefit of LangChain for building applications.
Topics covered in the video:
- LangChain integrations overview
- Document retrieval example with Pinecone
- Loading, splitting, and embedding PDF
- Upserting vectors to Pinecone
- Separate flow to load and query index
- Configuring Pinecone index parameters
- Changing document return options
- Fixing language response issues
- Vast integrations as LangChain benefit
- Building apps with many integrations
- Querying indexed documents for answers
- Tuning parameters for optimal performance
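The upsert-then-query pattern from this lesson looks roughly like this with the classic pinecone-client (v2) and LangChain's Pinecone wrapper; the index name, environment, and file name are placeholders, and the index must already exist with a dimension matching the embedding model:

```python
import os
import pinecone
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment="us-east-1-aws")  # placeholder environment

# Upsert flow: load, split, embed, and write vectors into the index.
docs = PyPDFLoader("example.pdf").load()  # hypothetical file name
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
index = Pinecone.from_documents(chunks, OpenAIEmbeddings(), index_name="langchain-course")

# Query flow: search the same index for relevant chunks.
results = index.similarity_search("What is this document about?", k=3)
print(results[0].page_content)
```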
23. Text Files for Retrieval (Video lesson)
Overview:
In this video, we explore the utilization of text files within the Flowise environment for automation and retrieval. The video demonstrates the process of connecting a text file, specifically "The Hitchhiker's Guide to the Galaxy," to Flowise, executing an upsert document block, and making queries to extract information. The video showcases how to interact with text files, manage namespaces, and retrieve responses seamlessly.
Topics covered in the video:
- How to integrate text files into Flowise for automation and retrieval.
- Save a PDF and switch to a text file within the "retrieval test" block.
- Demonstrate the connection of a text file, detailing the requirement of uploading the file and additional parameters.
- Use "The Hitchhiker's Guide to the Galaxy" in text format as an example for the demonstration.
- Execute the upsert document block to incorporate the uploaded text file and create a new namespace.
- Understand the process of managing namespaces and naming conventions.
- Clear the chat and pose a question related to the uploaded text file to trigger the upsert process.
- Observe the successful upsert of the document and the resulting response.
- Learn how to modify parameters within the "load test" to target specific namespaces and retrieve responses.
- Grasp the concept of querying source documents to understand response origins.
24. Web Data, GitHub Repo Loader, and Qdrant Vector DB (Video lesson)
Overview:
This video explores various document loaders within Flowise, delving into their capabilities and use cases. The video demonstrates the process of utilizing different loaders, highlighting their functionalities while focusing primarily on employing these loaders for upserting to Pinecone, a vector store. The video also walks through examples of using web document loaders, GitHub repository loaders, and other common loaders, while showcasing the shift to Qdrant as an alternative vector store.
Topics covered in the video:
- Engage with multiple document loaders in Flowise, expanding your knowledge of their potential.
- Focus on practical usage of document loaders for upserting to Pinecone, streamlining response retrieval.
- Demonstrate the functionality of web document loaders, including connecting to web scrapers like Cheerio for data extraction.
- Execute upsert actions using a web document loader, utilizing an OpenAI blog as an example.
- Experience GitHub repository loaders, beneficial for comprehending unfamiliar codebases.
- Interact with the GitHub repository loader.
- Highlight the utility of the Qdrant vector store for embedding retrieval and indexing.
- Grasp the concept of refining document chains and observe potential errors.
- Understand the In-Memory Vector Store for instant embeddings without database upsert.
- Recognize the suitability of different vector stores based on application requirements.
- Anticipate updates and expansions to the list of available vector store providers.
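Flowise uses the Cheerio scraper in this lesson; the closest Python LangChain analogue is WebBaseLoader, shown below feeding an in-memory Qdrant collection. The URL is a placeholder and the sketch assumes the beautifulsoup4 and qdrant-client packages:

```python
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

# Scrape a page, split it, and index it in an in-memory Qdrant collection.
docs = WebBaseLoader("https://openai.com/blog").load()  # placeholder URL
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

qdrant = Qdrant.from_documents(
    chunks,
    OpenAIEmbeddings(),
    location=":memory:",        # swap for a Qdrant server URL outside of quick tests
    collection_name="web_docs",
)
print(qdrant.similarity_search("What does the page announce?", k=2)[0].page_content)
```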
25. Embeddings (Video lesson)
Overview:
In this video, we dive into testing different embedding models to enhance document retrieval and response generation within Flowise. The video takes you through the process of setting up a Cohere embedding model, incorporating it with OpenAI's models, and comparing the responses generated. The video demonstrates the seamless integration of different embedding models for document retrieval and answer formulation, enhancing the understanding of how to optimize response quality.
Topics covered in the video:
- Explore the integration of various embedding models for enhanced document retrieval and response generation.
- Begin with an upsert chain template and configure settings for embedding model testing.
- Demonstrate the replacement of a PDF with a text file, "The Hitchhiker's Guide to the Galaxy."
- Utilize the in-memory vector store for efficient local testing.
- Integrate Cohere's embedding model for document embedding and retrieval.
- Employ Cohere's English and multilingual embedding models for comprehensive language support.
- Add the Cohere API key and OpenAI API key for dual-model usage.
- Utilize OpenAI's "chat" model for final response generation.
- Configure temperature settings for optimal response generation.
- Compare response formulations between Cohere and OpenAI, noting similarities and differences.
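A quick code illustration of swapping embedding providers: both classes expose the same embed_query interface, but the vectors differ in dimensionality, which is why the vector store's index must match the chosen model. Assumes OPENAI_API_KEY and COHERE_API_KEY are set:

```python
from langchain.embeddings import OpenAIEmbeddings, CohereEmbeddings

text = "Don't panic, and always carry a towel."

openai_vec = OpenAIEmbeddings().embed_query(text)
cohere_vec = CohereEmbeddings(model="embed-english-v2.0").embed_query(text)

# Vector lengths differ by provider and model (roughly 1536 vs. 4096 here).
print(len(openai_vec), len(cohere_vec))
```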
26. Retrieval Nodes LangFlow (Video lesson)
Overview:
In this video, we delve into the enhanced capabilities of Langflow compared to Flowise, particularly focusing on its advantageous utilization with specific providers like HuggingFace embeddings. The video showcases a retrieval question-answering chain built in Langflow, utilizing various tools including TextLoader, recursive text splitter, and Pinecone Vector Database for storage. The video demonstrates how Langflow seamlessly integrates with HuggingFace embeddings and Cohere-based text generation models, enabling efficient document retrieval and response generation.
Topics covered in the video:
- Explore the enhanced capabilities of Langflow compared to Flowise, particularly with certain providers.
- Demonstrate the construction of a retrieval question-answering chain within Langflow.
- Incorporate TextLoader and recursive text splitter tools in the chain setup.
- Integrate Pinecone Vector Database within Langflow for efficient storage and retrieval.
- Utilize HuggingFace SentenceTransformers and MiniLM models for document embeddings.
- Configure embedding dimensions within Pinecone Vector Database for seamless integration.
- Integrate Cohere-based text generation models within the Langflow chain.
- Discuss the variety of available vector stores in Langflow, including Chroma VectorDB.
- Explore additional loader options available in Langflow for document ingestion.
27. Intro To The Section (Video lesson)
28. Chat with your Documents with Siri (Video lesson)
Overview:
This video demonstrates how to build a Siri voice chat interface using LangChain. The video guides you through the process of constructing a flow that allows users to interact with documents via Siri voice commands. The video shows the various blocks and integrations used with the LangChain platform to create a seamless conversational experience.
Topics covered in the video:
- Understand the concept of building a Siri voice chat interface with LangChain.
- Walk through the steps of setting up a LangChain flow for Siri interaction.
- Utilize the "Ask" block to prompt users for input through Siri voice commands.
- Implement variable settings to capture and store user input for further use.
- Integrate the "Get Contents of URL" block to send API requests to LangChain flows.
- Configure API requests for making POST calls to the LangChain flow endpoints.
- Structure the API request body using JSON formatting to pass queries.
- Explore the possibility of using LangChain for creating various apps and workflows.
29. Connecting Flowise to Bubble No-Code App Builder (Video lesson)
Overview:
This video demonstrates how to build a chat app with PDF integration using LangChain and Bubble. The app allows users to upload documents, search for documents, and ask questions directly. The video covers various scenarios, including embedding the chat widget on a website, creating a shared document search space for an organization, and implementing individual user-specific document partitions.
Topics covered in the video:
- Building a chat app with PDF integration using LangChain and Bubble.
- Using LangChain's open-source library to create search applications for PDFs and documents.
- Creating a chat widget for websites.
- Understanding the document Q&A system with ingestion and search components.
- Using the Flowise canvas for visualizing and configuring workflows.
- Utilizing API keys for OpenAI and Pinecone services.
- Uploading documents and extracting text from PDFs.
- Converting text chunks into vectors and storing them in Pinecone vector databases.
- Performing search queries on uploaded documents using the Flowise app.
- Building workflows for different scenarios: single namespace, shared namespace, and individual user partitions.
- Embedding the chat widget on a website and interacting with it.
- Filtering documents based on metadata and user-specific partitions.
30. Connecting LangFlow to Bubble No-Code App Builder (Video lesson)
Overview:
This video demonstrates how easily LangFlow can be used. The focus is on scenarios involving document upserting, retrieval, and filtering using LangFlow's API integration with Bubble. The video showcases how to configure and use LangFlow to handle document ingestion, conversion, and querying.
Topics covered in the video:
- Introduction to LangFlow and its capabilities.
- Demonstrating the use of community examples and combining blocks.
- Using LangFlow apps with API keys, model selection, and running flows.
- Overview of document ingestion and retrieval processes.
- Using embeddings and vector databases (Pinecone) for document storage.
- Configuring and utilizing the Retrieval QA chain for question-answering.
- Building workflows to upsert documents and perform queries.
- Setting up scenarios for different use cases (single namespace, multi-user, filtering).
- Working with Pinecone's metadata tags for document filtering.
- Integrating LangFlow APIs with no-code platforms like Bubble.
- Configuring API calls for document upserting and querying.
- Creating chat interfaces with user-specific namespaces and metadata filtering.
- Exploring various scenarios and possibilities for application development.