How to Build No-Code AI Agents Using Flowise AI
You can build powerful AI agents in Flowise AI without writing any code using its visual drag-and-drop interface. In this tutorial, you’ll follow 10 steps to create a complete movie recommendation agent that can search a database and chat with users naturally.
What you’ll accomplish:
- Build a working AI agent in under 30 minutes
- Connect your agent to external databases
- Deploy and test your agent with real conversations
- No programming experience required
Let’s understand what Flowise AI is and explore each step in detail.
What is Flowise AI?
Flowise AI is an open-source, no-code platform that lets users visually build AI workflows using drag-and-drop components. It sits on top of LangChain, a popular framework for building LLM (Large Language Model) applications. It allows us to design, test, and deploy AI agents effortlessly.
In the next section, we’ll learn how to build a movie search assistant using Flowise AI.
Build a movie search assistant using Flowise AI
Before diving into the process, let’s have a look at the prerequisites:
- Flowise AI account (create one on the official website)
- SingleStore account (create one on the official website)
- OpenAI API key (create one on the official website)
Once these prerequisites are satisfied, we start the process.
Step 1: Upload a dataset
The first step is uploading a dataset to SingleStore that the agent will use for generating responses to our queries. For this tutorial, we’ll be using the IMDB TV Show Reviews dataset on Kaggle. To upload it to SingleStore, follow these steps:
Step 1: Navigate to the IMDB TV Show Reviews dataset page on Kaggle
Step 2: Download the dataset
Step 3: Extract the .csv file from the dataset and rename it to movies
Step 4: Go to Load Data > Load file in SingleStore and upload the file

After uploading, SingleStore will automatically create a database table from the dataset.
Next, navigate to Deployments > Connect > Your App and copy the connection string shown there for later use:

Step 2: Create a new chatflow
In the Flowise AI dashboard, go to Chatflows > Add New and create a new chatflow:

Step 3: Create a custom function
After creating the chatflow, we’ll be presented with a blank canvas, where we can build our agent.
The first step is to create a custom function, which is a powerful node that allows us to write and execute our own JavaScript code directly within our AI workflow.
To configure a custom function, follow these steps:
Step 1: Navigate to Add Nodes > Utilities and drag the Custom JS Function node onto the canvas

Step 2: Add this code to the block titled JavaScript Function
```js
const mysql = require('mysql2/promise');

const tableName = $tableName;
const tableDescription = $tableDescription;
const connectionUrl = "<Connection String>";

async function main() {
  try {
    const pool = mysql.createPool(connectionUrl);
    const q = `DESCRIBE ${tableName};`;
    const [rows] = await pool.query(q);
    const fields = rows
      .map((row) => `${row.Field} of type ${row.Type}`)
      .join("\n");
    const tableInformation =
      `Table name: ${tableName}\n` +
      `Table description:\n${tableDescription.trim()}\n` +
      `Columns:\n${fields.trim()}`;
    return tableInformation;
  } catch (error) {
    return `Error: ${String(error)}`;
  }
}

return main();
```
The code connects to a MySQL database, retrieves the schema of the database table, and formats the table name, description, and column details (name and type) into a readable string. If any error occurs during the process, it returns an error message.
In this code, we need to replace <Connection String> with the connection string that we copied earlier.
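To see what this formatting step produces without connecting to a database, here is a standalone sketch that applies the same mapping to hypothetical `DESCRIBE` output for a `movies` table (the column names and types below are illustrative, not taken from the actual dataset):

```javascript
// Hypothetical rows, shaped like the output of MySQL's DESCRIBE statement
const rows = [
  { Field: "title", Type: "text" },
  { Field: "release_year", Type: "int(11)" },
  { Field: "imdb_rating", Type: "decimal(3,1)" },
];

function describeTable(tableName, tableDescription, rows) {
  // Mirror the mapping the custom function performs on DESCRIBE rows
  const fields = rows
    .map((row) => `${row.Field} of type ${row.Type}`)
    .join("\n");
  return (
    `Table name: ${tableName}\n` +
    `Table description:\n${tableDescription.trim()}\n` +
    `Columns:\n${fields.trim()}`
  );
}

console.log(describeTable("movies", "IMDB TV show reviews", rows));
```

The resulting string is exactly what the downstream prompt template receives as table context.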
Step 3: Click on Input Variables

Step 4: There are two input variables for this custom function, named tableName and tableDescription.
Let’s add them with their values to the Input Variables block:

Step 4: Create a prompt template
A prompt template is a customizable block that defines how our LLM should be prompted when processing user inputs. It acts as the instruction layer for our AI agent—guiding what context to provide, what kind of response to generate, and how to format the input-output flow.
To configure a prompt template, follow these steps:
Step 1: Type prompt template in the search box and drag the Langchain > Prompt Template node onto the canvas
Step 2: Link the Output node of the custom function to the Format Prompt Values node of the prompt template

Step 3: Add this prompt to the block titled Template
```
Depending on the table information and the user question, return a SQL query that answers that question.

TABLE INFORMATION: {tableInformation}
QUESTION: {question}
```
Step 4: Click on Format Prompt Values

Step 5: There are two values used for formatting, named tableInformation and question.
Let’s add them with their values to the Format Prompt Values block:
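Flowise performs this substitution internally, but as a rough illustration of what the LLM ultimately receives, here is a sketch of placeholder filling (`fillTemplate` is a hypothetical helper for demonstration, not a Flowise API):

```javascript
// Substitute {placeholders} in a template with configured values,
// leaving unknown placeholders untouched
function fillTemplate(template, values) {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in values ? values[key] : match
  );
}

const template =
  "Depending on the table information and the user question, " +
  "return a SQL query that answers that question.\n" +
  "TABLE INFORMATION: {tableInformation}\n" +
  "QUESTION: {question}";

const prompt = fillTemplate(template, {
  tableInformation: "Table name: movies",
  question: "Which thrillers were released after 2015?",
});

console.log(prompt);
```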

Step 5: Create an LLM chain block
An LLM chain block is a modular node used to build a chain of prompts and logic around an LLM from a provider like OpenAI, Cohere, or Anthropic.
To configure an LLM chain block, follow these steps:
Step 1: Type llm chain in the search box and drag the Langchain > LLM Chain node onto the canvas

Step 2: Type openai in the search box and drag the Langchain > ChatOpenAI node onto the canvas

For the ChatOpenAI node:
- Add your OpenAI API key to the Connect Credential box
- Choose your model in the Model Name box (gpt-4o-mini in this case)
- Leave the other options as they are
Then, connect all the existing nodes like this:

Step 6: Create a set variable block
A set variable block is used to store data in a named variable that can be reused later in the flow. It acts like a bookmark or scratchpad for passing values between nodes.
To configure a set variable block, follow these steps:
Step 1: Locate Set Variable in the Utilities tab and drag it onto the canvas

Step 2: Insert query as the variable name
Step 3: Connect the LLM chain node to the set variable block

Step 7: Create another custom function
Create another custom function with this code:
````js
const mysql = require('mysql2/promise');

const connectionUrl = "<Connection String>";

function formatQuery() {
  const q = `$query`;
  let lines = q.trim().split(/\r\n|\r|\n/);
  // Strip Markdown code fences if the LLM wrapped the query in them
  if (lines[0].startsWith("```")) {
    lines = lines.slice(1, lines.length - 1);
  }
  return lines.join("\n").trim();
}

const q = formatQuery();

try {
  const pool = mysql.createPool(connectionUrl);
  const [rows] = await pool.query(q);
  return rows;
} catch (error) {
  return `Query: ${q}\nError: ${String(error)}`;
}
````
The code connects to a MySQL database using a connection string and executes a SQL query defined in the query variable. It formats the query by trimming extra lines, then runs it using a connection pool. If successful, it returns the query results; if there’s an error, it returns the query and an error message.
In this code, we again need to replace <Connection String> with the connection string that we copied earlier.
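The fence-stripping step matters because LLMs often wrap generated SQL in Markdown code fences, which would break execution if sent to the database verbatim. Here is a standalone, testable version of that step (rewritten to take the raw query as a parameter instead of the `$query` flow variable):

````javascript
// Remove surrounding Markdown code fences from an LLM-generated SQL query
function formatQuery(raw) {
  let lines = raw.trim().split(/\r\n|\r|\n/);
  if (lines[0].startsWith("```")) {
    // Drop the opening fence line (e.g. ```sql) and the closing fence
    lines = lines.slice(1, lines.length - 1);
  }
  return lines.join("\n").trim();
}

const wrapped = "```sql\nSELECT * FROM movies;\n```";
console.log(formatQuery(wrapped)); // SELECT * FROM movies;
````

A plain, unfenced query passes through unchanged, so the function is safe to apply to every response.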
Here is the only input variable for this function:

Step 8: Create another prompt template
Create another prompt template with this prompt:
```
Depending on the user question and the query result, generate an answer to the user's question. Always show the query to the user.

QUERY: {query}
QUERY RESULT: {queryResult}
QUESTION: {question}
```
Here are the prompt values for this prompt template:

Then, connect the set variable block, the newer custom function block, and this prompt template:

Step 9: Create another LLM chain block
Finally, create another LLM chain block and connect it to the flow:

With this, we’re done creating our agent. This is the final flow that the agent will use to generate responses to our queries.
Step 10: Chat with the agent
Now, it’s time to chat with the agent. Click on the chat icon at the top-right corner of the screen:
Upon clicking, a chat box will open. Here, we can interact with our movie search assistant:
Here is a sample interaction:
Agent: Hi there, how can I help?

User: Hi! I'm looking for some good thriller shows released after 2015. Can you help?

Agent: Of course! Let me filter some thriller shows released after 2015 for you... Here are a few recommendations:
- The Night Manager (2016) – IMDb: 8.1
- Mindhunter (2017) – IMDb: 8.6
- Ozark (2017) – IMDb: 8.5
- Bodyguard (2018) – IMDb: 8.1

Would you like to narrow it down by platform?

User: Yes, show me only the ones available on Netflix.

Agent: Sure! Among the previous list, these are available on Netflix:
- Mindhunter (2017)
- Ozark (2017)
- Bodyguard (2018)

Want to see trailers or more info on any of them?

User: Can you show me only those shows that are less than 45 minutes per episode?

Agent: Let me filter that for you... Here are thriller shows after 2015, on Netflix, and under 45 minutes per episode:
- Bodyguard (2018) – 42 min – IMDb: 8.1
Now that we’re done creating our movie search assistant, let’s explore the differences between Flowise AI and one of its competitors, Dify AI.
Flowise AI vs. Dify AI
While both Flowise AI and Dify AI are powerful tools for building AI agents, they cater to different user needs. Here’s a comparison to help you decide:
| Feature | Flowise AI | Dify AI |
|---|---|---|
| Type | No-code visual platform | Low-code platform |
| Open source | Yes | Yes |
| Based on | LangChain | LangChain + FastAPI |
| UI/UX | Flow-based visual builder | Modern interface with apps |
| Ideal for | Beginners & visual thinkers | Developers & product teams |
| LLM integrations | OpenAI, Cohere, etc. | OpenAI, Anthropic, etc. |
| API extensions | Built-in support | Strong developer focus |
Next, let’s explore some best practices for using Flowise AI.
Best practices for using Flowise AI
To get the best results when building with Flowise AI, consider the following tips:
- Keep prompts clear: The more specific your prompt templates are, the better the LLMs will perform.
- Test iteratively: Make changes and test each node step-by-step to avoid complex debugging.
- Use variables: Leverage dynamic variables in prompts to make your agents smarter.
- Secure your keys: Always store your API keys securely in environment variables.
- Optimize for UX: Customize the chat UI and response formats to match your audience’s needs.
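On securing keys: one common pattern when self-hosting Flowise or writing custom functions is to read credentials from environment variables rather than hard-coding them. A minimal sketch (the variable name `OPENAI_API_KEY` is an assumption for illustration):

```javascript
// Read an API key from the environment instead of hard-coding it.
// Accepting the environment as a parameter keeps the function testable.
function loadApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    // Fail fast with a clear message rather than sending empty credentials
    throw new Error("OPENAI_API_KEY is not set");
  }
  return key;
}

// Example: inject a fake environment for demonstration
console.log(loadApiKey({ OPENAI_API_KEY: "sk-demo" })); // sk-demo
```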
Following these best practices will help us build robust and reliable AI agents faster.
Conclusion
In this tutorial, we discussed how to build a movie search assistant using Flowise AI. We covered what Flowise AI is, how it compares to Dify AI, and some best practices for using it efficiently.
Flowise AI is democratizing AI development. With its open-source nature, ease of use, and integration with powerful language models, it’s a go-to tool for anyone—from solopreneurs to enterprise teams—looking to leverage the power of AI without the complexity.
If you want to learn more about generative AI, check out the Intro to Generative AI course on Codecademy.
Frequently asked questions
1. Is Flowise AI free?
Yes. The open-source version of Flowise AI is free to use for anyone interested in building AI agents, and you can self-host and modify it as needed.
2. Which is better, LangFlow or Flowise AI?
Both are excellent tools, but Flowise AI offers a more polished and feature-rich interface. It also supports a larger set of integrations and is better suited for building production-ready agents.
3. Is Flowise AI open-source?
Yes. Flowise AI is fully open-source and hosted on GitHub, allowing anyone to contribute or deploy it on their own servers.
4. Why use Flowise AI?
Use Flowise AI because it empowers anyone to build AI workflows visually—no code, no complexity, just results. It’s ideal for prototyping, experimentation, and even real-world deployment.
5. Can I build an AI agent without coding?
Yes, you can build AI agents without coding using platforms like Flowise AI. Flowise AI provides a visual drag-and-drop interface that lets you create sophisticated AI agents by connecting pre-built components. No programming knowledge is required: you simply drag nodes onto a canvas, configure them through forms, and connect them to create your agent’s logic.
The Codecademy Team, composed of experienced educators and tech experts, is dedicated to making tech skills accessible to all. We empower learners worldwide with expert-reviewed content that develops and enhances the technical skills needed to advance and succeed in their careers.