“Basic mode” offers a simplified way to configure your AI agent, designed for teams who want faster setup and fewer customizations. Instead of writing your own prompts, you’ll choose from predefined options for each configuration, including instructions, actions, and tone settings.
This mode is ideal for users who are new to prompt design or prefer guided configurations over custom behavior.
Benefits of using Basic mode
- Faster setup: Choose from pre-filled, best-practice templates without needing to write your own prompts.
- Easier onboarding for new users: Ideal for non-technical users or teams just getting started with AI agent setup.
- Standardized agent behavior: Ensures more consistent performance across different teams or use cases.
Enabling and disabling “Basic mode”
You can follow the steps below to enable or disable “Basic mode”:
- In the AI agent settings page, you will find the “Advanced mode” toggle at the bottom left corner of the page
- Toggle this setting on or off to change the configuration mode
- When Advanced mode is turned ON, the AI agent enters Advanced mode, allowing you to edit all prompt fields manually.
- When Advanced mode is turned OFF, the AI agent is in Basic mode, and all prompts will be replaced with pre-filled, non-editable options, including instructions and actions.
⚠️ Important:
- If you’ve made any changes to prompts in Advanced mode and switch back to Basic mode, your custom prompt content will be cleared.
- If you turn Advanced mode back on later, all prompts will be reset to default.
What you can configure in “Basic mode”
Once you’ve selected Basic mode, you can continue to set up your AI agent using simplified options across each configuration tab. The setup process remains the same, but instead of editing prompts manually, you’ll select from pre-filled templates.
Here’s what you can configure:
Knowledge base
- Add data sources that your AI agent will reference when generating replies.
- You can upload files, import website URLs, or select existing sources from your Global knowledge base.
- You can learn more about how to manage and add data sources in our Help Center article on Adding data source to your AI agent.
Instructions
- Set your agent’s tone, purpose, and fallback behavior using prefilled options.
- In Basic mode, you simply choose “Instructions” and “Guardrails” from a guided list of tone options and agent objectives.
Actions
- Decide how your AI agent responds to users, exits conversations, or scores leads.
- For “Basic support” templates, the “Calculate lead score” action is toggled off by default
- For “Sales growth” templates, the “Calculate lead score” action is toggled on by default
- Each action (for example, Send message, Exit conversation, or Calculate lead score) comes with preset logic and tone options. Here’s what each action includes:
“Send message” action
This is the core action that allows your AI agent to reply to customer messages using connected knowledge base sources.
In Basic mode, you can configure:
- Response type: Choose how your AI prioritizes speed vs. response depth:
  - Prioritize speed: Uses knowledge retrieval to generate fast replies. Best for simple, direct questions or high-volume support workflows.
  - Prioritize response quality: Uses advanced checks and personalization for smarter replies. Prioritizes helpfulness, nuance, and brand tone, making it better for complex queries or sales-focused use cases.
- Message tone of voice: Select how the AI sounds when replying to customers. This setting controls phrasing, sentence structure, and fallback behavior. Available tone options include:
  - Professional and friendly
  - Persuasive and confident
- Trigger condition: This action is automatically triggered when a customer sends a message. You won’t need to configure conditions in Basic mode.
“Calculate lead score”
Note: The “Calculate lead score” action is toggled off by default if you chose the “Basic support” template when creating an AI agent.
The “Calculate lead score” action helps you identify high-quality leads by analyzing what a customer says — such as interest, urgency, or buying intent. It’s especially useful for sales-focused agents who need to filter serious prospects from general enquiries.
In “Basic mode”, you won’t need to write any custom scoring prompts or logic. Instead, you’ll select from a list of preset scoring criteria and assign a weight (%) to each one, indicating how much it contributes to the overall lead score.
The AI agent will calculate a score between 0 and 100 based on the customer’s message. This score can then be used in downstream actions like Exit conversation or internal handoff.
A few key things to note about Basic mode:
- The trigger condition is fixed: The score is always calculated automatically when the customer replies.
- The scoring logic is pre-set: You won’t be able to edit how each criterion is detected, but you can decide how important each one is by adjusting its weight.
- The AI agent always outputs a single numeric score (0–100): You can use this score to trigger other actions, but not to directly influence how the score is derived.
Once you’ve set your criteria and weights, make sure the total adds up to exactly 100% — otherwise, the system won’t generate a score.
You can follow the steps below to add criteria in the “Calculate lead score” action:
- In the “Calculate lead score” section, click the “Add criteria” button
- A new criteria section will appear. You will be required to:
  - Set a weight (%): Determines how important that factor is in the overall score.
  - Select a predefined criterion from the dropdown list. Options include:
    - Shows signs of interest or compares with competitors. For example: “How does this compare to Brand X?”
    - Mentions budget, timeline, or shows readiness to buy. For example: “We need this by end of next week. What’s the cost?”
    - Asks detailed, informed, or follow-up questions. For example: “Can this integrate with our existing CRM?”
    - Expresses positive or enthusiastic tone. For example: “This looks really promising, I’m impressed”
- You can use the “+ Add criteria” button to include more scoring factors. Each one must have a valid weight and a selected criterion.
⚠️ Important: The total weight across all active criteria must equal exactly 100%. The system will not calculate a lead score otherwise.
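While the scoring logic itself isn’t editable in Basic mode, the weighting described above works like a weighted sum. The sketch below is purely illustrative: the criterion names, detection strengths, and weights are hypothetical examples, and the actual detection of each criterion is handled internally by the AI agent.

```python
# Illustrative sketch only — not the product's implementation.
# Each criterion gets a detection strength between 0.0 and 1.0 (how strongly
# the customer's message matches it), then strengths are combined by weight.

def lead_score(detections: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-criterion detection strengths (0.0-1.0) into a 0-100 score."""
    total_weight = sum(weights.values())
    if round(total_weight) != 100:
        # Mirrors the product rule: no score is generated unless weights total 100%.
        raise ValueError(f"Weights must total exactly 100%, got {total_weight}%")
    return sum(detections.get(name, 0.0) * weight for name, weight in weights.items())

# Hypothetical criteria and weights for a sales-focused agent.
weights = {
    "interest_or_competitor_comparison": 40,
    "budget_timeline_readiness": 35,
    "detailed_follow_up_questions": 25,
}
# Example message showing strong buying intent and some interest.
detections = {
    "interest_or_competitor_comparison": 0.5,
    "budget_timeline_readiness": 1.0,
    "detailed_follow_up_questions": 0.0,
}
print(lead_score(detections, weights))  # → 55.0
```

Adjusting a criterion’s weight changes how much its detection moves the final 0–100 score, which is exactly what the weight (%) field controls in the UI.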
“Exit conversation”
The Exit conversation action allows your AI agent to leave the conversation when certain conditions are met. This ensures a smooth handoff to human agents or a clean exit when the AI can no longer assist confidently.
In Basic mode, you can’t create custom logic or triggers, but you can choose from a predefined list of exit conditions. The AI agent will exit the conversation if any one of the conditions is met.
Note: Exit conditions are based on signals from the customer’s message — not external factors like user profile or contact history.
In Basic mode, the following three conditions are included by default:
- Confidence is low
  - The AI agent will exit when it cannot respond confidently based on the knowledge base.
  - This helps avoid misleading replies when the AI is unsure.
- Human agent requested
  - The AI agent will exit when the user explicitly asks to speak with a person.
  - This ensures customers can reach your team when they want real-time human help.
- Keyword detected
  - The AI agent will exit when it detects certain exit-related phrases in the customer’s message (e.g. “real agent”, “can I talk to someone”).
  - The default exit condition is: Exit when the user asks to speak with a human.
You can remove a condition by clicking the icon in the top right corner of that condition.
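The “Keyword detected” condition above can be pictured as a simple phrase scan. This is a simplified stand-in for illustration only; the phrase list is an example and the product’s actual detection may be more sophisticated.

```python
# Simplified illustration of the "Keyword detected" exit condition:
# scan the customer's message for exit-related phrases.
# The phrase list below is an example, not the product's actual list.

EXIT_PHRASES = ("real agent", "can i talk to someone", "speak with a human")

def wants_human(message: str) -> bool:
    """Return True if the message contains an exit-related phrase."""
    text = message.lower()
    return any(phrase in text for phrase in EXIT_PHRASES)

print(wants_human("Can I talk to someone about my order?"))  # → True
print(wants_human("What are your opening hours?"))           # → False
```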
You can add more conditions by following the steps below:
- In the “Exit conversation” action section, click “Add condition” at the bottom
- A new condition will appear, where you can:
  - Name the condition (for internal reference)
  - Choose the condition type, such as:
    - Exit based on message signal
    - Exit based on lead score: Triggered when the customer’s lead score meets a defined threshold. You can select:
      - Is less than
      - Is more than
      - Is between
      Then set the numeric values (for example, 0–30 for low-quality leads, or 80–100 for hot leads ready for sales handoff).
  - Select from the exit condition dropdown — options are predefined and cannot be customized in Basic mode.
“Add label”
The Add label action allows your AI agent to automatically tag conversations based on what the customer says. Labels help your team organize chats, segment contacts, and trigger downstream workflows like follow-ups or CRM updates.
In Basic mode, this action is optional and toggled off by default for both Basic support and Sales growth templates. You can enable it manually if your workflow requires automated labeling.
You can follow the steps below to configure the “Add label” action:
- Toggle on the “Add label” action
- Click “Add condition”
- You will be required to fill in the following details:
  - Trigger condition: A short description of the message pattern or intent that should trigger the label. For example: When a user asks for store location
  - Label to add: Choose from your workspace’s existing labels. The AI agent will apply this label when the trigger condition is met.
- You can add multiple label conditions using the “+ Add condition” button.
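Conceptually, each label condition pairs a trigger with a label to apply. The sketch below approximates this with keyword matching for illustration; in the product, the AI agent interprets the trigger description itself, and the rule names and labels here are hypothetical.

```python
# Simplified stand-in for label conditions: the real product uses the AI agent
# to interpret trigger descriptions; here we approximate with keyword checks.
# Keywords and label names below are illustrative examples.

LABEL_RULES = [
    # (trigger keywords, label to add)
    (("store location", "where are you located"), "store-enquiry"),
    (("refund", "money back"), "refund-request"),
]

def labels_for(message: str) -> list[str]:
    """Return all labels whose trigger keywords appear in the message."""
    text = message.lower()
    return [label for keywords, label in LABEL_RULES
            if any(k in text for k in keywords)]

print(labels_for("Hi, where are you located?"))  # → ['store-enquiry']
```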
Flow deployment
- Deploy your AI agent into one or more conversation flows using Flow Builder.
- This determines when the AI agent enters a chat, which channel it operates in (e.g. WhatsApp), and how it hands off to humans.
Testing out your AI agent
After you’ve configured your AI agent, you can test how it performs before going live. This helps you review how well your prompts and knowledge base are working before deploying the agent in live flows. There are two ways to test:
- Performance testing: Use this feature to bulk-generate sample questions from your linked documents and evaluate performance.
- AI agent playground: Use the Playground to manually send test messages and preview AI responses in real time.
Performance test
Use Performance test to evaluate how well your AI agent can answer questions based on your linked documents, without writing test cases manually.
This tool automatically generates sample questions from your knowledge base, then tests the AI’s responses and highlights any failed or low-confidence answers. It’s useful for validating coverage, checking tone alignment, and spotting gaps before launching your agent.
Benefits and common use cases
Why use Performance testing:
- Validate document coverage: Confirm the AI is using your knowledge base correctly.
- Catch low-confidence replies: Identify where the AI struggles to answer clearly or accurately.
- Avoid manual QA work: No need to write your own test questions — they’re generated for you.
When to use it:
- After uploading FAQ or product documents
- After editing instructions or guardrails
- Before publishing your agent to customers
How to run a performance test
You can follow the steps below to run a performance test:
- In the AI agent’s settings page, click “Performance testing” on the left-side panel
- You will be redirected to the “Performance testing” page
- Click “Generate test” at the top right corner to start
- A pop-up modal will appear
- In this pop-up modal, you’ll be required to fill in:
  - Test name: Give your test case a clear and descriptive name so it’s easy to identify later.
  - Linked data sources: Select one or more data sources linked to this AI agent. These sources will be used to automatically generate test questions, and the agent’s responses will be evaluated against their content to calculate performance.
    - If a source has already been used in a different performance test, it will appear as disabled and cannot be selected again.
- Once you have filled in the details, click “Generate”
- You will be redirected back to the “Performance testing” page, where you’ll see a summary of the performance test
Here’s what you’ll find on the page:
- Test name and run timestamp: This helps you track when the test was last run and what version of the AI agent it used.
- Total questions generated: The number of test questions automatically created from your selected data sources.
- Hallucination rate (if applicable): Displays how often the AI produced fabricated or unsupported information.
- Filters and search bar: Narrow down results by question status (e.g. Passed, Failed, In Progress) or rating.
- List of generated questions: For each test entry, you’ll see:
  - Question and answer: The AI’s generated response to the question
  - Status: Whether the answer was marked as Passed, Failed, or still In Progress
  - Confidence score: Indicates how confident the AI was in its response (0–100%)
You can click on any question to view the full response details. A pop-up modal will appear, where you can:
- Compare the AI’s answer with the expected answer (if available)
- See the exact confidence score for the response
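To make the summary metrics concrete, here is a small sketch of how pass rate and hallucination rate relate to the per-question results shown on the page. The field names and sample data are hypothetical, not the product’s export format.

```python
# Hypothetical per-question results mirroring the metrics on the page
# (status, confidence score, hallucination flag). Data is illustrative.

results = [
    {"question": "What is your refund policy?", "status": "Passed", "confidence": 92, "hallucinated": False},
    {"question": "Do you ship overseas?",       "status": "Failed", "confidence": 41, "hallucinated": True},
    {"question": "How do I reset my password?", "status": "Passed", "confidence": 78, "hallucinated": False},
]

passed = sum(r["status"] == "Passed" for r in results)
hallucination_rate = sum(r["hallucinated"] for r in results) / len(results)
# Flag answers worth reviewing manually (threshold is an example).
low_confidence = [r["question"] for r in results if r["confidence"] < 60]

print(f"Pass rate: {passed / len(results):.0%}")        # → Pass rate: 67%
print(f"Hallucination rate: {hallucination_rate:.0%}")  # → Hallucination rate: 33%
print(low_confidence)                                   # → ['Do you ship overseas?']
```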
AI agent playground
Use the Playground to simulate conversations and review how your AI agent responds before going live, with full visibility into what’s happening behind the scenes.
This environment lets you test tone, logic, and coverage by sending sample questions. You’ll also see real-time logs showing which knowledge base sources were used, if lead scores were applied, or whether labels and contact updates were triggered.
Benefits and common use cases
Pain points solved:
- No visibility into agent logic: See exactly what data and rules the AI is using.
- Hard to verify CRM updates: Check if lead scores, labels, and contact properties are triggered correctly.
- Unclear source references: Find out which documents or URLs the agent relied on to answer.
When to use it:
- To test prompt tone and fallback behavior
- To confirm your AI agent is pulling from the right data source
- To simulate edge cases (e.g. privacy questions, aggressive tone)
- To check if your scoring or labeling actions are configured correctly
Using AI agent playground
You can follow the steps below to use AI agent playground:
- In the AI agent settings page, the AI agent playground will appear on the right side of the screen
- Once you have finished setting up your AI agent, you can type a question in the chat input field
- Press “Enter” to send
- The AI agent will respond based on its current configuration
- To view more details on how the response is generated, click on the “Click to expand chat window” bar to expand the chat window
- Once the chat window is expanded, you will find a trace panel on the left, displaying what the AI agent did behind the scenes:
- Sources fetched: Indicates which documents or pages were referenced
- Lead score calculated: Shows whether a lead score was assigned
- Label added: Displays any labels applied to the conversation based on your agent’s configured logic.
- You can expand each log by clicking the icon to view more detail, such as the exact knowledge base chunks or scoring rules used.
- If your agent includes an Exit condition, you’ll also see a note like “Agent exited the chat” appear after the final response.