The most common reason AI chatbots underperform is not the tool — it is insufficient planning before launch. This article covers seven design decisions that, when made early, prevent costly rework and misaligned expectations after deployment.
What you will learn
- Seven design decisions every chatbot deployment needs
- What to define in each decision area
- How to use a lightweight planning sheet to align stakeholders
- The risk of skipping each item
The seven decisions at a glance
- Target audience
- Scope
- Quality standard
- Human handoff
- Success metrics
- Update workflow
- Incident response
Defining these seven items before launch eliminates the two most common post-launch complaints: “It’s not doing what we expected” and “Nobody knows who’s responsible for fixing it.”
If you can only decide three things first
When time is limited, lock these three before anything else:
- Scope — What the chatbot will answer and what it will refuse
- Human handoff — When a person takes over
- Success metrics — How you will judge whether the deployment is working
If these remain vague, post-launch iteration becomes guesswork.
Decision 1: Target audience
What to define
Specify who will interact with the chatbot.
- External users (customers and prospects) — Website FAQs, product guidance, lead qualification
- Internal users (employees) — Internal knowledge base, HR procedures, IT support
- Partners or resellers — Specification lookups, order processes, technical questions
Why this matters
Audience determines tone, response time expectations, and security requirements. A chatbot designed “for everyone” usually serves no one well.
Practical checklist
- What is the user’s technical proficiency?
- What device will they primarily use? (Desktop, mobile)
- What is the typical context of use? (At work, on the go, during purchase evaluation)
Decision 2: Scope
What to define
Draw a clear line between questions the chatbot will answer and questions it will not.
In-scope examples
- Plan and pricing comparisons
- Basic usage instructions
- Supported browsers and platforms
- Common error troubleshooting
Out-of-scope examples
- Account-specific contract details
- Questions requiring legal interpretation
- Personal data modification requests
The benefit of defining scope early
Without a scope boundary, the chatbot either attempts answers it should not (creating risk) or stays silent (creating frustration). Explicitly responding with “For this type of question, please contact us via the form” is a better experience than silence.
How to draw the line
Review your top 10–20 support inquiries and classify them:
| Criterion | In scope | Out of scope |
|---|---|---|
| Frequency | 3+ times per month | Less than once per month |
| Answer predictability | Low branching, straightforward | Case-by-case judgment required |
| Stability | Unchanged for 6+ months | Changes monthly |
| Data sensitivity | No personal data involved | Requires identity verification |
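The criteria in the table can be expressed as a simple triage function. This is a minimal sketch, not a prescribed implementation: the field names and thresholds are illustrative and simply mirror the table above.

```typescript
// Hypothetical inquiry record — field names are illustrative.
type Inquiry = {
  monthlyFrequency: number;     // how often the question comes in per month
  needsJudgment: boolean;       // case-by-case human judgment required
  monthsStable: number;         // how long the answer has been unchanged
  touchesPersonalData: boolean; // requires identity verification
};

// An inquiry is in scope only if it passes all four criteria from the table.
function inScope(q: Inquiry): boolean {
  return (
    q.monthlyFrequency >= 3 &&   // frequent enough to matter
    !q.needsJudgment &&          // answer is predictable
    q.monthsStable >= 6 &&       // answer is stable
    !q.touchesPersonalData       // no sensitive data involved
  );
}
```

Running your top 10–20 inquiries through a check like this turns the scope discussion into a concrete list rather than a debate.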
Decision 3: Quality standard
What to define
Set the accuracy and tone expectations for chatbot responses.
Quality levels
| Level | Definition | Use case |
|---|---|---|
| High precision | Errors are unacceptable (medical, legal, financial) | Terms of service, refund policies |
| Standard | Mostly accurate; edge cases should route to a human | General FAQ, product usage |
| Directional | Provides guidance but final judgment is the user’s | Product comparison, recommendations |
Why this matters
Without a defined quality standard, there is no baseline for reviewing response content and no clear protocol when a user reports an incorrect answer. Defining this early aligns the content team and the support team.
Decision 4: Human handoff
What to define
Specify the triggers and destinations for human escalation.
Trigger examples
- The chatbot determines it cannot answer the question
- The user requests to speak with a person
- The user repeats the same question twice
- The conversation requires personal data verification
Destination examples
- Display a link to the contact form
- Show a phone number
- Connect to a live agent (during business hours only)
- Display an email address
Design principle
A chatbot with no escalation path is a dead end. Users who reach an unanswerable question and find no way forward lose trust in the entire support channel. “Let me connect you with our team” is always better than silence.
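The trigger examples above can be sketched as a single escalation check. This is an illustrative sketch only: the `Turn` shape, the confidence threshold, and the keyword list are assumptions, not part of any specific chatbot platform.

```typescript
// Hypothetical conversation turn — shape and thresholds are illustrative.
type Turn = { userText: string; botConfidence: number };

const CONFIDENCE_FLOOR = 0.5; // below this, the bot should not attempt an answer
const REPEAT_LIMIT = 2;       // same question asked this many times triggers handoff

function shouldHandOff(history: Turn[]): boolean {
  const last = history[history.length - 1];
  if (!last) return false;

  // Trigger 1: the chatbot determines it cannot answer confidently
  if (last.botConfidence < CONFIDENCE_FLOOR) return true;

  // Trigger 2: the user explicitly asks for a person
  if (/\b(human|agent|person|representative)\b/i.test(last.userText)) return true;

  // Trigger 3: the user repeats the same question
  const repeats = history.filter(
    (t) => t.userText.trim().toLowerCase() === last.userText.trim().toLowerCase()
  ).length;
  return repeats >= REPEAT_LIMIT;
}
```

Whatever the exact logic, the point is that the triggers are explicit and testable, so the support team can review and adjust them rather than guessing why the bot did or did not escalate.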
Decision 5: Success metrics
What to define
Establish measurable criteria for evaluating the chatbot’s performance against its purpose.
Example metrics
| Purpose | Primary metric | Target benchmark |
|---|---|---|
| Reduce support volume | Month-over-month ticket count | 20–30% reduction |
| Improve satisfaction | Post-chat survey rating | 4.0/5.0 or above |
| Generate leads | Chat-driven signups or demo requests | 5+ per month |
| Self-service rate | % of chats resolved without human help | 60% or above |
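The self-service rate from the table is simple to compute once chat outcomes are logged. A minimal sketch, assuming a hypothetical session record with `resolved` and `escalated` flags:

```typescript
// Hypothetical chat-log entry — field names are illustrative.
type ChatSession = { resolved: boolean; escalated: boolean };

// Self-service rate: share of chats resolved without human help.
function selfServiceRate(sessions: ChatSession[]): number {
  if (sessions.length === 0) return 0;
  const selfServed = sessions.filter((s) => s.resolved && !s.escalated).length;
  return selfServed / sessions.length;
}

const TARGET = 0.6; // 60% or above, per the table

function meetsTarget(sessions: ChatSession[]): boolean {
  return selfServiceRate(sessions) >= TARGET;
}
```

The exact fields depend on what your chatbot tool logs; the point is to agree on the formula and the target before launch so the monthly review is mechanical.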
The most common failure
“We launched but have no idea if it’s working” is the single most frequent outcome when metrics are undefined. Defining success upfront makes improvement directional instead of arbitrary.
Decision 6: Update workflow
What to define
Assign ownership, frequency, and process for keeping chatbot content current.
Update workflow components
| Component | Example |
|---|---|
| Owner | Support team lead |
| Frequency | Monthly scheduled review + ad-hoc for urgent changes |
| Triggers | Pricing change, feature release, new FAQ item |
| Review process | Draft reviewed by team before publishing |
| Tool | ChatBuilder dashboard for direct edits |
What happens when updates stop
Stale chatbot content leads to:
- Serving outdated pricing or specifications
- Failing to answer questions about new features
- Increased “your bot is wrong” support tickets
- Gradual erosion of user trust in the chatbot channel
Decision 7: Incident response
What to define
Create a playbook for when the chatbot is down or performing poorly.
Example incident response flow
- Widget fails to load → Amplify visibility of form and phone fallback
- Response accuracy degrades significantly → Pause the chatbot and switch to form/phone only
- Response latency is unacceptable → Display a temporary notice to users
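A playbook like the one above works best when each incident maps to one unambiguous action. A minimal sketch of that mapping, with illustrative incident names and action flags (not a real API):

```typescript
// Hypothetical incident playbook — names and actions are illustrative.
type Incident = "widget_down" | "accuracy_degraded" | "high_latency";

type Action = {
  pauseBot: boolean;     // take the chatbot offline
  showFallback: boolean; // amplify form/phone contact options
  showNotice: boolean;   // display a temporary status notice
};

const PLAYBOOK: Record<Incident, Action> = {
  widget_down:       { pauseBot: false, showFallback: true,  showNotice: false },
  accuracy_degraded: { pauseBot: true,  showFallback: true,  showNotice: false },
  high_latency:      { pauseBot: false, showFallback: false, showNotice: true  },
};

function respond(incident: Incident): Action {
  return PLAYBOOK[incident];
}
```

Writing the playbook down in this form (or simply as a table in your runbook) removes the on-call debate about what to do while an incident is live.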
Design principle
Define who responds, within how many minutes, and what action they take. The higher the chatbot’s share of your support traffic, the more critical this plan becomes.
Planning sheet
Use this template to align stakeholders before implementation:
| Decision area | Example outcome | Owner | Date decided |
|---|---|---|---|
| Target audience | External users (prospects) | — | — |
| Scope | Top 10 FAQ items | — | — |
| Quality standard | Standard (edge cases route to human) | — | — |
| Human handoff | Unanswerable → form link | — | — |
| Success metrics | 20% ticket reduction | — | — |
| Update workflow | Support lead, monthly | — | — |
| Incident response | Widget down → form fallback | — | — |
Fill this sheet before writing a single scenario. It prevents misalignment between the person building the chatbot and the team that will operate it.
Who should join the planning meeting
- Support owner: defines common questions and escalation rules
- Business owner: sets priorities and success metrics
- Implementation owner: confirms embed, tracking, and fallback behavior
- Content owner: maintains answers, FAQ updates, and review workflow
FAQ
Do I need all seven decisions finalized before I can start?
No. But at minimum, define scope, human handoff, and success metrics before building. The remaining items can be refined during early operation without causing major issues.
Does this checklist apply to internal chatbots too?
Yes. The structure is the same; only the audience changes. For internal chatbots, add two considerations: data security requirements (handling internal information) and access restrictions (internal network only vs. VPN).
How much does chatbot implementation cost?
Tool costs typically range from a few dollars to a few hundred per month. However, the majority of implementation cost is human time spent on design, scenario building, and ongoing improvement. Defining these seven items upfront reduces rework, which is the largest cost driver.
What should I share with an external vendor if I outsource implementation?
Share this checklist directly. The two items most likely to cause rework if left vague are scope and success metrics. Make sure both are explicitly documented before the vendor begins building.
Next steps
Once you have filled in the planning sheet, move to implementation.
- Related: How to Add a Chatbot to Any Website in 5 Minutes — The technical setup guide for your first deployment
- Related: How to Replace a Contact Form with a Chatbot Using Convly ChatBuilder — If your primary goal is form migration, this staged approach reduces risk
- Get started: Contact Convly — We support planning-stage consultations as well as implementation