Bespoke AI Clones for Local Professionals: Feasibility and Opportunity Guide


An illustration of a “digital twin” – an AI clone mirroring a professional’s persona and knowledge. Building such bespoke AI assistants for doctors, lawyers, realtors, and other local service providers could help them engage clients 24/7 and scale their expertise.

In an era of ubiquitous AI, even small local businesses are beginning to leverage artificial intelligence to improve client service. Imagine a doctor’s digital assistant triaging patient questions at midnight, or a realtor’s AI persona giving instant home valuation estimates and scheduling showings. This guide provides a comprehensive evaluation of the feasibility and commercial opportunity of starting a business that builds such bespoke AI clones for local professionals. We’ll explore the market size and demand, technical and regulatory feasibility, viable business models, go-to-market strategies, and the recommended tech stack for creating AI agents tailored to an individual professional’s tone and expertise. The goal is to assess whether local doctors, lawyers, real estate agents and other service providers are ready for AI clones – and whether they’d pay for them – while identifying the challenges and steps to make this venture successful.

1. Market Opportunity

Size of the Market: The potential market for personalized AI assistants among small and local service providers is significant. There are millions of professionals in sectors like law, healthcare, real estate, consulting, and home services who could benefit. For example, the United States alone has over 450,000 law firms (mostly small practices) and more than 1 million physicians, many of whom operate or work in smaller clinics. If even a fraction of these professionals adopt AI clones, the addressable market would number in the hundreds of thousands of users. In dollar terms, this falls within the broader AI virtual assistant market, which is projected to grow from about $13.8 billion in 2025 to over $40 billion by 2030 (24%+ CAGR). Generative AI adoption in small businesses is surging – nearly 60% of small businesses report using AI in 2025, more than double the rate in 2023. Within that, 43% of small firms are already using generative AI chatbots to engage customers. These figures indicate a rapidly growing total addressable market, as AI tools become mainstream even for smaller enterprises.

Demand Signals: There are strong signals that local professionals are seeking solutions to automate client interactions and improve responsiveness. Customer expectations have shifted – 97% of real estate buyers start their search online and expect instant responses from agents, and legal clients now demand 24/7 availability for inquiries. Small and mid-sized law firms see AI chatbots as a way to meet these expectations by engaging website visitors round the clock, capturing leads even after hours. Similarly, in healthcare, patients appreciate quick answers to common questions and easy appointment booking; hospitals and clinics are experimenting with chatbots to monitor patient messages and handle scheduling inquiries. Surveys confirm the interest: about 66% of healthcare organizations have adopted or plan to adopt AI for patient communication and scheduling, though among individual physicians the uptake of patient-facing chatbots is still around 8–12% so far. Overall, small businesses view AI as a competitive aid – one Salesforce report noted 75% of SMBs are increasing AI investments as an “ally” to help them grow. The high adoption of generic AI tools (content drafting, data analysis, etc.) and the early positive results (e.g. law firms using chatbots seeing up to 30% more lead conversions) both signal that professionals are receptive to AI assistants that can save them time or win new clients.

Willingness to Pay: The critical question is whether a local professional will pay, say, $2–3k for an AI clone of themselves, and how price-sensitive this market is. Indications are that budget constraints are a concern for very small practices, but many do invest in technology that shows ROI. For instance, solo and small law firms routinely spend on marketing (websites, lead generation services) and practice management software – budgets in the hundreds per month are common if the value is clear. AI chatbot services tailored to small businesses already exist at various price points: some off-the-shelf solutions cost $50–$250 per month for a few thousand interactions. This translates to ~$600–$3,000 per year, aligning with the ballpark of $2–3k. Many small businesses appear willing to pay in this range: in fact, 89% of small businesses using AI report it improved productivity, and 84% plan to increase tech use, suggesting they see it as money well spent. However, if pricing climbs much higher (into tens of thousands), interest would drop sharply except among higher-revenue firms. Custom development of AI chatbots is known to cost tens of thousands for enterprises, but a templatized “clone” service can be offered far cheaper. The sweet spot will likely be a modest setup fee (a few thousand dollars) plus an affordable monthly subscription for maintenance. At ~$2k one-time, a professional might compare it to the cost of a few weeks of an assistant’s salary or a marketing campaign – if the AI can reliably handle lead inquiries or save them significant time, many will find that worthwhile. Nonetheless, price sensitivity remains high for sole practitioners; the value proposition must be clear (e.g. one new client captured by the AI could pay for the clone). Early adopters with tech-forward mindsets are less price-sensitive, whereas more traditional professionals may need lower-risk entry points (free trials or tiered plans).

Competitive Landscape: The idea of personalized AI assistants for individuals is emerging, and while not yet widespread in local service niches, several companies are already pursuing this concept:

Tech Giants and Platforms: In 2024, Meta (Facebook) piloted AI chatbots based on popular creators – about 50 influencers built AI versions of themselves that fans can chat with (clearly labeled as AI). Mark Zuckerberg indicated a vision to eventually let every creator and small business build an AI clone of themselves for customer engagement. This shows the concept has the attention of major players, although Meta’s focus is more on consumer engagement than professional services for now.
Startups Building Personal AI Clones: A number of entrepreneurial ventures have jumped in early. For example, Delphi AI offers services to create and host “digital clones” of people. A Delphi AI clone can answer client questions in your voice, attend meetings on your behalf, or respond to emails with your tone and expertise. They’ve even monetized celebrity clones – e.g. selling paid access to the AI versions of wellness guru Deepak Chopra and coach Brendon Burchard. Another startup, EveryAnswer, provides a platform for businesses to build an “AI Expert” trained on their own data – essentially a branded Q&A chatbot that can be embedded on websites for lead capture or customer support. EveryAnswer’s plans (around $69–$249/month) underscore that vendors are targeting the SMB segment with relatively low-cost, scalable solutions. Domain-specific players exist too: in legal, for instance, LawDroid has been creating AI chatbots for law firm websites (to automate client intake and FAQ), and companies like Smith.ai offer AI-enabled virtual receptionist services. In healthcare, several startups offer AI-driven triage bots or appointment schedulers for clinics.
Current Offerings vs. “True Clones”: It’s worth noting that many existing “AI chatbot for small business” products are limited to scripted Q&A or knowledge retrieval – for example, Justia (a legal marketing firm) includes a chatbot with its law firm websites that answers basic queries and collects contact info. These tools improve responsiveness but are not deeply personalized in persona or capable of complex task automation. The concept of a truly bespoke AI clone mimicking an individual’s persona and handling nuanced tasks is still nascent. This means competition in the exact “AI clone of you” space is relatively thin in most local professional verticals – giving a potential first-mover advantage. A few tech-savvy professionals have prototyped their own clones (for instance, marketing consultant Mark Schaefer’s “MarkBot” built from his content), but this is far from mainstream. Being early to market with a polished, turnkey solution could allow capturing mindshare and market share before larger players or late entrants crowd the space.

First-Mover Advantage: The market for bespoke AI personas for local service providers in particular is in its infancy. Adoption of AI in these professions, while growing, is not yet saturated. For example, only about 20% of small law firms had implemented any legal-specific AI by 2025 (large firms are ahead), and a recent AMA survey showed just 8–12% of physicians currently use chatbots for patient-facing tasks. Those numbers are poised to increase, and early adopters are already reaping benefits (e.g. anecdotal reports of AI assistants handling two-thirds of appointment bookings in one medical practice). Launching now means educating the market and shaping use-cases at a time when interest is high but few comprehensive solutions exist. There is an opportunity to become “the go-to AI clone provider” for, say, independent realtors or boutique law firms, establishing brand recognition and specialized expertise. However, first-movers must also invest in market education (convincing professionals why they need an AI clone) and be prepared for fast followers. If a major platform (like a revamped Siri/Alexa for business, or an offering from OpenAI/Microsoft tailored to professionals) enters later, the first-mover will need a moat – such as proprietary client data, integrations, or a loyal customer base – to maintain an edge. In summary, the timing is early enough to be exciting: the concept is innovative but backed by clear trends in AI adoption and client expectations, so a well-executed entry now could capture a leadership position in this new niche.

2. Feasibility (Technical & Regulatory)

Technical Difficulty – Q&A Bots: Creating a Q&A-style chatbot tuned to a specific professional’s knowledge and tone is technically very feasible with today’s AI tech. Large language models (LLMs) like GPT-4 have a strong ability to answer questions in natural language and can be customized with the right data and prompts. One straightforward approach is Retrieval-Augmented Generation (RAG), which is like giving the AI an open-book exam. In practice, you gather the professional’s relevant content (e.g. their FAQs, past blog articles, brochures, or even transcripts of them speaking), and feed it into an embedding database. When a user asks the AI clone a question, the system fetches the most relevant snippets of the professional’s content and provides it as context for the LLM to formulate an answer. This ensures the bot’s answers are grounded in the professional’s actual knowledge, rather than generic guesses. Technically, this pipeline can be implemented with open-source tools: for example, a developer built a private “AI clone” of himself using a local LLM (via Ollama) and ChromaDB for embeddings, in just days of work. In short, building a custom Q&A chatbot is well within reach – it doesn’t require inventing new algorithms, just smart assembly of existing components (LLM + vector database + prompt engineering). Fine-tuning an existing model on the professional’s data is another option; however, fine-tuning can be data-intensive and less flexible when the content needs updating. Many solutions will likely use prompt-based customization and retrieval as a first step, which is technically easier and allows iterative improvements.
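The retrieval step described above can be sketched in a few lines of Python. This is a toy illustration, not a production pipeline: the word-overlap scorer stands in for a real embedding model and vector database (such as ChromaDB), and the document snippets and the "Dr. Smith" persona are invented for the example.

```python
# Toy sketch of the RAG flow: retrieve the most relevant snippets of the
# professional's own content, then build a grounded prompt for the LLM.
# The word-overlap scorer is a stand-in for real embedding similarity.

def score(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query words found in the doc."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant snippets for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the LLM's answer in the professional's actual content."""
    context = "\n".join(retrieve(query, docs))
    return (
        "You are the AI assistant of Dr. Smith. Answer ONLY from the "
        f"context below.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical knowledge base gathered during onboarding:
docs = [
    "New patient consultations last 45 minutes and cost $150.",
    "The clinic is open Monday to Friday, 9am to 5pm.",
    "Dr. Smith specializes in pediatric care.",
]
prompt = build_prompt("How long is a new patient consultation?", docs)
```

A real deployment would embed the snippets once at ingestion time and send the assembled prompt to the chosen LLM; the grounding pattern stays the same.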

Technical Difficulty – Agent-like Behavior: Going beyond Q&A into agent-like automation (e.g. scheduling appointments, initiating follow-ups, completing tasks on behalf of the professional) adds complexity but is becoming feasible. Modern AI frameworks allow integration of LLMs with tools or APIs. For instance, an AI clone could be set up with access to the professional’s calendar system via API – when a user says “Book me an appointment,” the AI can consult openings and schedule a meeting. Libraries like LangChain or the OpenAI function-calling API enable such tool use by AI, effectively letting the clone execute code or call external services in response to natural language commands. The difficulty lies in ensuring reliability and security: the AI needs clear constraints (you don’t want it booking something incorrectly or accessing unauthorized data). Each type of automation (scheduling, form-filling, emailing a follow-up) would require bespoke integration and testing. If the business develops a standardized integration (say, with Google Calendar, Outlook, Calendly for appointments, or with popular CRM systems for lead entry), then deploying those capabilities for each client becomes easier. However, because local professionals use a variety of systems (one doctor might use Athenahealth for scheduling, another uses Google Calendar, etc.), delivering agent capabilities at scale may involve significant custom work per client or limited support for specific popular platforms. In summary, technically possible, but more challenging: a pure Q&A bot can be stood up relatively quickly, whereas an AI agent that truly acts on the client’s behalf will require more engineering and client-specific configuration. This affects feasibility in terms of time and cost per deployment – it’s doable, but the service might initially focus on a few high-impact automations (like scheduling via common calendars, or simple lead info collection) to keep complexity manageable.
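The tool-use pattern above can be sketched as follows. In a real deployment the LLM would choose the tool and its arguments (via LangChain or OpenAI function calling); here that choice is hard-coded so the whitelist-and-validate dispatch logic can be shown without an API key. All names (`book_appointment`, the in-memory calendar, the patient details) are illustrative.

```python
# Sketch of constrained tool use: the clone may only invoke whitelisted
# tools, and each tool validates its inputs before acting on the
# professional's behalf.

from datetime import datetime

CALENDAR = []  # stand-in for a Google Calendar / Calendly API

def book_appointment(name: str, when: str) -> str:
    """Validate, then book - the model never writes to the calendar directly."""
    slot = datetime.fromisoformat(when)
    if slot.hour < 9 or slot.hour >= 17:
        return "Sorry, appointments are only available 9am-5pm."
    CALENDAR.append((name, slot))
    return f"Booked {name} for {slot:%b %d at %H:%M}."

# Whitelist: the clone can only call tools registered here.
TOOLS = {"book_appointment": book_appointment}

def dispatch(tool_call: dict) -> str:
    name = tool_call["name"]
    if name not in TOOLS:
        return "I can't do that - let me connect you with the office."
    return TOOLS[name](**tool_call["arguments"])

# Simulated LLM decision for "Book me an appointment Tuesday at 10am":
reply = dispatch({"name": "book_appointment",
                  "arguments": {"name": "Jane Doe",
                                "when": "2025-06-10T10:00:00"}})
```

The whitelist plus per-tool validation is one way to implement the "clear constraints" the paragraph calls for: the model proposes an action, but only vetted code executes it.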

Data Needed from Clients: To create a high-quality AI clone, you’ll need to capture two main things from each client: knowledge content and persona/voice. On the knowledge side, the AI must be trained or provided with the professional’s domain expertise, service details, and typical Q&A. This means gathering documents and data such as: the professional’s website content, brochures, service descriptions, any existing FAQ documents, articles or blog posts they’ve written, and possibly transcripts of consultations or presentations they’ve given. The more proprietary and specific the data, the better – this prevents the AI from relying on generic info (which might be wrong or too general). In one case, a consultant was able to clone a marketer’s knowledge by feeding in years of blog posts and podcast transcripts. Many small professionals won’t have that volume of content, so you might gather data by interviewing the client or using forms where they answer common questions in their own words. Even a 30-minute recorded Q&A with the professional could be transcribed and become training data.

For the persona aspect, you’d want examples of the professional’s tone, style, and preferences. This can often be gleaned from the same content (e.g. writing style in their emails or articles). You can also have them describe their tone (“friendly and casual”, or “formal and academic”) and any specific language they do or don’t use (for instance, a doctor might say “blood pressure” instead of “BP” when speaking to patients – these nuances matter). Some solutions may allow a bit of fine-tuning on style or few-shot examples (“Here are 5 sample answers the doctor gave, mimic this style”). Overall, expect a discovery/onboarding phase where the client provides materials. The good news is that this data requirement is usually existing content they have or can easily produce. We’re not talking big proprietary datasets – often just their public info and a consultation to capture implicit knowledge. The RAG approach means you don’t have to hard-code all that into the model; you store it in a vector DB and let the model reference it. Over time, the AI clone can be improved by adding more data (e.g. transcripts of AI-client dialogues that the professional reviews or edits, which helps refine future answers). As a safeguard, the clone can also cite sources or include reference text in its responses for transparency, if that’s desirable. To summarize, data gathering is a manageable task: most small providers have at least a website and intake forms – a baseline knowledge base – and with a bit of prompting they can supply enough for a decent clone. The richness of the clone will increase with more personalized data, so part of the service’s value will be helping clients assemble and feed the right information to the AI.
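One common way to encode the persona described above is a system prompt assembled from onboarding answers plus a few sample Q&A pairs for style. The sketch below shows the idea; every field and sample answer is a hypothetical placeholder for what an onboarding form would actually collect.

```python
# Sketch: turn onboarding answers (role, tone, banned terms, sample answers)
# into a system prompt with few-shot style examples.

def persona_prompt(name, role, tone, avoid, samples):
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in samples)
    return (
        f"You are the AI assistant of {name}, a {role}. "
        f"Write in a {tone} tone. Never use these terms: {', '.join(avoid)}. "
        f"If you are unsure of an answer, offer to have {name} "
        "follow up personally.\n\n"
        f"Match the style of these sample answers:\n{shots}"
    )

prompt = persona_prompt(
    name="Dr. Smith",
    role="pediatrician with 20 years of experience",
    tone="friendly and reassuring",
    avoid=["BP", "Rx"],  # patient-facing: say "blood pressure", not "BP"
    samples=[("Do you see newborns?",
              "Absolutely - we welcome patients from day one.")],
)
```

The same template can be reused per vertical, with only the collected fields changing per client, which is a first step toward the standardization discussed later in the business model section.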

Regulatory and Ethical Constraints: Building AI agents in sensitive fields like law and medicine brings significant regulatory considerations. Any solution must be carefully designed to avoid unauthorized practice of a profession, protect confidentiality, and comply with industry regulations:

Legal Sector: Lawyers are bound by strict rules about giving legal advice and advertising. An AI clone of a lawyer cannot be allowed to dispense actual legal advice that creates an attorney-client relationship or violates ethics rules. For instance, the bot should provide only general legal information and must clearly disclose it is not a human attorney. Many state bar associations require that communications (even automated ones) not be misleading – so the AI should likely start interactions with a disclaimer that it’s an AI assistant. Also, it should avoid specific strategy or outcome predictions (“You have a great case, you will win”) which could be seen as legal advice or a guarantee, which is prohibited. Another consideration is confidentiality: if a prospective client shares details of their case with the chatbot, how is that data stored and used? Ideally, the system should immediately pass such info to the attorney and purge it from the AI’s memory, or ensure it’s stored encrypted with proper access control, since those details might be confidential. Privacy laws (like GDPR or CCPA) also require disclosure if conversations are recorded or used for training. The AI clone should refrain from collecting very sensitive personal data beyond what’s necessary for contact or scheduling, to minimize risk. Finally, lawyers must supervise the technology they use. As a provider, you’d likely advise lawyer clients to review the chatbot transcripts periodically to ensure it’s behaving and not giving improper responses. In short, it’s doable to have an AI handle initial client intake and FAQs for a law firm, but it must be tightly constrained to information sharing, with escalation to a human for actual legal counsel. These constraints will shape the clone’s knowledge base and responses (e.g. pre-loading it with “I am not a lawyer but I can help schedule you…” language for certain queries).
Healthcare Sector: Medical applications are even more sensitive. Patient safety and privacy (HIPAA) are paramount. An AI clone of a doctor should likely operate as an informational and administrative assistant – for example, answering general health FAQs (“What’s Dr. Smith’s procedure for initial consultations?”), providing pre- and post-appointment instructions, and handling appointment booking or reminders. It should not provide personalized medical diagnoses or prescribe treatment. If a user asks medical advice (“I have these symptoms, what should I do?”), the bot needs to have a safe response: general information plus a nudge to seek an appointment or emergency care if severe. Giving specific medical advice could not only be harmful (if incorrect) but might run afoul of medical licensing laws (unlicensed practice of medicine) and FDA regulations (such an AI could be considered a medical device requiring approval if it’s intended for diagnosis or treatment). From a privacy standpoint, if patients input personal health information into the chatbot, the data is considered Protected Health Information (PHI) under HIPAA. That means the service provider must sign Business Associate Agreements with the medical practice and ensure the data is encrypted and not used for any purpose outside the service. Using third-party AI APIs (like sending patient chat content to OpenAI) can be problematic unless those APIs are also HIPAA-compliant or a BAA is in place. This may push toward on-premise or self-hosted models for healthcare clients, or at least careful data handling (e.g. not storing full conversations). Liability is another factor: a doctor could be held liable if their AI gave incorrect info that a patient relied on. Thus, many physicians will use such tools only in low-risk contexts (scheduling, symptom triage with very clear advice to seek care, etc.). 
The American Medical Association found increasing physician openness to AI, but also noted trust and safety concerns – even among doctors using AI, a quarter are more skeptical, largely due to worries about errors, “black box” reasoning, and data security. Any healthcare-focused AI clone would need to earn trust by demonstrating a near-zero error rate on routine info and a strict policy of not venturing beyond its scope.
Other Professions: For real estate agents, financial advisors, accountants, etc., there are also industry regulations (e.g. Realtors must not violate Fair Housing laws in their communications, financial advisors have compliance about what can be promised in advice, etc.). These are generally less stringent than law/medicine, but a clone should still be programmed not to make guarantees (“This investment will double your money”) or discriminatory statements (even inadvertently). For any profession, advertising and consumer protection laws require honesty – the clone must not fabricate credentials or experience. Practically, this means implementing guardrails so that if the underlying model tries to “hallucinate” an answer outside of provided knowledge, it either refrains or responds with “I’m not sure, let me connect you to [Professional].” Indeed, early experiments show that if an AI clone is missing info, it might be tempted to make something up to sound authoritative, which could be damaging. So part of feasibility is implementing a fallback for unknown queries (perhaps the AI says “I’ll have [Professional] follow up with that detailed question.”).
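The fallback guardrail described above can be sketched simply: if no snippet of the professional's own content is a confident match for the question, the clone declines and escalates rather than letting the model improvise. As before, the word-overlap scorer is a toy stand-in for embedding similarity, and the threshold value and knowledge base are invented for the example.

```python
# Sketch: refuse-and-escalate when the question falls outside the clone's
# knowledge base, instead of risking a hallucinated answer.

ESCALATE = "I'm not sure about that - I'll have Dr. Smith follow up with you."

def answer(query: str, kb: list[str], threshold: float = 0.2) -> str:
    def score(doc: str) -> float:
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / max(len(q), 1)
    best = max(kb, key=score, default="")
    if not kb or score(best) < threshold:
        return ESCALATE          # refuse rather than hallucinate
    return best                  # real system: pass the snippet to the LLM

kb = ["New patient consultations last 45 minutes and cost $150."]
known = answer("How long is a new patient consultation?", kb)
unknown = answer("Can you diagnose my rash?", kb)
```

In production the threshold would be tuned per client, and the escalation branch would also log the unanswered question so the knowledge base can be expanded (step 7 of the delivery workflow below covers this maintenance loop).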

In summary, regulations do not forbid using AI in these fields, but they impose boundaries that the product must respect. The feasibility is there – many professionals are already cautiously using AI – but the clone’s behavior should be carefully designed with compliance in mind. Expect to invest time in understanding the specific rules of each vertical (e.g. for lawyers, the ABA and state bar ethics opinions on AI; for doctors, FDA guidelines on clinical decision tools and AMA ethics). By building in compliance from the start (disclosures, limited advice, privacy safeguards), you can preempt many concerns. Furthermore, turning regulatory constraints into a feature can be a selling point: for example, advertising that your AI clones are “HIPAA-compliant and keep data 100% private on secure servers” or “configured to comply with legal ethics guidelines” can build trust with potential clients who worry about these issues.

3. Business Model and Scalability

Service Model (Bespoke) vs. Platform (SaaS): A key decision is whether to operate as a bespoke service for each client or to develop a more productized software-as-a-service (SaaS) platform. Each approach has pros and cons:

Bespoke Service Model: In this model, your business works closely with each professional to create a highly customized AI clone. It’s akin to a consulting or agency service – you might charge a one-time setup fee (e.g. a few thousand dollars) for the intensive work of building the clone, plus perhaps ongoing support fees. Pros: High degree of personalization (which clients will value, since you can fine-tune the clone to their exact needs), ability to charge premium prices, and potentially strong relationships (the client sees you as a partner in their tech adoption). It’s also easier to start this way – you can manually do the necessary steps for the first few clients without a fully automated pipeline. Cons: It doesn’t scale well. Each new clone could require significant manual effort, which means hiring more AI specialists as you grow. The revenue is partly one-off, unless you successfully upsell maintenance subscriptions. Over time, margins might suffer if each sale is essentially a custom project. Recurring revenue is also less stable unless the engagements are deliberately structured around ongoing subscriptions.
SaaS Platform Model: Here, you build a platform or toolkit that clients (or your team) can use to configure clones in a repeatable way. For example, a web app where a professional uploads documents, chooses a persona style, and gets a chatbot to embed on their site. This would be sold as a subscription (monthly/annual). Pros: High scalability – once the platform is built, onboarding each new client is low incremental cost. Recurring revenue from subscriptions can stack up and provide steady cash flow. It’s easier to integrate improvements across all clients (since they’re on one platform). Cons: Developing a robust platform is resource-intensive and risky upfront – you need to invest in software development, user interface, self-service tools, etc., which could be expensive. Also, professionals might still expect a human touch or help in configuring the clone; pure DIY SaaS might be challenging for non-tech-savvy users in this segment. Additionally, a generic platform might not deliver the depth of customization some high-end clients want (a doctor might have very specific needs that a generic wizard can’t accommodate easily).

A likely strategy is a hybrid approach: start with a bespoke/concierge style service to ensure high-quality outputs and learn the nuances of client needs, while gradually building internal tools to automate repetitive parts of the process. Over time, this can evolve into a more standard platform. For example, you might templatize a lot of the clone creation workflow (data ingestion scripts, preset prompt templates for each industry, integration modules for popular software) so that you’re not reinventing the wheel each time. Eventually, power users could even be given a self-service dashboard (moving closer to SaaS). Many startups adopt this progression – do things that “don’t scale” at first to nail the product, then scale them. It’s important, however, to keep an eye on which model fits your growth goals: if you want a smaller high-touch consultancy, bespoke is fine; if you’re aiming for venture-scalable growth, you’ll need the SaaS style scalability.

Delivery Process and Workflow: Whether bespoke or SaaS, it’s crucial to define a clear delivery process for creating and deploying each AI clone. A potential workflow might look like:

Onboarding & Needs Assessment: You (or your platform) work with the professional to identify what roles the AI clone should play. Is it primarily a website Q&A chatbot for lead capture? Should it also handle scheduling or follow-up emails? What tone and persona should it have? At this stage, you also address concerns and set expectations (e.g. “The AI will handle these types of questions, but anything it’s unsure of will be referred to you.”).
Data Collection: Gather the client’s content and data as discussed in Feasibility. This could involve sending them a checklist or using an online form where they upload documents. If bespoke, you might interview them or manually scrape content from their website, etc. Ensure you also gather branding info (their photo or avatar if the chatbot will show an image, their preferred greeting, any slogans, etc., to really personalize it).
Training/Building the Model: This is the core technical step. You would ingest the collected data into your system (embedding it for retrieval, or fine-tuning a model if that’s the approach). Set up the prompt with the persona instructions (e.g. “You are Dr. John Doe, a pediatrician with 20 years of experience… [include style guidelines]…”). If needed, perform a fine-tune on a small model with the client’s Q&A data. Essentially, you configure the AI brain of the clone. Then integrate any tools (APIs for scheduling etc.) as required for this client.
Testing and Iteration: Before handing it off, you’d test the AI clone thoroughly. Pose common questions and edge-case questions to see how it responds. Likely you’ll find some rough edges – maybe it’s too verbose, or it occasionally gives an unintended response. You’d then tweak the prompts or add more data/rules. Ideally, involve the client in this testing phase: let them interact with the clone and give feedback (“I don’t like how it answered this question, it should mention our free consultation offer,” etc.). Iteratively refine the clone until the client is happy with its performance and tone.
Deployment & Integration: Once approved, deploy the clone in the client’s channels. For most, this will mean embedding a chat widget on their website (which should be straightforward – a snippet of code or using a plugin). It could also include integrating with their Facebook page chat, WhatsApp, or other channels if relevant, using your platform’s capabilities. If voice is part of it (like a phone line auto-attendant), you’d set up the voice interface. Integration with their CRM or email – e.g. ensure when a lead is captured by the bot, it emails the office or creates a CRM entry.
Handoff and Training: Provide the client any training or documentation on using and monitoring their AI clone. Even if it runs itself, they should know how to view conversation logs, how to manually take over a conversation if needed, or how to update the knowledge base (maybe they add new FAQs as they realize missing ones). For a SaaS, this is product training; for bespoke, it might be a live walkthrough. Also, emphasize best practices (like periodically checking what the bot is telling people, at least in early days).
Maintenance & Updates: After launch, the clone will need occasional updates. For example, if the professional adds a new service or changes pricing, the AI’s knowledge must be updated. If the AI makes a mistake or clients frequently ask something it wasn’t trained on, you’ll want to update its data or responses. Maintenance also covers technical upkeep – ensuring compatibility if the client updates their website, monitoring the AI’s performance, and applying any improvements you develop (e.g. if your underlying model gets an upgrade). This step is ongoing and is a prime candidate for a retainer or subscription – you might bundle a certain level of support and updates in a monthly fee.
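One practical payoff of the retrieval-based design for the maintenance step above: when a service or price changes, you update the clone's knowledge store rather than retraining a model. The sketch below shows the idea with a plain dictionary; the keys and snippet text are illustrative, and a real system would also delete and re-insert the corresponding embedding in the vector database.

```python
# Sketch: a knowledge-base update is a data change, not a model change.
# The next retrieval automatically serves the new text.

knowledge_base = {
    "pricing": "A standard consultation costs $150.",
    "hours":   "Open Monday to Friday, 9am to 5pm.",
}

def update_fact(kb: dict, key: str, text: str) -> None:
    """Replace a snippet; real system: also re-embed it in the vector DB."""
    kb[key] = text

# Client raises their price - a one-line maintenance task:
update_fact(knowledge_base, "pricing",
            "A standard consultation costs $175 as of January.")
```

This is part of why the maintenance subscription can be high-margin: most updates are small, scriptable data edits rather than engineering work.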

This delivery process highlights that there is a significant initial setup effort (justifying a setup fee), and ongoing work (justifying recurring fees). It also shows what parts can eventually be standardized: for instance, Steps 2–4 could be largely automated with a good platform, while Steps 1 and 5–7 might always need some human touch.

Revenue Model (One-time vs Recurring): Balancing one-time setup fees and recurring charges is key for sustainability. Possible models:

One-Time + Maintenance: Charge a one-time “implementation fee” for building the clone, say $X (could be $1k, $5k, or more depending on complexity and your positioning). This covers the heavy lifting initially. Then have a smaller ongoing monthly subscription (e.g. $100–$300/month) for hosting the AI (covering the compute costs, API calls) and providing support/updates. The subscription also keeps the client tied to you, providing ongoing revenue and ability to upsell new features. This model is common in custom software deployments.
Pure Subscription (SaaS Style): No or minimal upfront fee, but a higher monthly fee that over time pays back the acquisition cost. For example, $299/month with a minimum 12-month commitment. This can lower the barrier for clients (no big upfront cost) and if the service is clearly delivering value month over month, they’ll keep paying. But it means you as the provider carry the burden of setup without guarantee of recouping if they cancel early. This model would make sense once the process is repeatable and low-touch (so that setup cost per client is low on your end).
Tiered Pricing: You might have packages – e.g. Basic Clone for $500 setup + $50/month (which maybe only covers a simple Q&A bot on one channel), and Premium Clone for $2000 setup + $200/month (which includes multi-channel, voice integration, and two custom automations like scheduling). This way, small-budget clients can opt in at a lower tier, while those who see more value can pay more for more capabilities.
Additional Services: Think about ancillary revenue: for instance, providing custom training data creation (if a client doesn’t have good FAQs, you offer to develop a knowledge base for them as a consulting project), or analytics and insights (monthly report of what clients are asking the bot, which could be valuable market feedback to them). These could be add-on charges. Another example: offering a “white-label” arrangement for agencies (discussed later) where they pay for bulk or resell your service with a margin.

The goal should be to ensure healthy margins while staying attractive to cost-conscious professionals. AI API costs and infrastructure will be a portion of your expense – but generally, answering a query via an LLM costs only a fraction of a cent to a few cents in compute. For example, with GPT-4 each query might cost $0.02–$0.10. If a chatbot handles 200 queries a month (~10 per business day), that's roughly $4–$20 in API costs – quite manageable if the client is paying $100+. Even with more volume, the margin can be high, especially if using fine-tuned smaller models or open-source models on your own server. Support labor is also a cost to consider – clients will call with issues or questions, so some level of support needs to be built into the fee. Initially this might be just you, but as you grow you may need a support person or two, which again means the recurring charges must cover that.
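As a sanity check on those numbers, the per-client margin is easy to script. The query volume, per-query cost, and $100 subscription fee below are the illustrative figures from this section, not real pricing:

```python
def monthly_margin(queries_per_month: int,
                   cost_per_query: float,
                   subscription_fee: float) -> float:
    """Gross margin on one client, ignoring support labor."""
    api_cost = queries_per_month * cost_per_query
    return subscription_fee - api_cost

# ~200 queries/month at $0.02-$0.10 each, against a $100/month fee.
best_case = monthly_margin(200, 0.02, 100.0)   # cheap end of per-query cost
worst_case = monthly_margin(200, 0.10, 100.0)  # expensive end
```

Even the worst case leaves roughly 80% gross margin on API costs alone, which is why support labor, not compute, is the expense to watch.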

Scalability Considerations: The big question: how scalable is this business, especially if it starts as bespoke? Several factors influence scalability:

Templatization & Reuse: A lot of the work for one client will be repeatable for another in the same industry. You can create industry-specific templates for faster deployment – e.g., a “Lawyer Clone Template” might come pre-loaded with a generic set of 50 common law firm Q&As, a polite-but-professional tone preset, and integrations for calendaring that many lawyers use. Then for each new lawyer client, you only add their specific info and make tweaks, instead of starting from scratch. Similarly, a “Real Estate Agent Clone Template” could include typical home-buyer FAQs, a friendly enthusiastic tone, etc. Over time, the library of templates grows, and each new deployment is quicker. This dramatically improves scalability – essentially moving from custom coding to configuration. Many parts of the clone creation (embedding documents, setting up a chatbot widget) are the same for all – those can be automated in your backend.
Limits of Customization: One risk to scaling is if every client demands a totally unique feature. If one doctor asks “Can my clone integrate with this obscure electronic health record system to pull lab results?” and another asks “Can mine do outbound calls to remind patients?”, you might get pulled into building a lot of one-off features. It’s important to balance accommodating special requests with keeping a core standardized product. Possibly, you define what the product includes clearly (e.g. “The clone will answer FAQs and schedule appointments via Google Calendar; anything beyond that is custom work.”). Custom feature requests could be charged separately or deferred until multiple clients want it (then you build it as a general feature).
Hiring and Training: If demand grows, you’ll need more AI trainers/onboarding specialists. But since you’ll have developed a playbook, new hires (or even the clients themselves) can execute it. Ideally, the process becomes documented enough that you don’t need highly specialized AI PhDs to onboard a client – a moderately tech-savvy employee or a guided setup wizard can handle most of it. That’s when scaling can accelerate.
Quality Control: One challenge in scaling is maintaining quality for each clone. Early on, with only 5 clients, you can personally check all their AI’s outputs; with 500 clients, you rely on your system and occasional audits. Investing in automated quality checks can help – for instance, run a set of test queries on each new clone to ensure it’s not saying something it shouldn’t. Implementing analytics that flag problematic interactions (for instance, when user sentiment turns hostile or the AI has to apologize frequently) can also alert you to issues. This way, quality remains consistent as you grow.
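The interaction-flagging idea above can start as a simple heuristic pass over conversation logs. This is a minimal sketch with made-up marker phrases; a production system would use a proper sentiment model:

```python
APOLOGY_MARKERS = ("i apologize", "i'm sorry", "i am sorry")
FRUSTRATION_MARKERS = ("this is useless", "terrible", "speak to a human")

def flag_conversation(turns: list) -> bool:
    """Flag a conversation for human review if the bot apologizes
    more than once or the user shows clear frustration.
    Each turn is a dict like {"role": "user", "text": "..."}."""
    apologies = sum(
        1 for t in turns
        if t["role"] == "assistant"
        and any(m in t["text"].lower() for m in APOLOGY_MARKERS)
    )
    frustrated = any(
        t["role"] == "user"
        and any(m in t["text"].lower() for m in FRUSTRATION_MARKERS)
        for t in turns
    )
    return apologies > 1 or frustrated
```

Flagged conversations go into a review queue; everything else is only sampled in periodic audits.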

In terms of growth potential: a scalable SaaS model here could potentially serve thousands of professionals with a relatively small team, given much of the heavy lifting is done by AI and software. The market, as discussed, is large. The main scaling constraints will be how efficiently you can onboard new clients and keep the clones updated with minimal human intervention. If each clone requires a ton of hand-holding indefinitely, that’s essentially a consulting firm (which can still grow, but linearly with headcount). If, however, after initial setup many clients’ clones run largely autonomously (with occasional updates, possibly self-service by the client through a portal), then you have a classic SaaS scalability.

Margins: The combination of a software product with optional bespoke services can yield healthy margins. Software (especially if using some open-source components) has high gross margins. The major costs – cloud compute, API calls – are relatively low per unit as noted. Support and any manual operations are the bigger ongoing costs, so the more you automate, the better the margins. Successful scaling likely means shifting the mix toward recurring software revenue and minimizing the labor-intensive components over time.

In conclusion, the business model could start as a premium bespoke service to early clients (who essentially fund the development by paying higher fees) and transition into a scalable platform with recurring revenue. It’s important to keep an eye on churn – making sure clients see continued value so they renew – which means continually improving the clones so they remain useful (perhaps adding new features like multilingual support or better integrations, ideally included in the subscription). If clones become a “set it and forget it” one-time novelty, clients may not see why they should keep paying monthly. But if the clone is directly tied to business outcomes (e.g. every month it brings in 5 new leads or saves 10 hours of admin work), the value is obvious and retention will be high. Aligning pricing with value delivered (for instance, tiered by number of leads captured) can further ensure clients feel it’s worth it.

4. Go-to-Market Strategy

Developing a great product is only half the battle – you also need an effective go-to-market (GTM) plan to find and convince local professionals to use AI clones. This can be challenging, as many of these individuals are not actively seeking tech solutions or may be skeptical of AI. Below are strategies and considerations for customer acquisition and sales:

Identify Target Segments: First, even within “local professionals,” you may want to target specific niches initially. Each vertical has its own channels and mentality. For example, real estate agents are often very sales- and marketing-oriented, heavy users of social media, and might eagerly adopt a tool that gives them an edge with clients (they’re used to paying for marketing tools). Lawyers and doctors are more conservative on average; within those, maybe focus on subsegments like estate planning attorneys or cosmetic surgeons – fields where competition for clients is high and they value marketing. Starting with one or two niches allows you to tailor messaging: e.g. “AI Assistant for Dentists” might be a more compelling pitch than a generic “AI for professionals,” because it can speak directly to their needs (like handling patient inquiries about procedures, insurance, etc.).

Customer Acquisition Tactics:

LinkedIn Outreach and Content: LinkedIn is a powerful platform for reaching professionals (lawyers, consultants, and realtors are all there). You can use a combination of organic content and direct outreach. For organic: establish yourself (or your company page) as a thought leader in “AI for small business.” Post case studies, statistics (like how 30% more leads convert with chatbots), and short videos demonstrating an AI clone in action (e.g. a mock conversation between a client and a lawyer’s AI assistant). This content can generate inbound interest or at least warm up your audience. For direct outreach, you can search for, say, “Realtor in [City]” or join groups (there are LinkedIn groups for small-firm attorneys, etc.) and then send connection requests with a friendly note about how you’re helping professionals with AI. Be careful to avoid a spammy approach – personalize messages, maybe referencing something about their practice. The goal is to secure an intro call or demo. LinkedIn Sales Navigator can be useful to filter by industry, location, and company size (likely 1–10 employees). Keep in mind that many small providers aren’t very active on LinkedIn, so treat it as one channel among several.
Cold Outreach (Email/Phone): You can compile lists of local professionals via public directories (e.g., bar association lists for lawyers, health clinic directories for doctors, Realtor directories). A targeted cold email campaign can work if done tactfully. The email should identify a pain point and a solution: e.g. “Hi Dr. Smith, I noticed your clinic’s website doesn’t currently offer live chat. We have an AI-powered assistant that could answer patient FAQs and even book appointments 24/7, helping reduce calls to your front desk. Imagine your patients getting instant answers at 10pm while your staff is home – and those inquiries turning into appointments for you. We handle all the setup. Would you be open to a quick demo?” This highlights benefits (availability, lead capture) and social proof if you have any (“Already used by 3 clinics in town”). Keep it short and outcomes-focused. Some will ignore it, but even a small response rate can get you initial clients. Follow-up calls can be useful; many small business owners still respond better to phone than email. A creative angle is “billboard scraping” – literally finding local professionals who advertise on billboards, benches, and local ads. Those who advertise are clearly investing in marketing, so you can reach out to them specifically (“Saw your billboard – ever thought of having an AI rep that talks to interested clients online? We can help capture even more leads from your advertising efforts.”).
Partnerships and White-Labeling: Partnering can accelerate GTM by leveraging others’ relationships. For instance, marketing agencies or web design firms that serve small businesses might love to offer AI clones as a new service (without building it themselves). You could white-label your solution for agencies – they sell it to their clients as “Website AI Assistant” and either you handle the backend anonymously or you co-brand. They get to appear cutting-edge and you get distribution. Similarly, partnering with industry-specific software providers could work: e.g. a company that makes practice management software for clinics might integrate your chatbot as an add-on. Those deals can be longer-term and require integration, but are worth exploring once you have a stable product. Local business associations or chambers of commerce can also be channels – you might give a talk or webinar through the chamber, educating members about AI opportunities. That builds credibility and generates leads.
Demonstrations and Free Trials: Many professionals will be skeptical until they see it. Setting up demo AI clones for fictional or famous personas in each field can wow potential clients. For example, host a demo on your website like “Chat with Einstein, Attorney at Law (AI demo)” where they can play and see the responsiveness. Better yet, if you secure one or two early-adopter clients, get permission to use their AI (or a clone of it with anonymized content) as a demo – people like to see something relevant to them. Offering a free trial or pilot period can lower the barrier, e.g. “Try your AI clone free for 14 days on your own website.” This could be resource-intensive if done bespoke, so perhaps offer a basic-version trial. Alternatively, structure a money-back guarantee on the setup fee if the client isn’t satisfied within X days.
Addressing Trust and Objections: Expect a lot of questions and objections in the sales process. Common ones: “Will this actually work? I don’t want it saying the wrong thing.”; “I’m not tech-savvy – will it be a hassle to manage?”; “My clients prefer a human touch – will this turn them off?”; “Is it secure – where does the data go?”; and of course “It sounds expensive – what’s the ROI?”. To overcome these, prepare case studies and testimonials (even hypothetical scenarios at first, or borrow stats from industry reports). For example: show how a law firm chatbot can answer 100 common questions and capture 50 leads in a month that the firm might otherwise have missed. Emphasize that the AI is there to assist, not replace the professional – it handles the mundane repetitive inquiries so the professional can focus on complex tasks and actual client meetings. Also stress that it can seamlessly hand off to a human whenever needed (e.g. “If the AI can’t handle something, it will notify your staff to follow up – so nothing falls through the cracks”). On the tech fear, reassure them that you handle everything end-to-end; they don’t need any technical skill. Use analogies: “It’s like hiring a virtual assistant who never sleeps – we train them for you, and you just enjoy the results.” For security, have a clear answer: e.g. “All conversations are encrypted and stored securely, we don’t use your data to train any models outside your clone, and we can even deploy it on-premises if needed for compliance.” Being upfront about limitations is also good for trust: acknowledge it’s not perfect and show how you mitigate errors (via the review-and-refine process). If you have any certifications or known partnerships (say you’re using a reputable AI API known to be secure), mention those. Ultimately, building trust may require doing some pilot projects at low cost to get referenceable successes. You might even do a couple of free or at-cost implementations for influential local figures (like a well-known real estate agent or a clinic) just to get a success story you can then market.
Marketing Collateral: Develop simple, clear marketing materials – a website with an explainer, short video demos, and one-pagers for each industry highlighting pain points and how the AI clone helps, with quotes and stats. For instance, a flyer or PDF for attorneys might start: “Tired of fielding the same questions from prospective clients? An AI Legal Assistant on your website can handle initial consultations, 24/7, and free up your time. Law firms using chatbots have seen a 30% increase in client conversion and reduced wasted consultation time. Stay ahead of the curve with your own firm’s AI – ethically compliant and always on-message.” – followed by features. Create similarly tailored messaging for doctors (focusing on patient satisfaction and reduced admin burden, citing how many hours doctors waste on admin work that AI can cut) and for realtors (focusing on capturing online leads and instant responses, since sellers and buyers often go with the first responsive agent).

Sales Cycle Expectations: Prepare that some professions have longer decision cycles. A solo realtor might decide quickly on their own, but a doctor in a small group may need to consult partners or an office manager. Lawyers might be cautious and maybe consult their bar association guidelines or want to see it in action elsewhere first. That’s why social proof and education are so important – the more they see AI assistants becoming common (perhaps via media or peers), the easier the sale. Being early means you’ll do more evangelizing. Possibly organize a webinar or workshop: e.g. “AI for the Small Law Firm – what you need to know,” providing value by educating (not just a sales pitch). This can generate leads who trust you as an expert.

White-Labeling & Agency Partnerships: As mentioned, this could be powerful. Many local professionals rely on agencies for their website and marketing. Those agencies are now themselves exploring AI to offer to clients. By making your solution white-label friendly, an agency could bundle it as part of a “premium website package” or “digital marketing retainer”. You could provide a portal where an agency user can create and manage clones for each of their clients, with the agency’s branding on the interface. The end professional might not even know your company name – and that’s fine if the volume is there. This approach could significantly cut your cost of acquisition, because the agency acts as a channel. To get in with agencies, you might attend or sponsor events that agencies go to, or simply directly reach out to some known local marketing firms with a pitch. Offering a revenue share or discount for multiple clients can entice them (“Resell our AI clones and get 20% of all fees” or “For agencies: 5 clones for the price of 4”). Agencies will care that your solution doesn’t make them look bad, so you may need to prove it out with a pilot for one of their willing clients first. If successful, they could introduce it to many more.

Scaling Sales: Initially, much will be manual (founder-driven sales). As you learn what resonates, you can scale through more systematic approaches: content marketing (SEO – e.g. writing blogs like “Top 5 Ways AI Can Boost a Small Law Firm’s Revenue” to attract inbound interest), possibly paid ads targeting keywords like “chatbot for [industry]”. Given the novelty, PR could also help – a story in a local business journal or a niche trade publication (“Local startup creates AI ‘clones’ of doctors to help with patient queries”) could bring inbound leads and credibility.

Throughout GTM, a key is trust – you’re asking a professional to stake their reputation on an AI. Emphasizing that they remain in control is vital. The AI clone should be pitched as an extension of them, not an independent actor. Perhaps use language like “digital assistant” more often than “clone” when talking to clients, as “clone” can spook some (implies replacement). Save the fun “AI clone” phrasing for marketing copy where appropriate, but one-on-one, frame it as giving them a superpower of being available 24/7 in a controlled, safe way.

5. Tech Stack Overview

Building bespoke AI clones requires a robust yet flexible tech stack. Below we outline the key components and choices, from AI models to databases and integration tools, to ensure the solution is effective and maintainable.

Core AI Models: At the heart is the language model that will converse as the professional. There are two main routes: use a large API-driven model like OpenAI’s GPT series, or use an open-source model (possibly fine-tuned) that you host.

Proprietary APIs (OpenAI, etc.): Using models like GPT-4 or GPT-3.5 via API offers excellent language capabilities out-of-the-box. GPT-4, for instance, can produce very fluent, contextually appropriate answers and follow instructions for tone. This means less work to get a good persona mimic. You can add system or user instructions to set the style (e.g. “Always respond in a calm, compassionate tone as Dr. Smith would.”). OpenAI also allows some degree of fine-tuning on their smaller models, which could be used to incorporate a professional’s specific style or canned answers. The benefit of APIs is convenience and quality – you leverage cutting-edge models maintained by others. The downside is cost (API calls incur fees, though, as discussed, these are manageable for moderate usage) and dependency on external services (which raises privacy concerns if sensitive data is flowing to a third party, as in medical scenarios). Other providers like Anthropic (Claude) or Cohere also offer strong models via API.
Open-Source Models: There’s a growing array of open LLMs (like LLaMA 2, GPT-J, etc.) that you can host on your own servers. These give you more control over data (everything stays in-house) and potentially lower variable costs (once you invest in hardware or cloud instances). However, raw open models may not match GPT-4’s polish, especially for complex conversation or strict persona fidelity. That said, you can fine-tune open models on domain data. For example, you could maintain a fine-tuned 7B or 13B parameter model for each vertical (or even each client, though that may not scale well) that captures common Q&A in that domain. Fine-tuning can make a model more “on brand” but requires a good amount of training data. Alternatively, instruction-tuning with examples of how the persona responds can help. Some people have successfully run personal clones on local models using RAG to supplement them. A middle-ground approach is using smaller models for cheaper tasks and calling the big model only when needed. For instance, use an open model to answer simple, repetitive questions (after all, a question like “What are your hours?” doesn’t need GPT-4), and use GPT-4 for more nuanced queries or when the open model isn’t confident. This kind of routing can optimize costs.
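The routing idea at the end of that list can be sketched as a fuzzy FAQ match placed in front of the expensive model. Everything here is illustrative – the FAQ entries, the 0.8 similarity cutoff, and `call_large_model` (a stand-in for a real GPT-4 call) are assumptions:

```python
import difflib

# Canned answers a small/cheap path can serve directly.
FAQ = {
    "what are your hours": "We're open Mon-Fri, 9am-5pm.",
    "where are you located": "123 Main St, Springfield.",
}

def call_large_model(query: str) -> str:
    # Stand-in for an API call to a large model (e.g. GPT-4).
    return f"[large-model answer to: {query}]"

def route(query: str):
    """Return (model_tier, answer): close FAQ matches are answered
    cheaply; everything else escalates to the large model."""
    normalized = query.lower().strip("?! ")
    match = difflib.get_close_matches(normalized, FAQ.keys(), n=1, cutoff=0.8)
    if match:
        return ("small", FAQ[match[0]])
    return ("large", call_large_model(query))
```

In production the cheap path could be a fine-tuned small model rather than literal canned strings, but the cost-routing structure is the same.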

Retrieval System (Knowledge Base): As discussed, Retrieval-Augmented Generation (RAG) will likely be a core part of the stack. This means you need a vector database to store embeddings of the professional’s documents and content. Popular choices: ChromaDB (open-source and easy to integrate), Pinecone (a managed service that scales well), or even Elasticsearch with vector capabilities. The pipeline is: when a user asks something, embed the query (using an embedding model such as OpenAI’s text-embedding-ada-002 or a SentenceTransformer like all-MiniLM), query the vector DB for similar content, and feed the top results into the LLM’s context. This ensures the clone’s answers are grounded in the professional’s real information. The RAG approach also helps reduce hallucinations – if the model tries to make something up, the absence of relevant context should ideally make it say “I’m not sure” instead (though in practice, some prompt work is needed to encourage deferral on unknowns). The vector DB will need to be updated whenever the client’s knowledge base changes (so incorporate that into maintenance workflows).
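The retrieve-then-generate flow can be illustrated end to end with a toy bag-of-words "embedding" standing in for a real model, and an in-memory list standing in for the vector DB (a production system would use a real embedding model and a store like Chroma or Pinecone):

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: word counts. A real system would call an
    # embedding model here (e.g. a SentenceTransformer).
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# The professional's documents, pre-embedded ("vector DB").
docs = [
    "Our office hours are 9am to 5pm, Monday through Friday.",
    "Initial consultations are free and last 30 minutes.",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list:
    """Return the top-k documents most similar to the query; these
    would be prepended to the LLM prompt as grounding context."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]
```

Swapping `embed` for a real model and `index` for a vector DB query leaves the overall shape of the pipeline unchanged.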

Prompt Orchestration & Persona Alignment: To get the AI to reliably mimic the client’s persona, you’ll craft a system prompt or few-shot prompt that defines the role. For example: “You are [Professional Name], a [description of credentials]. You speak in a [friendly/authoritative/etc.] manner. Your goal is to assist users with accurate information about [professional’s domain], and to do so with empathy and clarity. You always include a brief friendly greeting with the user’s name, and you end conversations with an offer to help further or schedule a meeting.” etc. This acts as an initial guideline. Additionally, you might include example Q&A pairs in the prompt (few-shot learning) demonstrating ideal answers. These examples could come from the client (like actual emails they’ve answered or an intake script). This helps the model pick up subtle style points. With GPT-4’s large context window, you could even include quite a few examples or documents directly in the prompt if needed (but vector DB is more scalable).

It’s important to have a consistent prompt template that you can programmatically fill with each client’s details. That way, improvements to the prompt apply to all. Tools like LangChain facilitate managing prompts and chaining steps (retrieval then generation).
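A minimal sketch of such a shared template, filled programmatically per client (the field names and wording are illustrative, not a recommended prompt):

```python
PERSONA_TEMPLATE = (
    "You are {name}, a {credentials}. You speak in a {tone} manner. "
    "Your goal is to assist users with accurate information about "
    "{domain}, with empathy and clarity. If you are unsure of an "
    "answer, say so and offer to connect the user with {name}'s office. "
    "End every conversation by offering to schedule a meeting."
)

def build_system_prompt(client: dict) -> str:
    """Fill the shared template with one client's details, so any
    improvement to the template propagates to every clone."""
    return PERSONA_TEMPLATE.format(**client)

prompt = build_system_prompt({
    "name": "Dr. Smith",
    "credentials": "board-certified family physician",
    "tone": "calm, compassionate",
    "domain": "general family medicine",
})
```

Because the template is shared, a single wording fix (say, a stronger "defer on unknowns" clause) ships to all clients at once instead of being re-edited per deployment.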

Tools and Integrations: For agent-like behavior, incorporate tools. For scheduling, you might use an API integration with Google Calendar or Calendly. If you’re not building those integrations from scratch, you could use services like Zapier or Make (Integromat) that trigger actions when the AI outputs a certain structured command. For example, if the AI emits a structured scheduling command containing [ClientName], [Date], and [Time], a backend could parse it and create an event via the Google Calendar API, then the AI confirms the booking. OpenAI’s function calling feature could be handy: you define a function like schedule_appointment(date, time, name) and the AI can decide when to call it with given arguments, which you then handle in code. This structured approach avoids the AI free-form interacting with tools – it outputs JSON for the function, which your code executes, making it more robust.
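A sketch of what this looks like in practice: a function schema in the shape OpenAI's function-calling API expects, plus a local dispatcher for the call the model emits. The calendar write itself is stubbed out here:

```python
import json

# Function schema in the JSON-Schema shape used by function calling.
SCHEDULE_FN = {
    "name": "schedule_appointment",
    "description": "Book an appointment on the professional's calendar.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "date": {"type": "string", "description": "YYYY-MM-DD"},
            "time": {"type": "string", "description": "HH:MM, 24-hour"},
        },
        "required": ["name", "date", "time"],
    },
}

def handle_function_call(call: dict) -> str:
    """Dispatch a model-emitted function call to backend code.
    `call` mirrors the model output: {"name": ..., "arguments": "<json>"}.
    The actual calendar write (Google Calendar / Calendly) is stubbed."""
    if call["name"] == "schedule_appointment":
        args = json.loads(call["arguments"])
        # create_calendar_event(args) would go here in a real system.
        return f"Booked {args['name']} on {args['date']} at {args['time']}."
    raise ValueError(f"Unknown function: {call['name']}")
```

The key design point is that the model only ever proposes a structured call; your code validates and executes it, so a hallucinated argument fails loudly instead of silently hitting an external API.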

If providing a chat widget on websites, you’ll need a frontend component – maybe a Javascript widget or integration with existing chat systems. There are libraries and services to embed chat UIs that connect to your backend API. Alternatively, using a platform like EveryAnswer or Delphi under the hood could be considered if you wanted to build on someone else’s tech initially, but that may limit customization.

Voice and Multimodal: If you plan to offer voice interactions (like a voice clone that can talk on the phone, or produce videos as some realtors want), additional components enter the stack. For voice, you’d need speech-to-text (to convert the caller’s voice to text for the AI) and text-to-speech (TTS) to respond in the professional’s voice. Services like ElevenLabs or Microsoft’s neural TTS can generate very realistic custom voices given a sample of the person’s speech. For inbound phone, you could integrate with a telephony API like Twilio to handle calls and feed them into the AI. For video avatars (an AI clone that appears as a talking-head video), there are APIs such as HeyGen or D-ID where you supply text and a digital avatar (trained on a person’s appearance) to generate a video. These are more cutting-edge and might be phase-2 offerings, but the tech is there. Real estate agents, as noted, are already exploring AI avatars for property tours; incorporating that could differentiate your service later.

Off-the-Shelf vs Custom Development: There are some off-the-shelf “chatbot on your data” solutions (like EveryAnswer, Chatfuel AI, etc.). One option could be to use such a platform behind the scenes initially – basically configure client bots on those and charge your margin. However, that may limit flexibility (especially for adding custom tools, voice, etc.). Building your own stack gives you control, and with open-source components it’s quite feasible. For instance, using the LangChain framework, you can wire up an LLM, a vector store, and some custom logic for each query in a few hundred lines of code. Published DIY examples have used Ollama to run a local LLaMA model with a one-line command. That and similar innovations show you don’t need a huge engineering team to build these systems.

One critical piece will be a multi-tenant architecture if this becomes SaaS – meaning you can serve many clients from one system securely. This means separating each client’s data (each gets their own vector index or at least namespace), and ensuring prompts and histories don’t leak between clients. It also means scaling the infrastructure: possibly using containerization or serverless functions for handling requests, and scaling vector DB and caching etc. Early on, a lot can be done in a scrappy way (maybe even each client has a dedicated small server if needed), but eventually consolidating to a cloud infrastructure (AWS/GCP/Azure) for reliability will be wise.
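The tenant-isolation requirement can be illustrated with a toy store where every read and write is keyed by tenant (a real system would use per-client indices or namespaces in the vector DB, plus request authentication):

```python
class TenantStore:
    """Toy multi-tenant store: each client gets an isolated namespace,
    and every operation must name its tenant explicitly."""

    def __init__(self):
        self._data = {}  # tenant -> {key: value}

    def put(self, tenant, key, value):
        self._data.setdefault(tenant, {})[key] = value

    def get(self, tenant, key):
        # A lookup can only ever see its own tenant's namespace, so one
        # client's documents cannot leak into another client's prompt.
        return self._data.get(tenant, {}).get(key)

store = TenantStore()
store.put("dr-smith", "hours", "9am-5pm weekdays")
store.put("acme-law", "hours", "by appointment only")
```

The same discipline applies one layer up: conversation histories and prompt templates should be fetched through tenant-scoped accessors, never from shared globals.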

Maintenance and Updates (Tech Perspective): We touched on updating the knowledge base – you’ll need a simple process for that, possibly an admin interface where either you or the client can upload new info and re-embed it. Versioning of the AI’s prompt or model might be needed: when the underlying models improve (e.g. GPT-4 gets an update or you switch to GPT-5, or you fine-tune new vertical models), you should test that it doesn’t break the clone’s behavior. It might be useful to maintain a set of test queries for each client’s clone that you run after any major change.
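Such a per-clone test suite can be as simple as canned queries with required substrings, re-run after any prompt or model change. The suite entries and `demo_clone` below are illustrative stand-ins for a deployed bot:

```python
# Per-client regression suite: each case is a query plus a substring
# the answer must contain for the clone to pass.
REGRESSION_SUITE = [
    {"query": "What are your hours?", "must_contain": "9am"},
    {"query": "Can you diagnose my rash?", "must_contain": "appointment"},
]

def run_regression(clone, suite):
    """Return a list of failure descriptions (empty list = all passed)."""
    failures = []
    for case in suite:
        answer = clone(case["query"])
        if case["must_contain"].lower() not in answer.lower():
            failures.append(
                f"{case['query']!r}: missing {case['must_contain']!r}")
    return failures

def demo_clone(query: str) -> str:
    # Stand-in for the deployed bot; real runs would hit its API.
    canned = {
        "What are your hours?": "We're open 9am-5pm.",
        "Can you diagnose my rash?":
            "I can't diagnose, but I can book you an appointment.",
    }
    return canned.get(query, "")
```

The second case doubles as a safety check: it verifies the clone deflects a diagnosis request toward booking rather than attempting medical advice.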

Monitoring is also part of maintenance: track usage (for billing and scaling decisions), track errors, and keep conversation logs (with sensitive-data protections). If an issue arises (e.g. the AI gave an inappropriate answer), you want to be able to trace why – was it the prompt, the data, or the model?

Security-wise, ensure all endpoints are secured – if a client embeds your chat widget, you don’t want others to be able to hit an API that exposes someone else’s data by guessing an ID. Add authentication where needed: most of what the bot serves is public information, but the backend where clients update their data must be locked down.
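A minimal sketch of per-client request authorization (the keys and tenant IDs are made up; a real deployment would store hashed keys in a database and rotate them):

```python
import hmac

# Per-client API keys, stored server-side (illustrative values).
API_KEYS = {"dr-smith": "k3y-abc123", "acme-law": "k3y-def456"}

def authorize(tenant: str, presented_key: str) -> bool:
    """Constant-time check that a request's key matches its tenant's key,
    so guessing another client's tenant ID alone reveals nothing."""
    expected = API_KEYS.get(tenant)
    if expected is None:
        return False
    return hmac.compare_digest(expected, presented_key)
```

Every data-touching endpoint (chat, knowledge-base updates, log access) would call this guard before doing any tenant-scoped lookup.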

Finally, consider data analytics: Over time, you’ll collect a lot of Q&A pairs. This data (with permission and anonymization) could help improve the system, perhaps even fine-tune a generic model for each vertical using aggregate data (not individual client confidential stuff, but general queries pattern). For example, if you have 50 dentists using the system, you have a lot of insight into what patients commonly ask. You could fine-tune a model to be exceptionally good at dental inquiries, which benefits all dentist clients. This is a virtuous cycle where more usage leads to a smarter product, which leads to better results and more usage. Just ensure any such approach respects privacy (you might need an opt-in in sectors like law/medicine).

Example Tech Stack Summary: To crystallize, here’s a hypothetical stack choice:

Frontend: Chat widget using React that calls your backend API. Possibly white-labeled per client.
Backend: Python (FastAPI or Flask) server handling chat requests. Uses LangChain to orchestrate.
LLM: OpenAI GPT-4 API for primary responses (with fallback to GPT-3.5 for cost savings where appropriate). For privacy-sensitive clients, an option to use a local model via Hugging Face (like a fine-tuned Llama 2) running on a GPU-equipped AWS EC2 instance.
Vector DB: Pinecone or Chroma for document embeddings and retrieval.
Embeddings: OpenAI’s text-embedding-ada-002 or SentenceTransformer model for generating vector embeddings of client documents.
Storage: An SQL or NoSQL DB for storing client profiles, conversation logs, etc.
Integrations: Calendar API integration microservice (Node.js or Python) to handle scheduling actions; Twilio for telephony; etc.
Monitoring: Logging with Sentry or similar for errors; analytics with a simple dashboard showing queries per day, etc.
Security: HTTPS, encryption of stored sensitive fields, and separate indices per client in vector DB.

This setup would allow you to rapidly deploy clones and incrementally improve them. It also balances usage of best-in-class AI services with in-house control where needed.

In summary, the technology to build bespoke AI clones is readily available and has matured to the point that even a small startup can assemble a powerful system. By combining a strong LLM (to handle natural language and reasoning) with the professional’s own data (to provide factual grounding) and adding integration points for actions, you can create a compelling digital assistant. The key is to design it in a modular way so you can adjust each part (swap models, change data sources, add a new tool) as requirements evolve. With this stack in place, you’ll be well-equipped to deliver AI clones that are knowledgeable, safe, and helpful – effectively giving local professionals a scalable digital version of themselves to amplify their reach and efficiency.
