Should You Disclose When AI Writes Your Customer Messages?

An auto repair shop in Dallas uses Podium's AI assistant to respond to customer texts. A customer sends "Hey, my check engine light is on, what should I do?" and within 15 seconds gets back a well-crafted reply: "Thanks for reaching out! A check engine light can indicate a range of issues from a loose gas cap to more serious engine problems. I'd recommend bringing it in for a diagnostic scan so we can identify the exact cause. Would tomorrow morning work for you?"

The message is signed with the service advisor's name. It was written entirely by AI. The customer thinks they are texting with Mike, the guy who worked on their car last time. They are not. Mike is at lunch. Mike has no idea this conversation is happening.

Is this a problem? The shop owner does not think so. The customer got a fast, helpful reply. An appointment got booked. Everyone is happy. But "everyone is happy because nobody knows what actually happened" is not an ethical framework. It is a description of successful deception.

The Scale of AI-Written Communication

This is not a niche practice. Platforms like Podium, Broadly, Birdeye, and ServiceTitan have integrated AI message generation into their core offerings. Many of these tools compose responses automatically and either send them directly or queue them for one-click approval by a human who may or may not actually read what the AI wrote.

The speed and quality of these AI-generated messages are genuinely impressive. They are often better written than what a busy service advisor would produce between oil changes. They respond faster, maintain a consistent tone, and never forget to include a call-to-action. From a pure customer service standpoint, the output is good.

But "good output" and "honest communication" are not the same thing. A customer who believes they are receiving personal attention from a knowledgeable human is making decisions based on that belief. When the reality is different, the customer's decision-making is built on a false premise.

Why Businesses Resist Disclosure

The arguments against disclosure are predictable and not entirely unreasonable:

"Customers will not trust AI responses." This is probably true for some customers. But the solution to "customers would not like what we are doing if they knew" is not "make sure they never find out." It is "either stop doing it or make it trustworthy enough that disclosure is not a liability."

"Our competitors are not disclosing." Also probably true. But competitive pressure has never been a valid ethical justification. It is a reason to do something, not a reason it is right. And the competitive landscape will shift as disclosure becomes legally required in more contexts.

"The AI is just helping us draft replies faster. It's the same as using spell check." This analogy fails on inspection. Spell check corrects your writing. AI generates writing. There is a meaningful difference between a human composing a message with tool assistance and a machine composing a message with optional human review. The customer's reasonable expectation is that messages signed by a person were written by that person.

The Legal Direction Is Clear

California's AB 3030, which took effect in 2025, requires healthcare providers to disclose when patient communications are AI-generated. Several other states have introduced similar bills covering broader categories of businesses. The FTC has signaled that undisclosed AI impersonation of humans in commercial contexts could constitute an unfair or deceptive practice.

The regulatory direction is unmistakable: nondisclosure of AI-generated communications is becoming a legal liability, not just an ethical question. Businesses that build their customer communication workflows around undisclosed AI are setting themselves up for expensive retrofitting when disclosure requirements reach their industry.

Even without specific legislation, existing consumer protection frameworks may already cover this. If a customer makes a purchasing decision based on what they reasonably believed was a human expert's recommendation, and it was actually generated by an AI that has no expertise and no accountability, the potential for a deceptive practices claim exists.

What Disclosure Actually Looks Like

Disclosure does not have to be heavy-handed or disruptive. It can be as simple as a brief note at the end of an AI-generated message: "This message was drafted with AI assistance." Some businesses use a small tag in their chat interface indicating AI involvement. Others include a note in their initial customer communication explaining that they use AI tools in their messaging.
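Mechanically, the footer approach is trivial to implement. A minimal sketch, assuming a messaging pipeline where you know whether the AI composed the body (the function name and label text here are illustrative, not from any particular platform):

```python
AI_DISCLOSURE = "This message was drafted with AI assistance."

def with_disclosure(body: str, ai_generated: bool) -> str:
    """Append a disclosure footer to AI-generated messages.

    `ai_generated` should be True whenever the AI composed the
    message body, not merely corrected spelling or grammar.
    """
    if ai_generated:
        return f"{body}\n\n{AI_DISCLOSURE}"
    return body
```

The point is not the code, which any platform could ship in an afternoon; it is that the flag has to be set honestly at the point where the message is generated.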

The key is that the customer should be able to distinguish between a message composed by a human who knows their situation and a message generated by a system that is pattern-matching against their input. These are different types of communication, and the customer is entitled to know which one they are receiving.

Some platforms are starting to build disclosure options into their AI features. Podium, for example, offers the ability to add AI disclosure labels, though it is not enabled by default. This is a step in the right direction, but making disclosure opt-in rather than the default tells you where the platform's priorities are.

The Trust Argument for Disclosure

Here is what the anti-disclosure crowd misses: customers who discover undisclosed AI feel betrayed in a way that is disproportionate to the actual harm. The message itself might have been perfectly helpful. But finding out that "Mike" never actually read their text, that a machine composed the response and attached a human name to it, creates a visceral reaction that damages the relationship far more than the disclosure itself would have.

This is because the issue is not about AI quality. It is about honesty as a baseline expectation. Customers can accept that businesses use technology to operate more efficiently. What they cannot accept is being deceived about who they are communicating with. That feels personal in a way that other business practices do not.

Businesses that disclose proactively tend to find that customers appreciate the honesty and adjust their expectations accordingly. "This response was generated by our AI assistant. A team member will review your request shortly" sets clear expectations. The customer knows what they are getting and can calibrate accordingly. If the AI's answer is good enough, great. If they want human interaction, they know to wait for it.

The Practical Path Forward

For service businesses currently using AI-generated customer messages without disclosure, the transition is straightforward:

Start by auditing which of your customer communications involve AI. Many businesses are surprised by the extent. If your platform has AI features enabled, they may be composing messages you assumed a human was writing.

Next, implement disclosure for any message that is primarily AI-generated. A message where a human types the core content and AI cleans up grammar is different from one where AI generates the entire response. The latter requires disclosure. The former is closer to tool assistance.

Finally, review your AI messaging prompts. Many businesses have AI systems making service recommendations they would never authorize a new employee to make. If your AI is diagnosing problems, recommending services, or quoting prices without human review, disclosure is the bare minimum. The better fix is human oversight.
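The policy these steps describe — disclose when AI is primarily responsible, treat grammar cleanup as tool assistance, and hold AI-generated recommendations for human review — can be sketched as a simple routing rule. Everything here (the `AiInvolvement` levels, the review-queue flag) is a hypothetical illustration of the workflow, not any platform's actual API:

```python
from enum import Enum, auto

class AiInvolvement(Enum):
    HUMAN_WRITTEN = auto()  # human typed it, no AI involved
    AI_POLISHED = auto()    # human content, AI fixed grammar/tone
    AI_GENERATED = auto()   # AI composed the entire response

DISCLOSURE = "This response was generated by our AI assistant."

def prepare_outgoing(body: str, involvement: AiInvolvement,
                     makes_recommendation: bool) -> dict:
    """Apply the disclosure policy before a message is sent.

    Fully AI-generated messages get a disclosure footer; those
    that also recommend services or quote prices are additionally
    queued for human review instead of being sent automatically.
    """
    disclose = involvement is AiInvolvement.AI_GENERATED
    text = f"{body}\n\n{DISCLOSURE}" if disclose else body
    needs_review = disclose and makes_recommendation
    return {"text": text, "queue_for_human_review": needs_review}
```

For example, a fully AI-generated diagnostic recommendation would come back with the disclosure footer and the review flag set, while a human-written reply that AI merely polished would pass through untouched.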

The businesses that will navigate this transition successfully are the ones that treat disclosure not as a liability but as an opportunity to differentiate. In a market where most competitors are using undisclosed AI, being the business that says "we use AI tools, and here is exactly how" is a genuine competitive advantage.

Your customers will eventually find out whether your messages are AI-generated. The only question is whether they find out from you or from someone else. One of those builds trust. The other destroys it.