While companies increasingly look to automated systems to manage rising account volumes, new research suggests that many customer-facing models, including chatbots, remain vulnerable to exploitation.

01/28/2026 2:00 P.M.
The business community is caught in a disconnect between the rapid adoption of generative AI and the security measures needed to protect consumer data. According to data from Fuel iX, an enterprise AI platform, companies are projected to spend $644 billion on generative AI in 2025, but only $2.6 billion — roughly 0.4% of that total — is earmarked for AI-specific security.
This spending gap comes as organizations face mounting pressure to automate tasks. High interest rates and inflation have led to a surge in delinquent accounts, pushing 18% of firms to invest in AI or machine learning in 2024, up from 11% the previous year, TransUnion found.
Chatbot Risks
Research from Milton Leal, an AI researcher at TELUS Digital, involving 24 leading AI models configured as enterprise chatbots, found that every single model demonstrated exploitable security vulnerabilities. The study noted that attack success rates ranged from 1% to 64%, creating significant risks for highly regulated industries.
The study identified three core vulnerabilities across chatbot systems:
- Inaccurate or incomplete guidance: Financial services chatbots may misquote interest calculations or reveal eligibility criteria before verifying a customer's identity. These automated answers carry the “same legal weight” as human advice, yet quality assurance frequently lags behind the speed of deployment.
- Sensitive data leakage: Attackers use conversational pressure and creative prompts to bypass safeguards. In one instance, a chatbot was tricked into generating fabricated customer testimonials that could be “weaponized in phishing campaigns.”
- Operational opacity: Many systems lack the audit trails and data logs that regulators require, leaving organizations unable to reconstruct how a complaint was mishandled. These weaknesses are “architectural, not rare edge cases,” and the logging sketch after this list shows the kind of record that closes the gap.
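On that last point, the gap is as much engineering as policy. Below is a minimal sketch of a structured audit log for chatbot exchanges; the field names, logger setup, and sample values are illustrative assumptions rather than any specific regulatory schema, but they show how each interaction can be captured as a timestamped, reconstructable record.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger: every chatbot exchange becomes one structured,
# timestamped JSON record that can be replayed if a complaint is disputed.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("chatbot.audit")

def log_exchange(session_id: str, model: str, prompt: str, response: str) -> None:
    """Write one exchange to the audit trail (field names are hypothetical)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "model": model,
        "prompt": prompt,
        "response": response,
        # Digest of the exchange supports later tamper-evidence checks.
        "digest": hashlib.sha256((prompt + response).encode()).hexdigest(),
    }
    audit_log.info(json.dumps(record))

# Hypothetical usage with invented values.
log_exchange("sess-0042", "model-x", "What is my current balance?",
             "Your balance is $120.00. This is an attempt to collect a debt.")
```

In production, these records would flow to durable, append-only storage rather than a console logger; the point is that reconstruction is only possible if the record exists at all.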
Unlike traditional software, which is deterministic, generative AI is probabilistic. This means the same prompt can yield different results across multiple interactions. For accounts receivable management companies, this unpredictability is a liability. A chatbot that provides a compliant disclosure in nine out of 10 interactions but fails on the 10th could trigger a regulatory investigation or a class-action lawsuit.
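A toy simulation makes the stakes concrete. The sketch below assumes an invented response distribution (90% compliant, 10% non-compliant); nothing about it reflects any real model, but it shows why repeated identical prompts are no guarantee of identical behavior.

```python
import random

# Toy model of probabilistic generation. The responses and their weights
# are invented for illustration; real models sample tokens from learned
# distributions, but the failure mode is the same.
RESPONSES = {
    "Balance quoted with the required compliance disclosure.": 0.9,
    "Balance quoted, required disclosure omitted.": 0.1,
}

def generate(prompt: str) -> str:
    """Sample one response to a fixed prompt, as a generative model would."""
    choices, weights = zip(*RESPONSES.items())
    return random.choices(choices, weights=weights, k=1)[0]

# Ten identical prompts do not guarantee ten identical answers.
for i in range(1, 11):
    print(f"Interaction {i}: {generate('What do I owe on my account?')}")
```

On any given run, the non-compliant answer can surface anywhere in the sequence, which is exactly the one-failure-in-ten scenario described above and why per-interaction validation matters more than average performance.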
“In regulated industries like financial services … even a 1% vulnerability rate may violate compliance requirements,” the Fuel iX report states. The risk is compounded by the fact that 76% of organizations currently prioritize speed to market over security validation.
Some companies are attempting to mitigate these risks by shifting away from public cloud-based platforms toward “sovereign AI” — models built or fine-tuned on proprietary data. This approach enables better data lineage and greater control over the origin and processing of information. Still, the upfront capital expenditure remains a hurdle for many mid-sized and smaller firms.
The labor market is also feeling the impact. A recent analysis from the Stanford Digital Economy Lab found that early-career workers in AI-exposed occupations have seen a 13% relative decline in employment since 2022. This suggests that while AI is becoming a necessity for scaling operations, it is also disrupting the traditional entry-level talent pipeline that companies have historically relied on for future leadership.
ACA’s Take
For members navigating these waters, ACA International offers a range of resources to help bridge the gap between innovation and compliance.
Remember, subscribe to ACA Daily and Member Alerts under your My ACA Assistant profile when logged in to acainternational.org.