Wed. Nov 27th, 2024

What Are The Dangers Of AI Chatbots?

Deploying chatbots to interact with retail clients raises several practical issues and risks for the financial sector.

The financial industry is no stranger to leveraging AI technology to provide user-friendly, efficient, and effective services. Adoption is expected to accelerate rapidly with the advancements provided by generative AI (GenAI). An emerging and transformative use of GenAI is the deployment of GenAI-powered chatbots to interact with clients.

Nonetheless, it is still a budding technology.

Letting a GenAI-powered chatbot interact with clients raises several nuanced and significant risks. With the Financial Conduct Authority (FCA) keenly championing consumer protection, here are some risks that should not be overlooked.

Core Principles

In general, the FCA’s approach to supporting consumer protection is informed by its Principles for Businesses, guidance, and rules. Specifically, Principle 12, also known as the Consumer Duty, requires firms to “act to deliver good outcomes for retail customers” across the whole customer journey.

On that note, the FCA’s accompanying handbook guidance on the Consumer Duty gives, as an example of failing to meet the Duty at the product or service design stage, the use of artificial intelligence within products in a way that could result in consumer harm.

Additionally, the FCA has enforcement powers under the Consumer Protection from Unfair Trading Regulations (CPUT), which prohibit unfair commercial practices.

Generative AI-Specific Issues

Due to the nature of the large language models (LLMs) that power GenAI chatbots, there is an inherent danger that the model will generate answers that are inaccurate or even entirely invented, known as hallucinations. Responses may also be biased, discriminatory, or otherwise objectionable, and may, at a fundamental level, mislead consumers to their detriment.

A GenAI-powered chatbot is unlikely to be a suitable choice for carrying out regulated activities. Even where a chatbot is not used for regulated activities and is limited to offering only generic guidance, firms must be mindful of the overarching risk of consumer harm.

The particular risks linked to a GenAI-powered chatbot depend on the context in which the technology is deployed. For example, enabling staff to ask a chatbot questions about a policy handbook poses very different risks from deploying a chatbot that asks customers questions for creditworthiness assessments.

How To Minimize AI Chatbot Risks

Do Background Checks

The FCA expects companies that use AI to have a thorough understanding of the underlying technology and to consider the ‘worst case scenario’ for that use. They should also put in place security measures to protect their users. In practice, this means conducting an extensive risk assessment before deploying the technology into their ecosystems.

Engaging legal and technical experts before committing significant resources will help highlight critical risks at an early stage.

Be Transparent

While a disclaimer will not, by itself, discharge a firm’s consumer protection responsibilities, a clearly worded and prominent disclaimer helps educate consumers that the bot is AI and not human. It reduces the risk of consumers being misled or claiming unfair treatment. Transparency of this kind is also an important feature of the emerging EU Artificial Intelligence Act regime.

The language and presentation of a disclaimer should reflect the firm’s particular business context and how its customer base is likely to perceive interactions with the AI bot.
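
To illustrate one way such a disclosure could be surfaced, here is a minimal Python sketch that shows an AI-disclosure notice before any other content in a chat session. The wording and the `Session` structure are hypothetical illustrations, not a prescribed form of compliant disclaimer.

```python
from dataclasses import dataclass, field

# Hypothetical disclosure wording -- a firm would agree the actual text
# with its legal and compliance teams.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human adviser. "
    "It provides general information only, not regulated financial advice."
)

@dataclass
class Session:
    messages: list = field(default_factory=list)

    def start(self) -> None:
        # Surface the disclosure before any other message, so the consumer
        # sees it regardless of how the conversation begins.
        self.messages.append({"role": "system_notice", "text": AI_DISCLOSURE})

session = Session()
session.start()
print(session.messages[0]["text"])
```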

Eliminate Pestering And Pressure

The AI chatbot should never influence, pressure, or pester users, for example by using emotive or guilt-inducing language. Such practices might run counter to the Consumer Duty and could be considered a ‘dark pattern’, potentially breaching CPUT.

Take advantage of the controls provided by the AI bot provider to ‘blacklist’ emotionally charged wording and avoid accusations of pressure tactics.
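
As one illustration of such a control, the following Python sketch screens each bot reply against a block-list of pressuring phrases before it reaches the consumer. The phrase list and fallback wording are illustrative assumptions; in practice they would be maintained with compliance input.

```python
# Assumed, illustrative list of pressuring or emotive phrases to block.
PRESSURE_PHRASES = [
    "act now",
    "last chance",
    "you'll regret",
    "don't miss out",
]

# Assumed neutral fallback used whenever a reply is blocked.
FALLBACK = (
    "I can share general information about this product. "
    "Please take whatever time you need to decide."
)

def screen_reply(reply: str) -> str:
    lowered = reply.lower()
    for phrase in PRESSURE_PHRASES:
        if phrase in lowered:
            # Replace the whole reply rather than redacting words, so a
            # partially scrubbed but still pressuring message never ships.
            return FALLBACK
    return reply

print(screen_reply("Act now or you'll regret missing this rate!"))
```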

Emergency Brake

Generative AI technology is changing rapidly. Firms should monitor their AI bots regularly to evaluate performance, and be ready to deprecate a bot whenever there is evidence that it may be causing adverse consumer outcomes.

Consider building in functionality that allows consumers to escalate a conversation to human support and to report a bot response as problematic.
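
A minimal Python sketch of this pattern might combine a kill switch checked on every turn with a consumer-triggered escalation to human support, as below. The 2% threshold, the trigger phrase, and the class design are hypothetical assumptions.

```python
# Assumed threshold: deprecate the bot if more than 2% of replies
# (over at least 100 turns) are flagged as problematic.
COMPLAINT_RATE_THRESHOLD = 0.02

class ChatbotGate:
    def __init__(self) -> None:
        self.replies = 0
        self.flagged = 0
        self.enabled = True

    def record(self, was_flagged: bool) -> None:
        self.replies += 1
        self.flagged += int(was_flagged)
        if (self.replies >= 100
                and self.flagged / self.replies > COMPLAINT_RATE_THRESHOLD):
            # Evidence of adverse outcomes: stop serving bot answers.
            self.enabled = False

    def handle(self, user_message: str) -> str:
        # Escalate when the brake has been pulled or the consumer asks.
        if not self.enabled or user_message.strip().lower() == "talk to a human":
            return "Connecting you to a member of our support team..."
        return "(bot reply)"

gate = ChatbotGate()
print(gate.handle("talk to a human"))
```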

Identify, Implement, And Evaluate Guardrails

Identifying, implementing, and then continually evaluating the effectiveness of guardrails is essential to using AI chatbots safely and responsibly.

For example, most providers of generative AI tools offer controls to ‘dial down’ hallucinations. Some chatbot models also include the ability to focus the chatbot’s outputs on specific content and to constrain answers to ring-fenced materials.
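
The sketch below illustrates the ring-fencing idea in Python: the bot answers only from approved materials and refuses rather than improvises when nothing matches. The keyword retrieval and document snippets are deliberately naive, purely illustrative assumptions; a real deployment would pass the retrieved context to the LLM with a low temperature setting and instructions to answer only from that context.

```python
# Assumed, illustrative store of approved, ring-fenced materials.
APPROVED_DOCS = {
    "savings": "Our easy-access savings account pays interest monthly...",
    "fees": "The monthly account fee is set out in your product summary...",
}

REFUSAL = (
    "I can only answer from our approved product materials. "
    "Please contact support for anything else."
)

def retrieve(question: str) -> str | None:
    # Naive keyword match, standing in for real retrieval.
    words = set(question.lower().split())
    for topic, text in APPROVED_DOCS.items():
        if topic in words:
            return text
    return None

def answer(question: str) -> str:
    context = retrieve(question)
    if context is None:
        # No approved grounding material: refuse rather than risk a
        # hallucinated answer.
        return REFUSAL
    # A real system would call the LLM here, constrained to `context`.
    return f"Based on our materials: {context}"

print(answer("What are the fees on my account?"))
```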

Horizon Scanning

When it comes to artificial intelligence, the regulatory and enforcement position is quickly evolving.

From a consumer protection point of view, in line with the UK government’s AI white paper calling on regulators to assess their current regulatory powers, the Competition and Markets Authority (CMA) has already published a review of UK consumer protection and competition law to form an ‘early view’ on the potential consumer protection impacts of deploying AI foundation models.

The Bank of England (BoE), the Prudential Regulation Authority, and the FCA also issued a discussion paper in October 2022 seeking feedback on several issues related to artificial intelligence in financial services, including whether additional regulatory clarification would help regulated firms.

Within the discussion paper, the regulators signalled that further regulation may follow: in line with their statutory objectives and to support the safe and responsible adoption of artificial intelligence in UK financial services, the supervisory authorities confirmed that they:

“May need to intervene further to manage and mitigate the potential risks and harms AI may have on consumers, firms, and the stability and integrity of the UK financial system and markets”.

Kevin Moore - E-Crypto News Editor

Kevin Moore is the main author and editor for E-Crypto News.
