BOSTON (AP) — Developers, suppliers, and users of artificial intelligence must comply with existing state consumer protection, anti-discrimination, and data privacy laws, the Massachusetts attorney general cautioned Tuesday.
In an advisory, Attorney General Andrea Campbell pointed to what she described as the widespread increase in the use of AI and algorithmic decision-making systems by businesses, including technology focused on consumers.
The advisory is meant in part to emphasize that existing state consumer protection, anti-discrimination, and data security laws still apply to emerging technologies, including AI systems — despite the complexity of those systems — just as they would in any other context.
“There is no doubt that AI holds tremendous and exciting potential to benefit society and our commonwealth in many ways, including fostering innovation and boosting efficiencies and cost-savings in the marketplace,” Campbell said in a statement.
“Yet, those benefits do not outweigh the real risk of harm that, for example, any bias and lack of transparency within AI systems can cause our residents,” she added.
Falsely advertising the usability of AI systems, supplying an AI system that is defective, and misrepresenting the reliability or safety of an AI system are just some of the actions that could be considered unfair and deceptive under the state’s consumer protection laws, Campbell said.
Misrepresenting audio or video content of a person for the purpose of deceiving another to engage in a business transaction or supply personal information as if to a trusted business partner — as in the case of deepfakes, voice cloning, or chatbots used to engage in fraud — could also violate state law, she added.
The goal, in part, is to help encourage companies to ensure that their AI products and services are free from bias before they enter the commerce stream — rather than face consequences afterward.
Regulators also say that companies should disclose to consumers when they are interacting with algorithms; a lack of transparency could run afoul of consumer protection laws.
Elizabeth Mahoney of the Massachusetts High Technology Council, which advocates for the state’s technology economy, said that because there might be some confusion about how state and federal rules apply to the use of AI, it’s critical to spell out state law clearly.
“We think having ground rules is important and protecting consumers and protecting data is a key component of that,” she said.
Campbell acknowledges in her advisory that AI holds the potential to deliver great benefits for society even as it has been shown to pose serious risks to consumers, including bias and a lack of transparency.
Developers and suppliers promise that their AI systems and technology are accurate, fair, and effective even as they also claim that AI is a “black box,” meaning they do not know exactly how it performs or generates results, she said in her advisory.
The advisory also notes that the state’s anti-discrimination laws prohibit AI developers, suppliers, and users from using technology that discriminates against individuals based on a legally protected characteristic — such as technology that relies on discriminatory inputs or produces discriminatory results that would violate the state’s civil rights laws, Campbell said.
AI developers, suppliers, and users also must take steps to safeguard personal data used by AI systems and comply with the state’s data breach notification requirements, she added.