NPA 2024: US privacy in the age of AI
The Nordic Privacy Arena (NPA) is the Nordic region’s largest annual gathering on the topic of data protection. Last month, the 9th edition of the NPA took place from September 30 to October 1 at Münchenbryggeriet in Stockholm. The programme included 25 agenda items and drew participants both in person and online.
This is the third in a three-part series on notable presentations and discussions centered on the topic of AI, which has become increasingly important with the EU’s AI Act and rapid technological advances.
The US is going back to basics
Odia Kagan, a partner at Fox Rothschild specializing in GDPR compliance and international privacy practice, delivered an insightful overview of the state of privacy in the United States (US).
In an age where AI is everywhere, US data protection is returning to basic principles such as transparency, accuracy, and data minimization. These foundational principles are central to data governance amid advanced technologies. Transparency, in particular, has become a key focus of privacy enforcement against the US private sector.
Transparency
Kagan highlighted ongoing efforts to increase transparency, particularly with regard to AI technologies. Although the US does not yet have specific AI laws comparable to the European Union’s AI Act, extensive enforcement measures are already in place. The US Federal Trade Commission (FTC) has been active in regulating AI by enforcing existing consumer protection laws that ban unfair and deceptive practices. The FTC’s stance is that conduct that is otherwise illegal does not become permissible merely because AI is involved. This approach is intended to ensure that companies are transparent about their AI applications, particularly with regard to their capabilities and limitations.
Data accuracy
In addition, Kagan explained that accuracy is a primary focus of recent enforcement actions by US authorities, which are increasingly cracking down on false claims about the accuracy and reliability of AI. She pointed to recent cases in which companies have been held to account for misleading claims about their AI technologies. For example, the Texas authorities took action against a company that falsely claimed its AI hallucinated (made up facts) fewer than once in 100,000 cases.
This focus on accuracy is not limited to AI but extends to other sectors, including the automotive industry, where car manufacturers have come under fire for inaccurate claims about data collection from connected vehicles. Action has been taken against practices such as sharing driving data with insurance companies, which has affected consumers’ ability to obtain car insurance.
Data minimization
Data minimization, which is the practice of limiting data collection to what is necessary for a specific purpose, is also gaining renewed attention in the US. The principle is being incorporated into new state laws, such as Maryland’s stringent data minimization standards, and is emphasized in regulatory guidance and litigation.
Privacy in the workplace
The privacy debate also extends to workers’ rights, particularly in relation to workplace surveillance and the use of AI in employment decisions. Kagan pointed out that while workers in the US are not always defined as consumers, their privacy is still protected. The FTC collaborates with the Department of Labor, and enforcement also draws on state laws that address automated decision-making in employment.
In summary, while the privacy landscape in the US may appear complex and fragmented, with varying state laws and regulators, there is a clear trend toward strengthening the core data protection principles of transparency, accuracy, and data minimization.