NPA 2023: AI and Data Protection

The Nordic Privacy Arena (NPA) is an annual conference for people working in the field of data protection. The 8th edition of the NPA was held as a semi-digital event on September 25-26, 2023. The conference featured no fewer than 20 agenda topics on data protection and drew more than 300 on-site participants at Münchenbryggeriet in Stockholm, with further participants attending online. This is the second of three articles highlighting some of the lectures and discussions that took place.

Artificial Intelligence was one of several major topics. First, Abtin Kronold, Privacy Lawyer and founder of TransPr!v4cy, gave an introductory talk on “AI for Privacy Dummies”, explaining in simple terms how AI affects privacy and other fundamental rights. Using a range of illustrative examples, he showed how difficult it can be to predict what results an AI application will deliver.

Panel on the opportunities of AI
The “AI for Privacy Dummies” talk was followed by a panel discussion on the opportunities of AI, moderated by Daniel Westman. In this panel, experts with different backgrounds discussed the potential and risks of AI development, examining existing legislation and its application to AI, as well as potential challenges and future legislative considerations.

The panel consisted of Markku Suksi, Modeen Professor of Public Law at Åbo Akademi University, Finland; Alexander Hanff, That Privacy Guy! Hanff & Co. AB; Peter Fleischer, Global Privacy Counsel at Google; Per Nydén, Legal Specialist at the Swedish Authority for Privacy Protection (IMY); and Carolina Brånby, Director of Digital Policy at the Confederation of Swedish Enterprise.

Markku Suksi, Modeen Professor of Public Law, shared his views on the use of AI in the public sector. He argued that most commercial applications of AI are based on machine learning, while governments should focus on rule-based applications. Suksi believes that only rule-based applications can currently meet the requirements of the rule of law and legal reasoning. He also stated that machine learning technology, which is based on statistical inference from a sample of data, is not yet mature enough to be suitable for automated decision-making in the public sector.

Peter Fleischer from Google highlighted that AI has already been used for years in services such as Gmail for spam filtering, YouTube for content filtering and Google Translate. He explained Google’s approach to AI regulation, which includes consulting data protection authorities before launching new products, working on privacy enhancing technologies and creating transparency by publishing detailed documentation on their AI models. He believes this approach of external engagement and collaboration with authorities and experts in the field of AI is crucial.

Carolina Brånby from the Confederation of Swedish Enterprise discussed the potential benefits of AI from a business perspective. She highlighted how AI can improve competitiveness and prosperity, increase efficiency, reduce costs and drive innovation. In addition, she pointed out that AI can help address the staff and skills shortages that are becoming a major issue in Sweden and the EU due to stagnating population growth. Lastly, Brånby stressed the importance of learning about AI and using AI tools, which she sees as a great opportunity for workers and companies alike.

Per Nydén from the Swedish Authority for Privacy Protection (IMY) then shared his view of the current situation and the opportunities and risks related to AI. He mentioned the Innovation Hub, an initiative that aims to follow global developments and provide guidance to Swedish innovators on AI. Nydén also spoke about the first pilot project in IMY’s regulatory sandbox, which focused on federated learning, a privacy-friendly technique for training algorithms. He acknowledged that existing legislation, while applicable to AI, was not designed with AI in mind, and reminded the audience that existing laws need to be interpreted with today’s reality in mind, but also updated from time to time.

Last but not least, Alexander Hanff, That Privacy Guy! Hanff & Co. AB, spoke about his concerns regarding the impact of technology on society, especially in the areas of surveillance, commercialisation and control. He stressed that his concerns were not about the technologies themselves, but about how they would be used and how they would affect individuals and their fundamental rights and freedoms. While he regards the General Data Protection Regulation as a good model, he warned that the debate risks stagnating: large companies that developed their products before any AI regulation existed are now pushing for new rules that would raise barriers for start-ups, creating competition problems. Hanff stressed the need for full cooperation between all stakeholders, including regulators, companies, civil society, academia and the public, to avoid one-sided regulation of AI.