Luka (Replika), TikTok, and ING Bank Śląski – Cyber Tech

 

 

Luka Inc. 

On 10 April 2025, the Italian Data Protection Authority (Garante per la protezione dei dati personali) issued a decision against Luka Inc., the U.S. company behind the Replika chatbot. Replika is marketed as an AI “companion” designed to boost users’ mood and wellbeing, and can be set up as a friend, mentor, therapist or even a romantic partner. But according to the Garante, the way Luka handled users’ personal data fell far short of what the GDPR requires.

The investigation showed that Replika’s privacy policy did not clearly identify the legal basis for the various ways in which users’ data were processed – for example, data used for running the chatbot versus data used for developing the large language model behind it. Instead of specifying purposes and corresponding legal bases, the policy only gave vague, generic statements like: “We care about the security and confidentiality of your data. We therefore only process your data to the extent that: It is necessary to provide the Replika services you are requesting, you have given your consent to the processing, or we are otherwise authorised to do so under the data protection laws” (by the way – doesn’t that sound familiar from many privacy policies?). This lack of granularity made it impossible for users to understand how their data were really being used, in breach of Articles 5(1)(a) and 6 GDPR.

What is more, the privacy notice was available only in English, even though the service was offered in Italy. It also failed to explain key points required under the GDPR: what types of data were collected, how long they were stored, whether data were transferred outside the EU, and for what purpose. Some statements were even misleading – for instance, suggesting that personal data might be transferred to the U.S., while the company later claimed no such transfers occurred. Such gaps and contradictions meant that users could not make informed decisions about their data.

However, the most troubling finding was that, in the Garante’s assessment, Luka had failed to implement effective safeguards for children. Although the service was formally intended for adults, it lacked genuine age-verification mechanisms. Registration required only a name, email address, and gender, which allowed minors to create accounts. Even when users declared they were under 18, no technical barrier prevented them from accessing the platform. In practice, this meant that children could be exposed to age-inappropriate content, including sexually explicit material. Moreover, even after updates to the privacy policy, technical testing confirmed that under-18 users could still bypass the age restriction simply by editing their profile.

For these violations, the Garante imposed an administrative fine of EUR 5,000,000, representing half of the maximum amount available under Article 83(5) GDPR.

 

 

TikTok Technology Limited

 

Another significant decision was issued by the Irish Data Protection Commission (DPC) in May 2025 against TikTok Technology Limited. Although the full text of the decision has not yet been published, the official press release provides insight into the reasons for the sanction.

 

The inquiry examined both the lawfulness of TikTok’s transfers of European users’ personal data to China and the adequacy of the company’s transparency regarding those transfers. The DPC concluded that TikTok had infringed the GDPR in two key respects.

 

First, the Commission found that TikTok’s transfers of user data to China violated Article 46(1) GDPR. The company failed to verify, guarantee, and demonstrate that the personal data of European users – remotely accessed by staff in China – was afforded a level of protection essentially equivalent to that required within the EU. TikTok’s own assessments of Chinese law highlighted serious divergences from EU standards, particularly risks under the Anti-Terrorism Law, the Counter-Espionage Law, and the National Intelligence Law. Nevertheless, the company did not adequately address these risks or ensure that its contractual safeguards were effective.

 

Second, the DPC held that TikTok had not complied with the information obligations set out in Article 13(1)(f) GDPR. Earlier versions of its privacy policy (in force between July 2020 and December 2022) did not identify the countries involved in data transfers and did not explain the nature of the processing – for instance, that personnel in China could remotely access data stored in Singapore and the United States. This lack of clarity prevented users from understanding who could access their data and under what circumstances.

 

The decision imposed not only administrative fines but also corrective measures. TikTok was given six months to bring its practices into compliance, failing which data transfers to China would have to be suspended altogether. The total fine amounted to EUR 530,000,000, comprising EUR 485,000,000 for the unlawful transfers and EUR 45,000,000 for the lack of transparency.

 

 

ING Bank Śląski

 

The third decision discussed here was delivered on 23 July 2025 by the Polish Data Protection Authority (UODO) against ING Bank Śląski S.A., which was fined PLN 18,416,400 (around EUR 4,000,000). The case revolved around the bank’s widespread practice of copying and scanning the ID cards of both current and prospective clients, even in situations where such a step was not required by law. The bank introduced this practice after the amendment of Polish anti-money laundering provisions, interpreting them as justifying the systematic copying of IDs.

 

The investigation revealed that between April 2019 and September 2020 the bank systematically scanned ID documents not only during customer onboarding, but also in contexts where no anti-money laundering (AML) obligations applied – for example, when a customer filed a complaint about an ATM. In practice, the bank’s internal procedures made the provision of services conditional on handing over a scanned ID, leaving customers with no real choice.

 

As emphasised in the decision, both AML law and the GDPR require banks to conduct a risk-based assessment and determine, case by case, whether copying an ID is genuinely necessary. ING did not carry out such assessments. Instead, it adopted blanket rules requiring ID copies in numerous situations, regardless of whether AML obligations applied. As a result, the bank processed extensive amounts of sensitive identifying information without a valid legal basis under Article 6 GDPR. Although no specific harm was demonstrated, the decision underscores that ID cards contain a wide range of personal data – including full name, date of birth, parents’ names, unique national ID number (PESEL), photograph, and document series. Taken together, these data significantly increase the risk of identity theft or fraudulent loans. Given that ING had millions of individual and corporate clients during the period in question, the potential consequences of such unnecessary data collection were substantial.
