Brazilian data regulator bans Meta from mining data to train AI models

RIO DE JANEIRO -- On Tuesday, Brazil's national data protection regulator ruled that Meta, the parent company of Instagram and Facebook, cannot use data from the country to train its artificial intelligence models.

Meta’s updated privacy policy allows the company to feed people’s public posts into its AI systems. However, that practice will not be allowed in Brazil.

The decision stems from "an imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the data subjects," the agency said in the nation's official gazette.

Brazil is one of Meta's largest markets. Facebook alone has about 102 million active users there, the agency said in a statement, out of a national population of 203 million recorded in the 2022 census.

A Meta spokesperson said in a statement that the company is "disappointed" and insisted that its approach "complies with privacy laws and regulations in Brazil."

“This represents a setback for innovation, competition in the development of artificial intelligence and further delays in bringing the benefits of artificial intelligence to people in Brazil,” the spokesperson added.

The social media company has also faced pushback over its privacy policy update in Europe, where it recently suspended plans to start feeding users' public posts into its AI training systems; the change had been scheduled to take effect last week.

In the United States, where there is no national law protecting online privacy, this type of training is already in place.

In May, Meta said on its Brazilian blog that it may “use information that people have shared publicly about Meta products and services for some of our generative AI features,” which could include “public posts or photos and their captions.”

Opting out is possible, Meta said in that statement. Despite that option, the agency said there are "excessive and unjustified obstacles to accessing information and exercising" the right to opt out.

The agency added that Meta did not provide enough information to make people aware of the possible consequences of using their personal data for the development of generative AI.

Meta isn’t the only company that has tried to train its AI systems on data from Brazilians.

Human Rights Watch released a report last month that found personal photos of identifiable Brazilian children, collected into a large online image database from parenting blogs, professional event photographers' websites, and video-sharing sites like YouTube, were being used to build AI image-generating tools without the families' knowledge. In some cases, those tools were then used to create AI-generated nude images.

Hye Jung Han, a Brazil researcher for the human rights group, said in an email Tuesday that the regulator’s action “helps protect children from the fear that their personal data, shared with friends and family on Meta platforms, could be used to inflict harm on them in ways they cannot predict or protect against.”

But the Meta decision will "very likely" discourage other companies from being transparent about their data use in the future, said Ronaldo Lemos of the Institute of Technology and Society of Rio de Janeiro, a think tank.

“Meta has been severely punished for being the only Big Tech company to disclose clearly and upfront in its privacy policy that it would use data from its platforms to train AI,” he said.

The company must demonstrate compliance within five business days of notification of the decision, and the agency has set a daily fine of 50,000 reais ($8,820) for non-compliance.
