During Meta’s representative’s testimony to the Organized Crime CPI on Tuesday (24), senators stated that the company is able to expand the traceability of illegal content on its platforms and adopt measures to protect and compensate affected users, even without a court order.
For its part, the company’s management in Latin America, represented by Yana Dumaresq, countered that Meta has invested in modernizing and automating its alert mechanisms and has implemented measures against misleading content. Meta is the parent company of Facebook, Instagram, and WhatsApp.
The CPI rapporteur, Senator Alessandro Vieira (MDB-SE), regretted that the company did not present metrics on complaints, removed accounts, and cases of scams carried out through Meta’s platforms in Brazil.
According to the senator, data from the Public Security Yearbook indicate that 56 million Brazilians are victims of internet scams every 12 months, with losses exceeding R$ 50 billion. This yearbook is a survey on crime published by the Brazilian Forum on Public Security.
Alessandro said it is unreasonable for a company of this size not to have specific knowledge of its market in order to improve user protection strategies.
“A company of this size, with so many resources, which has among its many qualities the segmentation of its operations, must recognize its markets, their agility, and their strengths. I imagine that Brazil must be one of the company’s three or four largest markets globally in terms of number of users, and you don’t know how many accounts have been removed due to scams?” he asked.
Yana Dumaresq, Meta’s director of public policy in Latin America, argued that the chain that sustains the network of scammers and criminals in the digital environment is broad and has many links.
She claimed that often the ads run on Meta’s platforms, which appear to be legitimate, serve as a hook to lure users to another environment, where the scam is then carried out and completed.
For Dumaresq, it is essential that, in such cases, users return to Meta’s platform to report the scam, thereby speeding up the process of protection, content removal, and possible punishment of the criminal network.
“We need this feedback about what happens outside our platforms. Many other issues, besides the segmentation of what happens outside our platforms, such as which ads are targeted at the Brazilian public, are complexities of our business model, not just Meta’s, but the digital business model, which really places certain limitations on us in segregating these statistics,” she said.
Responsibility
Alessandro Vieira cited a Reuters article published in November 2025 that mentions an internal Meta survey. According to the article, around 10% of the company’s total annual revenue, an estimated US$16 billion, came from fraudulent advertising and unauthorized merchandise. He wanted to know the company’s position on this.
In response to the rapporteur’s question, Yana Dumaresq pointed out that she has been working specifically on this issue at the company for about three years and is unaware of any survey confirming the figures released by Reuters.
“Our technology detects and removes this content. In the overwhelming majority of cases, [this is done] even before a complaint is filed. Our systems are constantly improving, with the use of artificial intelligence and machine learning [which is a branch of artificial intelligence]; they are always improving to detect and remove this content, often before a complaint is filed,” she emphasized.
However, in the senators’ assessment, there is a problem with the business model of these platforms, which is the transfer of responsibility to the user.
“You keep repeating phrases such as ‘after the complaint is made,’ ‘based on the number of complaints,’ and ‘based on the volume of demand that comes in from outside.’ However, the company’s profit, on this specific point, comes from the volume of ads published and the amount the advertiser spends to reach that audience. Therefore, it is clear, within our legal system, that the company bears responsibility beyond this transfer,” said Alessandro Vieira.
The president of the CPI, Fabiano Contarato (PT-ES), presented similar arguments. For him, the company takes advantage of the probability of “who will sue Meta or not.”
“In a scenario where you have 100 people who have suffered losses, Meta comes out ahead, with all due respect, when only one person takes legal action because they believe their rights have been violated and that Meta is liable under strict liability rather than subjective liability, simply because it was the platform that enabled [the scam] to be propagated and disseminated. Let’s be pragmatic: the company usually comes out ahead because, if 1 million users are entitled to compensation and only ten go to court, it has profited from almost 1 million users,” Contarato explained.
The Meta director reiterated that the company does not act solely on user reports and denied any omission in combating illegal content. According to her, more than 90% of removals in 2024 occurred before any formal report, based on the company’s own detection mechanisms.
According to Yana Dumaresq, the company uses continuous monitoring tools, with content review “24 hours a day, seven days a week,” to identify and block fraudulent campaigns. She also said that there are court decisions that remove the company’s liability for omission.
“There is no omission on the part of the company. We are facing an enormous challenge, which requires dedication and continuous work,” said the Meta director.
Account verification
Senators questioned the witness about the Meta Verified product (which is a kind of verification seal to try to confirm the authenticity of a user’s account; this service is offered by Meta to those who pay a monthly subscription). For lawmakers, Meta Verified causes confusion because it leads users to believe that such a service is a complete guarantee of security and credibility.
According to Dumaresq, this seal is a product that aims to reinforce the company’s concern for user security by encouraging users to provide more data and documents to obtain Meta Verified. On the other hand, she reiterated that Meta’s security system ensures that all accounts created in the company’s environment are genuine and verified.
“The accounts are real, regardless of the verification badge,” she said.
Alessandro Vieira warned about the widespread use of WhatsApp in Brazil. According to him, despite the ban on the use of this platform for criminal and fraudulent practices, Meta does not have the tools to assess whether WhatsApp Business is used by criminal organizations.
“It is very difficult for us to understand how this will work, because it is a paid product that has terms of use, but the company [Meta] does not have effective tools to verify compliance with these terms of use, because such verification would necessarily involve monitoring the content of conversations. Because just the group photo, let’s face it… All of this is done after the fact by the police, the Public Prosecutor’s Office, or whoever else. It’s not the company doing it,” emphasized the CPI rapporteur.
Dumaresq replied that Meta has invested in increasing security tools for WhatsApp. She cited as an example the use of artificial intelligence to detect potentially fraudulent interactions, in addition to sending alerts to users.
“There is no anonymity on our networks. We retain registration data and IP logs, as required by the Brazilian Civil Rights Framework for the Internet, and we handle this data in accordance with applicable legislation, whether the Civil Rights Framework for the Internet or the General Personal Data Protection Law. And every day we collaborate with police and public authorities by providing this data,” she said.
Sexual exploitation
In 2020, according to a report by the non-profit organization Human Trafficking Institute, Facebook was the platform most used by sex traffickers to groom and recruit children: 65% of cases of grooming and recruitment of children allegedly occurred through this platform. In addition, the report points to Instagram as the second most prevalent network for recruiting children.
Alessandro Vieira and Fabiano Contarato questioned whether Meta, with its account verification system, has the capacity to detect and prevent the transfer of, for example, images of child sexual abuse.
The CPI rapporteur recalled the case of Brazilian influencer Felca, which demonstrated how quickly content featuring children is suggested by Instagram and how easy it is to find comments from abusers and pedophiles on such posts.
“A 2023 revelation, stemming from a lawsuit filed in the United States by the state of New Mexico, shows the concern of Meta employees about the approximately 7.5 million annual reports of child sexual abuse material that would no longer be made after the decision to implement end-to-end encryption on Messenger and Facebook as well,” Alessandro pointed out.
Dumaresq replied that the issue is a “top priority” among the company’s concerns and that there are several teams dedicated to it in various areas of Meta (such as product, compliance, legal, etc.).
Legislation
Senator Hamilton Mourão (Republicanos-RS) questioned which rule prevails in the event of a conflict between Meta’s global moderation guidelines and the demands of Brazilian authorities, considering national sovereignty and the country’s own legislation.
Yana Dumaresq replied that the company complies with local legislation in all countries where it operates. She emphasized that Facebook Brazil is a company incorporated in the country and fully complies with Brazilian regulations.
Senator Eduardo Girão (Novo-CE) questioned Meta’s response to the promotion of illegal sports betting on digital platforms. He emphasized his concern about the use of this market (illegal sports betting) by criminal organizations to obtain and launder funds. Girão also asked how many advertiser and influencer accounts had been removed or sanctioned in the last 24 months.
“I am a user of Instagram and Facebook. I think all my colleagues here use them; they are important tools. But I am impressed by the number of views we see going to Jogo do Tigrinho and things like that. That scares me,” said the senator.
In response, director Yana Dumaresq Sobral Alves stated that the company checks in advance whether betting advertisers are registered with the competent authority, in accordance with Brazilian law.
According to her, Meta maintains a specific channel with the National Sports Betting Secretariat (an agency linked to the Ministry of Sports) to identify irregular content that may escape initial checks, with immediate removal when irregularities are found.
Proactivity
Dumaresq emphasized that the company has invested in modernizing and automating its mechanisms for alerts, defense, protection, and user awareness, as well as for blocking and removing misleading content.
According to her, the company uses a multifaceted network to improve user protection mechanisms, with tools that include technical and automated defenses, dismantling criminal networks, collaboration between industry players and authorities, and providing mechanisms for users to protect themselves in the digital environment and report criminals.
Regarding advertisements, she reported that the company is expanding its efforts to verify the authenticity of individuals and organizations that place ads on its platforms, particularly in approaches related to financial risks and investments, including the use of facial recognition.
Collaboration network
In the opinion of Meta’s director, combating criminal networks through digital platforms requires partnerships between the private sector and public agencies, as well as the involvement of society as a whole.
Last year, according to Dumaresq, nearly 12 million accounts on Facebook, Instagram, and WhatsApp (all platforms controlled by Meta) were shut down worldwide; these profiles, she said, were linked to criminal scam networks.
“Although combating fraud and scams is an ongoing battle, we have already achieved measurable results: in 2025, Meta removed more than 134 million fraudulent ads globally; supported authorities in investigating, identifying, and arresting scammers; and saw a more than 55% drop in user reports of scam ads.”
She also reported that, since December 2025, the company has added layers of protection to advertiser verification, requiring more information about who benefits from the ads and who pays for them.
Dumaresq added that the company promotes exchanges and partnerships (both domestic and international) with major companies in the technology, finance, and digital security sectors to share information and technologies with the goal of improving security.
According to the director, with regard to advertisements, in addition to machine learning (which detects patterns of suspicious activity and proactively removes such content), the system is capable of performing a preliminary technical analysis of content before it is published.
According to Dumaresq, violations that directly contravene company rules and policies are removed before they are reported. In the case of Brazil, she said that since the end of 2025 the company has introduced new advertiser checks combining several filtering methods, selected on the basis of an internal risk analysis. The criteria, she noted, draw on data collected from SMS, email, identity verification, user behavior history, location, and payment method.