20 Jun 2022 • 10 min read
An updated EU Code of Practice on Disinformation was launched last Thursday. The new agreement builds on the original Code of Practice, launched in 2018 as a voluntary code between regulators and signatories from the advertising ecosystem, fact-checkers, civil society, research, and other organizations. The initial Code of Practice aimed to combat the influence of online disinformation operations.
Since 2018, Europe and the wider world have been through several major events: Brexit, Covid-19, and the war between Russia and Ukraine. The world, especially the online world, was awash with rumors and speculation, which accelerated the EU’s crackdown on fake news and the accounts that spread it across the internet.
The new Code of Practice on Disinformation is a coordinated effort between regulators and signatories. It gives signatories six months to implement its commitments and actions on fake accounts and disinformation, which means that at the beginning of 2023 the signatories must provide the European Commission with their first implementation reports. Those that fail to comply may face fines of up to 6% of their global turnover or a ban from operating in Europe.
European Commission Vice President Vera Jourova said publicly at a news conference, “The new code is a testimony that Europe has learned its lessons and that we are not naive any longer.”
“Disinformation is a form of invasion of our digital space, with tangible impact on your daily lives,” said Thierry Breton, Commissioner for Internal Market. Disinformation is often spread through fake accounts controlled by bot operators. According to Imperva’s 2022 Bad Bot Report, bad bot traffic accounted for 27.7% of all global website traffic in 2021, and account takeover was among the top three most common bot attacks that year. Spreading disinformation is one of the goals of account takeover.
Financial incentive is the most prevalent motivation. Cybercriminals want to spend the least effort to get the most financial gain. By taking over regular users’ accounts, cybercriminals can harvest personal information, sought-after products, and cryptocurrency, or deliberately spread fake news for money. Removing fake accounts is a critical priority for online platforms that want to tackle disinformation.
Cybercriminals use bots to create fake accounts and take over existing ones. Bots mimic human behavior to register on online platforms, or use stolen credentials to break into somebody else’s account. CAPTCHA, a human-bot verification system, can distinguish regular users from bots and apply different measures to bot requests reaching online platforms. It is a commonly used tool to mitigate account takeover and fake account creation.
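To illustrate the idea, here is a minimal sketch of how a platform might gate account registration behind CAPTCHA verification. The function names (`verify_captcha_token`, `register_account`) and the demo token are hypothetical, not a real GeeTest or platform API; a real implementation would send the token to the CAPTCHA provider’s verification endpoint and check the response.

```python
# Hypothetical sketch: server-side CAPTCHA gating during account sign-up.
# In production, verify_captcha_token would call the CAPTCHA provider's
# verification API over HTTPS; here it accepts a fixed demo token.

def verify_captcha_token(token: str) -> bool:
    """Stand-in for a call to a CAPTCHA provider's verification endpoint."""
    return token == "valid-demo-token"

def register_account(username: str, captcha_token: str) -> str:
    # Reject the request before touching the account database if the
    # human-verification check fails -- this is the point where CAPTCHA
    # blocks bot-driven fake-account creation.
    if not verify_captcha_token(captcha_token):
        return "rejected: failed human verification"
    return f"account '{username}' created"

print(register_account("alice", "valid-demo-token"))
print(register_account("bot-1234", "forged-token"))
```

The same gate can be placed in front of login attempts to slow credential-stuffing bots attempting account takeover.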
Online platforms, whether they operate in Europe or not, should tackle disinformation, fake accounts, and bot traffic. Tolerating them creates fertile ground for bad bots to grow until they become incredibly difficult to control.
Try the GeeTest Adaptive CAPTCHA 30-day free trial to get started with tackling disinformation!
Hayley Hong
Content Marketing @ GeeTest