
Amnesty International head says AI innovation vs. regulation is ‘false dichotomy’


The secretary-general of Amnesty International, Agnès Callamard, released a statement on Nov. 27 in response to three European Union member states pushing back on regulating artificial intelligence (AI) models.

France, Germany and Italy reached an agreement that included not adopting stringent regulation of foundation models of AI, a core component of the EU's forthcoming AI Act.

This came after the EU received multiple petitions from tech industry players asking regulators not to over-regulate the nascent industry.

However, Callamard said the region has an opportunity to show “international leadership” with robust regulation of AI, and member states “must not undermine the AI Act by bowing to the tech industry’s claims that adoption of the AI Act will lead to heavy-handed regulation that would curb innovation.”

“Let us not forget that ‘innovation versus regulation’ is a false dichotomy that has for years been peddled by tech companies to evade meaningful accountability and binding regulation.”

She said this rhetoric from the tech industry highlights the “concentration of power” in the hands of a small group of tech companies that want to be in charge of the “AI rulebook.”

Related: US surveillance and facial recognition firm Clearview AI wins GDPR appeal in UK court

Amnesty International has been a member of a coalition of civil society organizations led by the European Digital Rights Network advocating for EU AI laws with human rights protections at the forefront.

Callamard said human rights abuse by AI is “well documented” and “states are using unregulated AI systems to assess welfare claims, monitor public spaces, or determine someone’s likelihood of committing a crime.”

“It is imperative that France, Germany and Italy stop delaying the negotiations process and that EU lawmakers focus on making sure crucial human rights protections are coded in law before the end of the current EU mandate in 2024.”

Recently, France, Germany and Italy were also among the 15 countries that, alongside major tech companies including OpenAI and Anthropic, developed a new set of guidelines suggesting cybersecurity practices for AI developers when designing, developing, launching and monitoring AI models.

Magazine: AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews
