Friday, December 6, 2024, 11:11 AM

ChatGPT maker OpenAI plans to introduce tools to counter election disinformation

OpenAI says it is working to prevent abuse, provide transparency on AI-generated content, and improve access to accurate voting information

by NWMNewsDesk

In a major development to help ensure election integrity across the globe, the world's leading artificial intelligence (AI) firm OpenAI has announced that it will launch tools to counter disinformation.

The ChatGPT maker, in a blog post, stated that the company is “working to prevent abuse, provide transparency on AI-generated content, and improve access to accurate voting information”.

The move is considered significant given the dangers of fake news and misinformation affecting electoral processes around the world, with a slew of countries going to the polls, including Pakistan, India, the US, and the European Union.

The World Economic Forum (WEF), in its Global Risks Report, has also declared AI-driven misinformation the "biggest short-term threat" to the global economy.


Vowing to stop harmful use of its technologies, ChatGPT and DALL·E, OpenAI invited all stakeholders to help protect the integrity of elections.

“We want to make sure our technology is not used in a way that could undermine this process. We want to make sure that our AI systems are built, deployed, and used safely. Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used,” the company said.

Preventing abuse

The company added that before releasing new systems, it red-teams them, engages users and external partners for feedback, and builds safety mitigations to reduce the potential for harm.

“DALL-E has guardrails to decline requests that ask for image generation of real people, including candidates,” it said.

OpenAI said it is still working to understand how effective its tools might be for personalised persuasion; until that is clear, it will not allow people to build applications for political campaigning and lobbying.

The company has also built new GPTs through which users can report potential violations to OpenAI.


© 2024 News World Media. All Rights Reserved.