Amid growing concerns that AI could make it easier to spread misinformation, Microsoft is offering its services, including AI content-identifying digital watermarks, to help crack down on deepfakes and strengthen cybersecurity ahead of several elections around the world.
In a blog post co-written by Microsoft president Brad Smith and Teresa Hutson, Microsoft’s corporate vice president of technology for fundamental rights, the company said it would provide a number of services to protect the integrity of elections, including the launch of a new tool that uses Content Credentials, a watermarking system developed by the Coalition for Content Provenance and Authenticity (C2PA). The service aims to help candidates maintain control over their content and likeness and prevent the sharing of misleading information.
With the tool, called Content Credentials as a Service, users such as election campaigns can attach information to the metadata of an image or video. That information can include the content’s provenance — how, when, and by whom it was created — and whether AI was involved in creating it. This information becomes a permanent part of the image or video. C2PA, a group of companies founded in 2019 that develops technical standards for certifying content provenance, launched Content Credentials this year. Adobe, a member of C2PA, released a Content Credentials symbol to accompany photos and videos in October.
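To make the idea concrete, here is a minimal sketch of how a provenance record could be bound to a media file’s bytes. The field names and structure below are illustrative only — they are not the actual C2PA manifest schema, which uses signed claims and a richer assertion format:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(content: bytes, creator: str,
                              tool: str, ai_generated: bool) -> dict:
    """Build a simplified, C2PA-style provenance record for a media file.

    Hypothetical structure for illustration; the real C2PA spec defines
    signed manifests with claims and assertions, not this flat dict.
    """
    return {
        "claim_generator": tool,
        "created_by": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": ai_generated,
        # Hashing the exact bytes binds the record to this specific file,
        # so any later edit to the media invalidates the credential.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

image_bytes = b"\x89PNG fake image data for the example"
manifest = build_provenance_manifest(
    image_bytes, creator="Example Campaign",
    tool="example-editor/1.0", ai_generated=False)
print(json.dumps(manifest, indent=2))
```

In the real system, the manifest would also be cryptographically signed and embedded in (or linked from) the file’s metadata, which is what makes the record tamper-evident rather than merely descriptive.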
Content Credentials as a Service will launch in the spring of next year and will first be made available to political campaigns. Microsoft’s Azure team created the tool. The Verge contacted Microsoft for more information about the new service.
“Given the technology-based nature of threats, it is important for governments, technology companies, the business community and civil society to adopt new initiatives, including building on each other’s work,” Smith and Hutson said.
Microsoft said it has created a team that will advise and support campaigns on strengthening cybersecurity defenses and working with AI. The company will also set up an Election Communication Hub, where governments around the world can access Microsoft security teams ahead of elections.
Smith and Hutson said Microsoft will support the Protect Elections from Deceptive AI Act introduced by Senators Amy Klobuchar (D-MN), Chris Coons (D-DE), Josh Hawley (R-MO) and Susan Collins (R-ME). The bill seeks to ban the use of AI to create “misleading content that falsely portrays federal candidates.”
“We will use our voice as a company to support legislative and legal changes that will contribute to the protection of campaigns and electoral processes from deepfakes and other harmful uses of new technologies,” Smith and Hutson wrote.
Microsoft also plans to work with groups like the National Association of State Election Directors, Reporters Without Borders and Spanish news agency EFE to surface reputable election-information sites on Bing. The company said this expands on its previous partnerships with NewsGuard and ClaimReview. Microsoft is also expected to issue regular reports on foreign influence in major elections; it has already released its first report analyzing threats from foreign malign influence.
Already, some political campaigns have been criticized for circulating manipulated photos and videos, though not all of them were created with AI. Bloomberg reported in June that Ron DeSantis’ campaign released fake photos of his opponent Donald Trump posing with Anthony Fauci, and the Republican National Committee promoted a fake video of an apocalyptic America that it blamed on the Biden administration. Both were relatively benign acts but were cited as examples of how the technology creates opportunities for the spread of misinformation.
Misinformation and deepfakes have long been a problem in modern elections, but the ease of using generative AI tools to create misleading content raises concerns that they will be used to mislead voters. The US Federal Election Commission (FEC) is discussing whether to ban or limit AI in political campaigns. Representative Yvette Clarke (D-NY) has also filed a bill in the House that would require candidates to disclose their use of AI.
However, there are concerns that watermarks like Content Credentials will not be enough to stop disinformation entirely. Watermarking is nevertheless a central feature of the Biden administration’s executive order on AI.
Microsoft isn’t the only big tech company hoping to curb AI misuse in elections. Meta now requires political advertisers to disclose AI-generated content after banning them from using its generative AI advertising tools.