
Microsoft cracks down on deep fakes, adding ‘ID’ to AI-generated images and videos


Microsoft has recently launched several AI-based software products and services. In March, the company officially released Bing Image Creator, which lets users create artwork from simple text prompts, and in April it launched a full public beta of Microsoft Designer, which lets users create blog posts, websites, and other projects with text prompts and AI models.

However, there is also concern that AI-generated artwork could be used maliciously to spread false information. As more and more deep-fake images and videos emerge, Microsoft has decided to take proactive steps to ensure that AI art generated by its tools can be identified. Today, at the Microsoft Build 2023 developer conference, the company announced that in the coming months it will add a feature that allows anyone to identify whether images or video clips produced by Bing Image Creator and Microsoft Designer were generated by AI.

“The technology uses cryptographic methods to tag and sign AI-generated content with metadata information about its origin,” Microsoft said. “Microsoft has been a leader in the development of provenance verification methods and co-founded Project Origin and the Coalition for Content Provenance and Authenticity (C2PA) standards body. Microsoft’s media provenance verification will sign and validate generated content in accordance with C2PA standards.”
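To make the mechanism concrete, the sketch below shows the general idea behind this kind of provenance signing: a small manifest describing the content’s origin is hashed and cryptographically signed, so any later tampering with either the image or its metadata can be detected. This is a minimal illustration only, not Microsoft’s implementation and not the C2PA API; the real C2PA specification defines a much richer manifest format embedded in the media file with certificate chains, and the function names and fields here (make_manifest, verify_manifest, "generator") are illustrative assumptions.

```python
# Illustrative provenance signing, loosely inspired by the C2PA approach.
# Not Microsoft's or C2PA's actual code; uses the widely available
# "cryptography" package for an Ed25519 signature over a JSON manifest.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_manifest(image_bytes: bytes, generator: str,
                  private_key: Ed25519PrivateKey) -> dict:
    """Build and sign a provenance manifest for a generated image."""
    claim = {
        "generator": generator,          # hypothetical field, e.g. "Bing Image Creator"
        "ai_generated": True,
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": private_key.sign(payload).hex()}


def verify_manifest(image_bytes: bytes, manifest: dict, public_key) -> bool:
    """Check that the manifest matches the image and the signature is valid."""
    claim = manifest["claim"]
    if claim["content_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # the image was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # the manifest was altered or signed by a different key


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    image = b"...raw image bytes..."
    manifest = make_manifest(image, "Bing Image Creator", key)
    print(verify_manifest(image, manifest, key.public_key()))        # True
    print(verify_manifest(image + b"x", manifest, key.public_key())) # False
```

In a C2PA-style deployment the signing key would be tied to a certificate identifying the generator, so a viewer application can both confirm the content is unmodified and see who produced it.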

Microsoft announced earlier this month that Bing Image Creator now supports more than 100 languages and that the program has created more than 200 million images.
