The shield against AI-generated images exists, and it must be deployed as soon as possible


To restore credibility to press photos in the face of AI generators like Midjourney or Stable Diffusion, manufacturers must massively implement the tools already available through the Content Authenticity Initiative. Not only is it possible, it is urgent.

The pope in a gangsta-chic down jacket at the Vatican, or Emmanuel Macron on the barricades of a demonstration against his own decisions: you may have seen these fake photos on social networks, and read articles that either mock them or worry about them. The latter are undoubtedly the most prescient: even if the "snapshots" produced by AIs, which we might call "synthographies" (synthetic photos, nothing to do with the medical imaging technique of scintigraphy!), are still imperfect for now, their current quality is already reaching an alarming level. And this is only the beginning.


In these photos, the sovereign pontiff certainly has "the swag". But these images are totally false: they are syntheses.

Because the tap of credible content opened by AIs such as Midjourney or Stable Diffusion can never be closed again. Philosophers, sociologists and other behavioral specialists are already studying the phenomenon, and legislators could quickly seize on the subject to (try to!) regulate it. Above all, for you and me, the prowess of AI at creating photorealistic images requires us to immediately take a step back from every photo shared or published. Because we are suffering the flip side of this technological success: the loss of confidence in images and the risk of their weaponization, especially by countries such as Russia that have been deploying genuine mass-disinformation strategies for several years.

Without lapsing into melodrama, there is an urgent need for every part of the imaging industry chain to respond and deploy solutions that let us believe once again in the images we see. A lost battle? Not necessarily, because a technical framework for press photos already exists. The first steps were timid, sporadic and uncoordinated, but the threat from Midjourney et al. could be the spark needed to restore confidence in press images. And the solution could come from a project that predates the arrival of the current AI models, and that seems ideally placed to protect us: the CAI.

Tracking edits and forgeries

The Content Authenticity Initiative, or CAI, was co-founded by Adobe and The New York Times. The project was not originally aimed at AIs, but at digital manipulations in general. The term CAI covers both a framework and a set of software tools designed to guarantee the veracity of an image. The principle rests, first, on the camera creating a unique digital signature at the moment of capture. This signature is then supplemented with information about the changes made to the image: cropping, white-balance adjustment, a touch of sharpening on a slightly soft shot, and so on. A photo editor, journalist or ordinary citizen can thus verify both the provenance of the photo and the edits it has undergone.
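To make the principle concrete, here is a minimal sketch of the idea of a capture signature followed by chained edit records. All the names and the data layout are illustrative assumptions, not the real CAI/C2PA format, and a real implementation signs each record with a public/private key pair rather than relying on plain hashes:

```python
import hashlib
import json

def capture(image_bytes: bytes, device: str) -> list:
    """Create the initial provenance record at the moment of capture."""
    origin = hashlib.sha256(image_bytes).hexdigest()
    return [{"action": "capture", "device": device, "hash": origin}]

def record_edit(log: list, new_image_bytes: bytes, action: str) -> list:
    """Append an edit record that chains back to the previous entry."""
    prev = log[-1]["hash"]
    payload = json.dumps(
        {"prev": prev, "action": action,
         "hash": hashlib.sha256(new_image_bytes).hexdigest()},
        sort_keys=True)
    return log + [{"action": action, "prev": prev,
                   "hash": hashlib.sha256(payload.encode()).hexdigest()}]

def verify_chain(log: list) -> bool:
    """Check that every record points at the hash of the one before it."""
    return all(log[i]["prev"] == log[i - 1]["hash"]
               for i in range(1, len(log)))
```

Any tampering with an intermediate record breaks the chain, which is what lets a reader or photo editor trust the whole history rather than just the final file.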

As you can imagine, this is an image-protection chain, and it must be complete. It starts in the cameras and continues in the software used to edit and distribute images (as well as in any database used to cross-check them). But it must also reach readers, through viewing modules built into websites that let them be reassured quickly. All of this is well known to those responsible for the technical framework of the standard, such as Adobe's engineers. As with Bluetooth, HDMI and other protocols, the CAI develops the tools, but it is up to the individual manufacturers to implement them. And this is where things get stuck.

An industry fighting in dispersed order

The first imaging manufacturer to announce CAI-compatible hardware is not the one you might think. If you pictured a Japanese player, you guessed wrong: it is the American Qualcomm, the champion of smartphone chips. Because smartphones sell in far greater volumes than cameras, Qualcomm, the world's number one in mobile chips, is in effect the world's first "photo brand." Logically so: its technologies (notably the Spectra image processor) are responsible for taking most of the planet's pictures. Next to MediaTek and Apple, the camera brands (Sony, Nikon, Fujifilm, etc.) lag far behind in volume.

Read also: Snapdragon 888: Why Its Spectra 580 Image Processor Is A Revolution (December 2020)​

It was in December 2020, while unveiling its flagship Snapdragon 888 chip, that Qualcomm announced it was a partner in the initiative, and thus the first chip designs with hardware support for the CAI. Except that Qualcomm cannot do everything: it is up to smartphone manufacturers and software designers to seize the technology. Which, as far as we know, they have not done at this point.

Then came Sony, followed by Leica and Nikon, who joined the CAI alongside Adobe. Each announced a (single!) update for one of its flagship bodies: the Alpha 7 IV, the M11 and the Z9 respectively. These announcements had the merit of showing that, in some cases, a simple firmware update is enough to integrate the technology. It would be good for the photo brands to take their responsibility, and to show that cameras still have great value, by going a little further back in their catalogs to "patch" as many bodies as possible. But beyond the fact that the number of compatible bodies remains meager, and that the absence of Canon and Fujifilm, two brands dear to reporters, is a serious limitation, the reality is that hardware alone will not be enough. Software must also be designed, software that on paper looks like "business" apps, but in a field (press photography) that has very little money.

The responsibility of press agencies… and government agencies

The workflow and certification process of the Content Authenticity Initiative. The initiative unites media outlets, social networks, chip designers and image-software companies, which together develop technologies to certify the veracity of our images.

Manipulation of "real" photos did not begin with digital; it appeared as film photography developed. Stalin did not wait for Photoshop to make Trotsky disappear from official images! The CAI's original purpose was important in itself: deterring the manipulation of photos and making their authentication easier. Even though the circuit of press photos, especially those from agencies, is fairly robust in terms of verification, there are always a few retouching professionals around to sow doubt.

Compared with the rare photographers who occasionally add a little smoke to dramatize an image or erase an object to clean up a composition (acts that agencies often punish with dismissal or the end of the collaboration), the threat posed by AI is far greater. We are talking about the potential production of billions of fake images, more real than life. If manufacturers follow suit, the CAI could clean things up quickly: checking EXIF data (the technical information about the shot) combined with CAI verification would make a good shield against synthetic images. The problem? The economic interests at stake are inversely proportional to the threat.
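The kind of EXIF check mentioned above can be sketched as a simple plausibility filter. This is a naive illustration under stated assumptions: the tag names follow common EXIF conventions, the choice of "required" tags is arbitrary, and (as the article implies) EXIF alone is not proof, since metadata can be forged, which is precisely why it must be paired with a CAI signature chain:

```python
from datetime import datetime

# Illustrative heuristic only: tags a press photo would normally carry.
REQUIRED_TAGS = {"Make", "Model", "DateTimeOriginal"}

def exif_red_flags(exif: dict) -> list:
    """Return a list of reasons this image's metadata looks suspicious."""
    flags = []
    missing = REQUIRED_TAGS - exif.keys()
    if missing:
        flags.append(f"missing tags: {sorted(missing)}")
    if "DateTimeOriginal" in exif:
        try:
            # EXIF stores timestamps as "YYYY:MM:DD HH:MM:SS".
            datetime.strptime(exif["DateTimeOriginal"], "%Y:%m:%d %H:%M:%S")
        except ValueError:
            flags.append("malformed capture timestamp")
    if exif.get("Software", "").lower().startswith(("midjourney", "stable")):
        flags.append("generator listed in Software tag")
    return flags
```

An image from a real camera body would typically come back with an empty list, while a synthetic file stripped of capture metadata would trip the missing-tag check; the point of the CAI is that such heuristics become cryptographically verifiable instead of merely suggestive.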

Read also: How Google and researchers want to ‘vaccinate’ us against disinformation (August 2022)​

Disinformation can have a major impact (look at Brexit!), but the press economy is anemic. Between imposing the integration of the CAI on social networks such as Twitter and Facebook (two players who invariably need a strong legal incentive to do what is expected of them...) and funding the software modules that would let the general public check the images shown across the various media, the involvement of states in the matter must be strong. The best scenario is, of course, a European initiative. In the meantime, one possible French scenario would be to redirect part of the press subsidies in France, which have been widely criticized for the opacity of their distribution to a narrow panel of publishers. Developing a legal framework, as well as open software tools for all media players, would be money invested directly in supporting not only the press, but above all the (re)building of public trust in the fourth estate.