For anyone following Ukraine closely, your social media feeds are likely flooded with footage from the war in a wide range of video quality. Drone feed footage in particular is typically low quality, but is often the most useful for geolocating troop movements and strikes, as well as for identifying vehicles and targets.
A Russian telegram channel posted this photo of an alleged Leopard 2 tank, but it looks more like an AMX-10 (hard to be sure based on the quality of the image). h/t @CalibreObscura https://t.co/yBE3x5PQrq pic.twitter.com/it7In3q7yC
— Rob Lee (@RALee85) June 5, 2023
Earlier today, photos and videos made their way around Telegram and Twitter purportedly showing Russian drone footage of French-supplied, Ukrainian-operated AMX-10 armored fighting vehicles. They drew attention mainly because people were misidentifying the vehicles as Leopard tanks, but what stood out to me was a single post by the OSINT account L_Team10 saying that AI-generated content detectors identified one of the photos as likely being the creation of artificial intelligence. To me, the footage did not seem out of place or different from other Russian drone feeds of Ukrainian equipment, nor did it seem fake, so I began to question whether poor video quality could lead these detectors to flag real footage as AI-generated.
— ?-???? (@L_Team10) June 5, 2023
To test my hypothesis, I began looking for other footage and pictures to check against Maybe’s AI Art Detector, which can be accessed here. To start, I wanted to test a verified video, so I chose Ukrainian drone footage from the May 22 incursion into Belgorod, Russia, which shows smoke rising from the Grayvoron border crossing. The footage has been verified and geolocated, so there is no reason to believe it is fake. However, once I plugged it into the AI detector, the results came back as a toss-up.
For my next test, I took a screenshot from footage of the May 3 drone attacks against the Kremlin. To provide some context, two drones attacked the Kremlin between 2:00am and 3:00am local time, with Russia accusing Ukraine of attempting to assassinate President Vladimir Putin, who was not there at the time. Ukraine denied involvement, but the United States said the attack was likely carried out by Ukrainian intelligence assets in Ukraine. Like the last photo, this one has been verified and geolocated; however, the AI detector said it was AI-generated.
For my last test, I wanted to check satellite imagery. I chose imagery of Mariupol, specifically showing the destroyed drama theater, which has been well documented in photos, video, and satellite imagery. I started by running the original image, which unsurprisingly came back as most likely being man-made. Next, I took a picture of the imagery on my computer screen with my phone, similar to how many videos and pictures from drone feeds are posted online. After running this version, which was still fairly crisp, through the software, the detector's confidence shifted. I ran it through one last time after adding some blur and noise to decrease the image quality, which then produced a toss-up.
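The degradation step described above can be sketched in a few lines of Python with Pillow (assuming it is installed). This is only an illustration of the kind of blur, noise, and recompression that mimics a low-quality repost of a drone feed; the function name, file name, and parameter values are my own placeholders, not part of the original test.

```python
# Sketch: degrade a clean source image to mimic a low-quality repost
# (blur + per-pixel noise + heavy JPEG recompression).
import io
import random

from PIL import Image, ImageFilter


def degrade(img: Image.Image, blur_radius: float = 2.0,
            noise_level: int = 12, jpeg_quality: int = 20) -> Image.Image:
    """Blur, add noise, and recompress an image to simulate a lossy repost."""
    out = img.convert("RGB").filter(ImageFilter.GaussianBlur(blur_radius))
    px = out.load()
    w, h = out.size
    rng = random.Random(0)  # fixed seed so the result is repeatable
    for y in range(h):
        for x in range(w):
            r, g, b = px[x, y]
            n = rng.randint(-noise_level, noise_level)
            px[x, y] = (max(0, min(255, r + n)),
                        max(0, min(255, g + n)),
                        max(0, min(255, b + n)))
    # Round-trip through a low-quality JPEG to add compression artifacts.
    buf = io.BytesIO()
    out.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


# Hypothetical usage: degraded = degrade(Image.open("satellite_image.png"))
```

Each pass through a function like this makes the image "softer," which is exactly the quality the detector appeared to associate with AI generation.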
After testing other images and footage, I have come to the conclusion that this software will mistakenly identify real photos and footage as AI-generated more often the worse the image quality is. I believe images and footage that look “soft,” an effect often caused by pixel compression, are more likely to be misidentified. Because of this, I would advise against relying on this software alone as the final say on whether something is AI-generated, and would instead leverage reverse image searching and geolocation as well. The online OSINT community appears to be on edge following the debacle over the fake, AI-generated image of a Pentagon explosion, so many accounts may be quicker to check for this sort of thing now. However, as I have shown, these tools are not reliable, and it is important to verify things yourself rather than trust a detector’s mistaken verdict.