With the rise of generative AI, creating convincing deepfakes has become easier and faster than ever. These manipulated videos can spread misinformation rapidly, challenging the long-held assumption that video footage is a reliable source of truth.

“Video used to be treated as a source of truth, but that’s no longer an assumption we can make,” said Abe Davis, assistant professor of computer science at Cornell.

“Now you can pretty much create video of whatever you want. That can be fun, but also problematic, because it’s only getting harder to tell what’s real.”

For fact-checkers and the public alike, this raises serious concerns about how to verify that footage is authentic.

To tackle this challenge, a team of Cornell researchers developed a method to embed nearly invisible watermarks into videos by altering the lighting during recording.

Rather than embedding a digital watermark in the video file, which requires cooperation from whoever controls the camera or the AI model used to create the footage, this new approach hides secret codes in subtle fluctuations of the light in the environment itself.
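To make the general idea concrete, here is a minimal sketch, not the Cornell team's actual coding scheme, of how a light-based watermark could work in principle: a light source is modulated with a small pseudorandom code, and a verifier later checks a recording for that code by correlation. The code values, modulation amplitude, and noise levels below are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical pseudorandom watermark code, one value per video frame.
n_frames = 600
code = rng.choice([-1.0, 1.0], size=n_frames)

# Simulated scene brightness: a base illumination level plus a tiny,
# code-driven fluctuation (assumed ~0.5%, too small to notice by eye).
base_light = 1.0
amplitude = 0.005
scene = base_light + amplitude * code

# Simulated camera recording of that scene, with sensor noise.
recording = scene + rng.normal(scale=0.01, size=n_frames)

# Verification: correlate the recording against the known secret code.
# Genuine footage lit by the coded source yields a score near the
# modulation amplitude; fabricated or re-rendered footage would not.
score = np.dot(recording - recording.mean(), code) / n_frames
print(f"correlation score: {score:.4f} (expected ~{amplitude})")
```

Because the code lives in the physical lighting rather than in the file, any footage captured in that environment carries it, regardless of which camera or software produced the recording.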
