True, there are many, many videos that show the real tragedy of what happened in Jamaica in that terrible storm. Those videos shock us, upset us, make us want to help. And they get a ton of views.
There's also a lot of fake AI Hurricane Melissa videos going around. Some of them really look like the real thing.
But if you look closely, you can tell they were generated by whatever AI service the maker used. The motivation? Make a pile of money off somebody else's tragedy.
Though AI videos have exploited disasters and other big events for a while now, Melissa is arguably the first calamity to get the full AI treatment on social media.
Experts noted that Melissa is the first big natural disaster since OpenAI launched the latest version of its video generation tool, Sora, last month.
'Now, with the rise of easily accessible and powerful tools like Sora, it has become even easier for bad actors to create and distribute highly convincing synthetic videos,' said Sofia Rubinson, a senior editor at NewsGuard, which analyzes online misinformation.
I suppose the AI videos of Melissa are less harmful than political deepfakes: videos that make leaders appear to be something they're not, sway the public with big lies, and create a political environment detrimental to everyone except a cadre of billionaires and their brown-nosing fans.
Disaster AI videos are just clickbait. Sure, they're scammy. But they're easy and cheap to produce, and if the maker gets lucky, they can draw hundreds of thousands or even millions of views. A really lucky strike with a fake disaster video can rake in a few thousand dollars.
These fake AI videos can spread heartache, and interfere with storm recovery.
"In times of crisis, like a dangerous and imminent natural disaster, these fake videos can create confusion, panic and distraction at a time when accuracy can be life-saving."
One of the AI videos going around depicted the main airport in the Jamaican capital of Kingston as completely destroyed. Taken at face value, that video could have delayed or interfered with relief supplies.
In fact, the airport in Kingston was only lightly damaged. It reopened for both relief and commercial flights last Thursday, October 30, just a couple of days after the hurricane.
"I am in so many WhatsApp groups and I see all of these videos coming. Many of them are fake.... And so we urge you to please listen to the official channels," said Dana Morris Dixon, Jamaica's Education Minister.
AI videos can also falsely minimize the severity of a disaster. As Forbes noted, one widely circulated Melissa AI video shows Jamaicans partying, boating and swimming, which made the hurricane appear much less dangerous than it really was.
It's trickier when you're in a disaster zone and trying to find solid information to keep you safe. Often people in the middle of a disaster look to social media for guidance. An AI video made by someone who is unfamiliar with the area affected and how to respond to disasters can easily lead storm victims into danger, rather than helping them.
The AI videos are confusing the public so much that people are writing off legitimate videos as AI. One clip shows dramatic views inside the eye of Hurricane Melissa, taken from aboard a hurricane hunter plane. The video is legit, but it's so incredible that people tried to flag it as AI.
Yet another video, purportedly taken above the eye of Melissa, portrays it as a sort of wormhole, with clouds swirling around the eye and pouring into the "hole." If you understand meteorology, you'd catch the problem right away.
A hurricane's eye is the storm's "chimney," and the clouds around it are rising, not sinking.
Though it's still usually possible to distinguish a real video from AI nonsense, it's getting harder.
"In the past, people could often identify fakes through telltale signs like unnatural motion, distorted text, or missing fingers. But as these systems improve, many of those flaws are disappearing, making it increasingly difficult for the average viewer to distinguish AI-generated content from authentic footage."
It's kind of annoying that nowadays you have to look at everything with a super-critical eye to distinguish fact from fantasy.
But it's still possible. With AI video, as always, check as best you can whether the video comes from a reliable source. Look at the account's history on social media. If its past posts are pretty clickbait-y, you're probably looking at more AI clickbait in the video you're watching.
You can sometimes spot a Sora watermark, which shows the video is AI-generated. Apps can remove the watermark, but they often leave a smudge behind where you'd expect it, and that can act as a clue.
Even though scammers are getting better at producing videos, look for strangely shaped objects or blurred commercial signs in the background. Sometimes one object in the video has odd lighting, is strangely blurred, or doesn't have quite the right shape.
If you still can't tell whether it's real or fake, welcome to the club. I have a good eye for this type of thing, and sometimes I'm still not sure whether what I'm seeing is real or AI. In those cases, play it safe and don't share the video on social media.
Credible sources usually end up verifying legit videos. The one you're fascinated with will either be proven true or debunked. Once that happens, you can share it if you feel good about it.
Meanwhile, our social media feeds are getting more and more clogged with annoying fake AI videos posing as news.
It's just another example of how we just can't have nice things.
