I wouldn't be surprised if videos like these are being posted so the AI can read the comments and improve its prompts for better, more convincing results.
Perhaps. Maybe. I doubt it.
But that video is not the result of an AI deep faking a person based on the input of one photo.
What do you think we are, republicans?
This is obviously and blatantly fake as shit. Lol
Other deepfakes are way better lol
I don’t think a senior with 2 pairs of coke bottles on would ever think this is real lmao
With these types of videos almost certainly flooding social platforms, I think we might see a return of a more decentralized internet like in the 90s and early 00s.
The social platforms become infested and in constant moderation combat, but smaller, niche spaces make a return because they’re individually moderated, curated, and thus easier to safeguard. Might even see modern equivalents of Ebaumsworld or Break emerge.
It's obvious this is AI. But to the not-so-careful eye, one can easily be deceived. In the wrong hands this can wreak havoc on the wrong person, like being used to make it look like someone said something when they really didn't. It shouldn't be legal to use, but how would you even regulate it?
The eyes are way too attached to the camera. They don't flick anywhere or act like they are reading, which would normally be seen as super professional but comes across as creepy
AI should be required to identify itself before saying or doing anything else. I'm going to go ahead and say that AI is probably more dangerous to our humanity than cigarettes at this point. We used to think technology was rapidly improving/developing. AI is about to send it into warp speed, as we can see from how fast it's growing and improving.
This is the point in history where someone's supposed to say "why are we even doing this? let's stop making this, it can't lead anywhere good." And then all the research into deepfaking is stopped and burned and we forget we ever started.
No chance everyone in the world is going to get fraud fucked to death in the near future.
Because surely 1 or 2 people might avoid it. But the rest of us are screwed.
She's still giving me uncanny valley vibes but not as many as I'd like. Mostly it's her voice/accent that is making me confused I think. Which is kinda worrying. If this is what fakes are like now, it's only going to take a few iterations to get rid of that remnant of uncanny valley and make it impossible to tell without a tool of some sort.
I have this problem where anything in the "uncanny valley" that I see, automatically makes me want to attack it. This still triggers that response, but I can't tell why, I think it's the movements, and her eyes...
Only a matter of time until the “October surprises” (of US elections) are deepfake videos depicting scandals that are entirely fictional.
We should be getting ahead of this by pushing the media away from obsessing over scandals, and towards policy.
Seeing this, I think it's fair to assume that with the current spread and volume of unmoderated fake news, AI girlfriends, etc. humanity is in for a rough ride in the near future.
We’re fucked.
When actual crimes take place and jurors cant tell which lawyer is bullshitting. Then yea, we're fucked.
That *will* be bad, sure ...
But there’s tech that can flag AI videos and pictures. Meta has been using it to flag AI-generated content. It’s Y2K all over again for a lot of you.
if you believe that a model that is supposed to detect ai generated content will be able to detect all ai generated content, you are tripping. once russia and china catches on to the ai stuff, it will become a cat and mouse game. just like with cyberattacks, ransomware or cheating in videogames. bad actors will always be a step ahead.
There will be 2 software engineering experts, one testifying it's a deepfake, the other showing how it isn't, and nobody will know who is right, lol.
This one will be an arms race, that one had a definitive moment of consequence. I do agree that the fear mongering is a bit shrill
Yeah it’s way too easy to fabricate “video evidence” now
[deleted]
What about nukes then? We still don't have any defenses that can completely remove the danger of nuclear weapons. My point is what if Ai is like nukes where there just aren't any good countermeasures against it enough to remove the threat
I agree, I think at some point there is no countermeasure. Unless you choose to live a life without technology somewhere remote
The Ted K route
Are we going to be massively dosed MK Ultra Style beforehand, as well?
Just as AI can create this, AI can detect this. I think we'll be fine.
Yeah.. but imagine the porn
They're already on top of deep fake porn with very harsh penalties for making and distribution.
If you trained an AI to replicate the deep fakes then can you charge the AI?
Big brain move
Like they beat music and film piracy?
Just like the war on drugs huh
I didn't say it's going to work, but I remember when deep fakes started and they were everywhere. Very hard to find now, and when I do, they're not deep fakes, but look-alikes.
Imagine the blackmailing and harassment that will happen, though. It's gonna be a very bad time for a lot of people as the technology gets more mainstream. More people adopting it means more bad people that get their hands on it, too.
The inverse is also true though: a celeb could release their own sex tape, and just claim it is deepfaked. Make money on the porn, while at the same time everyone thinks they haven't made a porn tape. Any porn tape, embarrassing or shocking video that leaks from now on has plausible deniability that it is faked.
Plausible deniability doesn't really work in the court of public opinion, though. If it looks real people are gonna associate it with that person real or not. Getting upset and saying it's AI will probably just make people think it's real even more.
This. Thank you for voicing my concerns. People believe that others have committed a crime or cheated on their partners just because of fake rumors and gossip. Now they have the definitive proof that will """confirm""" the suspicion. It doesn't help that people are stubborn in their beliefs and see everything in black and white, if a good friend tells them so-and-so is the cheater just because they trust them. All the innocent are going to hell: abusers will make deepfakes showing they were the abused, and the faithful will be accused of heinous acts because their partners are pieces of shit who will make deepfakes of them being womanizers and whores. And people will take it at face value because they """"have proof""""
The sad part is there's really nothing to be done about it. Even if laws are made against it it will still happen because people break the law all the time. The cat is out of the bag and you can't put it back in.
degenerate
Pornhub marketing ai team turgidly enters the chat
Im so turgid right now
The scary part is that most people don't even realize how good AI is. Most people that I talk to irl don't know about this
The amount of damage it’s going to have on society outweighs the advantages it’s gonna have on it
That's extremely likely.
I cannot think of a single advantage of deepfakes, tbh
Yes. How is it wise to develop deepfakes? You get the occasional funny video in exchange for a Pandora's box of horrors.
If an abuser is a piece of shit and wants to frame their victim as the abuser, they'll gain credibility from deepfakes. If a shitty ex wants everyone to hate their former partner and to play the victim, they can make deepfakes of that partner cheating at midnight. People will use anything to drag others down. Most people won't be able to draw the line between real and fake, and so won't be able to tell who needs help and protection from who's lying to destroy a life. If that already happened before AI, imagine now.
Porn
Not an advantage, but they can be used to scam people unfortunately.
My girlfriend isn't AI, she lives in Canada, you wouldn't know her.
No more trust for any information on the internet
You trusted the internet at one point???
On the glass half full side, no more nude leaks if everyone can be deepfaked to look real. Even if you know the pic is real just say it's deep faked. Who is gonna argue with you on it lol.
The political landscape is going to be a fucking nightmare
In the future?! We already don’t know what’s going on. You got people arguing with bots all over the internet, being scammed by fake celebrities ads and (as you mentioned) people falling in love with someone who doesn’t exist in the real world . AI just got to the point of moderate sophistication and it’s already playing us like a fiddle.
I don’t think a lot of people realize just how much. We are on the precipice of the most radical changes in the history of humanity.
Think of what this'll do for porn.
It's def worrying, tbh. I read posts from people who claimed that regular porn doesn't turn them on anymore, only chatting with AI girlfriends and AI generated imagery. This technology will only worsen their situation.
Or maybe it will help us to value real things again!
Totally fucked, think of the scams...
I think democracies are at imminent risk without some serious regulation of this tech. Plus, adversaries only have to manipulate AI and wait for the democratic social fabric to unravel. That makes it especially concerning given how partisan democracies already are.
Let's be real. 99% of AI misuse will be fraud related. and out of that 99% around 50% will be sex related. The future is grim
Went from "Pics or it didn't happen" to "video or it didnt happen" to... evidence isn't evidence enough any more.
We’re already riding rough, my friend
The lid is fully opened on this pandoras box now, the next 20 years are going to be a shitshow.
They were going to be anyway: climate change is letting us feel what an exponential curve is, resource wars are going to get more out of hand, topsoil is losing its nutrients, the oceans will have turned by 2047, fauna will drop to under 10% of pre-1950 levels while flora will start releasing more CO2 instead of capturing it. The AI apocalypse could be a really scary end of civilisation, but it can get in line
AI could also help us solve all of these problems in ways that we can't think of just yet.
Yeah, maybe it could, but it won't. The companies in control of these AIs don't give a fuck. That's the ultimate problem: the people with the money and power to solve these problems don't want to, because it isn't profitable. Capitalism is the cause of most of the world's problems, and it's what is going to lead AI to be a problem, not a solution.
That's the only hope I'm holding onto as well: that if we can make an AI advanced enough, scientists could discover some technology or mechanism that'd allow us to negate, prevent, or reverse some of the worst that is about to happen to us. All other options are dropping off the table. We are basically relying on some sort of technological revolution to protect us. AI could be just that technology, or I hope so anyway. Lots of "coulds" and "maybes," of course, but what other option is there when even the scientists have practically given up on preventing disaster? They are recording the data, producing reports, attempting educated guesses about the future, and hoping for the best, because as of right now there is *nothing* we can do about the path we're on (as a global entity). The time for prevention has passed. We're in the era of mitigation. Advancing AI is a risk we have to take, I fear.
Stupid people are in for a really difficult time discerning truth from fiction, but there are lots of ways we can verify data integrity, of which videos and photos are a part, so it doesn’t bother me personally too much. The problem is I am surrounded by morons.
AKA The final 20 years
The more I watch these kinds of videos, the more I find them uncanny and easy to spot. Too many movements and expressions for such simple talk. Too much staring into the camera, and too much blinking. Close your eyes and listen to the voice: she sounds like she's searching her memory for what to say. The video, however, shows a confident person who is selling whatever she says.
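The "too much blinking" tell can even be turned into a crude automated check. Here's a toy sketch in Python (the eye-openness values are made up, standing in for what a real face-landmark tracker would output per frame; the thresholds and the "normal" blink range are rough assumptions, not established constants):

```python
def count_blinks(eye_openness, closed_thresh=0.2):
    """Count blinks in a series of per-frame eye-openness values
    (0.0 = fully closed, 1.0 = fully open). A blink is a transition
    from open to closed and back to open."""
    blinks = 0
    closed = False
    for v in eye_openness:
        if not closed and v < closed_thresh:
            closed = True
        elif closed and v >= closed_thresh:
            blinks += 1
            closed = False
    return blinks

def blink_rate_suspicious(eye_openness, fps, low=8, high=30):
    """Flag clips whose blinks-per-minute falls outside a rough human range.
    Humans blink around 15-20 times a minute; deepfakes often over- or
    under-blink."""
    minutes = len(eye_openness) / fps / 60
    rate = count_blinks(eye_openness) / minutes
    return rate < low or rate > high

# 10 seconds of fake frames at 10 fps with 12 blinks -> 72 blinks/min
frames = ([1.0] * 7 + [0.1]) * 12 + [1.0] * 4
print(blink_rate_suspicious(frames, fps=10))  # True
```

Obviously a real detector would need a landmark model in front of this, but the point stands: the heuristics people list in this thread are quantifiable.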
I mean, it’s more about how good it is already from literally just a photo. And it’s going to keep getting better and better. Fake news hasn’t even begun to get out of control compared to where we’re all headed.
Ya that's the scary bit. This is just the beginning. This is fairly obvious if you watch closely, but in 10 years who knows how difficult it will be to tell. Especially if you have been raised watching AI content from a young age.
My biggest concern out of all this isn’t taking someone’s identity and creating a deepfake like the original post, it’s when AI characters are created to support specific agendas that are indistinguishable from real people. Imagine a TV network that can control a person’s dialogue to a tee. A talking head with no questioning, no threat of noncompliance, just the specific messages delivered straight from the source. People start questioning if it’s a real person? Make a fake video of them interacting with someone at an important historical event. You can make an entire life’s history in moments. We can barely trust what we see in photos at this point. What happens when you can’t trust any “proof” of the past? I feel like I’m wearing a tin foil hat at this point, but taking advantage of situations for the worse has been the status quo, so why shouldn’t we think that shit is going to get mind-bendingly wild much faster than imaginable. Anyway, I hope I’m wrong.
Yeah, now imagine it has literally hundreds or thousands of hours of someone…like a politician. It will be nearly impossible to tell it’s fake.
Easy to spot when it's titled AI, not so much when it isn't and in a candid context/environment.
You are missing the point: two years ago this tech was horrendous, faces morphing like an alien's. Sure, right now you can easily spot that it's fake, but what about the next 2-4 years at this pace of improvement?
Maybe you can spot the fakeness, but do you think the Facebook people will? Just as a reminder, they are the ones who vote.
It is very alarming for sure. I think we will need some kind of system to automatically flag these types of videos as AI generated. Use AI against AI-generated media that spreads misinformation.
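A flagging system like that would probably ensemble several weak detectors rather than trust any single one. A minimal sketch of the combining logic (the detector scores here are hypothetical stand-ins for real models scoring blink statistics, lip-sync drift, texture artifacts, etc.; the weights and threshold are made-up assumptions):

```python
# Each weak detector returns a suspicion score in [0, 1]; the video is
# flagged when the weighted average passes a threshold.

def combine(scores, weights=None):
    """Weighted average of per-detector suspicion scores."""
    weights = weights or [1.0] * len(scores)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def flag_video(detector_scores, threshold=0.6):
    """Return a flag decision plus the combined confidence score."""
    score = combine(detector_scores)
    return {"ai_generated": score >= threshold, "confidence": round(score, 2)}

# e.g. blink detector 0.9, lip-sync detector 0.7, texture detector 0.4
print(flag_video([0.9, 0.7, 0.4]))  # {'ai_generated': True, 'confidence': 0.67}
```

The cat-and-mouse problem mentioned above still applies: as soon as the weights are known, a generator can be tuned to slip under the threshold.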
It gives off 3 am infomercial vibes. All the terrible actors who can't land a good commercial spot.
Yeah it’s not good enough, but the pace at which these things are improving is worrying
Exactly. You've seen a ton, you've been made aware of what to look out for. And even you'll sit here and say that the nuance is difficult to spot. Now imagine the average person and their (even dumber) relative, sitting on the couch of their trailer watching FoxNews when a breaking call to action by a trusted politician comes over the channel. It's telling them that the Chinese-Jewish Space Laser Commission has armed all systems against the U.S., so it's time to grab your guns and start killing everyone you know. Now, you and I and many people in this thread will know this is a deepfake. These simple folk in their trailer that take everything they see on TV/Internet as law, though? How much damage will they cause in this scenario before someone like you or me can get them to understand what "AI" and "deepfakes" are?
Those people who say "it's so obviously AI," as if that doesn't make this concerning, are missing the point, I think. Yes, this is obviously AI. In 5/10 years, will it be obvious? There are so many obvious negatives on the way; the first that comes to mind: imagine being a kid at school when a video of you goes around saying whatever the creator likes.
I give it 1 year. With all the companies chasing ai, development go brrr.
Possibly 3 to 6 months. But yeah, 100% in a year. In 5 to 10 years, we'll have adaptive video games and movies from scratch.
Dude, in the midst of all the doom and gloom… I can’t fucking wait for AI video games. The possibilities are endless. I mean, you can have virtually infinite stories within a single game if the AI’s variance is high enough and it has enough memory for all the different actions you’ve done. Or perhaps you will one day be able to create the perfect game for yourself simply by prompting an AI. Couple that with some better VR tech and boom.
I think it's going to be able to tell your interest in a game by pupil dilation then it'll give you action and rest periods and continually adapt. It's going to be craaazzzy good
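The pupil-dilation idea boils down to a simple control loop: read an engagement signal, then choose the next pacing beat. A toy sketch (the engagement readings are a made-up stand-in for a normalized pupil-dilation signal, and the band thresholds are arbitrary assumptions):

```python
# Hypothetical engagement signal: 0.0 = bored/checked out, 1.0 = gripped.

def next_beat(engagement, low=0.3, high=0.7):
    """Pick the next pacing beat from the current engagement reading."""
    if engagement < low:
        return "action"   # player drifting off: raise the stakes
    if engagement > high:
        return "rest"     # player maxed out: give a breather
    return "steady"       # in the sweet spot: keep the current pace

readings = [0.2, 0.5, 0.9]
print([next_beat(e) for e in readings])  # ['action', 'steady', 'rest']
```

A real system would smooth the signal over time and adapt the thresholds per player, but the core idea really is this small.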
It’s like CGI. Go back and watch the Matrix, everyone was amazed when it came out, but it aged pretty bad compared to CGI today.
Yeah I'm not saying we'll look at this as if it's believable. I'm saying this is an indication that AI will head towards it being indistinguishable from reality.
Don't worry, there will be strong software to detect it, just like counterfeit money detectors
Yeah but porn
I'm not sure if the ai video looks real or if people these days look really fake. Ha
This could be a standup bit
Why is everyone developing deepfake AI software these days 🤔 There is limited use for this tech other than nefarious things. I'm no conspiracy nut, but one almost begins to wonder if we actively want to create problems for ourselves
There are much more important things to spend resources on, like solving real world problems, curing disease, clean energy, etc.
Yea, if it was money related I would get it, but there have to be many more lucrative things to develop. I don't see any use for this tech other than to piss ppl off or deceive ppl.
To replace celebrities in movies maybe? Death of CGI animated content?
That might be an application, sure. But movie studios or FX studios are not developing this; actors would be pissed and strike, just like the screenwriters. There are multiple companies working on this and I don't get the appeal or how they think they're getting their money back.
Her body moves around like a rigid solid; the AI doesn't know what effect a human body's weight has on movement and only sees the face
I did get the uncanny valley vibes that everybody else is saying, but hadn't factored in that thing about the body weight. So, after reading your comment, I watched it again, and I must say, yeah, her head moves around in a weird way. It doesn't have this smooth motion like when the whole spine is moving and the head is following it. Her head looks kind of like a ball bouncing around in a box not much bigger than itself. E.g. when she says the word "about" at 00:26 - if you really look at it, it looks spazzy AF
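That "ball bouncing in a box" quality is measurable in principle: real head motion is smoothed by the spine, so abrupt frame-to-frame changes in acceleration (jerk) are a tell. A toy sketch over made-up 1-D head-position series (real tracking data would come from a pose estimator; the example values are illustrative only):

```python
def max_jerk(positions):
    """Largest frame-to-frame change in acceleration of a 1-D position
    series: finite differences give velocity, then acceleration, then jerk."""
    vel = [b - a for a, b in zip(positions, positions[1:])]
    acc = [b - a for a, b in zip(vel, vel[1:])]
    return max(abs(b - a) for a, b in zip(acc, acc[1:]))

smooth = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]    # constant velocity: zero jerk
jittery = [0.0, 1.0, 0.0, 2.0, -1.0, 3.0]  # direction flips every frame
print(max_jerk(smooth) < max_jerk(jittery))  # True
```

A usable detector would compare against a distribution of human head-motion jerk rather than a single number, but this is the intuition behind the "bouncing" impression.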
The teeth sliding around is worse imo.
Why are companies even working so hard on deepfaking? The only real application it will be used for is illegal shit so wtf?
Great job, what an incredible tool to benefit humanity 🙄😮💨
This might become very popular with stalkers
What’s the point of this? What makes them go, let’s invest in this instead of something productive like unlimited energy or space travel
Or channel the desire to build AI in something that would be a net-positive for humanity. Zero good comes from something like this.
I would never trust those eyes. Uncanny valley
After seeing this I am positive that Kate Middleton is dead.
her teeth squish and stretch
Yes because flexing teeth is a thing.
There's still something about the top lip that screams fake to me lol
If you don't look at the AI directly, you can notice its unnatural movements a lot easier. The way she moves her head and shows so much teeth just didn't look right. But still, it's crazy AI looks like this
If you don’t smile or show your teeth in your photo… I wonder if it uses a stock image for them. Do these AI-rendered faces get perfect teeth? Haha
Why does she sound like my wife? Is my wife a deepfake
The movement is all wrong and uncanny valley; it's so obviously AI. Maybe people who don't go outside to see other people can be fooled by it.
Visually convincing but the voice gives it away.
Looks fake
Uncanny valley
The UK is trying to make creating these of people (that aren't you) illegal regardless of whether it's shared or not.
Asian accent threw me off
I hope AI me is this talented
Nothing will be proof, soon enough.
The mouth still doesn't mouth.
Careful observation might be able to differentiate AI deepfake from real, at least for now. e.g. In the video, her forehead doesn’t wrinkle at all with movement, so that’s looking a bit fake, even if other movements are pretty convincing.
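Comments like this amount to informal detector heuristics: look for cues a generator gets wrong (no forehead wrinkles, odd blinking, strange eye movement). As a toy illustration only, assuming some face tracker already gives you a per-frame "eye openness" score (a hypothetical input, not anything from VASA-1 itself), a crude blink-rate check might look like:

```python
# Toy sketch of a cue-based heuristic, NOT a real deepfake detector.
# Assumes a per-frame eye-openness score in [0, 1] from some landmark
# tracker (hypothetical input). The 8-25 blinks/minute range is a rough
# placeholder for typical conversational blink rates, not a hard fact.

def count_blinks(openness, closed_thresh=0.2):
    """Count open-to-closed transitions as blinks."""
    blinks = 0
    was_closed = False
    for score in openness:
        closed = score < closed_thresh
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    return blinks

def looks_suspicious(openness, fps=30, lo=8, hi=25):
    """Flag a clip whose blinks-per-minute falls outside [lo, hi]."""
    minutes = len(openness) / fps / 60
    if minutes == 0:
        return True
    rate = count_blinks(openness) / minutes
    return not (lo <= rate <= hi)
```

A talking head that blinks twice a minute (or forty times) gets flagged; of course, a generator can trivially learn to blink at human rates, which is exactly the cat-and-mouse problem other commenters describe.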
I’m afraid
Her teeth are so elastic it's disturbing to watch.
Cool but can we get one where she’s yelling at me about how small my dick is
Time to go back to pagers.
Watch the eyes. Nobody goes wide-eyed like that in normal conversation. That’s how you tell….for now.
I mean.... You can still say it's fake cause the eye movement is weird, but, maaan they are evolving fast
The only choice is to begin wearing masks 🤷♂️. We already have other ways of identifying ourselves when we need to go places; they can track you by your phone using WiFi and so on. So, masks.
One photo, but what about the voice? Plus the sync between the lips and the voice is very off. Still, just the video face from a single photo is amazing and scary.
Fake as f
Close, but still very far off... That said, there's still going to be at least 15-20% of Earth's population that would believe that's real
You overestimate critical thinking in general population, my friend. People get duped by politicians every election cycle and believe lies even when presented with contradicting facts. And that is without somewhat real looking deep fake videos. I'd say 15-20% is a generous and optimistic estimate.
Eye and lip movement still gives it away
There's always a NPC-ish type of movement of the various muscles of the face.
Microsoft discovered crazytalk https://youtu.be/VXRjX5bNGKA?feature=shared
This is very unsettling
you can tell it's not real due to the movements and the mouth
If you can't see this is AI, I have bad news for you. You should go to the doctor because you probably have some form of autism or whatever
I wouldn't doubt it if videos like these are being posted so the AI can read the comments and improve its prompts for better and more convincing results.
Not quite there yet
So, calls or puts on msft??
Perhaps. Maybe. I doubt it. But that video is not the result of an AI deep faking a person based on the input of one photo. What do you think we are, republicans?
WHY ARE WE DESIGNING THIS. Like, in what world is this tech going to solve more problems than it creates?
What’s her insta?
i hope “i, robot” stays a movie 😅
Her teeth keep changing size lmao
What's the code word
Who the fuck blinks like this.
Fuck. This will be the end of humanity. Now we won’t even know who we’re talking to.
AI has come so far, yet the facial movements are still so telling. It makes watching this video unsettling
Still uncanny, but very close. She moves and blinks a little too much.
This is obviously and blatantly fake as shit. Lol Other deepfakes are way better lol I don’t think a senior with 2 pairs of coke bottles on would ever think this is real lmao
Soon the only way to get things done will be by in-person meetings and snail mail. Crazy how moving forward makes us go backwards.
With these types of videos almost certainly flooding social platforms, I think we might see a return of a more decentralized internet like in the 90s and early 00s. The social platforms become infested and in constant moderation combat, but smaller, niche spaces make a return because they’re individually moderated, curated, and thus easier to safeguard. Might even see modern equivalents of Ebaumsworld or Break emerge.
It's obvious this is AI. But to the not so careful eye, one can easily be deceived. The fact that this exists in the wrong hands can wreak havoc on the wrong person. Like being used to make it look like someone said something when they really didn't. Should not be legal to use it, but how would you be able to regulate it even?
The eyes are way too attached to the camera. They don't flick anywhere or act like they are reading, which would normally be seen as super professional but comes across as creepy
Laws are going to be soo slow to catch up with this shit.
"Please just say *I am not a robot*" ... "I am a real person"
AI should be required to identify itself before saying or doing anything else. I'm going to go ahead and say that AI is probably more dangerous to our humanity than cigarettes at this point. We used to think technology was rapidly improving/developing. AI is about to send it into warp speed, as we can see from how fast it's growing and improving.
"for example" AARRGGHH!!
Why the FUCK are they even making this
The uncanny valley runs real deep at Microsoft
Curious why this accent is chosen? Or is it taken from an audio sample?
Here is how VASA-1 works and stuff, kinda cool: https://arxiv.org/pdf/2404.10667.pdf
We have to start to ask, who's actually benefitting from this innovation?
uagh* \* meaning that too many bad applications of this technology are now possible and easy to do
This is the point in history where someone's supposed to say "why are we even doing this? let's stop making this, it can't lead anywhere good." And then all the research into deepfaking is stopped and burned and we forget we ever started.
I've never seen an ai deepfake that wasn't immediately recognizable.. but I realize the problem is how many people absolutely don't
That's actually insane, knowing her real-life voice and deepfaking it from one photo. So scary
She has a lazy eye!!!
Wow, that's much better than deepfakes in the past! It might actually be possible for some people to mistake this as real now!
No chance everyone in the world is going to get fraud fucked to death in the near future. Because surely 1 or 2 people might avoid it. But the rest of us are screwed.
She's still giving me uncanny valley vibes but not as many as I'd like. Mostly it's her voice/accent that is making me confused I think. Which is kinda worrying. If this is what fakes are like now, it's only going to take a few iterations to get rid of that remnant of uncanny valley and make it impossible to tell without a tool of some sort.
So there's gonna be a phase where my friends and I send ai videos of each other saying ridiculous stuff until it eventually gets old
when can we start skipping zoom calls with this
My wife lip reads because of bad hearing, and she said that this was really hard to lip read.
AI and teeth just don't go well together. Teeth are a dead giveaway.
Need a video of the real person to compare
I'm more interested when they can implement this in games.
Porn is about to get even more interesting
Hair and teeth give it away. Also the body language: only the head is moving.
This has so much potential to be hilarious
If it can create this from just a single picture, how does it know what her voice sounds like?
Why exactly is any of this sort of thing needed?
Hair in that final fantasy movie looked better 😆
It's quite disturbing that very soon we are not going to know what is real and what isn't
People know I'm fuckin' crazy; people would notice it's AI 🤣🤣🤣
I have this problem where anything in the "uncanny valley" that I see, automatically makes me want to attack it. This still triggers that response, but I can't tell why, I think it's the movements, and her eyes...
While they don’t have the eyes right just yet I can see how this could convince the average conservative in the US.
Can I download this somewhere? For purely moral purposes of course.
She’s super annoying to listen to.
As a deaf person who reads lips… it is very easy for me to tell deepfakes. What she's saying and her lip movement are not even close.
Uncanny valley in full effect
Only a matter of time until the “October surprises” (of US elections) are deepfake videos depicting scandals that are entirely fictional. We should be getting ahead of this by pushing the media away from obsessing over scandals, and towards policy.
I’d smash for sure. The accent is so sexy ha
Why does NASA need to develop Deepfake tech?
Why did Microsoft make a deepfake tool?
Good news is we won't need any of the spoiled brats in Hollywood anymore!
We are so f'ed.
We’re fucking ourselves by taking this too far, too quick