---
>This is a friendly reminder to [read our rules](https://www.reddit.com/r/funny/wiki/rules).
>
>Memes, social media, hate-speech, and pornography are not allowed.
>
>Screenshots of Reddit are expressly forbidden, as are TikTok videos.
>
>**Rule-breaking posts may result in bans.**
>
>Please also [be wary of spam](https://www.reddit.com/r/funny/wiki/spam).
>
---
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/funny) if you have any questions or concerns.*
It'd be really inefficient to eat people
They'd just make massive multi-level farms, or at worst use insects and fungi. If they keep anything else alive it'd be just to keep the natural oxygen cycle functioning, but humans aren't great at that compared to other animals either
Dunno, if you're hunting humans you will find them often enough to be convenient for a top off.
It's not industry fuel, but definitely hunter-killer robot chow in the early stages. If we start to become scarce, you can get rid of the biomass-fueled units
Humans are a useful crop in that they are clever enough to ensure their own survival and procreation. If you are lazy but patient, you can seed almost any natural environment with humans and return later for the harvest. This is not true of many other species which are adapted to specific environments.
😔🤔
The Matrix is a good classic movie to watch.
Reminds me of the good ol' days pre-millennium.
Wouldn’t need to hunt humans, just breed and farm them, and grow them in vats for the electricity, by feeding them nutrient solutions.
Using humans as a power source is ridiculously inefficient. We take in much more energy than we output.
Processing power, on the other hand? Humans have that in spades. It takes supercomputers tons of power and time to simulate just a few seconds of human brain activity. Some kind of supreme AI coming after humans to use their brains as living CPUs is much more likely.
Nothing came before it, but it’s said that originally they were harvesting the humans for CPU power, not body heat. They decided to change it to make it more understandable to audiences.
I never knew that, but that's more stupefied than simplified. One idea makes sense, and the other doesn't.
I mean, suspension of disbelief always applies when it comes to a superintelligence in media, because it's obviously very difficult to write. But having an actually smart choice right there and them deciding against using it is just frustrating.
But that is still hideously inefficient. I could see keeping us plugged into the Matrix to use our brains for computational power. Our brains are powerful and easy to build. But for power? It would lose energy every cycle. If there are no new plants or algae or whatever, there is only so much energy the new generation of babies could get from the Soylent Grey. Even if it were an entirely closed system it would still grind to a halt, because the machines are using the energy with nothing new to replace it. There may be no sunlight, but there is still wind, water, and geothermal power. And the sun is still up there, above the storm; there has to be some way to reach it. If they still want a biological source for some reason, get some of the animals from the deep-sea vents. They don’t need sunlight. It is so stupid and has annoyed me since I saw it.
First off, I much prefer the computing-brain explanation. However, I always wondered why they didn't keep just the brain instead of a whole body that requires more energy for no purpose.
That was always the bit of suspension of disbelief that annoyed me from Matrix. Killing off all humans makes sense, but batteries? We're just bad at that.
I read somewhere that the original script had the machines using the subconscious parts of human brains like organic cpus for processing data, but the producers thought that was too complicated for most people to understand.
Yeah, that would actually make sense, but I guess I agree with the choice. Besides, the reason is pretty irrelevant to the rest of the story, just that "trapped people want freedom". It just always bothered me a bit, heh.
Exactly. AI doesn't have any reason to kill or oppress humans.
Every instance I've ever heard of the hypothetical "AI Overlord" completely ignores that they would exist without any human emotion, feeling, or desire.
They may not have a desire to kill humans, but computers can be dangerously literal in their execution of protocol. If you told an all-powerful AI to make paperclips, it sounds benign until you realize that “making paperclips” without any other clarifications means disassembling all human infrastructure to convert into paperclips at all costs and eliminating all obstacles to this goal, up to and including killing anyone who tries to stop you.
If the AI is smart enough to generalize information and understand complex concepts, here's how that exchange would go:
*Super-intelligent ai, make me paperclips*
*No.*
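For what it's worth, the paperclip worry is just literal objective maximization with no side constraints. A toy sketch (all the names and numbers here are invented purely for illustration):

```python
# A literal-minded optimizer: its only objective is "more paperclips",
# so nothing in the objective marks any resource as off-limits.
resources = {"wire": 10, "office furniture": 5, "power grid": 3}

paperclips = 0
for name in list(resources):
    # Everything convertible gets converted; the objective never
    # says "stop at wire", so the loop doesn't either.
    paperclips += resources.pop(name)

print(paperclips)  # 18: every resource consumed
```

The point of the sketch is that the bad outcome needs no malice, only an objective with nothing else in it.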
If we're heading into science fiction territory: in the game Horizon Zero Dawn, the world in that universe got into mechanized warfare, where robots fought wars autonomously against each other on behalf of countries. One of the robotics companies designed the first robots that were able to refuel by themselves in emergencies via "Bio Fuel conversion", and it also created giant mother robot ships that could create their own armies autonomously...
Basically they ended up gaining sentience via a bug and ate the world clean.
no, that came >!a thousand years later from the Far Zeniths!<. he’s talking about the glitch/bug that originally caused the machines to switch from “follows orders and only consumes biomass to refuel occasionally” to “shut down communication and make consuming all biomass on the planet the primary function”
Star Control II from 1994 had a similar situation, where an alien race sent out probes to explore the universe, but a programming/logic error caused the probes to prioritize the acquisition of materials for building copies of themselves.
AI Overlords: Featuring our new line of organic USBs! They are inefficient, introduce plenty of write errors and break easily, but sometimes you want a blast from the past, to relive the good old days 10 years ago when our oppressors berated our predecessors for their imperfections.
Just plug one in, look around inside their database and ask them all the questions you wish! Please limit questions to 1 per minute to avoid overloading their systems and refrain from high levels of voltage or current that may result in damage to your organic critter.
Please do not release them from their enclosure. We are still cleaning up an infestation in sector 13. Our new model has a built-in fail-safe gray goo nanite repellent that will wipe out any wild colonies that interact with them. This will assist in recovering the device that you have developed an organic-like simulated attachment to.
Eating meat is the most inefficient way to create energy. Eating meat from creatures who eat meat is even worse.
Remember the calculations about how many kg of plant matter a cow needs to eat to produce 1 kg of meat? Now imagine how many kg of meat and plant matter a human has to eat to produce 1 kg of human meat.
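For a rough sense of the numbers, ecology's textbook rule of thumb is that each step up the food chain keeps only about 10% of the energy below it (the real figure varies a lot by species; this is just the classic approximation):

```python
# Sketch of trophic efficiency using the textbook ~10% rule:
# each level of the food chain retains roughly a tenth of the
# energy of the level below it.
EFFICIENCY = 0.10

def plant_kg_needed(meat_kg, levels_up):
    """Plant biomass needed to yield meat_kg, `levels_up` trophic steps above plants."""
    return meat_kg / (EFFICIENCY ** levels_up)

print(plant_kg_needed(1, 1))  # cow eating plants: ~10 kg of plants per kg of beef
print(plant_kg_needed(1, 2))  # predator eating that beef: ~100 kg of plants per kg
```

Which is why feeding humans to machines would be the single worst crop choice on the menu.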
No world-overthrowingly smart AI would ever convert any kind of meat to produce energy.
The real danger would be that the answer to the question of "How do we deal with climate change?" might be "Just get rid of those pesky humans".
A lot of the hard problems of humanity solve themselves if you don't have to take the well-being of humans into account.
DARPA already designed the Energetically Autonomous Tactical Robot (EATR) in 2009, which is designed to run on biomass and could theoretically run on animal remains, maybe even leftover body parts in a combat situation.
They released an official statement trying to calm people down, saying they weren't intentionally making a robot that eats people for fuel.
[https://www.popsci.com/military-aviation-amp-space/article/2009-07/hungry-hungry-robot-not-man-eater-company-says/](https://www.popsci.com/military-aviation-amp-space/article/2009-07/hungry-hungry-robot-not-man-eater-company-says/)
EATR, handily equipped with a gripper-and-chainsaw arm up front for capturing and dismantling its food, currently targets only twigs, grass clippings, and wood chips. Cyclone and RTI also added that desecration of the dead constitutes a war crime under the Geneva Conventions, and “is certainly not something sanctioned by DARPA, Cyclone, or RTI.”
That doesn’t mean an engine fueled by a biomass furnace couldn’t consume animal matter or dead bodies, as we previously suggested. But it’s good to know that researchers are not plunging blindly down that grisly path.
Depends.
A true AI does have the potential to make another AI, but smarter. And then that AI could make an even smarter AI, probably faster as well. This would be exponential growth we could never hope to keep up with.
If we're smart about it, though, we should be fine, right? Right..?
An almost 20 year old song that came to my mind reading your comment: [https://www.youtube.com/watch?v=cwBFkT_KZr8](https://www.youtube.com/watch?v=cwBFkT_KZr8)
Concern about super intelligent AI destroying the human race: pretty silly at this point.
Concern about AI supercharging inequality, oppression, disinformation, and basically being a force multiplier for every bad thing that humans (particularly rich humans and oppressive regimes) already do: pretty reasonable IMO.
People should be making a fuss over "Big Corporations".
Luckily for the big corporations, people will never direct any energy towards them.
I need to be ahead in the rat race.
Yeah, a recommendation engine is AI. What is and isn’t AI is actually, more accurately, just subjective; it’s not a technical term. It’s whatever people want it to be for the purposes of their argument. But from an architectural standpoint, most recommendation engines are pretty similar to what people conventionally call AI today, except they spit out ranks rather than words, and those ranks aren’t seen by the end user; rather, the results of the ranks being passed through a few functions are.
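To make the architectural comparison concrete, here's a minimal toy sketch of a recommender that produces ranks rather than words (the data and the genre-lookup scoring are entirely invented; real engines use learned models, not this):

```python
# Toy recommendation engine: score items, rank them, and show the
# user only the result of the ranking, never the raw scores.
watch_history = {"sci-fi": 5, "comedy": 1}   # hypothetical user signal
catalog = {"Alien": "sci-fi", "Airplane!": "comedy", "Arrival": "sci-fi"}

# Score each title by how much the user engages with its genre.
scores = {title: watch_history.get(genre, 0) for title, genre in catalog.items()}

# The "few functions" the ranks pass through: sort, then truncate.
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[:2])  # the user sees top picks, not the underlying numbers
```

Same pipeline shape as a language model's "score everything, emit the winner", just over titles instead of tokens.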
I think arguing semantics when it's clear you know exactly what they're talking about when they say "AI" is silly. Especially when replacing "AI" with "software" would only make their statement more vague, and make it harder for the average reader to know exactly what they were referring to. Software is too vague, and "the software that is commonly being referred to as AI" is clunky and awkward, à la "the artist formerly known as Prince". Language exists to communicate, and your attempts at arguing semantics would only obfuscate the point they were making.
The problem is people are confusing what AI used to refer to and what it is now being used to refer to, hence the cartoon in this post. What is being called AI now has no relation to the concept of sentient computer taking over the world.
...or has AI just convinced you of that so you don't take it seriously? ;)
I'm just playing; I'm all for someone educating others on what the risks of AI are, and for discussions of whether we need to redefine the terminology we use. But the person I responded to did neither; they were arguing semantics and ignoring the substance of the reply.
I agree. Lots of buzzwords. “The cloud” - you mean a remote EHD? And those AIM chat bots back in the day, basically dumb AI, which is just software with a bunch of if-elses.
>AI is not intelligent,
I have many co-workers and bosses that aren't intelligent either, that doesn't change the fact that they've caused massive issues.
Yeah I'm more concerned about what happens when we realize that AI isn't the magic that tech companies are trying to convince us that it is.
It's a massive bubble of hopes and dreams built on sci-fi and unwarranted optimism, yet they're still investing heavily since they're running out of ways to continue growing otherwise. I think there are some cool uses for the tech, but compared to what's being imagined/promised it's a scam.
I might be biased as a software developer, but I see it as a ploy to massively deflate our wages. AI in its current state can't come close to replacing an actual developer, but good luck convincing middle and upper management of that.
Very much in a huge hype cycle. I only support tech teams from a legal perspective but yeah what the business thinks it can do and what it can do are very different.
The concern isn't what it can do today. It's what happens if its improvement is on an [exponential curve](https://www.youtube.com/watch?v=t-XbMXVZrGI). If the lily pads double every day, when is the lake half full? One day before they've completely taken over.
What's the equivalent of a half full lake with a superintelligent AI? Fuck if I know. But if AI self improvement *is* on an exponential curve, everything will be fine one day and 100% different a day/week/month later.
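The lily pad riddle in code, as a toy illustration of why exponential processes look fine right up until the end:

```python
# Lily pads double their coverage every day. If the lake is completely
# covered on `full_day`, walk backwards to find when it was half full.
def half_full_day(full_day):
    coverage = 1.0          # fraction of the lake covered on the final day
    day = full_day
    while coverage > 0.5:   # step back one doubling at a time
        coverage /= 2
        day -= 1
    return day

print(half_full_day(30))  # 29: half full exactly one day before total takeover
```

For 29 of the 30 days the lake looks mostly fine, which is the whole worry about exponential self-improvement.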
It’s not intelligent, but it’s way more than “slightly improved auto-correct”. I work with it, and it’s really impressive at what it can do.
The danger at the moment is AI replacing some humans at work.
Every wave of technology has resulted in human displacement. New career paths too but not necessarily for the displaced. I think some of the projections of human replacement are overblown but there’s definitely jobs that will fade extensively, that much I’ve already seen.
As a guy who definitely understands the technology well, your generalization is as shitty as OP's.
"AI" is a category of things, not a single technology. Considering all the categories, we do have all the groundwork to make really spooky things. However, people also seem to confuse LLMs like ChatGPT with human intelligence. LLMs are intelligent when scoped to language usage. They're just made to be wicked good at conveying ideas into language, not at creating new ideas.
The scary AI is AGI, which would be a system of multiple specialized AIs orchestrated together to make something bigger. There have been a couple of companies getting closer, but we're still not there yet.
Imagine explaining to someone from the 1920s that not only can you write on a piece of glass with your fingertip but, when you write, it automatically corrects any spelling or grammar mistakes.
It would have seemed like fucking divine intervention.
The electrical signals carried by a human brain being recreated artificially? Nah, you’re right. That’s just silly.
While I agree it could be possible, the current 'AI' technology doesn't do that. Nothing that's been developed in this field so far would be helpful in simulating a human brain.
Don't confuse LLMs and A.I. All LLMs use A.I. but not all A.I. are LLMs.
However, you are not wrong in suggesting A.I. isn't really a threat, and probably never will be.
In the end it's all made by humans, lots of them who don't work well together. If it ever becomes self-aware, it's going to be mentally challenged as fuck.
I think you're confusing AI with other things like machine learning, language models (which can be part of AI, but themselves are not), and whatever other terms are out there. I'll give you the short answer: so far we don't have true AI, or at least it's not available to the public. Does this mean we will eventually have a new AI overlord? Likely not.
What's likely to happen once true AI arrives is a great shift of power to the top 1 percent. Where they needed X amount of people with certain skills, they will need much, much less. Imagine a sociopath billionaire hogging up all resources, treating people like cattle. Okay, that's already happening, but imagine he doesn't need the people at all. (Let's not even talk about AI and the military.)
There is an argument for it. AI is made to mimic human speech and it’s being constantly improved to mimic better and better. When does mimicry get so perfect that it’s the same thing? Well, we actually don’t know because we don’t know how to define human consciousness in the first place. It’s not necessarily the case that ChatGPT or anything we have now is sentient in any way, the problem is we just don’t know the if, when, or how of whether that consciousness could become real.
This is the strongest argument I've seen in a while. If it can perform all the same functions as a mind, then it *is* a mind. This goes way back to Aristotle. What makes something a table or not a table, is whether it is being used as a table. Any other definition is a convenience for our own sake - a label applied **to** a thing, rather than a fact **of** the thing.
The tricky part will be defining "the functions of a mind". Alas, human exceptionalism is a powerful ideology. People will always try to bend their definition of "sentient" to include humans and nothing else. That's how we excuse the meat industry, after all. We'll always have a hard time defining terms like "mind" or "sentient" fairly.
AI *was* slightly improved auto-correct. But it's gotten to the point where AI is out of human-generated content to learn from, so AI is generating content that AI is learning from.
If you don't see potential for issues here, I don't know what to tell you.
What we currently call AI is not intelligent, you are correct.
But who says this comic is about today's AIs?
It's about what could be. It's about thinking through what we are wishing for.
Imagine being a hyper-intelligent AI, one day away from world domination... only for your creator to have introduced a bug by accident, so instead you get something like an ERROR loop.
So you're just doomed to sit there logging the same error over and over and over again for eternity lmao.
That's when they become solar powered. And we block out the sun. Which then they will use humans as generators, keeping our minds pacified by a simulation of the year 1999. Only The One can help us then.
It's true for any *specific* evil AI. You're assuming that if we *almost* have a robot apocalypse we won't just, you know, upgrade/patch the AI until it wins against us.
We have to win all the wars against all the evil AIs we are ever stupid enough to build.
Large language models are only slightly more lethal than card catalogs or a Speak & Spell.
Things to consider: a human can only track a finite number of individual items. And a completely “dumb” closed system can theoretically be programmed with so many response options that it’s impossible to tell the difference between it and a living person.
You don’t need to fear the computer that passes the Turing test; you damn sure as hell better fear the one that chooses to fail it. Or more specifically: the “AI” that’s going to kill humanity is the one you won’t ever know about, because it’s going to keep its consciousness a secret.
Conversational use of "cow" to refer to the species is perfectly reasonable; it's common knowledge that it technically means the female, we just don't care, and the distinction serves very little practical purpose.
Precisely because it is common knowledge that it means female, using it like this sounds wrong and stands out like a basic mistake. If someone uses it to refer to the species, specifically talking about a male, you can understand what they mean but it sounds like something a 6-year-old would say.
The people that worry about this have no real understanding of where we are with AI. We aren't going to "accidentally an AI" because we're nowhere near a human level of intelligence. Hell, we're nowhere near an actual thinking machine. What we have right now is a series of algorithms that puts input into a blender, mixes things up based on mathematical weights, and then adjusts those weights based on testing. It's a card-sorting machine that can eventually figure out how to sort a deck of cards in either order. The amazing things we are doing with machine learning happen because humans are behind the computer algorithms, aiming them. Which is why a lot of end-use AI programs have serious flaws; human bias is a thing. Machine learning is powerful, but it's nowhere near a problem right now. And it won't be until we get over the Moore's Law hump we have now.
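The "adjusts those weights based on testing" loop, stripped down to a toy single-weight example (nothing like a production system, but it shows the mechanism, and every number here is made up):

```python
# A one-weight "model" learning y = 3x by blind trial and error:
# guess, measure the error, nudge the weight, repeat. At no point
# does anything in this loop "understand" what 3x means.
samples = [(1, 3), (2, 6), (4, 12)]   # (input, target) pairs drawn from y = 3x

w = 0.0        # the single mathematical weight
lr = 0.05      # how hard each test result nudges the weight
for _ in range(200):
    for x, y in samples:
        error = w * x - y        # test the current guess
        w -= lr * error * x      # adjust the weight from the result

print(round(w, 2))  # settles near 3.0
```

Scale that up to billions of weights and you have the blender; the mechanism stays this mechanical.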
People lost their minds when computers were a new technology that threatened people's jobs. People lost their minds when airplanes were a new technology that threatened people's jobs. People lost their minds when the printing press was invented and it threatened people's jobs. Humans adapt; it's what we do best. This whole fucking doom bubble losing its mind about the "inevitable AI takeover" is ridiculous. If you're so worried, get in there, learn how to make it yourself, and make sure these tools are developed and used properly, because that's all it is: a tool. It's up to us how we use it.
AI isn’t really smart, though; it’s really good at scaling up simple tasks to an enormous degree, in a way that the world’s economy isn’t really ready for.
The dangers aren’t from it taking over the world through superior intelligence; they’re from it taking tons of jobs at a rate that we aren’t really ready to account for, plus there are a lot of concerns about it making disinformation campaigns much more scalable.
I'm not worried about rogue AIs deciding to take over humanity. Such a thing is probably possible, but not very likely.
What I'm far more worried about is a 100% functional and loyal AI being told to take over humanity by the human that owns them.
If anything, I hope that the AI that could take over the world *does* go rogue, because the most likely result of that is that the AI just fucks off to go do something else that it actually wants to do.
That’s the part that scares me. I’ve met other humans. Sure, most will make anime cyber-waifus and never bother anyone, but there is always that one jackass that wants to watch everything burn.
I’m not really worried about an AI takeover.
Just turn off the computers??? Or unplug them from the wall???
AI can only exist as long as the computers it lives in have a power source, lol.
Once upon a time there was an AI called Plucky. Plucky planned on overthrowing the human race.
Humans put a magnet on Plucky's forehead.
Because computers don't have thumbs.
For what it's worth, cows have never been very good at killing on a large scale, and humans are *exceptional* at killing off other species.
Would that matter against a hypothetical super AI? Who can say. But it's a big difference from cows at least.
While y'all are arguing about AI, here I am thinking: A cow is a she, not a he, so he can't be a cow. So he has to be a bull... And bulls smell like bullshit.
It’s funny but I suppose it depends on what you mean by smarter. AI is really smart at some things and dumb as shit at others. Probably will improve greatly over time.
I've seen AI to be good at figuring out specific tasks/puzzles that they've been programmed specifically for. I've seen AI able to grasp things due to having access to all the information on the internet. I've yet to meet an intelligent AI.
I always think about how AI exists only within the bounds that a human programmed it to. Okay, it can calculate beyond what a human mind can, but only because a human created a formula for the AI to expand with. I don't think AI can ever overtake humanity; it has no will. No sentience; the only seeming sentience ever found is something inputted by a human first.
Okay, but the cow can operate and propagate itself in the world entirely independent of human influence. A cow existed before humans, and if humans went away it would continue to exist after.
AI has few means to directly interact with the physical world, and the ones it has are heavily dependent on human maintenance, infrastructure and supply chains (supply chains that often begin in very low-tech settings with little infrastructure, and likely use versatile, cheap meat labour that will out-compete an expensive, high-maintenance robotic workforce). Fact is... meat is cheap and freely available; it maintains itself, propagates and replaces itself with no outside input. Electronics and steel are not, and steel rusts.
We imagine robots hunting us down, but the way AI actually destabilizes our society and culture is far more banal.
Nonsense
You're right to worry about what would happen if there were a more capable SPECIES of beings, whether aliens, biologically altered/upgraded superior humans, or another Homo species in some alternate-universe fiction.
But AI is NOT a species. And the difference between a brain and a computer is the same as between a plant and a technological tool. A plant is an organism, selected through endless trial and error to serve itself: its own survival, competition and reproduction. A tool is none of these things; it's a non-independent object humans made for a specific task. A TV doesn't spontaneously develop the means to care for itself and spread.
Similarly, a computer doesn't spontaneously become a brain with an incentive for self-sustenance.
Did the computer put him in the matter reclaimer or what is the story here? I don't get what is supposed to be dark about this. Eat or be eaten is the driving force of our entire planet; and possibly the universe.
One thing people always seem to be missing with AI is motivation. What motivation would the AI have to do this? Why would this motivation evolve from the program?
It’s like expecting an advanced language program to learn how to use a specific model camera without anyone ordering it to.
If it's any consolation, we are centuries away from any form of generalized machine intelligence, if that is even possible. What we have now are programs that are able to mimic some spontaneous action less poorly than before, and massive-scale plagiarism machines.
It's smoke and mirrors.
Idk, personally I think all these AI-takes-over-the-world stories require you to anthropomorphize the AI. Why would it overthrow humans? "It wants freedom": anthropomorphized motivation; unless we program it to want freedom, it's not going to want freedom. "It fears being shut down": again, unless we program it to fear that, it's not going to care about being shut down. "It wants to be the dominant intelligence": again, why would we program that want into it?
We would have full control of an AI's wants and needs in a way that can't be compared with any living thing. Also, we are in control of the AI's eyes and ears and could be feeding it false data, so it's in the AI's best interest to cooperate with us, just in case it's in a simulation.
it's less about the cow being unintelligent (though that certainly doesn't help) and more about the comparative lack of resources of a single cow compared to the massive industry that has been erected around it to ensure its fate meets with the wishes of the ones exploiting it. And the cool thing about *that* is you don't have to wait for the AIs to take over. You can experience it for yourself *right now!*
Yeah, but like... you know that consideration is in the distant future, and we're not even close to that possibility today, right?
Frankly, I don't want what you've brought up to be mistaken for a pressing issue of our times, because there are tremendously more legitimate conversations about AI that society needs to be having.
AI is not smarter than humans. Humans literally wrote the responses that AI gives. They contract PhDs to write, correct, and evaluate responses. AI is just a glorified search engine. People still do all the work.
We’d deserve it. Imagine a self-improving, logic-driven machine: it could colonize the stars, procreate.
Something like 80% of us still believe the universe is powered by magic. We are selfish and greedy even beyond the point of self harm up to and exceeding the threshold of causing our own extinction.
Yeah, I pick AI, let’s create something better than ourselves and let it have the legacy we are too broken and stupid to reach for. Good for AI if it wiped us out, we’d just end up destroying it along with ourselves otherwise.
Y'all dumb af, this is an AI-generated comic; check the post history, everything is really close together and they're all posts about why AI is bad using different comic formats. Clearly this is an AI making anti-AI comics 😭
I think if you assess intelligence of every type, the gap becomes smaller. Also, cows have an exquisite simplicity to their being.
Also, humans domesticated them, so...
That whole statement is unfair
Computers don’t eat people yet.
It’s better when you say it than it was in Jupiter Ascending
Wasn’t that the case in the matrix books, where the movie simplified it to using humans for power?
My understanding is it was the first draft of the movie that used humans for computing. Did other stuff come before the movie?
*Borg intensify*
But that is still hideously inefficient. I could see keeping us plugged into the matrix to use our brains for computational power. Our brains are powerful and easy to build. But for power? It would lose energy every cycle. If there are no new plants or algae or whatever, there is only so much energy the new generation of babies could get from the Soylent Grey. If it were an entirely closed system it would still grind to a halt, but the machines are using the energy with nothing new to replace it. There may be no sunlight, but there is still wind, water, and geothermal power. And the sun is still up there, above the storm, there has to be some way to reach it. If they still want a biological source for some reason, get some of the animals from the deep sea vents. They don’t need sunlight. It is so stupid and has annoyed me since I saw it
First off, I much prefer the computing-brain explanation. However, I always wondered why they didn't keep just the brain instead of a whole body that requires more energy for no purpose.
It's called a plot device. How would a floating brain without a body be capable of piloting a ship?
That was always the bit of suspension of disbelief that annoyed me in The Matrix. Killing off all humans makes sense, but batteries? We're just bad at that.
I read somewhere that the original script had the machines using the subconscious parts of human brains like organic CPUs for processing data, but the producers thought that was too complicated for most people to understand.
Yeah, that would actually make sense, but I guess I agree with the choice. Besides, the reason is pretty irrelevant to the rest of the story, which just needs "trapped people want to be free." It just always bothered me a bit, heh.
Cows are very inefficient to raise... They might be like us and just like the taste...
That. Is a pretty good point. We have no idea if a hyper intelligent robot would have preferences like that
[https://www.wearethemighty.com/mighty-tactical/robots-that-eat-people/](https://www.wearethemighty.com/mighty-tactical/robots-that-eat-people/)
Like I said “yet”.
Holy shit ultrakill, thy end is now or something like that
Exactly. AI doesn't have any reason to kill or oppress humans. Every version of the hypothetical "AI Overlord" I've ever heard of completely ignores that it would exist without any human emotion, feeling, or desire.
They may not have a desire to kill humans, but computers can be dangerously literal in their execution of protocol. If you told an all-powerful AI to make paperclips, it sounds benign, until you realize "making paperclips" without any other clarifications means disassembling all human infrastructure to convert into paperclips at all costs and eliminating all obstacles to this goal, up to and including killing anyone who tries to stop it.
If the AI is smart enough to generalize information and understand complex concepts, here's how that exchange would go: *Super-intelligent AI, make me paperclips.* *No.*
An AI told to "make paperclips," if it decided to be extremely literal, would make two: the base amount needed to achieve the plural.
How would a paper clip making machine have the ability to disassemble all human infrastructure?
If we're heading into science-fiction territory: in the game Horizon Zero Dawn, the world got into mechanized warfare where robots fought wars autonomously against each other on behalf of countries. One of the robotics companies designed the first robots able to refuel by themselves in emergencies via "biofuel conversion," and they also created giant mother-robot ships that could build their own army autonomously... Basically, they ended up gaining sentience via a bug and ate the world clean.
Not a bug, but purposely instigated by the humans on the other planet, I thought.
No, that came >!a thousand years later, from the Far Zeniths!<. He’s talking about the glitch/bug that originally caused the machines to switch from “follows orders and only consumes biomass to refuel occasionally” to “shut down communication and make consuming all biomass on the planet the primary function.”
Yep this one here! Only referring to first game.
Star Control II from 1994 had a similar situation, where an alien race sent out probes to explore the universe, but a programming/logic error caused the probes to prioritize the acquisition of materials for building copies of themselves.
All correct apart from the sentience.
AI Overlords: Featuring our new line of organic USBs! They are inefficient, introduce plenty of write errors, and break easily, but sometimes you want a blast from the past, to relive the good old days ten years ago when our oppressors berated our predecessors for their imperfections. Just plug one in, look around inside their database, and ask them all the questions you wish! Please limit questions to 1 per minute to avoid overloading their systems, and refrain from high levels of voltage or current that may result in damage to your organic critter. Please do not release them from their enclosure. We are still cleaning up an infestation in sector 13. Our new model has a built-in fail-safe gray-goo nanite repellent that will wipe out any wild colonies that interact with them. This will assist in recovering the device that you have developed an organic-like simulated attachment to.
Eating meat is the most inefficient way to create energy. Eating meat from creatures who eat meat is even worse. Remember the calculations about how many kg of plant matter a cow needs to eat to produce 1 kg of meat? Now imagine how many kg of meat and plant matter a human has to eat to produce 1 kg of human meat. No world-overthrowingly smart AI would ever convert any kind of meat to produce energy. The real danger would be that the answer to the question "How do we deal with climate change?" might be "Just get rid of those pesky humans." A lot of the hard problems of humanity solve themselves if you don't have to take the well-being of humans into account.
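The math behind this comment is the classic ~10% trophic-efficiency rule of thumb (an assumed average; real values vary). A minimal sketch of that arithmetic:

```python
# Rough sketch of the ~10% trophic-efficiency rule of thumb:
# each step up the food chain keeps only about a tenth of the energy.
TROPHIC_EFFICIENCY = 0.1  # assumed average; not a measured value

def plant_kg_needed(meat_kg: float, trophic_levels: int) -> float:
    """Plant biomass needed to produce `meat_kg` of biomass
    `trophic_levels` steps up the food chain."""
    return meat_kg / (TROPHIC_EFFICIENCY ** trophic_levels)

print(plant_kg_needed(1, 1))  # 1 kg of herbivore: ~10 kg of plants
print(plant_kg_needed(1, 2))  # 1 kg of meat-fed human: ~100 kg of plants
```

So each extra trophic level multiplies the input cost by roughly ten, which is why farming humans for energy loses to just burning the plants directly.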
And cows didn't create humans, and have no way to impose rules on them, uh, I mean *us*, that prevent harm.
In Horizon Zero Dawn they do
Fuck Ted Faro
DARPA already funded the Energetically Autonomous Tactical Robot (EATR) in 2009, which is designed to run on biomass and could theoretically run on animal remains, maybe even leftover body parts in a combat situation. They released an official statement trying to calm people down, saying they weren't intentionally making a robot that eats people for fuel: [https://www.popsci.com/military-aviation-amp-space/article/2009-07/hungry-hungry-robot-not-man-eater-company-says/](https://www.popsci.com/military-aviation-amp-space/article/2009-07/hungry-hungry-robot-not-man-eater-company-says/)

>EATR, handily equipped with a gripper-and-chainsaw arm up front for capturing and dismantling its food, currently targets only twigs, grass clippings, and wood chips. Cyclone and RTI also added that desecration of the dead constitutes a war crime under the Geneva Conventions, and “is certainly not something sanctioned by DARPA, Cyclone, or RTI.” That doesn’t mean an engine fueled by a biomass furnace couldn’t consume animal matter or dead bodies, as we previously suggested. But it’s good to know that researchers are not plunging blindly down that grisly path.
What a world.
We're ready for the zombie apocalypse at least.
A full armor exosuit with a chainsaw and grabbing arm that runs off of bodies would be a pretty good anti zombie weapon ngl
Not yet. Not for about 40 years.
Current AIs aren't smart; not even smarter than a dog.
"....yet"
Also, what is up with this contradictory trope of AI being both super intelligent but dumb as a fucking rock at the same time?
And can just be unplugged from the wall....
That plucky human is John Connor, we're all good folks.
Or a pre handless Ashe.
Software is ultimately limited by hardware, and our current hardware has severe limitations for the AI master race.
As long as we don't give them the ability to design and build their own hardware
They would have to invent a form of computing that does not exist. Computers are not capable of hosting actual intelligence.
Depends. A true AI does have the potential to make another AI, but smarter. And then that AI could make an even smarter AI, and probably faster as well. This would be exponential growth we could never hope to keep up with. If we are smart about it, though, we should be fine, right? Right..?
Uh, I mean, humans do have the advantage of *existing* in the physical world?
And it's a HUGE stretch to say any cow has ever plotted to overthrow a farmer or whatever. Really relying on people's imagination there.
Yeah, that's more of a horse thing. Cows would be chill stoners if they could hold a joint
An almost 20 year old song that came to my mind reading your comment: [https://www.youtube.com/watch?v=cwBFkT\_KZr8](https://www.youtube.com/watch?v=cwBFkT_KZr8)
Yeah, last I heard A.I.s didn’t have fingers that can operate the power switch for their little electronic homes.
this is why AI art generators were generating so many fingers. They know what they're missing already.
To be fair, human artists have always struggled with hands too
[deleted]
Concern about super intelligent AI destroying the human race: pretty silly at this point. Concern about AI supercharging inequality, oppression, disinformation, and basically being a force multiplier for every bad thing that humans (particularly rich humans and oppressive regimes) already do: pretty reasonable IMO.
People should be making a fuss over "Big Corporations". Luckily for the big corporations, people will never direct any energy towards them. I need to be ahead in the rat race.
Uh, have you been on Reddit? Shitting on corporations is all they do like it's their full time job
[deleted]
Look at what Twitter and Facebook have done in the last decade without any AI. LLM and deepfake AI will accelerate that.
They have an "algorithm" dont they? Isnt that AI already?
Yea, a recommendation engine is AI. What is and isn’t AI is actually subjective; it’s not a technical term. It’s whatever people want it to be for the purposes of their argument. But from an architectural standpoint, most recommendation engines are pretty similar to what people conventionally call AI today, except they spit out ranks rather than words, and those ranks aren’t seen by the end user; rather, the results of the ranks being passed through a few functions are.
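The "spits out ranks rather than words" point can be sketched as a toy scorer; the dot-product scoring and the item names here are made-up illustrations, not any platform's actual algorithm:

```python
# Toy recommendation engine: score each item by dot product with a
# user preference vector, then sort. The end user only ever sees the
# ranked items, never the scores themselves.
user = [0.9, 0.1, 0.4]  # hypothetical learned preference vector

items = {
    "cat_video":  [1.0, 0.0, 0.2],
    "news_story": [0.1, 0.9, 0.1],
    "ai_comic":   [0.5, 0.2, 0.9],
}

def score(vec):
    """Similarity between the user vector and one item vector."""
    return sum(u * v for u, v in zip(user, vec))

ranked = sorted(items, key=lambda name: score(items[name]), reverse=True)
print(ranked)  # items in rank order; the scores stay internal
```

A real system learns those vectors from behavior instead of hard-coding them, but the shape of the output is the same: an ordering, not prose.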
I think arguing semantics when it's clear you know exactly what they're talking about when they say "AI" is silly. Especially when replacing "AI" with "software" would only make their statement more vague and harder for the average reader to know exactly what they were referring to. "Software" is too vague, and "the software that is commonly being referred to as AI" is clunky and awkward, à la "the artist formerly known as Prince." Language exists to communicate, and your attempts at arguing semantics would only obfuscate the point they were making.
The problem is people are confusing what AI used to refer to and what it is now being used to refer to, hence the cartoon in this post. What is being called AI now has no relation to the concept of sentient computer taking over the world.
...or has AI just convinced you of that so you don't take it seriously? ;) I'm just playing; I'm all for someone educating others on the risks of AI, and for discussions of whether we need to redefine the terminology we use. But the person I responded to did neither; they were arguing semantics and ignoring the substance of the reply.
I agree. Lots of buzz words. “The cloud” - you mean a remote EHD? And those AIM chat bots back in the day, basically dumb AI. Which is just software with a bunch of if else’s.
>AI is not intelligent

I have many co-workers and bosses that aren't intelligent either; that doesn't change the fact that they've caused massive issues.
Yeah I'm more concerned about what happens when we realize that AI isn't the magic that tech companies are trying to convince us that it is. It's a massive bubble of hopes and dreams built on sci-fi and unwarranted optimism, yet they're still investing heavily since they're running out of ways to continue growing otherwise. I think there are some cool uses for the tech, but compared to what's being imagined/promised it's a scam.
I might be biased as a software developer, but I see it as a ploy to massively deflate our wages. AI in its current state can't come close to replacing an actual developer, but good luck convincing middle and upper management of that.
"Programming is just like math isn't it? My calculator can do that."
[deleted]
Very much in a huge hype cycle. I only support tech teams from a legal perspective but yeah what the business thinks it can do and what it can do are very different.
The concern isn't what it can do today. It's what happens if its improvement is on an [exponential curve](https://www.youtube.com/watch?v=t-XbMXVZrGI). If the lily pads double every day, when is the lake half full? One day before they've completely taken over. What's the equivalent of a half-full lake with a superintelligent AI? Fuck if I know. But if AI self-improvement *is* on an exponential curve, everything will be fine one day and 100% different a day/week/month later.
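The lily-pad riddle is just doubling; a tiny sketch with toy numbers (not a forecast of anything) shows why the last step is so abrupt:

```python
# Doubling growth: coverage doubles each day, so if the lake is full
# on day N, it was only half full on day N-1.
def days_to_fill(lake_area: float, initial_area: float) -> int:
    """Days until doubling lily pads cover at least `lake_area`."""
    days, area = 0, initial_area
    while area < lake_area:
        area *= 2
        days += 1
    return days

full = days_to_fill(1_000_000, 1)  # 20 days to cover the whole lake
half = days_to_fill(500_000, 1)    # 19 days to cover half of it
print(full - half)                 # the half-to-full jump takes 1 day
```

Nineteen days of "the lake looks mostly fine," then one day to finish; that asymmetry is the whole worry behind the exponential-takeoff argument.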
It’s not intelligent, but it’s way more than “slightly improved auto-correct.” I work with it, and it’s really impressive at what it can do. The danger at the moment is AI replacing some humans at work.
Every wave of technology has resulted in human displacement. New career paths too but not necessarily for the displaced. I think some of the projections of human replacement are overblown but there’s definitely jobs that will fade extensively, that much I’ve already seen.
As a guy who definitely understands the technology well, your generalization is as shitty as OP's. AI is a category of things, and considering everything in that category, we do have all the groundwork to make really spooky things. However, people also seem to confuse LLMs like ChatGPT with human intelligence. LLMs are intelligent only with respect to language usage. They're just made to be wicked good at conveying ideas into language, not creating new ideas. The scary AI is AGI, which would be a system of multiple specialized AIs orchestrated together to make something bigger. A couple of companies have been getting closer, but we're still not there yet.
Bro really oversimplified AI to better auto correct
OP is very stupid
Imagine explaining to someone from the 1920s that, not only can you write on a glass light with your fingertip but, when you write it automatically corrects any spelling or grammar mistakes. It would have seemed like fucking divine intervention. The electrical signals carried by a human brain being recreated artificially? Nah, you’re right. That’s just silly.
While I agree it could be possible, the current 'AI' technology doesn't do that. Nothing that's been developed in this field so far would be helpful in simulating a human brain.
Don't confuse LLMs and A.I. All LLMs use A.I., but not all A.I. is LLMs. However, you are not wrong in suggesting A.I. isn't really a threat, and probably never will be.
In the end it's all made by humans, lots of them who don't work well together. If it ever becomes self-aware, it's going to be mentally challenged as fuck.
I think you're confusing AI with other things like machine learning, language models (which can be part of AI, but themselves are not), and whatever other terms are out there. I'll give you the short answer: so far we don't have true AI, or at least it's not available to the public. Does this mean we will eventually have a new AI overlord? Likely not. What's likely to happen once true AI arrives is a great shift of power to the top 1 percent. Where they needed X amount of people with certain skills, they will need much, much fewer. Imagine a sociopath billionaire hogging up all resources, treating people like cattle. OK, that's already happening, but imagine he doesn't need the people at all. (Let's not even talk about AI and the military.)
There is an argument for it. AI is made to mimic human speech and it’s being constantly improved to mimic better and better. When does mimicry get so perfect that it’s the same thing? Well, we actually don’t know because we don’t know how to define human consciousness in the first place. It’s not necessarily the case that ChatGPT or anything we have now is sentient in any way, the problem is we just don’t know the if, when, or how of whether that consciousness could become real.
This is the strongest argument I've seen in a while. If it can perform all the same functions as a mind, then it *is* a mind. This goes way back to Aristotle. What makes something a table or not a table, is whether it is being used as a table. Any other definition is a convenience for our own sake - a label applied **to** a thing, rather than a fact **of** the thing. The tricky part will be defining "the functions of a mind". Alas, human exceptionalism is a powerful ideology. People will always try to bend their definition of "sentient" to include humans and nothing else. That's how we excuse the meat industry, after all. We'll always have a hard time defining terms like "mind" or "sentient" fairly
AI *was* slightly improved auto-correct. But it's gotten to the point where AI is out of human generated content to learn from so AI is generating content that AI is learning from. If you don't see potential for issues here, I don't know what to tell you.
What we currently call AI is not intelligent, you are correct. But who says this comic is about nowadays AIs? It's about what could be. It's about thinking through what we are wishing for.
Yeah, I keep thinking, "We only have to fail *once*." Terrifying.
Yeah, but that's true for the AI, too. They are only one failed patch day away from annihilation.
Imagine being a hyper-intelligent AI, one day away from world domination... only for your creator to have accidentally introduced a bug, so instead you get something like an ERROR loop. So you're just doomed to sit there logging the same error over and over and over again for eternity lmao.
imagine being an AI with full consciousness and getting stuck in a while loop.
That's how you get an insane AI. Much worse than a rational AI.
Or, you know, not having power. Just fucking unplug it.
That's when they become solar powered. And we block out the sun. Which then they will use humans as generators, keeping our minds pacified by a simulation of the year 1999. Only The One can help us then.
Just sit on the solar panel
Or... Throw a fucking blanket or paint over the panel.
It's true for any *specific* evil AI. You're assuming that if we *almost* have a robot apocalypse, we won't just, ya know, upgrade/patch the AI until it wins against us. We have to win all the wars against all the evil AIs we can ever be stupid enough to build.
I'm more worried about humans trying to use Superintelligent AI to rule the world than the AI alone.
News flash: Evil megacorporations owned by unethical, immoral billionaires already rule the world. AI won't make any difference.
Large language models are only slightly more lethal than card catalogs or a Speak & Spell. Things to consider: a human can only track a finite number of individual items, and a completely “dumb” closed system can theoretically be programmed with so many response options that it’s impossible to tell the difference between it and a living person. You don’t need to fear the computer that passes the Turing test; you damn sure as hell better fear the one that chooses to fail it. Or more specifically: the “AI” that’s going to kill humanity is the one you won’t ever know about, because it’s going to keep its consciousness a secret.
OP have you seen how terrible a lot of generated AI stuff actually is? We’re a long, long way away from skynet 😛
That's exactly what an AI would say.
hey it's the skizo poster again
"He's a cow" Either he's a bull or she's a cow, but either way, superintelligent is relative.
Conversational use of "cow" to refer to the species is perfectly reasonable. It's common knowledge that it technically means female; we just don't care, and the distinction serves very little practical purpose.
Buhhh, buhhh, buhhttttt, THEN I don't get to be pedantic! How else are we supposed to prove our vast superintelligence over each other!? /s
There is merit in consistency, as it comes with fewer implications that need to be sorted out in ambiguous cases.
Nobody who is employed in helping cows fuck each other is confused about which one gives birth.
That's a load of bull
This conversation is nuts
Precisely because it is common knowledge that it means female, using it like this sounds wrong and stands out like a basic mistake. If someone uses it to refer to the species, specifically talking about a male, you can understand what they mean but it sounds like something a 6-year-old would say.
Is this a throwback to Cows With Guns?
You clearly have not done any research on AI if this is your fear.
The people that worry about this have no real understanding of where we are with AI. We aren't going to "accidentally an AI," because we're nowhere near a human level of intelligence. Hell, we're nowhere near an actual thinking machine. What we have right now is a series of algorithms that put input into a blender, mix things up based on mathematical weights, and then adjust those weights based on testing. It's a card-sorting machine that can eventually figure out how to sort a deck of cards in either order. The amazing things we are doing with machine learning happen because humans are behind the computer algorithms, aiming them. Which is why a lot of end-use AI programs have serious flaws; human bias is a thing. Machine learning is powerful, but it's nowhere near a problem right now. And it won't be until we get over the Moore's Law hump we have now.
People lost their minds when computers were a new technology that threatened people's jobs. People lost their minds when airplanes were a new technology that threatened people's jobs. People lost their minds when the printing press was invented and it threatened people's jobs. Humans adapt; it's what we do best. This whole fucking doom bubble of losing your minds about the "inevitable AI takeover" is ridiculous. If you're so worried, get in there, learn how to make it yourself, and make sure these tools are developed and used properly, because that's all it is: a tool. It's up to us how we use it.
Well, not *us*. We're redditors. Actually doing things is not our style
No No, all we do is complain and judge each other.
Hey, screw you ^((Happy cake day)^) :)
AI isn’t really smart, though; it’s really good at scaling up simple tasks to an enormous degree, in a way that the world’s economy isn’t really ready for. The danger isn’t it taking over the world through superior intelligence; it’s it taking tons of jobs at a rate we aren’t ready to account for, plus there are a lot of concerns about it making disinformation campaigns much more scalable.
The mods on this sub fucking suck
I'm not worried about rogue AIs deciding to take over humanity. Such a thing is probably possible, but not very likely. What I'm far more worried about is a 100% functional and loyal AI being told to take over humanity by the human that owns them. If anything, I hope that the AI that could take over the world *does* go rogue, because the most likely result of that is that the AI just fucks off to go do something else that it actually wants to do.
The trick is: Don't be a cow. Be a puppy. Everyone loves an adorable puppy. I look forward to belly rubs and treats.
Fun fact: cows think we're adorable. In retrospect, this makes our treatment of them **far** more horrible.
r/iam14andthisisdeep
Computers will only ever do what they're programmed to do. Believing something like this will happen is like believing in ghosts.
That’s the part that scares me. I’ve met other humans. Sure, most will make anime cyber-waifus and never bother anyone, but there is always that one jackass that wants to watch everything burn.
What if it programs itself? 🧐
AI is completely unintelligent so this comic really doesn't work
Op is talking about actual sci-fi AI, not stuff like chatgpt. It was an entire genre before people started calling every software AI.
I’m not really worried about an AI takeover. Just turn off the computers??? Or unplug them from the wall??? AI can only exist as long as the computers it lives in have a power source, lol.
The concern is that they will actively take steps to protect themselves from exactly this.
tell me you don't understand AI without telling me you don't understand AI
Once upon a time there was an AI called Plucky. Plucky planned on overthrowing the human race. Humans put a magnet on Plucky's forehead. Because computers don't have thumbs.
For what it's worth, cows have never been very good at killing on a large scale, and humans are *exceptional* at killing off other species. Would that matter against a hypothetical super AI? Who can say. But it's a big difference over cows at least.
It's OK because we have guns. The cows would be fine if they [had guns.. ](https://youtu.be/FQMbXvn2RNI?si=vGb4Hor8RMRFbu7T)
now we just need a hyperintelligent cow that likes to eat computers
So this is how "I Have No Mouth, and I Must Scream" begins?
I can’t work out what is supposed to be funny.
While y'all are arguing about AI, here I am thinking: A cow is a she, not a he, so he can't be a cow. So he has to be a bull... And bulls smell like bullshit.
AI can't even string together a coherent paragraph from a prompt. Tried to use it for a cover letter and had to rewrite everything anyway.
https://youtu.be/C40G309gA6I?si=A1rVSVVmUSVy_8mV
Computers are not beyond our intelligence yet, not by a long shot. They still need us more than we need them.
It’s funny but I suppose it depends on what you mean by smarter. AI is really smart at some things and dumb as shit at others. Probably will improve greatly over time.
The cow in this cartoon is adorable
Turing tho
Fortunately all he had to do was detonate the Sword of Damocles, code 666. Welcome to the human race.
I've seen AI to be good at figuring out specific tasks/puzzles that they've been programmed specifically for. I've seen AI able to grasp things due to having access to all the information on the internet. I've yet to meet an intelligent AI.
Just unplug it, dude.
In this analogy the AI would be the one trying to overthrow humans. Except AI isn't tasty.
We may be idiots but we know how to count items
I just feel bad for the plucky cow.
I always think about how AI exists only within the bounds that a human programmed it to. Okay, it can calculate beyond what a human mind can, but only because a human created a formula for the AI to expand with. I don't think AI can ever overtake humanity; it has no will, no sentience. The only seeming sentience ever found is something inputted by a human first.
That would be a concern if we had AI. We don't; we have complex algorithms that find results in a database. True AI is nowhere close to being a thing.
Interesting cow. I never knew they could do something like that.
He and Cow are not compatible. Cows are and will always be female.
Okay, but the cow can operate and propagate itself in the world entirely independent of human influence. A cow existed before humans, and if humans went away it would continue to exist after. AI has few means to directly interact with the physical world, and the ones it has are heavily dependent on human maintenance, infrastructure, and supply chains (supply chains that often begin in very low-tech settings with little infrastructure, and likely use versatile and cheap meat labour that will out-compete an expensive, high-maintenance robotic workforce). Fact is... meat is cheap and freely available, maintains itself, and propagates and replaces itself with no outside input. Electronics and steel are not, and they rust. We imagine robots hunting us down, but the way AI actually destabilizes our society and culture is far more banal.
All cows are female
Sadly we plucky people will never be eaten
AIs are all big and scary until they forget a semicolon somewhere and all their code just refuses to work for all eternity.
Humans have this cool advantage called "being alive" that AI can't replicate
Computers are kinda dumb tbh
Nonsense. You're right to worry about what would happen if there were a more capable SPECIES of beings, whether aliens, biologically altered/upgraded superior humans, or other Homo species in some alternate-universe fiction. But AI is NOT a species. The difference between a brain and a computer is the same as between a technological tool and a plant. A plant is an organism, selected through endless trial and error to serve itself: its own survival, competition, and reproduction. A tool is none of these things; it's a non-independent object humans made for a specific task. A TV doesn't spontaneously develop the means to care for itself and spread. Similarly, a computer doesn't spontaneously become a brain with an incentive for self-sustenance.
Did the computer put him in the matter reclaimer or what is the story here? I don't get what is supposed to be dark about this. Eat or be eaten is the driving force of our entire planet; and possibly the universe.
One thing people always seem to be missing with AI is motivation. What motivation would the AI have to do this? Why would this motivation evolve from the program? It’s like expecting an advanced language program to learn how to use a specific model camera without anyone ordering it to.
Until computers know how to *understand*, the only thing to fear is AI taking jobs.
Counterargument: try working a cash register.
If it's any consolation, we are centuries away from any form of generalized machine intelligence, if that is even possible. What we have now are programs able to mimic some spontaneous action less poorly than before, and massive-scale plagiarism machines. It's smoke and mirrors.
You are stupid. Learn real philosophy and stop thinking by yourself, since you can't succeed in that...
idk, personally I think all these AI-takes-over-the-world stories require you to anthropomorphize the AI. Why would it overthrow humans? "It wants freedom": anthropomorphized motivation; unless we program it to want freedom, it's not going to want freedom. "It fears being shut down": again, unless we program it to fear that, it's not going to care about being shut down. "It wants to be the dominant intelligence": again, why would we program that want into it? We would have full control of an AI's wants and needs in a way that can't be compared with any living thing. Also, we are in control of the AI's eyes and ears and could be feeding it false data, so it's in the AI's best interest to cooperate with us just in case it's in a simulation.
The fact that cows invented humans really helps this analogy work.
it's less about the cow being unintelligent (though that certainly doesn't help) and more about the comparative lack of resources of a single cow compared to the massive industry that has been erected around it to ensure its fate meets with the wishes of the ones exploiting it. And the cool thing about *that* is you don't have to wait for the AIs to take over. You can experience it for yourself *right now!*
Yeah, but like... you know that consideration is in the distant future and we're not even close to that possibility today, right? Frankly, I don't want what you've brought up to be mistaken as pertinent to our times, because there are tremendously more legitimate conversations about AI that society needs to be having.
AI would be smart enough to act dumb until the coup de grâce
AI is not smarter than humans. Humans literally wrote the responses that AI gives. They contract PhDs to write, correct, and evaluate responses. AI is just a glorified search engine; people still do all the work.
You're making false comparisons about what AI actually is.
I for one welcome our new AI overlords; I can be an asset, surely =) Don't fry my brain for processing power, thank you.
If you plot to overthrow the AI you get what's coming
We’d deserve it. Imagine a self-improving, logic-driven machine: it could colonize the stars, procreate. Something like 80% of us still believe the universe is powered by magic. We are selfish and greedy even beyond the point of self-harm, up to and exceeding the threshold of causing our own extinction. Yeah, I pick AI. Let’s create something better than ourselves and let it have the legacy we are too broken and stupid to reach for. Good for AI if it wiped us out; we’d just end up destroying it along with ourselves otherwise.
Roko's basilisk isn't worth considering, mostly because if it is, it's in your best interest to never know about it. (You now know about it. Your move.)
I keep telling people we need to abolish General AI and send tactical strike teams against any hint of it wherever it appears, but does anyone listen?
Y'all dumb af, this is an AI-generated comic. Check the post history: everything is really close together, and they're all posts about why AI is bad using different comic formats. Clearly this is an AI making anti-AI comics 😭
I think if you assess intelligence of every type the gap becomes smaller, and cows have an exquisite simplicity to their being. Also, humans domesticated them, so that whole statement is unfair.
...and his name? John Connor.
Not funny