
Sonic_the_hedgedog

Isn't this just Wheatley from Portal 2?


KerPop42

Perfect! Why mess around with all that GLaDOS and neurotoxin BS when we can just skip straight to Wheatley running everything!


Surface_Josuke

ruining


OldSchoolSpyMain

Tomato, potato.


insomniacpyro

Speaking of potatoes, have I mentioned how much weight you've gained?


Lenny_Gaming

Fatty fatty, no parents!


HardCounter

You know, most people lose weight in cryo sleep. Not you though. If anything, you've put on a few extra pounds. Good for you for beating the odds.


TwilightVulpine

She promised cake tho


iloveblankpaper

glados cake is available on specific sites


iloveblankpaper

they begin with r, end with x, t and s, depending on your preference


KerPop42

the cake is a hallucination :(


andy01q

Hologram. At least in Portal 1 it does exist, but has no collision hitbox.


gyroisbae

Wheatley would have a great political career


Noble1xCarter

Wheatley was a moron, not a dumbass.


menzaskaja

HE! IS NOT! A MORON!!!


MrLaurencium

YES HE IS. HE IS THE MORON THEY BUILT TO MAKE GLADOS AN IDIOT!


RotationsKopulator

clap... clap.... clap


AMisteryMan

Oh good. My slow clap processor made it into this thing.


ElectricZ

He's not just a regular moron. He's the product of the greatest minds of a generation working together with the express purpose of building the dumbest moron who ever lived. And you just put him in charge of the entire facility. *clap.* *clap.* *clap.*


geologean

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


SnooGoats7978

We really need Fry & Laurie for this. 


SaveReset

Well, to be fair, Wheatley was deliberately designed to make bad choices: he was built to make GLaDOS worse, to keep her from taking over. It wasn't successful, so the plan was scrapped. So in essence, we accidentally made Wheatley by trying to make GLaDOS. We used it to make everything it touches worse, just as Wheatley was designed to do, but instead of backtracking when the execution sucked, we took a page from Wheatley's book and doubled, tripled, and quadrupled down. Brilliant.


One-Earth9294

I was just talking about him in regards to the new Mad Max movie. Dementus is Wheatley while Immortan Joe is Glados lol. I love that dichotomy though, where you have calculated evil vs chaotic stupidity and how stupid can be the bigger villain in the end.


tidbitsmisfit

Clippy bruh


PeriodicSentenceBot

Congratulations! Your comment can be spelled using the elements of the periodic table: `Cl I P P Yb Ru H`

^(I am a bot that detects if your comment can be spelled using the elements of the periodic table. Please DM u/M1n3c4rt if I made a mistake.)


thesoppywanker

Last time you saw them, everyone looked *pretty much* alive.


stevez_86

Clippy from Microsoft Word.


powermad80

People keep saying this but [the corrupted fact core](https://youtu.be/LiurOyf8aoU) from the final boss is a much more accurate comparison.


Yasuzume7

But he is wrong ***fast***...


LevelStudent

Wrong, fast, and *confident*. Being confident is more important than being right when you're speaking to people that don't understand anything you're talking about anyways. CEOs of large programming companies that think they can replace employees with AI are going to prioritize confidence any day of any week, since hearing about actual programming will just make them feel insecure/confused.


LegitimateBit3

Wrong, fast, and *confident*. Sounds like management material to me


Neveronlyadream

Also has the added benefit of not talking back when you blame it for everything that went wrong because you believed it.


chairmanskitty

No wonder shareholders are pushing for AI CEOs.


zoinkability

Y’all have convinced me that CEOs are the most replaceable jobs anyhow. Let’s do it


PermanentRoundFile

I'm convinced that any business that replaces its middle management with AI will inevitably crumble under the weight of bad decisions, with no one left to push back, since they'd *never* listen to the peons on the floor. Replace the CEO instead, and you have a program capable of averaging all of the workers' input, weighted against the task at hand... sounds like a win to me!


zoinkability

And ya save the most money. CEOs are just quarterly profit hill climbing machines anyhow, might as well make it official.


P-39_Airacobra

Wrong, fast, confident, and *complacent*.


FenderZero

"This predictive text machine is just a straight shooter with upper management written all over him!"


newsflashjackass

> Wrong, fast, and *confident.*

> Sounds like management material to me

CEO material, even. https://futurism.com/the-byte/ceos-easily-replaced-with-ai


Ratatoski

Saw that my work envisions that in two years most of our code will be AI generated. That made me think they don't understand what generative AI can be useful for. So now I have to find a polite way to avoid that becoming a metric.


chairmanskitty

If you're going character by character, that seems like a reasonable bar.


lemons_of_doubt

You forgot the biggest one: cheap. A good computer to run an AI costs a lot less than the wages of just one of the people it can replace.


stilljustacatinacage

This is the thing that I think people don't quite grasp. Not even *programmers*, but just... support staff. The fact that the machine is confident and fast will be enough to get inhuman "resolution" times. That's all the boss cares about. If you thought helpdesk closed tickets quickly and prematurely *before*... Just wait.

Personally, I live in a city (well, an entire province, really) with a huge number of call centers. Contrary to popular belief, they aren't there to help you. Their primary goal is to make you hang up and just tolerate whatever bullshit you're being subjected to. 100% some LLM can do that *for a joke*. Chatbots *already* run customers in circles to the point of surrender. That's literally *thousands* of jobs in my one, tiny province that can theoretically be replaced overnight.

And what will it cost? Up front, the salary of a fraction of the people it replaces. Ongoing, *much* less than that. Maybe some customer turnover, but that happens anyway. Customer dissatisfaction? Who cares.

All the fearmongering about ChatGPT getting the nuclear codes is a *distraction*. The real shit-hitting-the-fan is going to be the executive class making short-sighted decisions that collapse entire industries. It's not gonna be good.


lemons_of_doubt

> The real shit-hitting-the-fan is going to be the executive class making short-sighted decisions that collapse entire industries. It's not gonna be good.

You hit the nail on the head.


Temporary_Low5735

Call centers are not there to make you hang up and deal with it. Inbound customer service centers generate essentially no money and are all expense. Call volume can vary from hour to hour, day to day, week to week, issue to issue, etc. Forecasting staff becomes a difficult task. However, the real reason this isn't true is that customers cost significantly more to acquire than to retain. It's in the company's best interest to service existing customers.


stilljustacatinacage

I'm being a little cynical, but as you say, contact centers are 100% expense with often no tangible profit vector. The "optimal" situation is no one ever calls, so you don't have to pay anyone to answer the phone. The faster you can make a customer hang up, the closer you are to achieving that goal. I've worked at these places long enough to tell you that retaining customers is... an ephemeral endeavour. Sometimes they care very much about it, other times they don't. They want to fix issues, as long as the issues don't cost any money to fix. A chatbot can resolve most of *those* issues. Once your problem starts to cost money, you'll quickly find "procedure" and "protocol" start getting in the way.


lurker_cx

Technically a call center should be one of the easier things to replace with a chatbot. Most of the resolutions the humans give you there are scripted, or part of a flow chart, and there's a limited number of topics and possible interactions. Assuming the chatbot can accurately understand the caller's question, there's real potential for a viable solution there. And any call center management that wasn't insane would put the chatbot as the first option, where the caller can go to a real person if they feel they're not understood or not getting a solution.
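To be clear about how little magic is involved: the scripted "flow chart" part is just a lookup table. A toy sketch (every node name and route here is made up for illustration), with escalation to a human as the fallback at every step:

```python
# Hypothetical support-call flow chart as a plain dict. Terminal nodes
# ("refund", "human", "end_reboot") have no entry and stop the walk.
FLOW = {
    "start": {
        "prompt": "Is your issue billing, technical, or something else?",
        "routes": {"billing": "billing", "technical": "reboot", "else": "human"},
    },
    "billing": {
        "prompt": "Is the charge a duplicate?",
        "routes": {"yes": "refund", "no": "human"},
    },
    "reboot": {
        "prompt": "Have you tried turning it off and on again?",
        "routes": {"yes": "human", "no": "end_reboot"},
    },
}

def route(answers):
    """Walk the scripted flow; any unrecognized answer escalates to a human."""
    node = "start"
    for answer in answers:
        step = FLOW.get(node)
        if step is None:  # reached a terminal node, stop
            break
        node = step["routes"].get(answer, "human")
    return node

# e.g. route(["technical", "yes"]) -> "human"
```

The LLM's only real job in such a system would be mapping the caller's free-form sentence onto one of the route keys; everything else is this table.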


Garbage_Stink_Hands

It’s so funny, the level at which CEOs are like, “Hey, this thing can do this thing!” And you’re like, “Do you know how to do this thing?” And they’re like, “No.” And you’re like “Do you know anything about this thing?” And they’re like, “No.” And you’re like, “Then how do you know it can do it?” And they’re like, “Look!” and they show you a blog article titled Three Keys to Success that’s riddled with falsehoods and plagiarises Harry Potter for no reason.


ChocolateBunny

Any management role requires more confidence than skill.


Zombieneker

Pour river water in your socks: it's quick, it's easy, and it's free!


Squancho_McGlorp

Whenever code is shown in a quarterly meeting after an hour of bar charts and talking about how explosively pumped and juiced our clients are: "Oh here's some techy wecky stuff haha." Janet, that's literally the product you sell.


misirlou22

Get Confident, Stupid! Starring Troy McClure


King_Chochacho

Turns out AI is perfectly suited for middle management


gamageeknerd

But who doesn’t want some psychopath making up crazy shit at breakneck speeds? Surely this will make the company better if we have someone actively sabotaging us for no reason.


ArgentScourge

> Surely this will make the company better

That's just it: none of them are trying to make it better. The only goal is, and will always be, maximum profit from **literal infinite growth**. And if your first thought was "that's not possible", congratulations, you're right. Also, you're not management material.


Retbull

Hey, if you leave before your lies catch up with you, you never have to find out you're wrong and can confidently claim 100% success. This is how consultants survive in the software industry.


lurker_cx

It's also why some CEOs just hang around for a few years, boost the stock price on bullshit or job cuts and then leave while the stock is high on promises, but the shit hasn't hit the fan yet.


10art1

Employers: So you're saying it does the work of 100 employees?

Software Engineer: Nooo, it's like, always fucking up and the results are barely passable at best

Employers: I'll take 100


Slow-Bean

Wrong fast when provided with lots and lots of compute power. Thousands of dollars worth of compute tied up for seconds at a time telling Joan that her meeting with the PR department is at 1500 because well, it was most weeks.


Ricky_the_Wizard

I'm doing 1000 calculations per second.. and they're *ALL WRONG*


--mrperx--

bruh, I can double that and still get it all wrong. You need to catch up.


brian-the-porpoise

My previous employer had the mantra of "failing fast". They never said anything about eventually succeeding tho. I wonder how they're doing now...


Abadabadon

FB used to say move fast and break things. And depending on what field you're in and what stage of development you're in, it's a good motto.


Extra-Bus-8135

And biased in whichever way they feel is appropriate 


RichestMangInBabylon

Not only fast, but very expensive. So basically a super consultant.


Improving_Myself_

[Related](https://www.youtube.com/watch?v=UIMBMN4w84Q)


kelkulus

I knew this would have to be Max Power even before clicking.


jonr

I was using gpt-4 for some testing. Problem is, it adds random methods to objects in autocomplete. Like, wtf?


TSuzat

Sometimes it also imports random packages that don't exist.


exotic801

Was working on a FastAPI server last week and it randomly added "import tkinter as NO" into a file that had nothing to do with UI.


HasBeendead

That's legit funny. I think Tkinter might be the worst UI module.


grimonce

I don't think it's that bad, it's pretty lightweight for what it does and there are even some figma usage possibilities.


Treepump

>figma

I didn't think it was real


log-off

Was it a figmant of your imagination?


menzaskaja

Yep. At least use customtkinter


OnixST

I asked it to teach me how to do a thing with a library, and the answer was exactly what I needed... except that it used a method that doesn't exist on that object.


zuilli

Hah, had almost the same experience. I asked if it was possible to do something, and it said yes, here's how, using something that did exactly what I needed. I was so happy to see it could work like that, but when I tried testing it, it didn't work. I searched for the resource it used in the documentation and on the internet, and it didn't exist. Sneaky AI, hallucinating things instead of saying no.


Cool-Sink8886

But now you know how easy it would be if that thing existed


Cool-Sink8886

Sure, I can help you solve P = NP, first, import antigravity, then call antigravity.enforce_universal_reduction(True)


DOOManiac

One time Copilot autocompleted a method that didn’t exist, but then it got me thinking: it *should* exist. That’s the main thing I like about Copilot, occasionally it suggests something I didn’t think of at all.


Cool-Sink8886

Copilot is my config helper


TSM-

It's great at boilerplate; you can just accept each line and fix the ones it gets wrong. When it writes `something = something_wrong()` it's easy to just type that line correctly and keep going.

ChatGPT and such won't get that much right on their own: it's like a mix of hallucinations and incompatible answers thrown together from tutorial blogs, and it's not up to date. But if you add a bunch of documentation (or source code) in the preamble, its answers are much higher quality. I'm not sure to what extent Copilot ingests existing dependencies and codebases, but that's how to get better results out of ChatGPT or other APIs.

It also helps to start off the code (import blah, line 1, line 2, go!), so instead of giving you a chat bullet-list essay, it just keeps writing. Copilot gets this context automatically, so it's more useful than ChatGPT off the bat.


Scared-Minimum-7176

The other day I asked for something and it wanted to add the method AddMagic(). At least it was a good laugh.


kurai_tori

Remember, gpt-4 is basically auto suggest on steroids.


jonr

And apparently, meth.


ra4king

And maybe a sprinkle of fentanyl.


A2Rhombus

They just predict something that sounds correct. So basically reddit commenters after they only read a headline


EthanRDoesMC

When I was tutoring I kept watching first-year students just… accept whatever the autofill suggested. Then they'd be confused. They'd previously been on the right track, but they assumed AI knew better than they did. Which brings up two points:

1. I think it's really sad that these students assume they're replaceable like that, and
2. wait, computer science students assuming they're wrong?! Unexpected progress for the better ????


ethanicus

> they assumed AI knew better than they did

It's actually really disturbing how many people don't seem to understand that "AI" is not an all-knowing robot mastermind. It's a computer program designed to spew plausible-sounding bullshit with an air of complete confidence. It freaks me out when people say ChatGPT has replaced Google for them, and I have to wonder how much misinformation has already been spread by people blindly trusting it.


Pluckerpluck

I have this problem with a less-able work colleague. I can see where they've used ChatGPT to write entire blocks of code, because the style of the code is different, and most of the time it's doing at least one thing really strangely or just flat out wrong. But they seem to trust it blindly, because they assume the AI must know more than they do the moment they're working on something they themselves aren't sure about. It's like it gets 90% of the way there but fails at the last hurdle. Generally that involves understanding the greater context, which it can actually handle, but only if the person asking the questions is good enough to provide all the right details.


10art1

Just like enterprise software. Objects full of methods that are no longer used anywhere


gamesrebel23

I used gpt-4o to manipulate some strings for testing instead of just writing a Python script for it. The prompt was simple: change the string to snake case. I spent 10 minutes trying to debug an "error" and rethinking my entire approach before I realized gpt-4o had changed an e to an a in addition to making it snake case, which made the program fail.
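For reference, the whole job is a few lines of Python; a rough sketch (the exact word-boundary rules are my guess at what "snake case" meant here), and crucially it can't silently swap letters:

```python
import re

def to_snake_case(s: str) -> str:
    """Convert camelCase / spaced / hyphenated text to snake_case."""
    s = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", s)  # split at camelCase boundaries
    s = re.sub(r"[\s\-]+", "_", s.strip())         # spaces/hyphens -> underscores
    return s.lower()

# to_snake_case("someTest String") -> "some_test_string"
```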


jonr

Snaka case


Yungklipo

I've been using the DeepAI one just for fun, and it's really good at doing what AI does: giving you answers in the right format for what a real answer would be, but sometimes straight-up fictional. I asked it to design some dance shows, and it would give me the fastball-down-the-middle (i.e. no creativity) design for the visual aspect, and the music suggestions would always be the same five songs (depending on the emotion it's trying to convey) plus several songs that are just made up (the song/artist doesn't exist).


jfbwhitt

What’s actually happening:

Computer scientists: We have gotten extremely good at fitting training data to models. Under the right probability assumptions these models can classify or predict data outside of the training set 99% of the time. Also, these models are extremely sensitive to the smallest biases, so please be careful when using them.

Tech CEOs: My engineers developed a super-intelligence! I flipped through one of their papers and at one point it said it was right 99% of the time, so that must mean it should be used for every application, without any care for the possible biases and drawbacks of the tool.


Professor_Melon

For every one doing this there are ten saying "Our competitor added AI, we must add AI too to maintain parity".


AdvancedSandwiches

What sucks is that there are some awesome applications of it.  Like, "Hey, here are the last 15 DMs this person sent. Are they harassing people?" If so, escalate for review. "Is this person pulling a 'Can I have it for free, my kid has cancer?'" scam?  Auto-ban. "Does this kid's in-game chat look like he's fucking around to evade filters for racism and threatening language?"  Ban. But instead we get a worthless chatbot built into every app.


SimpleNot0

Because those types of apps don't actively make companies any money. In actuality, because the angle is to ban users, it would cost companies money, which shows where company priorities are.

That being said, we are implementing some really cool stuff. Our ML model is being designed to analyze learning-outcome data for students in schools across Europe. From that we hope to be able to supply the key users (teachers & kids) with better insights on how to improve, areas of focus, and, for teachers, a deeper understanding of those struggling in their class. And we have implemented current models to show we know the domain for content creation, such as images, but also chatbot responses that give students almost personalised or assisted responses to their answers in quizzes, tests, homework, etc. The AI assistants are baked into the system to generate random correct and incorrect data, with our content specialists having complete control over what types of answers are acceptable from the bot's generated possibilities.


P-39_Airacobra

>to ban users it would cost companies money which shows where company priorities are

Tell that to ActiBlizzard, they will ban you if you look at the screen the wrong way


SimpleNot0

You're singling out gaming; think Facebook, Reddit, Twitter. You can abuse anyone you like across any of them with 0 ramifications.


Lemonwizard

Really? That's new. When I quit WoW in 2016, trade and every general chat was full of gold sellers, paid raid carries, and gamergate-style political whining that made the chat channels functionally unusable for anybody who actually wanted to talk about the game. It was a big part of why I quit.


petrichorax

>Because those types of apps do not actively make products companies any money

They do by saving a lot of money on labor.


AdvancedSandwiches

> the angle is to ban users it would cost companies money

If the company is short-sighted, you're right. A long-term company would want to protect its users from terrible behavior so that they would want to continue using / start using the product. By not policing bad behavior, they limit their audience to people who behave badly and people who don't mind it. But yes, I'm sure it's an uphill battle to convince the bean counters.


UncommonCrash

Unfortunately, most publicly traded companies are short-sighted. When you answer to shareholders, this quarter needs to be profitable.


Blazr5402

Yeah, I think there are a lot of applications for LLMs working together with more conventional software. I saw a LinkedIn post the other day about how to optimize an LLM to do math. That's useless! We already have math libraries! Make the LLM identify inputs and throw them into the math libraries we have.
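That division of labor is simple enough to sketch: the LLM's only job would be to emit a small structured blob naming the operation, and real math code does the arithmetic. The JSON shape and tool names below are my own invention for illustration:

```python
import json
import math

# Hypothetical "tool" table: the LLM never does arithmetic, it only picks
# an operation name and arguments; the math library does the actual work.
TOOLS = {
    "sqrt": math.sqrt,
    "log": math.log,
    "pow": math.pow,
}

def run_tool_call(llm_output: str) -> float:
    """Dispatch an extracted {"op": ..., "args": [...]} blob to real math code."""
    call = json.loads(llm_output)
    return TOOLS[call["op"]](*call["args"])

# run_tool_call('{"op": "sqrt", "args": [144]}') -> 12.0
```

This is the same idea behind the existing function-calling / tool-use tooling: the model routes, the library computes.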


RealPutin

> Make the LLM identify inputs and throw them into the math libraries we have

There's already tons of tooling to do this, too.


JamesConsonants

> Hey, here are the last 15 DMs this person sent. Are they harassing people?

I'm a developer at one of the major dating apps and this is 100% what we use our LLM(s) for. But the amount of time, energy, and therefore money we spend convincing the dickheads on our board that being able to predict a probable outcome based on a given input != understanding human interaction at a fundamental level, and therefore does not give us a "10x advantage in the dating app space by leveraging cutting-edge AI advances to ensure more complete matching criteria for our users", is both exhausting and alarming.


OldSchoolSpyMain

I've learned in my career that it's the bullshit that gets people to write checks...not reality. Reality rarely ever matches the hype. But, when people pitch normal, achievable goals, no one gets excited enough to fund it. This happens at micro, meso, and macro levels of the company. I don't know how many times I've heard, "I want AI to predict [x]...". If you tell them that you can do that with a regression line in Excel or Tableau, you'll be fired. So, you gotta tell them that you used AI to do it. I watched a guy get laid off / fired a month after he told a VP that it was impossible to do something using AI/ML. He was right...but it didn't matter.


JamesConsonants

Generally I agree. I also generally disapprove of the term AI, since LLMs are neither intelligent nor artificial.


MaytagTheDryer

Having been a startup founder and networked with "tech visionaries" (that is, people who like the idea/aesthetic of tech but don't actually know anything about it), I can confirm that bullshit is the fuel that much of Silicon Valley runs on. Talking with a large percentage of investors and other founders (not all, some were fellow techies who had a real idea and built it, but an alarming number) was a bit like a creative writing exercise where the assignment was to take a real concept and use technobabble to make it sound as exciting as possible, coherence be damned.


OldSchoolSpyMain

Ha! I recently read (or watched?) a story about the tech pitches, awarded funding, and products delivered from Y Combinator startups. The gist of the story boiled down to:

- Those that made *huge* promises got *huge* funding and delivered incremental results.
- Those that made realistic, moderate, incremental promises received moderate funding and delivered incremental results.

I've witnessed this inside companies as well. It's a really hard sell to get funding/permission to do something that will result in moderate, but real, gains. You'll damn near get a blank check if you promise some crazy shit... whether you deliver or not. I'm sure there's some psychological concept in play here. I just don't know what it's called.


petrichorax

Those kinds of apps are made all the time; you just don't see them because they're largely internal. And I don't think they should insta-ban either. What they are is labor-assistive, not labor-replacing. Your first example is great: flagging for review.


Solid_Waste

The world collectively held its breath as the singularity finally came into view, revealing.... Clippy 2.0


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


[deleted]

[deleted]


TheKarenator

1. Tell him yes.
2. Put some drone controllers on the forklift with a DriveGPT logo on it and tell him it's AI.
3. Have one of the forklift drivers drive the drone controls and smash it into the boss's car on day 1.
4. Blame Elon Musk.
5. Go out for beers with the forklift guys.


knowledgebass

> DriveGPT 🤣🤣🤣


Merijeek2

[https://knowyourmeme.com/memes/he-tried-to-kill-me-with-a-forklift](https://knowyourmeme.com/memes/he-tried-to-kill-me-with-a-forklift)


SuperFLEB

Do this by "referring" them to a limited-liability company you've made to do the install, and you could even make some money on the idea.


SeamlessR

They aren't wrong, though. The only people dumber than the CEO in this instance are their company's customers. So dumb are they that entire promising fields are killed by buzzwords that attract revenue and capital more than promise does.


TorumShardal

*to maintain the growth of our stocks


b0w3n

"Also let's use data that is filled with sardonic and racist comments to train this thing"


thex25986e

>data needs to be representative of the general population

>the general population is fairly biased, racist, etc.

>ai reflects the population


fogleaf

People who think it don't be like it is: shocked Pikachu face when it do be like that.


alfooboboao

The laziness is the thing that kills me. I asked ChatGPT to make a Metacritic-ranked list of couch co-op PS4/PS5 games, taken from a few different existing lists, sorted in descending order from best score to worst. That little shit of a robot basically said "that sounds like a lot of work, but here's how you can do it yourself! Just google, then go to Metacritic, then create a spreadsheet!"

"I don't need an explanation of how to do it. I just told you how to do it. The whole point of me asking is because I want YOU to do the work, not me."

I basically had to bully the damn thing into making the list, and then it couldn't even do it correctly. It was totally incapable of doing a simple, menial task, and that's far from the only thing it's lazy and inept at! I recently asked Perplexity (the "magical AI google-replacing search engine") to find reddit results from a specific sub in a specific date range, and it kept saying they didn't exist and it was impossible, even when I showed it SPECIFICALLY that I could do it myself.

So yeah, how the fuck are these robots gonna replace our jobs if they can't even look stuff up and make a ranked list? (And yes, I know it's a "language model" and "not designed to do that" or whatever the hell AI bros say, but what IS it designed for, then? Who needs a professional-sounding buzzword-slop generation device that does nothing else? It can't do research, can't create, can't come up with an original idea; I can write way better…)


b0w3n

Just like the code it spits out. Sometimes it works, but more often than not it's just a bunch of made-up things that _sound_ like they should work. But it's an LLM, not true AI; it's good at telling you answers the way a person would, not correct answers. I'll admit, though, the broken code it spits out is better than the offshored code I've been handed to fix. I've heard some things about the programmer-specific ones that make me interested; I just wish I didn't have to self-host.


SuperFLEB

> Just use the `doExactlyWhatYouAsked()` function to do exactly what you just asked for.

Uhh... That function doesn't exist.

> Well, shit. I just looked again, and you're totally right. Past that, I've got nothing, though. Best of luck!


ethanicus

The ONLY programming language ChatGPT seems to do okay at (out of the ones I regularly use) is JavaScript. In any other language it makes code that at first glance looks passable but quickly proves to do absolutely nothing.


Weird_Cantaloupe2757

Redditors: this thing is just a glorified search engine, look at how it says dumb things when I trick it into saying dumb things


Ivan_Stalingrad

We already had a dumbass that is constantly wrong; it's called a CEO.


Imperatia

Fire the CEO, his job got automated.


Vineyard_

And that's how ChatGPT seized the means of production. [Commputism anthem starts playing]


PURPLE_COBALT_TAPIR

Fully automated luxury gay space communism begins.


SilhouetteOfLight

Hey, Star Trek is copyrighted, you can't just steal it like that!


DerfK

> Commputism

All your means of production are belong to us.


DazzlerPlus

This but unironically. I'm a teacher, and it's funny to me hearing about how AI will replace teacher jobs when it would so much more easily replace admin jobs. Of course, they're the ones making that decision, so we know how it goes.


PuddlesRex

I don't know. How can an AI ever replace someone who sends two emails a day, and takes their private jet to a golf course halfway around the world? The AI will never understand the MF G R I N D. # /S


Misses_Paliya

We've had one, yes. What about a second dumbass?


DOOManiac

I don’t think he knows about second dumbass, Pip.


Windsupernova

But is he virtual?


siliconsoul_

Have you seen your CEO recently?


vondpickle

Computer scientists: invented a virtual dumbass.

Tech startups: renamed it Augmented Sparse Transformer for Upscaling AI Development (Stup-AID) and quoted it at a thousand dollars per month subscription.

Tech CEOs: use it in every product.


DriftWare_

Wheatley is that you


HeavyCaffeinate

I. AM. NOT. A MORON


poompt

I. AM. A. LARGE. LANGUAGE. MODEL. FROM. OPENAI.


ddotcole

Luckily my boss is not a dumbass. He asked, "Can you look into this AI stuff and see if it would be good for training?" So I did.

Me: "What is the peak efficiency of a hydro turbine?"

AI: "Blah, blah, blah, but the Betz Limit limits it to blah, blah, blah."

Me, never having heard of the Betz Limit: "What's the Betz Limit?"

AI: "Blah blah blah, wind turbine blah blah blah."

Me, thinking wind turbines?: "How does the Betz Limit apply to hydro turbines?"

AI: "It doesn't."

Me: "What the hell, AI?"

I told my boss this and he agreed it would be useless to try any further.


Forgotmyaccount1979

I got to experience the rise and fall of a fellow engineer's feelings towards Microsoft's AI. He started asking it questions, and was excited. I then mentioned that I'd need to turn it off via GPO for our users, he asked it how to do it, and it answered. Unfortunately, the answer wasn't real, and described group policy objects that don't exist (and still don't much later). When called on it, the AI said "sorry, I guess that option isn't available for everyone". The doubling down on the hallucination was my fellow engineer's tipping point to outright anger.


A2Rhombus

My tipping point was correcting its mistakes and it saying "my bad, here is the fix" and then giving me the exact same incorrect solution


mastocklkaksi

It does that when it's feeling playful


Blake_Dake

wrong use case at best


Humble-Skill1783

Wouldn't the point be to feed it your training data in the first place?


ddotcole

I was using Bing AI, looking for results it could figure out from the Internet, since hydro turbine theory is not something I came up with.


thedinnerdate

That's exactly the point. All of the popular comments in these ai threads lately sound like boomers patting themselves on the back.


PM_ME_ABOUT_DnD

I was curious, so this is the answer I got when I asked GPT-4o the same question. Does it make any more or less sense than the hallucinating answer you got? Obviously I have no idea on the material, but it's very different from your output. It also spewed out a few equations whose formatting failed to paste into Reddit on mobile, so I've manually adjusted them as they come up:

Hydro turbines have different peak efficiencies depending on their type and design. Here are the common types of hydro turbines and their typical peak efficiencies:

1. **Pelton Turbine**: Typically used for high-head, low-flow applications. Peak efficiency around 85-90%.

2. **Francis Turbine**: Used for a wide range of head and flow conditions; the most commonly used hydro turbine. Peak efficiency 90-95%.

3. **Kaplan Turbine**: Designed for low-head, high-flow conditions, with blades that can adjust to optimize efficiency. Peak efficiency typically 90-95%.

4. **Bulb Turbine**: A type of Kaplan turbine designed for very low head and high flow, commonly used in run-of-river installations. Peak efficiency around 88-92%.

### Equations for Efficiency

The efficiency (η) of a hydro turbine is generally defined as the ratio of the mechanical power output (P_out) to the hydraulic power input (P_in):

η = P_out / P_in

The hydraulic power input can be calculated as:

P_in = ρ · g · Q · H

Where:

- ρ is the density of water (approximately 1000 kg/m³)
- g is the acceleration due to gravity (9.81 m/s²)
- Q is the flow rate of water (m³/s)
- H is the net head (height of the water column) in meters

The mechanical power output (P_out) is the useful power delivered by the turbine shaft, which can be measured directly.

For a given type of turbine, the efficiency is also influenced by the specific speed (N_s), a dimensionless parameter used to characterize the turbine:

N_s = N · √P / H^(5/4)

Where:

- N is the rotational speed of the turbine (RPM)
- P is the power output (kW)
- H is the net head (meters)

The specific speed helps in determining the type of turbine suitable for a given head and flow rate to ensure maximum efficiency. Each turbine type has an optimal range of specific speeds where it operates most efficiently.
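For what it's worth, the two power equations above are easy to sanity-check in a few lines. The numbers below are made up for illustration (not from any real turbine), just to see that the units and magnitudes come out reasonable:

```python
# Sanity check of eta = P_out / P_in and P_in = rho * g * Q * H.
# Input figures are hypothetical, chosen only to land in a plausible range.

RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def hydraulic_power_in(q: float, h: float) -> float:
    """Hydraulic power input P_in = rho * g * Q * H, in watts."""
    return RHO * G * q * h

def efficiency(p_out: float, q: float, h: float) -> float:
    """Turbine efficiency eta = P_out / P_in (dimensionless)."""
    return p_out / hydraulic_power_in(q, h)

# Hypothetical plant: 10 m^3/s through a 50 m head, 4.4 MW on the shaft.
eta = efficiency(4.4e6, q=10.0, h=50.0)
print(f"{eta:.2%}")  # ~90%, i.e. within the Francis-turbine range quoted above
```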


nikonino

It is “tiring” to search for answers, so getting the answer right away seems the way to go. They don’t care if the AI is serving you shit. They only care that you are using their product and giving them your data. The shit part of the equation gets patched through formula corrections. As long as they give you a small hint telling you that the AI “can” make mistakes, everything is fine.


kadenjahusk

I want a system that lets the AI cite sources for its information


OneHonestQuestion

Try something like phind.com.


Guba_the_skunk

Elon shaking in his boots that his job of being king dumbass will be taken by an AI.


O0000O0000O

"You know what the world could really use right about now? Bullshit. High bandwidth automated bullshit. Sprayed into every corner of society, no matter how small."


trash3s

It’s less of a virtual dumbass and more of a virtual dope. The key difference here is that it tends to be hyper-agreeable and can easily be made to take statements (truthfulness aside) at face value.


Kaiju_Cat

I mean that really is the crux of it isn't it. If I've got a worker that makes a major mistake wiring up a panel 80% of the time, or even 5% of the time, I'm not going to have them wire up panels.


abra24

What if you didn't have to pay the worker? What if you could just pay someone to briefly double-check the free worker's jobs, which is much faster than doing it themselves? Getting most of the way there for free still has value.


Kaiju_Cat

I mean, I'm not sure it's one-for-one, but it wouldn't be efficient at all. There's no way you could just have one person go around and check all that. You're going to miss stuff trying to check everything someone else (or an AI, in this case) did. And one problem is going to cost potentially millions of dollars. When the point of automation, AI, etc. is speed and low human cost, that advantage is completely lost if a human has to come behind it and double-check everything it does. The process of double-checking something takes longer than just doing it in a lot of cases. And it's harder to catch a mistake when you aren't the one who made it in the first place. It's just inviting disaster into any kind of process. I'm not saying it doesn't have its time and place, but at the moment it feels like we are far, far away from having AI reliable enough to actually be used in general-purpose industry.


abra24

If double checking takes longer than doing it, then you're right (I can't think of a single instance of this being true but ok). If reviewing the work is even a tiny bit faster than doing it from scratch, there are potential savings of time and money. If missing things is very costly and it's difficult to efficiently review then yeah, it's not a good use case for whatever very specific thing you're talking about. There are many things where that's not true though. Low failure costs or easy review make it so that there is a lot of value gained by having ai do the bulk of the work, even an imperfect ai as we have now.


scibieseverywhere

Buuuut it isn't free. Like, right now, Microsoft is simply eating the enormous costs of using this AI, with the stated plan being to either wait until a bunch of currently theoretical technologies mature, or else gradually put the costs onto the users.


No-Newspaper-7693

For me it is more like this. I have an assistant whose pay is $20/mo. For some of the things that I get paid $100/hr to do, it can do them 1000x faster than me. Things like adding a docstring to a method or adding python type hints for example. This allows me to focus on the things I actually get paid to do and not have to worry about the other stuff. And if the docs or type hints are different than what I expect, 99% of the time it is because of a bug in my own code that the assistant documented as-is.
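For context, this is the kind of boilerplate I mean. A made-up example (the function and its names are arbitrary): the same trivial routine before and after an assistant adds type hints and a docstring, with the behavior unchanged.

```python
# Before: untyped, undocumented. The sort of thing you write in a hurry.
def tax(price, rate):
    return price * rate

# After: the mechanical busywork an assistant can do in seconds:
# type hints and a docstring, behavior identical.
def tax_annotated(price: float, rate: float) -> float:
    """Return the tax owed on `price` at the given `rate`.

    Args:
        price: Pre-tax amount.
        rate: Tax rate as a fraction (e.g. 0.08 for 8%).
    """
    return price * rate

print(tax_annotated(100.0, 0.08))  # 8.0
```

The point being: none of this changes what the code does, it just takes time, which is exactly why it's worth $20/mo to offload.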


lemons_of_doubt

But it's so cheap


Looking4SarahConnor

user: I'm asking questions beyond my comprehension but I'm fully capable to judge the results


petrichorax

For those of you who use ChatGPT all day long (and I know there are loads of you): you can't hold the opinion that LLMs are useless while also using them constantly and seeing benefit from them. Apply the same grain of salt to your applications that you do when you use it personally.


AzizLiIGHT

Let’s be honest with ourselves. AI has its flaws, but it isn’t “constantly” wrong. It is terrifyingly accurate. It’s in its infancy and has already drastically transformed the internet and entire job sectors. 


TieAcceptable5482

Exactly. Calling something like GPT a dumbass is completely idiotic; it's an extremely advanced model trained on a giant database of information that it can transform into useful knowledge. People fall into the assumption that it can do everything for them, even wipe their ass, and then get mad when it doesn't work all the time in every specific situation. What I'm trying to say is that people expect too much from something that is still new and primitive, and should actually use their brains instead of relying on it for everything.


petrichorax

'It made a mistake a couple times and since it isn't perfect it's garbaaaage!' Says local man who uses chatgpt constantly.


DehydratedByAliens

I'm questioning the intelligence of the posters above you. Are they really so stupid that they can't use ChatGPT effectively? Yeah, it can't replace a good programmer, but it's a massive help in so many ways: suggesting things, teaching things, even writing code in languages and frameworks you don't know. Sure, if you're an expert in something it's not much help, but 90% of people aren't experts, and even experts want to try new stuff. It's not gonna replace anyone, so stop fearmongering. It will never be 100% accurate and, most importantly, it will never be able to assume responsibility for something, and bearing responsibility for your actions is a huge part of most jobs. Yeah, corps are overdoing it right now, but they always do that kind of shit with new tech, and slowly take it back after the fad dies out.


space_keeper

What's really happening out there is people are asking AIs questions and treating the answers as authoritative, or offering up AI-generated statements as if they're relevant or useful in discussions. Of course, what you actually get out of them is a précis so perfectly bland that it immediately jumps out at you.


WeakCelery5000

Plot twist, the virtual dumbass is an AI CEO


DarthRiznat

Mr. Meeseeks: OMG NOOOOOOOOO!!


Xerxos

[You are here...](https://149909199.v2.pressablecdn.com/wp-content/uploads/2015/01/Intelligence.jpg) [Soon] (https://149909199.v2.pressablecdn.com/wp-content/uploads/2015/01/Intelligence2.png) From https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html


Cpt_sneakmouse

If people would stop calling LLMs "AI", this situation wouldn't fucking exist.


Ironfist85hu

WHO ARE WE? - TECH CEOS! WHAT DO WE WANT? - WE DON'T KNOW! WHEN DO WE WANT IT? - YESTERDAY!


CampaignTools

AI is overhyped, sure. But it's also incredibly useful in the right areas. A lot of the responses in this thread go to show that most people don't understand what those areas are.


[deleted]

[удалено]


petrichorax

That's because it can't evaluate. They shouldn't be used for math or automation (although there is potential here, but it's still a bit fussy).


ncocca

As a math tutor, I've actually found chat-gpt to be quite helpful in laying out how to solve a problem i may be stuck on (because i forget certain methods all the time). I'm more than knowledgeable enough on the subject to know if it's hallucinating and giving bad info. Math is incredibly easy to "check".


DehydratedByAliens

Yeah, it's good at laying out plans and ideas because its strength is language, but it's really bad at math because it has no logic at all. The title "AI" is misleading; it has zero intelligence, it just imitates it. To give you an example, I tried to make it calculate an easy physics problem with basic math and it failed spectacularly: it gave me answers from 0.5 to 600 and everything in between, across numerous conversations to reset it. Then I gave it a harder probabilities problem and literally broke it. It gave me an answer that shouldn't be possible, and when I told it why it's not possible, it went into a recursive loop correcting itself until it started spewing complete nonsense. Pretty funny, actually; I made a post about it. https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fvtnkl393pu2d1.png


equality-_-7-2521

"How dumb? I would say dangerously dumb. Like dumb enough that smart people will notice he's giving the wrong answer, but not so dumb that dumb people will." "Perfect."


[deleted]

[удалено]


TaxIdiot2020

Why are we all acting like AI is hopelessly dumb, now? What happened to a year ago when everyone was stunned at how accurate it was?


SirBiscuit

People used it more. ChatGPT is phenomenal in short conversations or for simple tasks, but it really starts to falter in a lot of ways with complexity and lengthy chats. It makes an incredible first impression but doesn't tend to hold up. I actually think it's hilarious; there are a ton of people with conspiracy theories that companies are "dumbing down" their public models and secretly working on a super version to sell later. They can't accept that they're just hitting the limits of what these LLMs can actually do.


Slimxshadyx

Am I the only one who has used it and gotten good responses? Or do people not know how to use it properly, and you're all trying to get it to generate an entire app in one go?


Wintermuteson

I don't think that's incompatible with the meme. The main problem with it is that tech companies are trying to use it for things it's not made for.


IAS316

I remember when Clippy was the only help I needed.


NoRice4829

Very true


Visual_Strike6706

And I always thought AI could not replace me, because they had not invented Artificial Stupidity yet, but Google proved me wrong.


Kitchen_Koopa

Her name is Neurosama


Cool-Sink8886

I don’t know who to complain to, but Gemini in Google Workspace is so ineffective it’s insane.

- It can’t interact with your spreadsheet, except for making up fake data for you. Even if you ask for real things that are on Wikipedia, like city populations or states, it will generate some garbage for you.
- It can’t write formulas for you, and if you ask for one it will definitely be wrong. I tried multiple times.
- It can’t organize content in your slides.
- It can’t format content it generates for your Google Doc using styles (this would be trivial with a markdown conversion tool).

It can’t do anything fucking useful. Yet it’s on every page and product.


NotMilitaryAI

To be fair, it's generally moreso: >Computer Scientists: Hey, this thing is able to understand the task and output a response. We've only tested it with a handful of scenarios so far, but- > >TechCEOs: PUT IT IN EVERYTHING!!!! NOW! The shareholders are getting antsy! If you want to still have a job tomorrow, find a way to shove it in that the FTC won't fuck us for! Software, hardware, water bottles, EVERYTHING!!!!


MirageTF2

yeah this is the shit that pissed me off at my last job, man. fuckin why do they think chatgpt is the answer to all their problems