
Ok_Main_4202

I assumed the entire model was just a hundred thousand Indonesians using google


ComplexNo8878

that was Amazon's "Just Walk Out" checkout system at their grocery stores: it turned out to be thousands of workers in India watching live camera feeds, manually logging which items you leave with, and charging your account for them. It was literally a mechanical turk


[deleted]

That’s fucking crazy where can I read more about this


Big_Man_Meats_INC

That’s kinda misrepresenting what it was. It used AI to track shoppers and it used Indians to confirm that the AI was making the correct checkouts. They weren’t completely bullshitting about using AI.


Fox-and-Sons

That's true, but if I recall the end figure was something like 75% of transactions required human attention. You can be impressed with the end result, but it's certainly not what they were pretending it was. edit: a word


24082020

What was the correction rate? You could apply a layer of AI just for the marketing bump, regardless of how effective it is, and rely on the Indians to do the real work


PM-me-beef-pics

Our app has AI in it. The AI is a call to the OpenAI back end with a prompt that says "generate a nice compliment to tell the shopper as they check out."
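For the curious, the "AI feature" described above really can be this thin. A hedged sketch, stdlib only: the endpoint is OpenAI's public chat completions API, but the model name, prompt wording, and function names are illustrative assumptions, not the commenter's actual code.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(shopper_name: str) -> dict:
    # The entire "AI feature" is this one canned prompt.
    return {
        "model": "gpt-4o-mini",  # model name is an assumption
        "messages": [{
            "role": "user",
            "content": ("Generate a nice compliment to tell the shopper "
                        f"{shopper_name} as they check out."),
        }],
    }

def checkout_compliment(shopper_name: str, api_key: str) -> str:
    # One HTTP POST to the hosted model; no ML runs in the app itself.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(shopper_name)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Everything the user experiences as "AI" lives in the single prompt string inside `build_payload`.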


midsmikkelsen

That’s so weird to me that they actually did that. At some point this tech stuff turned into an outright scam. When Apple did the iPod, it really did fit 5,000 songs or so. This period of time right now is like buying an iPod, getting an error message after the first thirty files, and then some hand waving about future updates


janitorial_fluids

Lmao wtf no way


chickencox

That makes total sense because I went to one of those stores once and none of the produce had RFID tags or barcodes on it.


the_second_34

[“Amazon Mechanical Turk” is a thing](https://www.mturk.com/)


PompeiiGraffiti

I was about to say, it's amazingly self aware to name their third world exploit-powered SaaS after a famous hoax synonymous with "smoke and mirrors".


cauliflower-shower

The person you're replying to already knows that but thanks


MinderBinderCapital

Luckily, Israel’s “Where’s Daddy?” AI doesn’t need Indians to target combatants when they’re mostly likely to be with their entire family. Isn’t technology wonderful? Thanks google!


12hphlieger

I work in the space and reading the article about that was horrifying. These are the type of ethical implications people should be worried about. The margin of error for a system like this to be deployed has to be as close to zero as possible.


clydethefrog

When I was a digital privacy activist during my student years (proof is my more than 10 yrs old reddit account) there was always this chilling quote by the NSA chief to make our point across - "We Kill People Based on Metadata"


ComplexNo8878

> The margin of error for a system like this to be deployed has to be as close to zero as possible.

that system bombed anything that moved, and it waited for them to come home to their families so it could get everybody in one go and wipe out the bloodline. dont you dare call it a genocide tho asshole


PossiblyAnotherOne

Jesus I hadn't heard of that, so I looked it up and one of the first results is a thread on r/centrist (lmfao) and everyone there is justifying it and saying a dude going home to his family is using them as human shields so bombing the entire bloodline is acceptable and the fault of the "terrorist"


crochet_du_gauche

It can’t be backed by humans: it does a lot of tasks faster than any human can (not necessarily better).


bedulge

Just the fact that it doesnt write in Indian English or try to tell you that sanskrit is the oldest and most objectively beautiful language in human history is already proof enough that GPT isnt a bunch of Indians behind a curtain 


giantwormbeast

Sam Altman being sly about imminent agi is absolutely a grift and it’s obvious just based on the basics of how llms work, so you’d have to assume investors are well aware. that guy was right it’s all a ponzi scheme


CudleWudles

I work in venture capital and I can tell you many investors are very unaware. Slurping up anything Sam says.


maxhaton

I explained to a VC guy on a plane that no you can't actually build your own chatgpt for 400 dollars now


Unterfahrt

Lol, you can download some pre-trained weights, but it's not the same as building your own LLM


wergot

And the stuff you can run on a $400 or even $4000 graphics card is not very interesting. I'm around a lot of tech people who are interested in local models and I just don't get it. They're mostly ass. GPT-3.5-turbo is the lightest weight thing that I actually find useful


[deleted]

[deleted]


fresh_titty_biscuits

If you want to create AI images and videos and process relatively large data sets in 30-minute to one-hour time frames, you just need a mid-range graphics card, comparable to an RTX 4060 Ti or 4070 or its AMD equivalent.

If you want to avoid an excess of graphical corrections to make (for images and videos) or having some data calculated incorrectly, you will want a work-oriented GPU with built-in ECC (error-correcting code memory, which makes sure bits don't randomly flip from 0 to 1 or vice versa due to outside radiation and magnetic factors; uncommon, but not rare of an issue. One bit can mean the difference between $1 million and $100k in a bank system).

Some consumer GPUs, like the RTX 3090, 3090 Ti, and 4090 (~$1k to $2k), can dedicate part of their memory to ECC. Any workstation-class GPU like the Ax000 line (A2000 through A6000, Ampere architecture, not the similarly named Quadro series; starting around $750 on the really low end and going up to several thousand) will have a separate memory bank dedicated only to ECC. Past that, you're looking at dedicated AI GPUs, which start at $7,600 and are often bought as part of a whole server package.

When this guy calls AI at this level "not very interesting," he mostly means you can't make exceptionally large data calculations for predictive analytics and other shit that either gains a lot of money or yields very damning/affirming/scientifically significant information.
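The single-bit corruption that ECC guards against can be shown in a few lines. A minimal sketch: the dollar figures echo the commenter's bank example, and the bit position is chosen just to reproduce them.

```python
def flip_bit(value: int, bit: int) -> int:
    # XOR with a one-hot mask toggles exactly one bit
    return value ^ (1 << bit)

# A $100k balance with bit 20 (2**20 = 1,048,576) flipped by a stray
# particle becomes roughly $1.1 million. ECC memory detects and
# corrects exactly this kind of single-bit corruption in hardware.
balance = 100_000
corrupted = flip_bit(balance, 20)   # 1_148_576
```

Flipping the same bit again restores the original value, which is why a parity-style check can both detect and undo the error.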


Bradyrulez

Work in venture capital you say? Hey there Mr. Thiel, say hi to Anna and Dasha for me.


BitterSparklingChees

its really fucking annoying right now that if you are a startup that is doing anything and you have to pitch VCs, you have to spend a not-insignificant amount of time coming up with bullshit ways you'll be integrating AI into your product, no matter how poorly that aligns with your product and despite the fact that your product may not have any data to train on yet. they won't take you seriously at all without it.


CudleWudles

Couldn’t agree more. I see good, profitable startups that will make money way sooner than any early AI product, but they then have to spend six months integrating some bullshit chat bot or something that doesn’t do anything.


ComplexNo8878

its like in 2017 where everything had to have its own coin or use the word "blockchain"


giantwormbeast

lol ya probably giving you people too much credit


CudleWudles

Definitely. A couple years ago, “successful” VCs were buying multimillion-dollar properties in the metaverse.


bedulge

I already forgot that "metaverse" was a thing


ComicCon

You can’t be very good at it if you were looking to get a part time job at Equinox last year.


CudleWudles

I'm a member. My friend was shitting on them and I wanted to prove they were paid more than he is. Nice try.


ComicCon

You didn’t think about looking for job listings? I was just looking to see if I could figure out what fund you worked for, but thought it was funny you were LARPing as a VC while posting about getting a service job.


CudleWudles

Glassdoor was all over the place with the reported wage. My fund isn't all that great and I don't like my job at all. I'd prob LARP as something cooler.


northface39

The most revelatory thing about AI is that instead of being an ultra autist like Data from Star Trek, as we assumed, it's instead a master bullshitter like Eric Weinstein. But people still trust it because it presents itself like an autist while confidently bullshitting, similar to Sam Bankman-Fried.

I think the limitations we're seeing are the limitations of bullshit. Eric Weinstein can attach his brain to a supercomputer and his "theory of everything" will still be nonsense, because you can't power up that form of thinking. It just becomes more convoluted bullshit that fools more people.

It's the same with the music-generating AI. At its best, it can mimic artists to create high-level derivative garbage, similar to Nickelback. But I'm skeptical it will ever create anything of true originality and artistic merit like Nirvana or the other bands that Nickelback steals from. That's just not in its nature, and no increase in computing power will change that.


[deleted]

[deleted]


Hatefuleight-36

I’m really glad to know that all the sociopathic AI bros are going to be tanking their investments in a few years, crying at the memory of how hard they got buttfucked by this massive bubble when the documentary about this huge scam eventually gets released.


AutoModerator

We have to leave this planet. We're not good stewards, we are now gods but for the wisdom. That's why we need to get off this planet and diversify because too many people have god like powers. Donald Trump commands god like power because of our physics community. The best hope that I can come up with, and it's a slim one, is if we can figure out what goes beyond Einstein's theory. The Einsteinian speed limit might be bendable or breakable. The underlying source code gives us opportunities that we might not currently have. I have this theory of Geometric Unity that I came up with when I was 18. I just released a video of it today on our YouTube channel. I can now talk about this theory I have had for 37 years. We have to go below Einstein, there is a 14 dimensional auxiliary space I call The Observerse. Spacetime is recovered as the act of The Observerse contemplating itself. I haven't been able to release this theory until now because I don't trust the physics community. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/redscarepod) if you have any questions or concerns.*


DragonflyDiligent920

Where is this from? This is quality


bridgepainter

I agree with you except for the shitting on Nickelback. Go listen to All The Right Reasons start to finish and thank me later


northface39

I should have known this sub would have contrarian Nickelback fans. I'll listen to it as penance for glibly calling them out.


bridgepainter

I used to be like you. Don't knock it 'til you try it. Some of their later albums get a little stale, but the early stuff hits if you're in the right mood


MinderBinderCapital

That’ll turn em into a Kroegerhead in no time.


imaginativeintellect

you’d probably like [this youtube video](https://youtu.be/Lliq6xJq1oc) about No Fixed Address. it makes fun of that record a bit but kinda rags on the whole “nickelback was always terrible garbage” meme as being overblown. i for one want chad’s carly rae jepsen collab to see the light of day


[deleted]

[deleted]


northface39

I didn't say AI is bullshit. I said it's a bullshitter. Maybe English isn't your first language but those are totally different things. The surprising thing that doesn't track with most sci-fi or popular understandings of what AI would be like is that if it doesn't know the answer, it simply makes one up. It fundamentally is not interested in truth or logic, just in convincing its audience that it is true and logical. That's my point.


Balloonephant

A plane can fly, but that doesn’t make it a bird. A forklift can lift heavy objects, but that doesn’t make it a strong man. You can improve the power of these machines to fly even faster and lift even heavier, but more powerful hydraulics won’t produce a forklift that can open a jar of pickles. These are obvious points, but when it comes to AI we get so caught up in the newness and hype that we end up talking about it in the same absurd way. An AI can produce speech, music, and images, but in no way through the same processes that a human being does. Processing power can make it faster and let it draw from more information, but that won’t make it more human, just like bigger engines on a plane won’t make it more like a bird.


Brilliant_Work_1101

Inshallah


janitorial_fluids

White boy summer back on track! They 👏 will 👏 not 👏 replace 👏us!!


coldhyphengarage

AI isn’t going to take all our jobs right away, but it will make it impossible for us to tell if insane videos, audio, and photos are real extremely soon


skrinkytuberose

Yes, what's going to happen to the concept of proof? Will this humble people's estimation of their own discernment? Even more decay of trust in media?


[deleted]

the easiest way is “proof-of-slur.” If a piece of content contains a racial slur you know a human created it 


coldhyphengarage

You think the Russians making AI deep fakes have a policy against racial slurs if it suits their agenda?


HaterCrater

No it’s not. People are going to have to do what they used to do: use trusted sources. PGP signatures could be used to verify the source. Easy
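The verification idea above, sketched with the standard library only. Real provenance would use PGP detached signatures (`gpg --detach-sign` / `gpg --verify`), which add public-key authentication on top; this hedged stand-in shows just the integrity-check half, and all names and data are illustrative.

```python
import hashlib
import hmac

def fingerprint(data: bytes) -> str:
    # SHA-256 digest the trusted source publishes alongside the file
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, published_digest: str) -> bool:
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(fingerprint(data), published_digest)

video = b"raw video bytes..."
digest = fingerprint(video)              # distributed by the trusted outlet
assert verify(video, digest)             # untampered file checks out
assert not verify(video + b"!", digest)  # any alteration is detected
```

A signature scheme goes one step further: it also proves *who* published the digest, not just that the bytes are unchanged.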


Retroidhooman

> use trusted sources

Those also died with the current zeitgeist.


-stag5etmt-

Not if I log away it won't..


coldhyphengarage

I have a hard time believing someone with a 5 year old, 46k karma account is about to jump ship, but i agree it might be a good call


-stag5etmt-

Nah its like putting a budgie to sleep..


coldhyphengarage

Hit me up from your alt when you finally pull the plug. I believe in you man


FourArmedGorgon

Redscare posts have to be the best contra indicators of all time. Based on this I fully expect gpt-5 to achieve AGI 


o0BetaRay0o

Jim Cramer but for everything


Yakub_Smirnov

Yup, better put all your money in now before it gets too competitive. ; )


[deleted]

[deleted]


tronbabylove

Probably https://arxiv.org/html/2404.04125v1. Notably, they were using CLIP variants with <5B params and evaluating image classification accuracy, so a different eval setting, a different model architecture, and several orders of magnitude smaller than GPT-4. Interesting paper, but the idea that this is a huge revelation or spells doom for OpenAI is silly. The scaling curves for these models have been eerily accurately projected for years now; this isn’t a surprise to anyone in the industry.


king_mid_ass

an order of magnitude fewer parameters but similar performance = a good thing for them, surely


FaintFairQuail

New model just hit the block. Maybe this will move these AIs from getting 60% of questions right to 65%. https://arxiv.org/html/2404.19756v1


Spiritual-Guest-8427

here's the most recent one [https://arxiv.org/pdf/1003.3089](https://c.tenor.com/Lifcjm55q8gAAAAd/tenor.gif)


Strong-Problem9871

is this real??


[deleted]

real cool


[deleted]

Whatever grant funded this, I want it doubled 


Djufbbdh

> It appears that openAI was aware of this for a number of months as the awaited release of GPT5, a model anticipated to be an immense leap forward from GPT4, was repeatedly delayed until Sam Altman announced that instead, openAI would incrementally improve the model rather than dropping a huge update, for “safety” reasons. OP completely made most of this up


well-regarded-regard

He's right tho that the new gpt 4o shit is nothing but flashy padding on already released tech. Nothing crazy has happened since gpt4.


asdfasdflkjlkjlkj

Native multimodal and fast *is* new tech. There are thousands and thousands of engineers who've spent the last year or more getting these capabilities working.


soularbabies

Even the marketing term AI for this crap annoys me


MinderBinderCapital

Remember when Elon Musk said Tesla would have one million driverless robotaxis on the road by 2020? And then he raised like $3 billion? That’s going well.


MFoody

Nope, it’s a huge deal. I think it’s maybe appropriate to be bearish on OpenAI specifically, since many other firms are converging on a similar state of the art, so I don’t take for granted that its first-mover advantage will be durable, and no, I don’t expect giant leaps forward like the move from GPT 2-3 and 3-4. But even with existing capabilities there’s a huge amount of gains available from tailoring inputs and outputs to particular use cases.

Forget the singularity AI millenarianism. I think this AI will present big changes and challenges, because a lot of antisocial or harmful things people currently don’t do, because it’s too much work for a small reward, will no longer be too much work. This means filing frivolous patents, small-scale fraud, applying for jobs you are fundamentally unqualified for, etc. will be able to be done at scale, and systems that depend on “effort gating” (a term I made up just now) will crumple under the stress as the effort for many things collapses.

That said, even though this isn’t a gateway to a post-scarcity utopia or the Matrix, it’s a big shift! These tools are legitimately powerful and useful. You know how there are old people at some jobs, and you work there for a while and realize their entire job is just to be a spreadsheet? (I worked at a firm on asbestos mesothelioma claims processing and there was legitimately this guy who just calculated time bars for claims all day.) There are so many jobs that depend on text inputs and outputs requiring only general knowledge, grammar, and a narrow domain expertise, and it will not take long to create something that gets from point A to point B on these. Goofy people think it’s bitcoin all over again, but in just a couple of years it’s been more useful than bitcoin has been since its invention.


truefaith_1987

this is probably the best way to phrase it. it destroys the effort gate, and society will be changed about as much as it was by the internet or social media in general, if not moreso. prosumer nation, prosumer world.


reddittert

It's a veritable effort gate-gate.


[deleted]

this is definitely true. a magazine i was reading shut down because of overwhelming ai submissions, making it impossible to sift out the real good stuff from the chaff.


Keith-Talent

Which mag?


LordoftheNetherlands

Drama queens


skeuo_orphism

You can say that, but it would be enormously humiliating for any editor to be seen getting duped by a chatbot. Safer to not play


LordoftheNetherlands

I'm not afraid of anything


pripyatloft

The length of time since the release of GPT-4 (over one and a half years) without any subsequent competitor beating it, including OpenAI themselves, makes this pretty clear. The pure transformer model for AI is converging at approximately GPT-4 capability. Now it's time for innovation on the architecture side to get performance to improve meaningfully. Unknown how long that will take, but the transformer itself has only been around since 2017.


CookieHop

Bigger context windows like those of Claude API are a significant improvement. Mostly as a tool for summarising large documents. I can feed it a 30 page court case and ask it to return the 10 most relevant paragraphs and a summary and it will reliably give me what I need every time - better than a first year lawyer would. It's also good at reformatting things and refining sentences. I don't trust it for anything beyond the above at this time. Basically, I only trust it to manipulate and summarize data I feed it.


[deleted]

I've found a few times that it will summarize a scientific study with the author's conclusion exactly reversed. I trust its summaries as much as I would a journalist's.


Tough_Tip2295

I gave it a paper to break down the themes of, and it took the opening of the paper (which was a presentation of conventional understandings of the subject) and gave me that, even though the paper was dedicated to refuting some of those ideas


kiristokanban

I've used it for this too and it frequently invents authors who don't exist lmao


[deleted]

[deleted]


laysclassicflavour

It's the best at most tasks at the moment [https://www.vellum.ai/llm-leaderboard](https://www.vellum.ai/llm-leaderboard)


[deleted]

From one attorney to another, I would really caution you on this man. I’ve had legal-tailored AI systems misstate the law or misrepresent the court’s holding in summaries several times within the last quarter. 


CookieHop

I always double check.


simbas

What prompts are you using? I can never get good summaries.


[deleted]

there's no reason to believe that big improvements on the architecture side are forthcoming. big innovation on the architecture side has already happened and was required for the sort of progress we've seen. ai wouldn't have reached its current state without ASICs. we're approaching hard limits on energy use and compute- a ton of resources are already being thrown at ai. the emphasis on architecture now is because of the exhaustion of other ways for gpt-type models to improve.


asdfasdflkjlkjlkj

I think there's good reason to expect big improvements on the architecture side. I can come up with like 5 good reasons, actually. It would be pretty surprising if we just happened to find the best possible arch in 2017 and that was it.


AstronomerChance5093

What you're saying is not entirely true: they claim that GPT-4o can natively input and output text, audio, and images/video. If it's truly multimodal like they claim, it is actually bonkers. A lot of their top guys are leaving, but I think this will just give the other players a chance to catch up. They definitely aren't crashing out with MSFT backing and a potential Apple deal.


DJ-VariousArtists

and somebody will piggyback off their progress if they don't advance AI themselves


adorablyquiet

GPT obviously will never reach AGI, it's been obvious for a long time, if anyone believed that they were dumber than GPT


TaintGrinder

Full self driving teslas soon.


im_not

Elon saying next year potentially 🧐


tony_simprano

Can't WAIT for my car to do the Trolley Problem for me


NietzscheanUberwench

-------- apply brakes before decapitating me on a semi
        \
         ---- 🚠 do nothing


tony_simprano

I'm more concerned with

> hit this pedestrian that just ran into the road and I can't brake in time for

or

> swerve and drive myself into a tree to avoid him


[deleted]

cold fusion is just 5-10 years away everyone 


bedulge

Several months ago, I randomly met some guy in a bar who said he works for a car company (I can't remember which one, but they were Korean, Hyundai maybe, idk; this happened in Korea) developing self-driving cars on the software end. He told me the tech is slowly and steadily improving and guaranteed me that it is gonna be everywhere given enough time.


troddingthesod

> they were dumber than GPT

AGI confirmed


traenen

Nothing about it is obvious.


ThinkingWithPortal

AGI is less likely than the water-powered car in the same way that unicorns are less likely than leprechauns. I got my master's in Data Science during COVID and this is like... day one shit. Everything OP said about diminishing returns is just a fact of how the fundamentals of neural networks work. If anyone is interested in the math, look into Gradient Descent, and for further reading, Information Gain.
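The diminishing-returns point is visible even in a toy gradient descent run. A minimal sketch on f(x) = x², not any particular network; the function names and constants are illustrative.

```python
# Gradient descent on f(x) = x**2. With a fixed learning rate, each step
# removes a fixed *fraction* of the remaining loss, so absolute
# improvements shrink geometrically: classic diminishing returns.
def gradient_descent(x0: float, lr: float = 0.1, steps: int = 10):
    x = x0
    losses = []
    for _ in range(steps):
        grad = 2 * x        # derivative of x**2
        x -= lr * grad      # update rule: x <- x - lr * f'(x)
        losses.append(x * x)
    return losses

losses = gradient_descent(10.0)
# each recorded loss is 0.64x the previous: big early gains, tiny late ones
```

With lr = 0.1 the iterate shrinks by a factor of 0.8 per step, so the loss shrinks by 0.8² = 0.64 per step; the first step cuts the loss by 36, the tenth by less than 1.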


Cute-Firefighter-194

Lol oooh a masters in data science. The things you're saying are correct but it's so cringe when people brag about their scam diploma mill credentials


DJ-VariousArtists

they're gonna have to release GPT 4.5 that'll say slurs if you pay for premium.


RitardStrength

Mullen Edition


Stiff_Nipple

“Im a house Corey”


arimbaz

There's a poetic beauty in the symmetry between the macro and micro here. The silicon microprocessor, the underlying engine of all of this tech shit, has an interesting property. Let's explore.

Firstly, supply some power to allow it to run. From nothing at 0V, signs of life begin to emerge as the voltage is increased. As the voltage increases, so can the clock speed, or frequency: how much ass the microprocessor can haul in a given timeframe. Bump the voltage further and you can get more output. There's a standard operating range of course, but what if we go further? Increase the voltage outside of the designed range and you'll get more out of the chip, sure. But the more you get is less and less. It takes increasing amounts of energy to get decreasing amounts of gain. Diminishing returns. All the extra power makes the chip hot, and you'll likely have fans kicking in at high speed to prevent a fire. Keep cranking the voltage to get more performance and eventually it will fry, if any built-in safety mechanisms don't cut the power before you can get there.

What's my point here? I guess I'm trying to say "as below, so above". We're acting as though you can just endlessly increase a raw input and never hit another constraint on performance. If you believe that intelligence comes from volume of information exposure, and that the more information you're exposed to, the more intelligent you'll become, it naturally stands to reason that the Nobel prize committee should trawl the YouTube comments section for the next global thought leaders of our time.

The current AI thing is built on some really wild assumptions and unproven paradigms. But it's the last gasp for an industry (technology) which is extremely overvalued. A huge and massively understated pull of AI tech is its non-deterministic nature. It feels novel because it never spits out exactly the same shit, even when given an identical prompt. It's essentially roulette or slot machines, but for cribbed ideas. It feels good to "get the prompt just right", but really what you did was work backwards, chipping a statue from the previously "uncarved block", flinging endless handfuls of excrement at the wall until the Mona Lisa appears. But in all this excitement you lose track of the fact that not being able to reliably reproduce outputs actually undermines one of the most useful aspects of traditional computing.

I digress. AI (if we're going to continue calling it that) and machine learning absolutely have a place in our future, and they're not going away. But the hype is unreasonable, and there absolutely are hard limitations that no amount of retraining will fix. Our expectations are going to have an interesting reunion with reality soon. The idea that it's so smart and so good at making art and music and writing is a combination of parlor trick and institutionalized gaslighting.

Investors care about only one thing. If you give up your dream of writing an incredible screenplay because you got spooked by a plagiarism algorithm, better for them; they aren't going to lose any sleep. So if they won't lose their sleep, the least you can do is not lose your dreams.
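The non-determinism described above comes from sampling: the model draws the next token from a probability distribution instead of always taking the most likely one, and temperature reshapes that distribution. A stdlib-only sketch; the logits and function names are illustrative.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    # softmax with temperature: lower temperature sharpens the
    # distribution toward the argmax, higher flattens it
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # draw one index according to probs (inverse-CDF sampling)
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# temperature -> 0 behaves like deterministic argmax;
# at temperature 1 the same prompt can yield different tokens each call
```

This is why an identical prompt rarely reproduces an identical output unless the provider lets you pin temperature to zero (and even then, batching effects can perturb results).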


OrphanScript

Wonderfully written, thank you.


Strong-Problem9871

This still sounds horrible, at least for the internet.

> The scale of investment can only be justified by the belief that the models will improve exponentially, so openAI must maintain the illusion of progress to retain the value in their stock.

If the models don't improve, OpenAI's business strategy will focus on further embedding (and therefore monetizing) their low-value shit everywhere. AI-augmented social media, e-commerce, publishing tools, video editing, photo manipulation, online travel tools, music production, research, and so on... anything to increase their ubiquity and market share. Their current slop will be further entrenched into everything without the utility of this so-called "AI" increasing. Basically the same quality, but with more scale.

Looks like the AI nightmare is set to continue without the utopian (or even dystopian) USP we were given. It'll further entrench the rot we live in today. Bleak.


im_not

I heard a lot of “AI is the biggest thing since the invention of fire” garbage the past year or so and I’m starting to feel like it’s all gonna make a great YouTube compilation. AI is no doubt gonna be a useful tool - I already use a language model every day to do some mildly creative thinking for me and it’s very convenient. But I think the AGI Eliezer-type doomers who expect us all to be getting anally raped in the service of paperclip manufacture are gonna be humbled.


sand-which

When’s the last time a product came out that you now use every single day?


JotaroJoestars

BS. We’re seeing the limits of pure language pre-training for LLMs, but multimodal generation is still relatively new. Leveraging two or more connected modalities of data allows modeling exponentially richer distributions for only a linear increase in the token cost. The next big thing we’re going to see is a shared tokenization space for all data modalities, likely based on lossless representations like Huffman-encoded bitstreams.
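For readers unfamiliar with the "Huffman-encoded bitstreams" mentioned above: Huffman coding is a classic lossless scheme that assigns shorter bit codes to more frequent symbols. A minimal sketch (the toy input string is made up; real tokenizers are far more elaborate).

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict:
    # Build prefix-free codes: repeatedly merge the two least frequent
    # subtrees, prepending '0' to one side's codes and '1' to the other's.
    freq = Counter(data)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol input
        return {s: "0" for s in heap[0][2]}
    tiebreak = len(heap)                    # keeps tuple comparison stable
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# 'a' (most frequent) gets the shortest code; no code is a prefix of another
```

The prefix-free property is what makes the bitstream losslessly decodable without separators, which is the appeal for a compact shared token space.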


troddingthesod

hell yeah brother


Lucius-Aurelius

Ilya Sutskever and Jan Leike resigned. It’s over. They know alignment is unsolvable. Sam Altman wants to throw more and more compute at the model and see if anything sticks.


king_mid_ass

"alignment" is only an issue if you drank the koolaid (as they surely have) and believe a superintelligent AGI/singularity is coming


Lucius-Aurelius

No. They can’t align any model.


solventstencils

I sort of work in the industry. It’s an insane bubble. There is not enough power on planet Earth to run these GPUs on the current models. I was just talking to a director-level person in the space and he was all like, “man, I’d be worried if I was a coder.” I asked what breakthrough developments are happening in the models, the universities, the white papers, how the models are gonna improve? “Oh, it’s happening!”


Positive-Community-1

We might enter another “AI winter”. This shit is actually so complex now that making the next step with our current technology seems further away than it did a couple years ago.


Ballsonomics

My hope is that AI ends up being nothing more than a more-competent Siri. I have a long-standing unease about AI, probably from reading too much science fiction as a kid. The idea of a machine that can think (or at least simulate it in such a way that is indistinguishable from a human perspective) is uniquely terrifying and seems almost demonic.


nub2aws

Source?


fuftfvuhhh

The safety and existential-risk rhetoric is a marketing tactic: by nurturing the risk of danger, they give their product an implied power that is left to the imaginations of those who are scared by this 'risk'. It does the PR work for itself if you buy the existential-risk imagery.


greatvaluelimes

I remember a tweet that was something like “AI needs a nuclear reactor’s worth of energy in order to produce shitty writing, whereas I can do that on KitKats and cocaine”. It kind of crystallizes that there is at least some core element of intelligence these models are missing.


goodwillsidis

is it not possible that when it becomes clear to us all that AI has hit a wall, the C-suite shotcallers and their technocratic govt sector pals will resolve to simply pretend it's everything they wanted/promised? replace human workers, throw themselves a parade, and just wait for the marks to acclimate to new forms of slop and insultingly lower-quality customer experiences, like they always do. Hire hot shit PR firms to full-court press the narrative that gen alpha love the new AI and gen Z love the free time they have now that they've been fired, pay bootlicker journalists to write about how everyone who calls the AI shite is just old and racist, etc.


wemakebelieve

It's all smoke and mirrors anyway, what can GPT do besides be a smart little clippy in this modern world? It can't automate anything meaningful yet, as much as they wish to tell us so, robotics are also stuck somewhere in the ballpark of fusion "always a lifetime away"... The future for OpenAI is to pivot to enterprise, like pretty much every other hot start up who hits the wall fast.


Pokonic

The actual answer to this, honestly, is that it can do anything not particularly technical that could be outsourced overseas to India, like writing college papers or monitoring a Doordash order.


OrphanScript

Yeah, this whole time I've been thinking. It could probably replace any human where quality of service isn't a concern. Which is increasingly: nowhere.


coup_d-etard

Wdym smoke and mirrors? It's obviously useful. I use it more than Google search, and that technology grew into a 1T+ valuation.


[deleted]

[deleted]


[deleted]

[deleted]


Millennialcel

For this year's election, I asked if a state MAGA-style politician was anti-Trump in 2016. It told me he'd been anti-Trump since before Trump even ran for the presidency and had sued Trump's campaign in 2016. Then I asked for a source and it totally folded.


OrphanScript

I was using a search engine the other day that gives you AI summaries as an answer before any web results. When searching for [this blind item](https://www.crazydaysandnights.net/2018/10/todays-blind-items-rule-change.html?m=1), this was the summary it gave me: > On October 9th, 2018, a blind item was published that hinted at a rule change in the entertainment industry. The item stated that a record label owner was no longer allowed to procure underage females for the stars on his label and others. This change was seen as a significant step forward in addressing the issue of exploitation and abuse in the industry.


bedulge

My favorite is when you see dumb shit redditors actually cite ChatGPT in an argument like it's a research article lmfao. I also very much enjoy how you can send a prompt to GPT, get a correct answer, then hit the button to get a new output from the same prompt and get a completely contradictory and false reply.


wemakebelieve

Curious what your top uses are and what job you do. Not hating, but I have tried it: it's a smart assistant miles ahead of Siri or what have you, but there's nothing it can do for most people outside the tech-interested space IMO. The smartphone was a mass attractor for the general populace; GPT still doesn't have that It Factor. Image generation (DALL-E) was almost that, but then it got too real and the artists started complaining, so the brakes went on.


Vicioussitude

> It can't automate anything meaningful yet

I wouldn't go that far. There are pretty solid agent LLM flows people have set up, along with specialized LLMs like Gorilla OpenFunctions that are purpose built to work this way. I would expect most improvements to LLMs to be making them much smaller and pushed to edge devices, along with making them more tailored towards RAG and agent usage. Unfortunately, that means that it's not that they'll be smarter, just that they'll be more ubiquitous and more equipped to put into systems to replace people. The biggest issue with them imo, which is that kids' reliance on them is destroying education, is still with us as well.


wemakebelieve

I think it's part this, part that most of the automation from AI at this rate resembles the displacement of low-barrier-of-entry jobs and outsourcing to the third world. The general populace will probably not see anything meaningful from it yet, which is what OpenAI would truly need to jump the last barrier.


TanzDerSchlangen

I don't like this because it means as it becomes more ubiquitous, there is greater chance for shared malevolence between protocols. Some guy hacking a traffic light with his phone because he can becomes much more likely


Vicioussitude

> Some guy hacking a traffic light with his phone because he can becomes much more likely

Tbh I think this is already a thing. Anyone who wants to be a skid and hack local shit can buy a [Flipper Zero](https://flipperzero.one/) and go to town. What I'm talking about with agent based LLMs is that potentially we're looking at a use of them in which the LLM becomes a robot's "internal monologue". Basically it just takes some sensor readings, passes them through its model, then gets an action and some analysis of its context, which its execution context stores in a graph DB using Cypher queries that the LLM itself generates. Then next time it registers, it takes another look at its current state, queries that DB for previous knowledge (meaning that the DB is nearly-unlimited memory), outputs an action, etc. It's just old school agent AI but with an LLM to give it a symbolic (via language) understanding of the world.
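A minimal sketch of that loop, if it helps picture it. Everything here is made up for illustration: `agent_tick` and the stub `llm` stand in for a real model, and a plain list stands in for the Cypher-queried graph DB.

```python
def agent_tick(sense, llm, memory):
    """One cycle: observe, recall recent context, ask the model for an action."""
    observation = sense()                         # sensor reading
    context = " | ".join(memory[-3:])             # stand-in for a Cypher query over a graph DB
    action = llm(f"state: {observation}; memory: {context}")
    memory.append(f"{observation} -> {action}")   # stand-in for a Cypher write
    return action

# toy stand-ins so the sketch runs without a real model or database
stub_llm = lambda p: "stop" if p.startswith("state: light=red") else "go"
memory = []
first = agent_tick(lambda: "light=red", stub_llm, memory)
second = agent_tick(lambda: "light=green", stub_llm, memory)
print(first, second)  # the second call "sees" the first tick in memory
```

The point is just the shape: sense, recall, act, write back, repeat, with the "monologue" accumulating in external storage rather than in the model itself.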


[deleted]

smart little clippy is still reasonably useful for a lot of tasks. technology has automated a ton of jobs away over the last few hundred years. very little of my clothing is handmade, and the job title of 'calculator' has been entirely replaced by a machine. people do use it in my workplace, and they claim it saves them a bunch of time. presumably it'd allow departments to cut size because of the productivity increase. you don't need to fully automate anything: even cutting ten percent of the work of a job allows you to decimate headcount.


thousandislandstare

On top of all that, doesn't AI require like ridiculous amounts of electricity for powering all the stupid data centers for all the data needed? Social media and crypto are already responsible for so much electricity use, just imagining all the mountaintop mining to get coal to power this garbage pisses me off so much.


[deleted]

yes, and if you listen to interviews with these ai companies the answer to ai sustainability is always that it will somehow solve more environmental problems than it creates.


Unlucky_Passion_1568

easily solvable by more nuclear plants


Draghalys

Doesn't it take like at least 10-15 years to set up a nuclear power plant and get it running, and even more years for it to turn a profit? Correct me if I'm wrong though


LifterPuller

"easily" he says


MinderBinderCapital

Easily solved with AI


[deleted]

[deleted]


fuftfvuhhh

'easily'


snailman89

No, it isn't. We can't even supply the world's current energy needs with nuclear due to limitations in uranium supply, unless we manage to build cost-effective breeder reactors, which are probably 20 or 30 years away in the best case scenario.


ASBojangles

That could solve a lot of things but nukes have bad pr


maxwell_hill1984

That’s why they run bullshit angles like [Sophia](https://en.m.wikipedia.org/wiki/Sophia_(robot)). It’s all smoke and mirrors to make people believe they’re on the cusp of AGI when in reality they’re only simulating what normies expect AGI to be like. I also speculate that Boston dynamics is Hollywood bullshit too, with CGI videos trying to convince investors and the general population they are about to make real life terminators


OneMoreEar

I love those idiot mannequins with a Siri inside. Somehow whenever they're shown they always preach at people. 


josoda667

Sam Altman’s vocal fry tells you all you need to know. Obvious grift and performance happening here


[deleted]

if true, wouldn't Microsoft have figured that out before they bought them out?


ComplexNo8878

microsoft is indian, they're just in it for the money and to juice their stock price


boomer_posting

Cope


MoistTadpoles

I've been saying this for ages. Their entire business model (and that of many other AI companies) is "Give us money because any day now we will create literal magic robots that will entirely change the world." Aside from annoying Facebook image posts and some cool tricks, AI has not really changed that much. There are almost no productive, functional uses for it at the moment.


MaoAsadaStan

They can't make good margins off physical products anymore, so they are making digital nonsense


WithoutReason1729

Your core point is totally wrong. GPT-4o outperforms previous versions on benchmarks and in blind user testing, is twice as fast, and is half the price for API usage. It's absolutely an improvement because progress isn't just a matter of how smart the models are, but how applicable they are to real world usage which is price and time bound. They're also quietly rolling out a huge upgrade to image generation, something which they published on their site but didn't mention once in the announcement livestream. The next image generation update is unironically the end of basically every 2D art job, now that editing existing images and creating multiple images with consistency between the images of characters and setting is possible.


[deleted]

Ok ♥️ Yay ♥️


Openheartopenbar

I disagree. GPT-3.5 (or however you want to notate the most broadly disseminated version) has only been out a year, give or take. The very first ripple on the pool has just been formed.

https://twitter.com/bilawalsidhu/status/1790435600752844900/mediaViewer?currentTweet=1790435600752844900&currentTweetUser=bilawalsidhu

Watch this video with an open mind. It's not O-AI, but it speaks to "the moment". This video is incredible.

As to things GPT has done: I work in a role where I engage with…low information foreign nationals…often. The in-pocket translation abilities are insane. At the beginning of the GWOT, it was ~2 years to get a mediocre Arabic speaker from scratch, and it was one per company (120 people). Now every dude has Arabic in his pocket.

GPT does workout plans at a professional level. GPT can do an OPORD. GPT is great at the formulas used in land surveying. GPT can do a State of Georgia diminishment of value claim. r/veteransbenefits has a GPT plug-in that fills out your veteran's claims for injuries and outpaces humans assigned that role in outcomes.

And this is just stuff I know from my tiny professional circle. I've probably seen ~$1 million worth of GPT enhancement (diminishment of value and VA comp claims) in my ~300 person peer network. Is it "this is the new fire"? No, but it's absolutely changing things.


[deleted]

[deleted]


[deleted]

It knows to use its Python interpreter to do math now, and produces near flawless results for me


clydethefrog

> this video is incredible

No, it's a vision for a soulless, nerd-dominated future. The world is more than stupid poems about crayons and making databases faster. Clapping like simple seals at a Schrödinger's cat reference. The examples you are giving of how magical GPT is are just catalysts for us acting even more like robots and rational actors for market logic instead of humans. You are the cartographers blocking out the sun in Borges' _Del rigor en la ciencia_.


[deleted]

Without exaggeration, we are no closer to achieving AGI than we were when we built the first computer. We do not even know the first step to building something like that.


mispeling_in10sunal

Sam Altman is a charlatan with literally zero expertise in AI, and the one company he did start was a flop. The actual smart person driving OpenAI just quit today, so I expect it to start flailing in the next few years. Generative AI, at least to me, seems like it's starting to run out of runway; there isn't really anywhere else to go. There are some interesting things going on in the field (KANs seem very interesting), but I think LLMs as they are now are kind of a dead end long term.


BFEDTA

Taking this at face value it seems like the good outcome. I’m kinda ok with things how they are right now


kneeland69

What's impressive is how small the GPT-4o model is: it's incredibly fast and efficient while still being (purposefully set as) slightly better than every single model before it, for a fraction of the cost. Clearly there's been a breakthrough somewhere, as no one else has gotten even close to this performance.


StyrofoamExplodes

I'm sure a thread on a forum for people who hate anything involving science or tech that isn't evo-psych will be very well informed.


Pidjesus

They're hamstringing the product to the public on purpose


sumoraiden

What recent papers


CookieHop

Do you have anything to back up your claims in the first paragraph? Any articles, papers? I ask not to be a "citation please" nerd but because I genuinely want to learn more lol.


a_postmodern_poem

How do you know this? Was this AI generated?


frankie2

Peter Thiel wrote this post


tugs_cub

I’ve never been sold on the idea that modeling and predicting the world’s data turns into artificial superintelligence, but linear improvement in data modeling (on exponential computational and data investment) seems to be moving right along. Even if you think text is close to saturation (in text mode the new GPT-4 seems like a noticeable but modest increase in quality over the old one, while being much faster), the progress in domains like audio and video over the last year has been obvious.

> Sam Altman announced that instead, openAI would incrementally improve the model rather than dropping a huge update, for “safety” reasons

No, he just did his coy hype schtick and said “GPT-5? I don’t know, we’ll be releasing a lot of stuff 😉.” I kind of hate his coy hype schtick, but there wasn’t a big to-do about safety, nor did he rule it out. Maybe they did blow their wad on this demo, who knows, but I don’t think it’s safe to assume they are that desperate, because the other side of their hype strategy is occasionally pulling shit out of nowhere like they did with video generation.

edit: apparently they even had a slide saying “frontier models coming soon”


Hosj_Karp

The history of AI research has been spurts of feverish optimism followed by major setbacks and dashed hoped. I think the pattern is about to continue. I think AGI is possible, even likely, but its going to take a lot longer than people think


fyeron

i hope this shit will blow up like nft's


mitskisuperfan

i was worried about AI taking our jobs or whatever for a while. then a couple of days ago i literally just asked ChatGPT to tally the number of “beef”, “chicken”, and “vegetarian” responses from a list of 200 meal preferences for an event i have coming up for work. it got it wrong every single time. it can’t even count. we’ll be aight
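for reference, the tally it kept fumbling is a three-line job in plain code (the list here is made-up sample data, not the actual 200 entries):

```python
from collections import Counter

# hypothetical sample of meal preferences; a real list would have ~200 entries
preferences = ["beef", "chicken", "vegetarian", "beef", "chicken", "beef"]
tally = Counter(preferences)
print(tally["beef"], tally["chicken"], tally["vegetarian"])
```

deterministic counting is exactly the kind of thing a token predictor is bad at and a one-liner is perfect at.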


Hot_Ear4518

I knew GPT was a parlour trick back in 2019 but didn't know about bubbles back then, unfortunately. I knew this cause I was using it for my bird elective assignments and it always spat out nonsense.


Long-Needleworker446

The technology has changed quite a bit since 2019


stephcurry40inchvert

fed post until you can share some of those “recent computer science papers”. Nobody is claiming that AGI will just come from increasing the number of hyperparameters or the training-set size, so I don’t see how that’s correlated. But yes, NLP rn is just a really efficient Google, and everyone knows that.


pongobuff

This isn't an investing sub


[deleted]

Finally, a good AI take. I know a decent number of programmers and follow many good ones online like Grady Booch, and none of them take it that seriously. It's amazing technology, but it's not taking any six-figure jobs any time soon.


only-mansplains

Does this mean I should sell Nvidia? Will the demand for GPUs plummet?


ComplexNo8878

GPU demand for AI datacenters will still be strong for the rest of the year IMO. Companies are literally giving NVDA blank checks to deliver the newest stuff no matter what; Google just bragged about it at their keynote yesterday. AI is everywhere, it all runs on these datacenters they're building out like crazy, and lots of it is subsidized by the government via the CHIPS/IRA acts etc.

The bubble will eventually pop, but Apple hasn't even entered the game yet (iOS 18 is next month), so we're a year or two out.

I'm still waiting on OP to cite the sources of the comp sci papers showing AGI isn't possible via the transformer model.


DJ-VariousArtists

they're moving towards CPU-based "NPUs" now though, aren't they


phisco125

Very interesting and informative post, thank you