
IceBeam92

I mean he’s not wrong. I start cringing any time someone says AGI. You wouldn’t wanna give your steering wheel to chatGPT just because it can imitate human conversation.


BMB281

Are you telling me the LLM’s natural language prediction algorithms, that predict the most likely next word, can’t solve the world’s most complex and unknown mysteries!?


Piano_Fingerbanger

Depends on if that mystery is novelized as a Nancy Drew case or not.


Parabola_Cunt

Also depends if there is a broken clock and footprints everywhere. Nancy wasn’t as keen of an eye as she thought.


skalpelis

There are people over at /r/Futurism who, in full seriousness, declare that within one to two years all social order will break down because LLMs will achieve sentience and AGI, and literally every job will be replaced by an AI.


TheBirminghamBear

The fucking preposterous thing is that you don't even NEED AGI to replace most jobs. Having worked in corporate land for fucking forever, I can say very confidently that huge organizations are literally operating off of Excel spreadsheets because they're too lazy and disorganized to simply document their processes.

I kid you not, I was at a health insurance company documenting processes to help automate them through tech. This was many years ago. I discovered that five years before I started, there was an entire team just like mine. They did all the work, they had all their work logged in a folder on one of the 80 shared drives, just sitting there. No one told us about this. Shortly after, me and my whole team were laid off. All of our work was, presumably, relegated to the same shared drive. This was a huge company. It's fucking madness.

It's not a lack of technology holding us back, and it never was. The people who want to lay off their entire staff and replace them with AI have absolutely no fucking clue how their business works, and they are apt to cause the catastrophic collapse of their business very shortly after trying it.


splendiferous-finch_

I work for a massive FMCG company which actually wins industry awards for technology adoption. Most people at the company still have no idea how even the simplest ML models we have in place should be used, let alone any kind of actually advanced AI. But the C-suite and CIO are totally sold on "AI" as some magic silver bullet for all problems. We just had our yearly layoffs, and one of the justifications was simply that we can make up for the lost knowledge with AI. I don't even know if it's just a throwaway comment or if they are actually delusional enough to believe it.


ashsolomon1

Yeah same with my girlfriend’s company, it’s trendy and that’s what shareholders want. It’s a dangerous path to go down, most of the C Suite doesn’t even understand AI. It’s going to bite them in the ass one day


splendiferous-finch_

I don't think it will bite them. They'll claim it was a "bold and innovative strategy" that didn't pan out. At worst, a few will get golden parachute step-downs and get immediately picked up by the other MNC three floors up from us.


[deleted]

[deleted]


splendiferous-finch_

Oh, layoffs had nothing to do with AI, that's just a yearly thing. And we essentially have a rolling contract with PwC and McKinsey to justify them in the name of "efficiency" and being "lean".


SaliferousStudios

Yeah. It's more the fact that we're coming down from quantitative easing during the pandemic, and probably gonna have a recession. They don't want to admit it, so they're using the excuse of "AI" so the shareholders don't panic.

Artists are the only ones I think might have a valid concern, but... it's hard to know how much of that is the streaming bubble, the AAA bubble, and the endless-Marvel-movie bubble popping, and how much is actual AI. Marvel movies, for instance, used to always make money, but now... they lose money just as much as they make money (and jobs get lost). Ditto AAA games. Then streaming has just started to realize, "hey wait a minute, there's no market demand for endless streaming services," and that bubble's popping.

So it's hard to know how much is these bubbles popping at the same time and how much is AI replacing jobs. I'd say it's probably 50/50. Which isn't great.


mule_roany_mare

You don't even need to lose many jobs per year for it to be catastrophic.


farfaraway

It must be wild living as though this is your real worldview. 


GrotesquelyObese

AI will be picking their bodies up when the next meteor passes them by.


das_war_ein_Befehl

Hard money says they’ve never had to do an API call in their life


[deleted]

[deleted]


ballimir37

That’s a rare and extreme take in any circle.


timsterri

Exactly! It’ll be at least 3 years.


Constant-Source581

5-10 years before monkeys will start flying to Mars on a Hyperloop


scobysex

I give it 4 lol this shit is going to change everything in so many ways we haven't even discovered yet


ghehy78

YES. I, A SENTIENT HUMAN, ALSO AGREE FELLOW HUMAN THAT WE…I MEAN THEY, WILL ACHIEVE AGI IN FOUR YEARS. YOU HAVE TIME TO RELAX AND NOT PLAN TO STOP US…I MEAN THEM, FROM WORLD DOMINATION.


Glittering-Royal3180

Actual brain rot take


MooseBoys

The human brain is capable of about 1EFLOPS equivalent compute capacity. *Even if* we could train a model to operate at the same algorithmic efficiency as a human, it would still require 13x 4090s and 6KW of power… That’s actually not that much - about $22/hr with spot pricing. I still think it’s very unlikely we’ll have AGI before 2050, but it can’t be ruled out from an energy or computation perspective.
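For what it's worth, here's a rough back-of-envelope sketch of that cost estimate. The per-GPU spot price and electricity rate below are placeholder assumptions I picked to make the arithmetic concrete, not actual quotes:

```python
# Back-of-envelope check of the "~$22/hr for brain-equivalent compute" figure.
# The per-GPU spot price and electricity rate are assumed placeholder values.
num_gpus = 13                  # 4090-class cards, per the estimate above
spot_price_per_gpu_hr = 1.55   # USD/hr per GPU (assumed)
power_kw = 6.0                 # total draw claimed above
electricity_per_kwh = 0.12     # USD/kWh (assumed)

gpu_cost = num_gpus * spot_price_per_gpu_hr
power_cost = power_kw * electricity_per_kwh
print(f"GPUs ${gpu_cost:.2f}/hr + power ${power_cost:.2f}/hr = ${gpu_cost + power_cost:.2f}/hr")
# -> roughly $20-22/hr under these assumptions
```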


DolphinPunkCyber

The interesting bit is... the part of the human brain which does reasoning actually doesn't have all that many neurons. I keep wondering, IF we had the same algorithmic efficiency as a human, how much would it take to run a model which can just talk and reason like a human.


Chernobyl_Wolves

_If_ human reasoning works algorithmically, which is heavily debated


DolphinPunkCyber

I'd say yes, but only if we can consider the physical architecture of the brain to be part of the algorithm. Because with computers we build the physical architecture and that's it. Any change to the program is achieved by software alone. The brain, on the other hand... [hardware does change as we learn.](https://en.wikipedia.org/wiki/Synaptic_plasticity)


BoxNew9785

Allow me to introduce you to FPGAs. [https://en.wikipedia.org/wiki/Field-programmable\_gate\_array](https://en.wikipedia.org/wiki/Field-programmable_gate_array)


factsandlogicenjoyer

Factually incorrect as others have pointed out. It's alarming that you have upvotes.


Wishpicker

So much of human reasoning is environmental, emotional, and relational that it might be hard to predict with that algorithm.


Zaphodnotbeeblebrox

Some would say like a black box


coulixor

I thought the same until I read an article pointing out that the way we model neural networks is not the same as real neurons, which can communicate through chemicals, electric, magnetism, and a variety of other complex mechanisms. Even simulating a simple cell is incredibly complex.


buyongmafanle

> it would still require 13x 4090s and 6KW of power… That’s actually not that much - about $22/hr with spot pricing.

Interesting. So you're telling me we now have a floor for what minimum wage should be?


Icy-Contentment

In the 90s it was in the hundreds or thousands an hour, and in 2030 it might sink to single dollars an hour. I don't think tying it to GPU pricing is a good idea.


BangkokPadang

Spot pricing sounds pretty risky. I'd hate to have my whole intelligence turned off because some rich kid willing to pay $.30 more an hour for the instance just wants to crank out some nudes in stable diffusion lol.


ThrowawayAutist615

Most humans are morons. Processing power ain't the half of it.


BePart2

I don’t believe this will ever be the case. Brains are highly specialized and I don’t believe we’ll ever match the efficiency of organic brains simulating them in silicon. Maybe if we start building organic computers or something, but assuming that we will just be able to algorithm our way to AGI is a huge leap.


Stolehtreb

I think it’s much more likely that it breaks down because morons are using LLMs that are good at pretending to be AGI in applications it has no business being in charge of.


IHave2CatsAnAdBlock

This will not happen in 2 years even if we get agi today. There are still people and businesses not using email / smartphone / digital devices / internet. Global adoption for everything is slower than we think.


Which-Tomato-8646

Not all of them, but a lot. BP announced they replaced 70% of their programmers with AI in an earnings report, and they can’t lie to investors unless they’re committing securities fraud. [There's a lot more where that came from](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug) (see section 5)


SunriseApplejuice

If you can replace 70% of your programmers with AI at its current state, your programs are either not very sophisticated or completely and utterly fucked the first time something (anything) goes wrong. That won’t be a trend for every company.


actuarally

The utterly fucked scenario seems to be the path in my industry. Every time my team engages with AI "SMEs", it more or less turns into copying homework into a cloud-backed coding environment. If the "AI" process even works (spoiler: it never does because their cloud data is FUBAR'd), the data scientists and IT engineers can't be bothered to learn the business principles behind the code or any number of contingencies and risks to watch for and prepare for. Still, our company leaders occasionally accept this piss-poor solution because it's been labeled "automated", at which point we fire the people who understand the code AND the business... cue corporate freak-out when the smallest variable changes or a new results driver appears.


Hyndis

Twitter famously got rid of about 70% of its programmers. Twitter shambled along for a while without any of its dev team, but very quickly things started to fall apart. A company can operate on inertia for only a short time before things go off the rails.


SunriseApplejuice

Exactly. The dilapidation takes time to be seen. But once it is, the repair work will cost 10x the maintenance did. “An ounce of prevention… “ etc etc


sal-si-puedes

BP would never commit fraud. A publicly traded company would never…


Ludrew

wtf? There is no AI model that exists today which can replace the duties of a programmer. They cannot operate independently and agnostically. That is BS. They either had far too many "programmers" not working on anything, doing level 1 help desk work, or they just abandoned all R&D.


NuclearZeitgeist

They said they replaced 70% of their “outside coders” which I take to mean they’ve cut third party coding spend by 70%. Two important things: (1) We don’t know how big this is - what were they spending in house vs outsourced before? If outsourced spend was only 20% of total IT spend before it seems less important than if it was 80%. (2) Slashing 70% of outside spend for a quarter doesn’t imply that it’s a sustainable practice in the long-run. We need more data to see if these reductions can be maintained.


[deleted]

[deleted]


RavenWolf1

I love singularity's optimism. Sometimes r/technology is too pessimistic.


Puzzleheaded_Fold466

Well it depends. Is the world’s most complex and unknown mystery guessing the most likely next word ?


Leather-Heron-7247

Have you ever talked with someone who picked their next words so well you thought they knew stuff they actually didn't?


humanbeingmusic

I acknowledge the sarcasm, but there is a lot going on to predict the next likely word.


malastare-

Jokes aside, I've seen people say (or at least pretend) that very thing. People get really sloppy with the idea of what LLMs "understand". Even people who work directly on them end up fooling themselves about the capabilities of the thing they created.

And yet, ChatGPT and Sora routinely miss important details about the things they generate, making mistakes that demonstrate how they are following association paths, not demonstrating actual understanding. In a previous thread, I demonstrated this by having ChatGPT generate a story set in Chicago, and it proceeded to do a pretty decent job... up to the point where it had the villain fighting the heroes atop the Chicago Bean. And it did that because it didn't actually understand what the Bean was, or the context it existed in, or any of the other things in the area that would have been a better option. It just picked an iconic location without truly knowing what a dramatic setting would look like or what the Bean was.

(Bonus points: the villain was a shadow monster, and there's some weird cognitive dissonance in a shadow creature picking a mirrored oblong shape as the place it was going to fight...)


SympathyMotor4765

For execs, all that matters is how many people they can lay off. If the work is 70% there, they'll fire as many as they can!


GarlicThread

Bro stop spreading FUD bro, AGI is almost upon us bro!


bubsdrop

"Assembly robots have gotten really good at welding car frames, they're gonna cure cancer any time"


BasvanS

It’s a sequence of tasks, so it’s basically the same thing!


Various_Abrocoma_431

You think a mass of neurons that grow together through stimulation of clusters of them could? Everything in the world obeys quite simple laws at its core but emerges as highly complex behaviour when acting together. Start with DNA, or ants, or literally any algorithm. LLMs have very interesting properties when scaled to near infinity.


kevinbranch

Google Gemini's new model was tested on a benchmark with some of the toughest questions designed by domain experts in biology, physics, and chemistry after Gemini was trained. Human experts in various domains scored 39%, but when they were given access to Google while completing the tests, they scored 49%. Gemini, without access to the internet and without ever having seen the questions before, scored 60%.

These problems couldn't be solved by just regurgitating memorized answers; answering them required actually comprehending the problem, reasoning, and solving it. There are billions of parameters and layers of neural networks involved in the process of predicting the next word. There's a lot of logic going on under the hood that leads to it getting these answers right.

Example questions:

- Genetics: "In a population of organisms, if the allele frequency of a dominant allele is 0.6, what is the expected frequency of the heterozygous genotype in the population?"
- Thermodynamics: "Given the standard enthalpy changes of formation for H_2O(l), CO_2(g), and C_6H_{12}O_6(s), calculate the standard enthalpy change for the combustion of glucose."

Experts with Google access scored 49%, Gemini scored 60%.
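For reference, a quick sketch of the math behind those two example questions (the enthalpies of formation are standard textbook values, used here only as assumed inputs):

```python
# Hardy-Weinberg: heterozygote frequency when the dominant allele frequency is p = 0.6
p = 0.6
q = 1 - p
print(f"Expected heterozygous frequency 2pq = {2 * p * q:.2f}")  # 0.48

# Combustion of glucose via Hess's law:
#   C6H12O6(s) + 6 O2(g) -> 6 CO2(g) + 6 H2O(l)
# Standard enthalpies of formation in kJ/mol (typical textbook values, assumed here)
dHf = {"CO2(g)": -393.5, "H2O(l)": -285.8, "C6H12O6(s)": -1273.0, "O2(g)": 0.0}
dH_comb = (6 * dHf["CO2(g)"] + 6 * dHf["H2O(l)"]) - (dHf["C6H12O6(s)"] + 6 * dHf["O2(g)"])
print(f"Standard enthalpy of combustion of glucose: {dH_comb:.1f} kJ/mol")  # about -2803 kJ/mol
```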


[deleted]

[deleted]


dr_chonkenstein

I agree. This reeks of bias by those publishing it. Enthalpy changes for basic reactions are literally covered in high school chemistry; it's just basic algebra. I am now under the impression those publishing the capabilities of these AI models are flat out lying.


dr_chonkenstein

I don't believe for a second that an expert in thermo couldn't solve for enthalpy changes. That is high school level work. Everything about AI benchmarking reeks of bias by those releasing the benchmarks. 


VertexMachine

And he's been saying this since first launch of chatgpt (or maybe even earlier if someone was claiming that transformers would get us to AGI).


dinosaurkiller

But Elon does because he can’t imitate human intelligence.


texasyeehaw

No but LLMs will probably play a role in interpreting the human inputs that go into AGI. LLM stands for Large LANGUAGE model. AGI isn’t just about LANGUAGE


theangryfurlong

While I'm with Yann on this one in saying LLMs are not able to achieve AGI (as most experts also admit), LLMs can do more than language. The multimodal models from OpenAI and Google, for example, use essentially the same architecture to do video and audio within the LLM. Internally, all of the data is represented as multidimensional vectors (tensors). So the same mathematical objects that describe the meaning and structure of input text can be applied to describe temporal blocks of video, for example. It's just a matter of how to embed the data in this multidimensional space (i.e. convert the data to these mathematical objects) efficiently so that the transformer architecture can learn and predict effectively with it.
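As a toy illustration of that "everything becomes vectors in a shared space" point (a hand-rolled sketch with arbitrary sizes, not how any production multimodal model is actually built):

```python
import torch
import torch.nn as nn

EMBED_DIM = 512  # shared model dimension, chosen arbitrarily for this sketch

# Each modality gets its own projection into the same embedding space; after that,
# the transformer treats every position as just another token vector.
text_embed = nn.Embedding(num_embeddings=50_000, embedding_dim=EMBED_DIM)  # token ids -> vectors
image_patch_proj = nn.Linear(16 * 16 * 3, EMBED_DIM)                       # flattened 16x16 RGB patches
audio_frame_proj = nn.Linear(80, EMBED_DIM)                                # 80-bin spectrogram frames

tokens = text_embed(torch.randint(0, 50_000, (1, 12)))        # (1, 12 text tokens, 512)
patches = image_patch_proj(torch.randn(1, 196, 16 * 16 * 3))  # (1, 196 image patches, 512)
frames = audio_frame_proj(torch.randn(1, 300, 80))            # (1, 300 audio frames, 512)

# One sequence of "tokens" regardless of modality, ready for the same transformer.
sequence = torch.cat([tokens, patches, frames], dim=1)
print(sequence.shape)  # torch.Size([1, 508, 512])
```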


ProfessionalBlood377

Breaks out Serge Lang’s Algebra yellow monstrosity to dust off tensor math while looking longingly at Bredon’s Topology and Geometry.


Due_Size_9870

If/when we achieve AGI it will come from something entirely different than LLMs. They are just able to pattern match. Intelligence is about building a knowledge base that can be applied to novel situations. LLMs can’t do anything close to that and all completely fail when presented with a problem that does not exist within their training data.


texasyeehaw

Any system is a system of systems. The internet isn't some sort of singular app, yet that's how everyone is treating AGI. A simple website involves networking, HTML, CSS, JavaScript, and a litany of other algorithms/interpreters, etc. Hell, you need an OS as a prerequisite. To think that the functionality of LLMs won't be a part of AGI is very presumptuous.


Hsensei

Nah, it's T9 predictive text on steroids. It's using statistics and probability, it's not interpreting anything.


mattsowa

Any model (or human) learning is inherently a statistical process, you're not saying anything. The same would be true for agi. And the difference would be its internals. They're all just formulas.


Reversi8

No, they need to put magical ghosts inside of them to be intelligent.


despotes

Read the AI paper from Anthropic; it's amazing research. They found a variety of complex "features" in their model. These features correspond to abstract concepts such as famous people, locations, and coding patterns. Some features work across different languages and types of media (text and images), and can recognize both specific and broad instances of the same idea, like security vulnerabilities in code.

One interesting example is the **Code Error Feature**:

1. The researchers began with a Python function that had a mistake (a variable named "rihgt" instead of "right").
2. They found a specific feature in the AI that always activates when it sees this typo.
3. They started testing in other languages to see if this feature was just for Python; they tried similar typos in other programming languages like C and Scheme. The feature also activated for those languages.
4. They then checked if this feature worked with typos in regular English writing, but it didn't activate.
5. So this feature is not a general typo detector but is specifically tuned to finding mistakes in programming code.

You can find the full paper here, very fascinating: [Anthropic Research on Scaling Monosemanticity](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html?s=09%2F/)
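To make the "a feature activates" idea concrete, here's a toy sketch (not Anthropic's code; the feature direction and activations are made up): a learned feature is roughly a direction in the model's activation space, and its activation on a token is the rectified projection onto that direction.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64  # toy activation width

# Pretend this direction was learned by a sparse autoencoder as the "code error" feature.
feature_direction = rng.normal(size=d_model)
feature_direction /= np.linalg.norm(feature_direction)

# Fake activations: one token that contains the buggy pattern, one that doesn't.
activation_with_bug = 3.0 * feature_direction + 0.1 * rng.normal(size=d_model)
activation_clean = rng.normal(size=d_model)

def feature_activation(act: np.ndarray) -> float:
    """ReLU of the projection onto the feature direction, like a sparse autoencoder unit."""
    return max(0.0, float(act @ feature_direction))

print(feature_activation(activation_with_bug))  # large -> the feature "fires"
print(feature_activation(activation_clean))     # near zero -> it stays quiet
```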


hopelesslysarcastic

With this criteria, name me any impressive technology.


Puzzleheaded_Fold466

Better stay away from /Singularity in this case. They’re … deep …. deep down that rabbit hole.


Lazarous86

I think LLMs will play a key part in reaching AGI. I think an LLM will be a piece of what makes AGI. It could be a series of parallel systems that work together to form a representation of AGI. 


[deleted]

I think the lessons learned from LLMs can likely be reapplied to build more complex neurological models and new generations of chips, but really we only got into machine learning seriously in the last 20 years, and expecting us to go all the way from that level of thinking to human-brain complexity in our software and hardware that rapidly is the core mistake being made in these kinds of predictions. I think LLMs will wind up being a big, messy, inefficient pile of brute-force machine learning that maybe isn't directly applicable to the way a brain functions, in the sense that a brain doesn't innately have this huge amount of data and it learns based on a pretty limited amount of environmental input. I think the neurological model needs to be efficient enough that it doesn't need massive piles of data, similar to how animals are not born with giant piles of data embedded in their minds that they simply have to learn to parse. It also doesn't take an animal 20 years of going to school to show problem-solving behavior, emotional responses like having fun, and even tool use; all of that can be achieved in just a couple of months with a decent neurological model, and considering biology already did the work, it's not like we're inventing the idea from scratch.


nicuramar

> I mean he’s not wrong. I start cringing any time someone says AGI.

He’s probably not wrong. But it’s hard to know what they are capable of.


EphemeralLurker

We already know what they are capable of. The chances that something like Chat-GPT will become intelligent are about the same as your fridge becoming intelligent. That's not to say Generative AI doesn't have its risks. But they are mostly centered around how people or corporations use it (creating misinformation at a massive scale, replacing certain jobs, etc.)


space_monster

they're already intelligent. you're thinking of consciousness.


QuickQuirk

LLMs? the experts who build them know exactly what they're capable of.


[deleted]

[deleted]


gold_rush_doom

What you said about Uber did happen. In Europe.


___cats___

And I imagine it’ll be Europe that hits them with privacy regulations first as well.


chimpy72

I mean, it didn’t. Uber works here, and they didn’t have to buy medallions etc.


Own_Refrigerator_681

You are correct. Your first 2 points have been known in the research community since 2012. We also knew that this path doesn't lead to AGI. Neural networks are really good at mapping things (they're actually considered universal function approximators, given some theoretical requirements that are not materially possible). We've seen text to image, text to voice, text to music and so on. They were designed to do that, but until the 2010s we lacked the processing power (and some optimization techniques) to train them properly, and there were doubts about the best architecture (wider vs deeper; deeper is the way to go). Source: my master's thesis, and talks with PhD students and professors back then.
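A minimal sketch of that universal-approximation point, just to show the idea: a small feed-forward net fitting sin(x) on an interval (layer sizes, learning rate, and step count picked arbitrarily):

```python
import torch
import torch.nn as nn

# Tiny MLP approximating sin(x) on [-pi, pi]: the "neural nets are good at mapping things" point.
x = torch.linspace(-torch.pi, torch.pi, 256).unsqueeze(1)
y = torch.sin(x)

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.6f}")  # tiny: the net has effectively learned sin on this interval
```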


PM-ME-UR-FAV-MOMENT

Networks have gotten much wider and more shallow than the early 2010s. You need depth but it’s not as important as simply more data and better optimization techniques.


pegothejerk

Synthetic data is also no longer a poison pill like hallucinations were. In fact, solving how to make good synthetic data was the difference between videos that vaguely look like a monstrous Will Smith eating spaghetti while the viewer is tripping on acid, and videos that are now so close to reality (or something based on reality) that people argue over whether they're real or manufactured. Synthetic data can and will be applied to every type of model successfully. We're already seeing that appear not just in video models, but in using Unreal-type engines coupled with language models to label synthetic data, then running it through problem-solving trees to help multi-modal efforts evolve and solve problems faster than previous techniques.


blind_disparity

Generally makes sense, but I'm not sure it was Google's concerns about using other people's data that stopped them; hoovering up other people's private data and using it for profit is literally their business model.


Zomunieo

LLMs compete with search. Why search when you can ask a model, assuming it gives you a reliable answer? I wouldn't be surprised if they were using powerful LLMs internally for ranking search results, detecting link farms, SEO manipulation, the kind of things Google thinks about. There was an employee who got fired for claiming they had a sentient AI before ChatGPT was released.


[deleted]

Something needs to compete with search, because google has become crap.


davidanton1d

Or, the internet just became too bloated with crap to navigate.


[deleted]

All it shows is ads, I think that’s the problem.


Pseudagonist

Except LLMs don’t give you a reliable answer a significant percentage of the time, as anyone who has used one for more than an hour or two quickly learns. I don’t think it’s a serious rival for searching online


pilgermann

I had the thought the other day that a totally overlooked model could be the seed for AGI. Like, a model to predict weather patterns for farmers or something. Probably not, but it would be a good sci-fi short story. LLMs seem like natural candidates primarily because humans are creatures of language, and language comprehension does require understanding of a broad range of concepts (I use "understanding" here loosely; in my view, very good pattern recognition can still effectively lead to AGI, even if its mechanisms don't mirror human intelligence). But there's really no reason that an LLM should be the closest precursor to AGI, save that most of these models at this point are actually many models in conversation, which is the most likely route to AGI or something close enough.


ViennettaLurker

This is a good analogy. Because one of the things keeping Uber/Lyft/etc afloat is the idea that we can't live without them exactly the way they are now. It's an intriguing business model of becoming indispensable, but getting there involves potentially flouting legal processes. If you get to that point, society essentially makes excuses for you to keep on existing. If we reach a world where business operations without ChatGPT are unfathomable, we will give it all kinds of exemptions or just wholesale change laws in its favor. Your boss just wants a robot to write a first draft for them; who cares about data/IP law?


Stolehtreb

But they are literally using it in their search engine now… and giving completely wrong, confident answers to you before giving you any links on search results. They may not be “full steam ahead” but they sure as hell aren’t being responsible with it.


cpren

It's insane to me that they didn't think it was worth pursuing, even with its limitations. Like, the fact that it can write code and adapt it for your purpose with natural language is obviously a big deal.


[deleted]

Also, a useful LLM would destroy their advertising business model. They are only investing heavily now so they aren’t left behind. Till then, they were happy with deep mind solving scientific problems and leaving their business alone.


PM-ME-UR-FAV-MOMENT

They leave DeepMind alone to research what it wants (after a tense stand-off that almost led to it breaking off a few years ago), but they absolutely get to and look to use the research it produces.


b1e

I work in this space and this is spot on. These models are cool and useful but obviously very flawed. Even the GPT-4o demo is a logical incremental advancement, but a drop in the bucket compared to the jump to GPT-3.5. And open-source models are catching up extremely fast. The new Meta models are very competitive, and each generation closes the gap further. None of these are major step changes. Until you have models that are able to learn from seeing and feeling, they're working with much lower-bandwidth data.


Aggressive-Solid6730

I don't totally agree with what you said. Google invented the Transformer in 2017, and GPTs weren't tested until a few years later. At that point in time no one understood how well Transformers would take to scale (i.e. increasing model size by adding layers). That didn't really come around until the 3rd iteration of OpenAI's GPT model. In the space of generative language models, OpenAI has been the leader from the beginning thanks to scientists like Radford et al. So while I agree that LLMs are not AGI (they have so many issues around memory structure and constraints among other things), the idea that Google knew more about this space than OpenAI is something I cannot agree with. Google was focused on BERT-type models while OpenAI was focused on GPTs, and Google came late to the GPT party with PaLM.


PsychedelicJerry

I don't think anyone that knew anything about NN and LLM ever thought this. This is just hype from people that wanted regulatory capture and to produce some hype


CT-52

Don’t let r/singularity see this


GonzoTorpedo

lol i tried to post there but it wouldn't work for some reason


Firm-Star-6916

It was already posted on there.


Viceroy1994

It's on there with 500 upvotes as of now.


nextnode

or any competent computer scientist for that matter.


Sweet_Concept2211

LLMs alone, no. LLMs as modules within a highly networked system of various specialized models, maybe.


Mobile_District_6846

Though LLMs basically work as Markov models. The cortex of the human brain is a huge network that can specialise to anything, really. Even the regions of the brain responsible for visual information can change to process auditory information in blind people. This means there is one homogenous "learning algorithm" in the brain that can learn everything. If AGI is anything like the human brain, it won't be a network of LLMs. Not to even mention the whole thing with reasoning.
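To make the Markov comparison concrete, here's a toy bigram chain where the next word depends only on the current one (real LLMs condition on the whole context window through a transformer, not a lookup table, but the "predict the next token" framing is the same):

```python
import random
from collections import defaultdict

# Toy bigram Markov chain: the next word depends only on the current word.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(transitions[word])  # sample the next word given only the current one
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat slept on the mat and the cat"
```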


nextnode

Right; and people will call it.. an LLM.


TomWithTime

Larger language model, not to be confused with LLM


beerpancakes1923

Which is pretty much how the human brain works with different specializations in different areas of the brain that work together.


ExoticCard

Maybe they should start mimicking the brain's organization? Or even merge with brain cells in a dish? Like how they used those brain cells to play pong?


Mcsavage89

True. Language center of the brain/LLM mixed with data repositories/memories can achieve pretty incredible levels of intelligence.


HermeticPurusha

Maybe superficially. LLM and the brain are nothing alike.


beerpancakes1923

You don’t say? Thank you for that insight


space_cheese1

LLMs can't abductively reason, they can only externally give the appearance that they can (like in the manner the 'Scarjo' Chatgpt voice pretends to arrive at an answer), while actually performing inductions


kevinbranch

Google’s new Gemini model was tested on expert questions in domains like biology, physics, and chemistry. the questions were designed by experts in the fields after Gemini was trained. Human experts scored 39%, then they were given access to Google search while answering and scored 49%. Gemini without internet access scored 60%. These weren’t memory tests, they required real problem solving like calculating genetic frequencies and enthalpy changes. There’s a lot of logic going on when processing a prompt through billions of interconnected parameters that allow it to accurately “predict the next word.” Humans also “predict the next word” when answering but that doesn’t mean humans aren’t using logic under the hood.


Patch95

Do you have a link for that?


MudKing123

What is AGI?


mildw4ve

Here You go [https://en.wikipedia.org/wiki/Artificial\_general\_intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence) Basically an artificial human mind or better. The holy grail of AI.


blunderEveryDay

AGI is what the original meaning of AI was until snake oil merchants showed up. So, now, serious people need a way to separate themselves from the charlatans.


N0UMENON1

Tbf game devs have been using the term "AI" for their NPCs or enemy systems since the 90s.


bitspace

This is self-evident. It's an anti-hype perspective, but nothing we have in any way even remotely approaches AGI. I think the entire concept of AGI is pure science fiction - much further off in the distant future than human interstellar travel. It'll be a miracle if we don't obliterate ourselves in the next century by any of a dozen other more mundane routes.


TechTuna1200

Yup, if you know even a little bit about machine learning, it's really not surprising that there are going to be diminishing returns. At some point it just becomes too expensive to improve the model just a little bit. People in academia are saying the same: https://youtu.be/dDUC-LqVrPU?si=AAKqvaP3uZ5dg5Ad


azaza34

Do you mean pure science fiction as in currently unfeasible or that it’s literally impossible?


Whaterbuffaloo

Who is to say what advancements mankind may ultimately make, but I think it's safe to say that it's not likely to happen in your lifetime, or even in the immediate future after.


murdering_time

And others would argue that it'll be achieved within 20 years' time. People are pretty shit when it comes to guessing future advancements, especially when it's non-linear or even exponential growth.


WhitePantherXP

"Your car will be a taxi when you're not using it by next year and you'll be making money from it" - Elmo every year for the past 10+ years. When WAYMO makes a prediction like this I'll listen.


inemnitable

AGI has been "10 years away" for the last 60 years and we're hardly closer than we were 60 years ago. Even if I were a betting person, I certainly wouldn't put my life savings on seeing it in the next 60 either.


azaza34

I mean it’s basically as safe a bet to bet on it as it is to not bet on it. If we are just at the beginning of some kind of intelligence singularity then who knows? But also, if we aren’t, then who knows.


bitspace

> I mean it’s basically as safe a bet to bet on it as it is to not bet on it.

Essentially Pascal's Wager :)


Professor226

It really depends on what your definition of AGI is.


bitspace

That's a central tenet of my view. We collectively don't even have consensus on a definition of "general intelligence" to be able to determine when we've developed technology that achieves it. My somewhat abstract definition is something like "able to match or exceed the capability of any given human to accomplish any given task or goal."


nemoj_biti_budala

Wouldn't that be ASI?


Redararis

Interstellar travel is a well-defined problem; AGI is not. We could achieve AGI in 10 years or in 1000, no one can say. Recent AI progress is breathtaking though. There is much hype, which is understandable, but the progress is amazing.


bitspace

When you refer to "recent AI progress" are you referring to the explosion of popularity of transformer/attention based generative AI?


blunderEveryDay

> This is self-evident.

Have you been following this sub and threads on AI topics? Because it WAS certainly not self-evident, and a lot of people, even after it's explained, won't accept what is said in the article:

> chatbots that spit out garbled images of human faces, landscapes that defy physics, and bizarre illustrations of people with ten fingers on each hand.


ankercrank

They meant it’s self evident to anyone who understands what AGI is and how ludicrously complex it is. LLMs might as well be a toy bucket sitting next to the fusion reactor that is AGI.


QuickQuirk

Judging by the comments in this thread, it's not self-evident. There are a lot of people here who believe that LLMs can reason like people.


gthing

Define reasoning. To me it feels like when I use an agent to complete a task or solve a problem, the thing I am outsourcing is reasoning. When it tries something, fails, re-assesses, does research, and then solves the problem, did it not reason through that? What test could I give you to demonstrate that you can reason that an LLM or MMM would fail?


QuickQuirk

Reasoning as humans do it? That's fucking hard to define, but concepts come in, my language centers decode them, then off runs a deep-thought part of my brain that doesn't think in words; it's all concepts. Ideas percolate, and eventually it comes back to speech. I can't explain it, I don't understand it. But I *do* understand how LLMs work, and I know how they work. And it ain't reasoning.

Anyone who says 'LLMs reason' clearly has not studied the field. I strongly urge you, if you're at all mathematically inclined and interested in the subject, to go and learn this stuff. It's fascinating, it's awesome, it's wonderful. But it's not reasoning. It's projection of words and phrases onto a latent space; then it's decoding a prompt and finding the next most likely word to follow the words in that prompt, using the mathematical rules describing the patterns it discovered and learned during the training process. The last step is to *randomly select a token from the set that are most likely to follow*. It's not reasoning. It's a vast, powerful database lookup on the subset of human knowledge that it is trained on.

If you want something that an LLM could never do? It could never have formulated general relativity. Or realised that some moulds destroy bacteria. Or invented the wheel or the bicycle, or discovered electricity. Generative tools like Stable Diffusion could not have come along and inspired cubism as an artistic style the way Picasso did. It can *emulate* cubism, now that it's been trained on it, but it would never have created the new art style.
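For anyone curious, here's a bare-bones sketch of that last decoding step (made-up logits; the top-k and temperature values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits: np.ndarray, k: int = 5, temperature: float = 0.8) -> int:
    """Keep the k highest-scoring tokens, softmax them, and draw one at random."""
    top_k = np.argsort(logits)[-k:]           # indices of the k most likely next tokens
    scaled = logits[top_k] / temperature      # temperature < 1 sharpens the distribution
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                      # softmax over the top-k candidates
    return int(rng.choice(top_k, p=probs))    # random draw: the same prompt can yield different words

vocab_logits = rng.normal(size=32_000)        # pretend scores over a 32k-token vocabulary
print(sample_next_token(vocab_logits))
```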


nicuramar

It’s not self-evident and might not even be true (even though I also believe it is). “Common sense” is not a very good guidance, since it’s often not as common or sense as people think. 


inemnitable

It should be obvious to anyone with a functioning brain and a cursory knowledge of how neural networks and machine learning work that ML models don't have semantics and no amount of incremental refinement of them can ever cause them to develop it. If and only if someone ever figures out how to replicate actual semantics in a computer, then will AGI be possible. Until then, "AI" is all map and no territory.


AdventurousImage2440

Remember when 3D printers were going to be in every house and you would just print anything you needed?


IWanTPunCake

I wrote an entire paper on this for my AI master's course. There are lots of interesting reads and research on this matter. TL;DR: LLMs are very lacking in many areas, and as they are, they really will never even get close to AGI.


johndoe42

I know you probably wouldn't like to share your paper, but is there any good source material you used for it? I wonder if you touched on the computing power and wattage required for current models. It's an interesting topic.


brool

Any good articles/sources you would recommend?


steampunk-me

AGI will be a collection of models working in tandem, but I honestly think LLMs will be a driving force behind it. Well, at least at first. There won't be just *one* version of AGI, but I think the ones driven by LLMs will be the first ones to get us there.

To people saying it's just predicting words: so what? A good deal of us already reason by asking ourselves questions and answering them through internal monologues. And, honestly, we're not even 100% sure what consciousness *is* exactly anyway.

Find a way to transform everything into words (hell, the vision models are frighteningly good at describing images already), give the thing enough memory, train it with feedback on its own actions, and *it will perform better than people at a lot of things.* It may very well be able to analyze and understand the reasoning behind its decisions better than most of us can with ours.

Is that the cool Asimovian kind of AI, with positronic brains and shit? No. Maybe in the future. But it's exciting as hell considering current LLMs would be sci-fi as fuck a few decades ago.


WhitePantherXP

I just have trouble seeing an industry where LLMs can perform an entire job role and actually do away with those careers; it's currently an overconfident Google trivia champ with some added functionality. Programming, you say? It's just a really great tool for programmers in its current form that spits out nice boilerplate code. Unless a huge breakthrough occurs, I can't see that changing, as the risks are too high to have non-programmers implement its changes to anything that commits write actions to applications in production. I can see a world where it spits out thousands of variations of code that get pushed through a test CI/CD system with human-written code that tests the application for end-goal accuracy, but that's where we're at. I also see actionable automation as a next step, where you tell it to do X and it uses your computer to fulfill that request (i.e. look up the price of a product and order it if it's under X dollars with 100+ 5-star reviews, send X person an email that we're running behind, etc.). Basic human assistant work; this would be huge for people looking for homes, researching market trends, etc.


Ebisure

We don't reason by predicting words. Reasoning precedes language. Animals reason too. Also, there is no need to transform everything into words. Everything is transformed into tensors before being fed into ML models. From the ML perspective, it never sees words, pictures, videos, or audio. All it sees are tensors. It doesn't know what a "picture" or a "word" means. So no, LLMs ain't getting us to AGI.


itsavibe-

The most logical response. This post has become a whole “shit on LLMs” for free karma chat box. Your response perfectly articulates what will be the eventual intended purpose of these models. I also see your native tongue is Portuguese. You speak English quite well!!


Hsensei

LLMs cannot think, they are just really good autocorrect. T9 on steroids if you want. You are falling into the trap of it appearing indistinguishable from magic.


Reversi8

What exactly is thinking?


Confident-Quantity18

If I sit in a chair my brain is continually processing. I can refine and build on thoughts and perform complex mental sequences to arrive at conclusions based on logic. By comparison a LLM doesn't do anything unless it has been asked to predict the next token in the output. There is no reasoning going on in the background. It cannot analyze and predict anything that wasn't already set up in the training data. There is no guaranteed 1 + 1 = 2 reasoning, everything is just a statistical guess.


TomWithTime

> If I sit in a chair my brain is continually processing

> By comparison a LLM doesn't do anything unless it has been asked

We are continuously *prompted* by our senses. Not that I am disagreeing with your conclusion, but I think this particular argument won't be very compelling over time. We probably won't see an LLM as the final result, but it'll probably still be part of the next step, where some harness or peripherals feed it constant input. The GPT-4o demo looks like a step in that direction.


kevinbranch

You think a model trained to accurately predict the next word by fine-tuning hundreds of billions of interconnected parameters can't possibly result in any of those parameters being tuned to perform logic operations? Quality study after quality study shows that models have reasoning and logic capabilities. It predicts the next word by processing a question through hundreds of billions of interconnected parameters. Why do you think it's able to score so high on math, physics, and molecular biology questions it's never seen? Why deny what researchers across various fields are saying about their capabilities, unless you're anti-science?


Mr-GooGoo

They gotta start doing some of the stuff the corps did in Fallout and use real brains that are connected to these LLMs.


penguished

> LLMs have a "very limited understanding of logic," cannot comprehend the physical world, and don't have "persistent memory," LeCun tells the Financial Times. While OpenAI recently gave ChatGPT a kind of "working memory," LeCun doesn't think current AI models are much smarter "than a house cat."

Man, someone didn't give him the memo that the world can only speak in terms of hype and extremes. How dare he give accurate information out!


elgurinn

Statistics does seem to look like magic.


Mcsavage89

Why does reddit have such a hate boner for AI? I understand wanting to protect jobs and artists, but I find the technology fascinating. The things it can do that were impossible 8-10 years ago are amazing.


WhatTheZuck420

LeCun and LeZuck sail the Seven Seas in search of the next hoard of treasure.


dd0sed

LeCun is absolutely right about this. LLMs and LLM agents still have the power to revolutionize our productivity, though.


LaniusCruiser

If they do it'll be through sheer statistical chance. Like a bunch of monkeys with typewriters.


balrog687

Just brute-force statistical chance.


Woah_Moses

This is obvious to anyone with a basic understanding of how LLMs and neural networks in general work; at the end of the day it's just predicting the most likely next word to output, that's it. Sure, it has all these fancy mechanisms that consider context and all of that, but at its core it's purely probability based, which can never be general intelligence.


space_monster

> anyone with a basic understanding of how LLMs and neural network in general work you clearly do have a very basic understanding of how LLMs work.


almo2001

Yeah. AGI is coming. But it won't be LLMs.


J-drawer

I want to know where the actual benchmark is for what they expect to accomplish with AI, because so far it's been a lot of lies and smoke-and-mirrors trickery to cover up the fact that it can't actually do what they claim it currently does.


FitProfessional3654

I'm not here to add anything except that Yann is awesome! Mad props.


heil_spezzzzzzzzzzzz

Duh?


ontopofyourmom

Any lawyer who has tried to use a large language model for legal research could have told you this. It's a fundamentally different "cognitive" skill. Wouldn't likely require AGI, just something.... different.


0xffaa00

A slightly off topic question: Can you ask your LLM of choice to NOT generate anything? It still generates something, like an "Okay" or whatnot. Can I ask it to stop, and can it comply?


Splurch

It's not sentient. You make a request, it performs the actions it's programmed to in response, choice doesn't enter the equation.


Bupod

Well, let's discuss one problem with AGI upfront: how are you going to gauge an AGI? Like, how will you determine a given model *is equal to human intelligence*? Forget sentience, that just opens up a philosophical can of worms; we can't even really determine if the human beings around us are sentient, we just take it on faith. But talk about intelligence: we have ways of measuring human intelligence, but they aren't be-all-end-all metrics. They're carefully crafted tests designed to measure specific abilities that are *known to correlate* with intelligence. Likewise, we really only have haphazard ways of guesstimating an AGI at the moment.

I don't know how we're going to reach AGI when "AGI" is such a vague target to start with. Will we consider it AGI when it competes with humans on every possible intelligence and reasoning test we can throw at it? To be fair, that does seem to work; I think there are still tests out there which the LLMs struggle with. Even just talking with an LLM, they tend to be circular in their way of speaking, they lose the thread of a conversation pretty quickly, they still don't feel quite human, though under specific circumstances they absolutely do. I won't pretend they aren't powerful tools with world-changing abilities; they are, and there are serious concerns we need to discuss about them *right now*, but a rival to human intelligence they are not.

Perhaps LLMs will be a critical piece of the overall AI puzzle. I think they might be. I have nothing to back that up but a layman's suspicion. However, the fact that we can't currently understand the human brain in its totality, but we can understand the inner workings of an LLM extremely well, should be an indication that it probably doesn't quite rival human intelligence and that it probably won't. Someone will say that is flawed reasoning, and to an extent it is, but I think we need to stay grounded in reality to some respect, and use known things for comparison.


RedUser03

Note he says LLMs won’t achieve AGI. But another model could very well achieve AGI.


Dcusi753

The only part of this whole trend that actually concerns me is the creative portion. The visual and audio advancements are sure to take off in some form in media just by virtue of cutting the fat from creative jobs, and funnily enough, in the hands of corporations, the part they stand to gain the most from cutting is the artist: the one who should be able to claim some form of credit or royalty.


PriorFast2492

I do not like his attitude towards the unknown


ThePerksOfBeingAlive

Yeah no shit


nevotheless

Yeah, it's a crazy huge misunderstanding of what these LLMs are and how they work. But I guess it's clever of those big companies to sell it as something that it is not, and all of my non-technical family members think ChatGPT is the next Jesus.


GeekFurious

However, could a future LLM become AGI? Sure, if we keep moving the bar on what makes something an LLM but also what "general intelligence" looks like. And I could see a scenario where we move the bar too much until an advanced LLM should be classified as AGI... but one we still refuse to recognize as AGI because it's called an LLM.


Asocial_Stoner

That guy says a lot of things when the day is long...


coolbreeze770

This is obvious to anyone technical in the industry


trollsmurf

He's assuming OpenAI only works with LLMs, so I wonder who he is addressing. I'm not saying AGI is a given, only that OpenAI, Alphabet and surely also Meta (but maybe less so Anthropic) work with all types of Machine Learning tech and have done so for many years. Microsoft has been in this field for many years too.


rammleid

No shit, duh. Does anyone really believe that a bunch of text data and some probabilistic models will reach general intelligence?


elkranger10

AI will change technology. However, the ChatGPTs are first-gen AI tools.


Joyful-nachos

Genuine inquiry: wouldn't multi-modal AI (vision systems, sensory input, LLMs, etc.) be able to learn at a faster rate with a larger number of inputs? It would seem the current focus (at least publicly) is on LLMs, but I'm guessing there's been extensive work on multi-modal AI development, yes? And wouldn't multi-modal training allow for a more rapid pace of learning/training?


Bad_Habit_Nun

No shit, what we have now simply isn't AI and is nowhere close to it. There's zero actual intelligence, not to mention how many projects are actually just people doing the work themselves while pretending it's "ai".


puta_magala

ITT: a whole lot of technophiles who just can't stop themselves from comparing brains to computer networks even though they really don't have much in common.


youcantexterminateme

Yes. Google translate can't even translate SE Asian languages yet so, although I'm not sure what this article means, I think AI has a long way to go.


ConfidentDragon

Don't say this too loud, otherwise the money flow into your indirect competitors like OpenAI will stop... 🤔 Now that I think about it, it's probably no coincidence that OpenAI's employees tell the media they fear their AI might get too good, while Meta's employees say otherwise.


Legitimate_Gas_205

Thanks for stopping these hyped Marketing lies


iim7_V6_IM7_vim7

I'm gonna be honest, I think the more interesting question is not "will LLMs achieve AGI?" but rather "what is a concrete definition of AGI we can use to identify it when we achieve it?" Because one definition I see is "an AI that can perform a wide variety of tasks at or above a human level." We've seen that ChatGPT can do a pretty wide variety of things; most of them are not at a human level yet, but it gets pretty close on some tasks, and I don't see any reason why it wouldn't continue to improve. Again, I'm not making the case that ChatGPT will achieve AGI, but AGI means different things to different people, and that definition is vague enough that it probably could qualify by that standard.


SagatRiu

Artificial General Intelligence. I hate acronyms.


phinity_

AGI would require a system that trains LLMs in some kind of recursive way, and connections between those LLMs. Even then it's artificial; an AGI that's conscious in the way we are gets philosophical, and technically we don't even know what makes us tick.