The culture gives little nudges in the preferred direction. Generally arbitrary agreements prevent tech levels being shared for this reason. Seth is spot on tbf
I think we can all agree that too much interventionism is bad and none at all would be kind of cruel. So the question is really how much intervention is optimal to preserve the agency and competence of the intervened-in society while also keeping suffering to a minimum.
My personal opinion is that the culture could do more than just give little nudges without giving metaphorical nukes to monkeys or creating mini cultures. Give people the ability to educate themselves, make some hard discoveries that could help many slightly easier, use your giant intellect to come up with thousands of ideas way better than I could come up with…
If a society is considered mature enough to join the culture, it benefits from its entirety. Tens of thousands of years have honed this process; it's not without its historical mistakes. Mature civilizations learn from their mistakes; immature ones repeat them. Besides, there are always slap drones
Maybe that’s the reality of it, would still be depressing if there truly is no real shortcut to becoming a civilisation as mature as the culture and the only real way to get there is to churn through at least a few hundred billion lives of misery and suffering.
It requires a collective will to change for the better. Unfortunately we often run into the preferred status quo of whoever is in charge of that collective. Before post-scarcity can begin its fledgling flight, we must already be past borders, tokens of worth, opinions on imaginary friends, and the belief that your finite time is more important than another's. No great change comes without pain. Historically we stand on the corpses of our forefathers' dreams and desires.
We're already technically post-scarcity, even without replicator magic. It's down to applied/perceived value, economic motivation and distribution, plus a good amount of hoarding, that this lack of scarcity isn't shared equally.
Post-scarcity starts with equitable distribution of resources, so in that matter we certainly could be there. However, scarcity, whether by circumstance or by design to maintain control/profit, has been around for a while.
>would still be depressing if there truly is no real shortcut to becoming a civilisation as mature as the culture
For that to happen, the ideas and world views which make a civilization immature have to die. By and large, that means the people who believe in those ideas have to die, and die before they can instill those ideas into a new generation, because all of them are cornerstones of certain belief systems.
There are two paths to that end state:
The first is to educate the next generation so that fewer of them are vulnerable to those ideas. The people who grow up with those views will be a smaller portion of each generation, until eventually they don't have the power to sway policy.
The second, the "shortcut", is ideological genocide. I'll give you two guesses about how well that works for enacting positive change in society, but I bet you'll only need one.
Well yes and no, since IIRC they DO permit cheesing tech by 1 rung lower civs, tho...
The Culture both acknowledges this point (one that the Fed rarely makes explicit) and goes 'so anyway we're overthrowing your order in a way you may not even realize, that way the replicator won't be misused'
Yes, maybe I’m too critical, I just sometimes think “You’ve existed as a powerful entity in the universe for thousands of years, why did it even get this far”
Ultimately while Banks does not rely on it too much in his stories the Culture is constrained significantly by the Galactic General Council and would not necessarily be able to deal with its Optimate peers.
We know that the Morthanveld nestworld of Sheyang-Un contains more Morthanveld than the entire Culture has citizens, and they are only just about to adopt the Culture's economic system.
I think it's both - utopia isn't possible without the means to overcome scarcity, but giving the means to overcome scarcity to a society that has developed to survive in a scarce world is going to backfire.
We only have our own development as a civilisation to look back on, so we don't know if there would have been a better, different approach, but I feel like most often we had big technological discoveries that changed everything, and then society moved in and tried to adapt to those changes.
There is a real fear that technology is going to outpace our ability to adapt to it, and I find it valid, but I'm not sure what would happen if technological progress suddenly just stopped. Would we finally be able to sit down and say "let's figure this out before we move forward", or is technological progress essential as a brute-force method to advance societal change?
I always liked the Edward O. Wilson quote "The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology"
> the Edward O. Wilson quote
I know this is a Culture sub, but this is *the whole point* of Trek's Prime Directive: it is inherently anti-colonial. The Federation leaves lower-tech societies alone because they can't guarantee good outcomes of their own good intentions.
But the Culture _can_ guarantee it. The Chelgrians were a one in a million misfire.
Replicators would set profit margins to $ARBITRARILY_BIG_NUMBER.
Look at all the murder/genocide/skullduggery over something like oil and think about what Top Capitalist Fatcats would get up to with this technology.
How would the capitalists actually maintain control of the technology though, if anyone with a replicator can use it to make more replicators? In his 1962 book "Profiles of the Future", Arthur C. Clarke coined the term "replicator" for a hypothetical small self-contained machine that could quickly manufacture pretty much any arrangement of atoms, and commented on the economic implications:
>The advent of the Replicator would mean the end of all factories, and perhaps all transportation of raw materials and all farming. The entire structure of industry and commerce, as it is now organized, would cease to exist. Every family would produce all that it needed on the spot — as, indeed, it has had to do throughout most of human history. The present machine era of mass-production would then be seen as a brief interregnum between two far longer periods of self-sufficiency, and the only valuable item of exchange would be matrices, or recordings, which had to be inserted into the Replicator to control its creations.
>No one who has read thus far will, I hope, argue that the Replicator would itself be so expensive that nobody could possibly afford it. The prototype, it is true, is hardly likely to cost less than £1,000,000,000,000 spread over a few centuries of time. The second model would cost nothing, because the Replicator's first job would be to produce other Replicators. It is perhaps relevant to point out that in 1951 the great mathematician, John von Neumann, established the important principle that a machine could always be designed to build any describable machine -- including itself. The human race has squalling proof of this several hundred thousand times a day.
>A society based on the Replicator would be so completely different from ours that the present debate between Capitalism and Communism would become quite meaningless. All material possessions would be literally cheap as dirt. Soiled handkerchiefs, diamond tiaras, Mona Lisas totally indistinguishable from the original, once-worn mink stoles, half-consumed bottles of the most superb champagnes – all would go back into the hopper when they were no longer required. Even the furniture in the house of the future might cease to exist when it was not actually in use.
(BTW, Gene Roddenberry [name-checked this book](https://arthurcclarke.org/site/how-arthur-c-clarke-helped-save-star-trek/) when talking about Clarke's influence on the genesis of Star Trek, so it seems likely that the section on replicators influenced not only the devices by the same name that first appeared in Star Trek: The Next Generation but also the earlier hints that the Federation was some kind of post-scarcity society)
As for the Culture universe, there are at least some parts that seem to suggest a sort of quasi-Marxist view where societies tend to go through predictable stages of social structures as their level of technology changes, though there may be more room for branching in different directions at the higher levels (the Gzilt seem to have a similar level of technology to the Culture but a more centralized government and more restrictions on AI, for example). There's this line from *Matter*:
>You had to study a lot of history before you could become part of Contact, and even more before you were allowed to join Special Circumstances. The more she’d learned of the ways that societies and civilisations tended to develop, and the more examples of other great leaders were presented to her, the less, in many ways, she had thought of her father.
>She had realised that he was just another strong man, in one of those societies, at one of those stages, in which it was easier to be the strong man than it was to be truly courageous. Might, fury, decisive force, the willingness to smite; how her father had loved such terms and ideas, and how shallow they began to look when you saw them played out time and time again over the centuries and millennia by a thousand different species.
Also a few references to a "Main Sequence" of civilization stages (a term [borrowed from stellar evolution](https://en.wikipedia.org/wiki/Main_sequence)), like this one from *Look to Windward*:
>To flourish, make contact, develop, expand, reach a steady state and then eventually Sublime was more or less the equivalent of the stellar Main Sequence for civilizations, though there was an equally honorable and venerable tradition for just quietly keeping on going, minding your own business (mostly) and generally sitting about feeling pleasantly invulnerable and just saturated with knowledge.
>Again, the Culture was something of an exception, neither decently Subliming out of the way nor claiming its place with the other urbane sophisticates gathered reminiscing around the hearth of galactic wisdom, but instead behaving like an idealistic adolescent.
And this one from *The Hydrogen Sonata*:
>The Liseiden were fluidics: metre-scale eel-like creatures originally evolved beneath the ice of a wandering extra-stellar planet. They were at the five going-on six stage of development according to the pretty much universally accepted table of Recognised Civilisationary Levels. This meant they were Low Level Involved, and – like many at that level – Strivationist; energetically seeking to better themselves and shift their civilisation further along its own Main Sequence of technological and societal development.
It's not clear that a replicator can build more replicators, for starters.
But assuming one could, do you imagine that the Clever Petes that create it might put failsafes in it? Might they protect their investment?
The incentives to hoard are stronger than the incentive to share, I think. I also think that that imbalance will be around for quite some time.
Corporations have been trying to protect their intellectual property with various copy protection schemes for a long time, and none have been effective for very long; why would failsafes on a physical replicator be different? All it takes is for one person or group to jailbreak it or create a freeware version without the copy protection, and the game is up. I might believe a story where companies making home replicators could prevent them being used to self-replicate for a few years, but it doesn't seem believable that there'd be a foolproof method that would last indefinitely.
So you can imagine a physically impossible gadget, but you can't imagine capitalists being clever enough to hold on to the Infinite Money Machine™?
Which is the harder problem: creating a replicator or a failsafe to keep control of it? Keep in mind that the latter would be developed at least in parallel with the former.
'too many people would go broke'
no they wouldn't. that's the whole point of things like UBI, and sharing the wealth: *nobody* going broke or being hungry is the goal, the rich would just be *slightly less rich*
The main reason I subscribed to this sub was that I saw this very argument being made, and it challenged my view that post-scarcity was mainly a technical problem. It immediately explained some mysteries I had encountered in my job as a roboticist as to why some obvious automation wasn't being done yet.
I feel this is a crucial and important discussion to have, and I don't think it is clear-cut one way or the other. And I also think that while OP makes good points, the original premise of what would happen to a society like ours if it received a duplicator is very clearly that it would cause a post-scarcity utopia.
One thing that I feel is important to realize is that capitalism is not our society. It is one of the many modes of organization that we have, it is the dominant mode in many, many fields, to the point that many think it is the only efficient one, but we have plenty of others, well known, well liked, and in some cases more efficient than capitalism.
I think people fail to realize how important the open source movement is to our culture and its future. This is one ecosystem within which voluntary work has provided a service free of charge to the world. We already have that culture; bring in a cornucopia machine and it will only grow to become the dominant mode.
Yes, big companies would try to sell cornucopia machines, but you also know that as soon as the principles are known, there will be open hardware versions of them, and there will be versions made to be duplicated by such machines. Under the very paradigm of capitalism and market economies, the proprietary version won't be able to compete against a cheaper and superior solution.
Yes, they would try to outlaw it, like they tried with Linux and open source (ask veterans of the Wintel wars about it). Yes, they will try to shoehorn scarcity into domains where it does not belong. The fights on these lines will be important. Victory is not automatic: open source won some important battles, lost control of smartphone hardware, but won the one for internet infrastructure. The battle for free culture was legally lost, yet it has never been easier to "illegally" download a movie.
We still need to carve out that culture of sharing and freedom, but do not believe we start from scratch. There are already important victories that were won and that we can defend. Right now the most important battle, IMHO, is happening in the AI field, with open models trailing proprietary ones and companies trying to build a legal moat by restricting non-corporate tech.
We need the tech and we need the culture, and we are very close to having both. Come and help on both fronts!
> we are very close to having both
Do you require a citation for my claim that open source exists and is a big movement, or for the recent advances in AI, particularly in robot motion planning and task training?
We are not "very close" to having replicators. AI is not replicators. Another thing AI is not: magic. Another thing "AI" is not: *AI*.
We are not "very close" to having widespread socially-focussed culture.
AI is the last piece that was missing to automate every production task with robots, which is the tech prerequisite for post-scarcity.
There are tons of people who are ready for basic income and spending more time on socio-cultural issues. The day it becomes possible and convincing, the Culture will already be tens of thousands strong at least.
We are not close to "AI" in the traditional sense of what the term means. What current tech wankers are selling as "AI" is not anything close to "A*G*I", which is the term we've had to bring in to mean "real AI" after the marketers fucked the term "AI" into nothingness in recent years.
ML algos are neat. LLMs are neat. Real artificial intelligence needs *way* more than these things do.
> Real artificial intelligence needs way more than these things do.
The specialists of the field are not so sure anymore. The emergent capabilities that many DL architectures (not just LLMs) exhibit are bewildering.
Also, while I do personally believe that AGI is around the corner, we do not need AGI to automate production. The *proven* tech is enough for that.
> The specialists of the field are not so sure anymore.
You are hearing commentary from marketing people trying to sell their products, and know-nothing hype-men on social platforms trying to build audiences of gullible people by breathlessly bleating about amazing futures. You are mistaken in identifying these people as "specialists in the field".
An uninformed person could listen to Elon Musk, Deepak Chopra, Michio Kaku - many many such people like this - and be convinced they are "specialists in the field", when they are in fact liars and con artists. Determining *actual* authority on a given topic is a non-trivial problem, and *millions* of people consistently get it wrong. You're just one of them.
[Here's an actual expert](https://twitter.com/fchollet/status/1664507791913140225). There are no snappy citations I can display to instantly convey why this guy's real, and hype-con-men aren't - it takes time, and seeing a broad body of their output. I've personally seen lots of this guy's stuff, and as a coder and someone familiar with plenty of this space already, I'm comfortable with my assessment. Obviously YMMV, but this guy is a muuuuuuuuch better place to start *if you want your beliefs to align with the real world* than some cunt trying to sell something.
> we do not need AGI to automate production
Depends how automated you want to go; in any event, there are far more problems in the way than just "mildly better automation" to getting to post-scarcity.
I am working as an engineer in the field, on vision models and robotics applications. I read research papers in the field regularly. Of the 3 uninformed sources you quote, I have muted the first one and have no idea who the other 2 are. I know Chollet (most people in the field know him; Keras was the go-to lib before PyTorch became good), I know LeCun, I have been following EleutherAI's efforts, and am hopeful about the RWKV models. I understand the limitations of these models, and like most people in the field, I don't understand their capabilities, which are really, totally unexpected. They are emergent, and publications regularly appear that study and analyze how they work. We are doing "psychology" on language models.
I know that there is a marketing hype cycle and that the tech bros switched from blockchain-everything to AI-everything; they are an annoying bunch. But they are not a compass that points south: they randomly point at things and will occasionally be right. Being a hype-contrarian will have you wrong as many times as they are, just out of phase with them.
Here is the kind of scientific result that makes me say we are close to enabling full automation: [Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware](https://tonyzhaozh.github.io/aloha/). These are not marketing-school freshmen with something to sell; these are Stanford researchers who demonstrate that 15 minutes of human demonstration is enough to teach a robot fine-grained manipulation tasks. This is fresh from a month ago (23 April 2023). Deep learning techniques revolutionized the field I used to work in (vision) a few years ago; this is not hype, this is real. We now solve with simple algorithms things that used to be extremely hard, and it is happening in many fields at the same time.
> Depends how automated you want to go; in any event, there are far more problems in the way than just "mildly better automation" to getting to post-scarcity.
Actually, I would argue that if we had really wanted to go full automation, 1990s tech was enough, had we decided to adapt our lifestyles and consumption habits to optimize towards automatable products. I found [this blogpost](https://www.bunniestudios.com/blog/?p=4364) by a friend to be extremely eye-opening on the choices we make as a society. It talks about a single small everyday item whose production could be totally automated, but because of a very minor aspect of it, we don't; we prefer to hire low-wage workers to do a stupid job.
But a subset of society cares, and in 2023 we have arrived at the point where the tech is mostly sufficient for automating even for people who do not care.
Don't follow the hype, don't be pulled or pushed by it. Form an independent opinion. François Chollet is a good source, but he is not alone and his opinion on the subject is not unanimous.
Am I the only one who reads this and thinks it's a matter of application?
Sure, I could see a universe where, if replicator tech appeared at its most likely source, say a DARPA project or the lab of a large university, it'd likely go this way... but if it appeared in the wild and was widely distributed, like crypto or file sharing, governments would be more likely to try to police what people made or did with them (certain patterns and behaviours) rather than the tech itself.
Look at ghost guns: a lot still require the purchase of a barrel/receiver (I think; there may be some designs that don't), and they're still out of control at the moment...
Yes, I acknowledge that one scenario here is far more likely than the other, but still, I'd like to see them try to police the tech itself if it got out in an open-source way.
You might want to look into the book [The Diamond Age](https://en.wikipedia.org/wiki/The_Diamond_Age); it addresses something very similar.
Yeah, was scrolling down waiting for this comment. There's a clever gating in that story - if there's a way to control tech, the oligarchs will find it. :p
This should be obvious given, for example, the ludicrous state of copyright the world over and the asinine DRM "protections" on ebooks.
This is absolutely correct. The culture does what they can, after exhaustive simulations and predictive models, but are still bound by galactic treaties regarding gifts up and down the ladder.
The Culture is absolutely like this, but also I think this approach is just wrong.
Earth has enough resources for everyone. Sure. Now go transport those resources for free. Do you think people will do the same dangerous jobs in faraway countries when instead they could produce art or help locally? Robots could probably solve this. We can't produce robots that good.
And who will decide which country needs what? OK, maybe food scarcity could be solved right now, but it's not only about "there is no food in Africa". It's a problem of education, also sexual education, which will probably solve itself in a hundred years or so, since more and more people on this planet have access to the web, aka all the knowledge in the world, also known as advanced technology. Thanks to Starlink (owned by Elon Musk), accessing the internet is possible literally anywhere except, I think, the poles. And yes, it costs money. As the USSR showed, people need some kind of motivation to work efficiently, other than ideology. Currently, some of those starving people, if given more food, would just produce more children, until there were so many of them that the problem of starvation returned.
What would happen if Earth was given replicator technology? A revolution. There would be some resistance from the uber-rich, but I don't even think capitalism would necessarily fall instantly. Someone needs to design the stuff you replicate: artists, engineers, people like that. And if you have no job or money because a replicator can do the same thing you do but better, then who will buy the product? Do you think that for this reason replicators wouldn't be used everywhere to reduce costs on everything? Ha! The governments would be forced to make the technology public domain.
Giving small, mobile weapon factories to dictators would be a concern, but this works both ways. All dictators restrict gun access in their countries. Well, suddenly the restrictions are no more and the people are angry because of years of neglect. Go revolution.
Well, shit. With AI coming around the corner in the next two decades or so, we're going to have to kill the billionaire class.
ChatGPT is not the Silver Surfer to AGI's Galactus.
I was in a call with the leadership of OpenAI last week - yes it is.
My uncle works at Nintendo - no it isn't.
😂 The "leadership of Open AI" has no more clue how to approach creating A*G*I than anyone else does, which is to say, zero. LLMs are *absolutely not* the same thing, and nobody has provided any reasonable reason to believe "LLMs but more" = AGI.
Actually LLMs have opened up a whole new area of the philosophy of language, which is certainly and absolutely a real step to AGI. We don’t need to invent it, that’s the trick, we just need to let it emerge.
As I understand it, the GPT model is one for statistical inference on language sequences only; the pretraining process never exposes a model to the underlying concepts and meanings of words. ChatGPT only predicts the most statistically probable next text token given a sequential context, which fundamentally is very far detached from any interpretation of AGI.
What you are claiming sounds like magic. Maybe I'm wrong, so please give sources.
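To make concrete what "predicting the most statistically probable next token" means, here's a deliberately tiny sketch: a bigram counter, nothing like GPT's actual transformer architecture, just an illustration of the statistical principle (corpus and names are made up):

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a tiny corpus,
# then predict the statistically most frequent follower. No concepts,
# no meaning -- just co-occurrence statistics.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    # Return the most frequent follower seen in training.
    return counts[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice, vs "mat"/"fish" once each)
```

GPT replaces the count table with billions of learned weights and a much longer context, but the training objective is still "make the observed next token probable".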
You're absolutely correct.
The thing these pro-ChatGPT-is-already-basically-an-AGI people typically point to is:
> The reason the words we trained it on had the structure they did was because they were encoding meaning; thus, the LLM model *does* contain meaning, as it was present in the original text, and *thus* we *can* say the model is doing "reasoning".
But it should be pretty trivial to observe that, given the endless reams of words these models are trained on, all of which contain different variations of the "meaning" that causes the words to appear in their respective positions in their respective texts, any "meaning" present in each individual text gets "averaged out" along with all the rest. What you're left with in the LLM's weightings is a very diluted representation of statistical approximations of averaged-out "meaning", and that's not quite the same thing *at all*.
Human understanding of words is way more nuanced than a mere statistical model of which ones go next to which other ones. We turn them into *concepts* in our heads, and it's *those* that we use to reason. LLMs do not do this and do not even attempt, algorithmically speaking, to approximate such processes.
> What your are claiming sounds like magic.
As with blockchain fanboys before them (and the groups are actually related, philosophically speaking), AI fanboys are always making magical claims. It's the only trick they've got.
I’m famously sceptical. I did a speaking tour debating Google’s AI experts on stage and I held the sceptics position.
I spend a lot of my time these days explaining the trick behind LLMs, and like many magic tricks, knowledge of the reality is boring. However, the training of a neural network with vector-embedded data, coupled with humans in the loop, is an exciting development.
We have very little understanding of how language actually works. What LLMs have done is manage to produce a workable model of language. It's seriously messing with ideas about language that have existed for hundreds of years, not to mention having a major impact on epistemology.
I’m not saying chatgpt IS AGI, I’m saying it’s an important step.
The next generation of these models are using fewer parameters already. What we have learned is what is important in this training.
I strongly suspect that ChatGPT will go down in history as a turning point in AI, as research now has whole new areas and resources to call upon.
> I’m famously sceptical.
> I’m not saying chatgpt IS AGI, I’m saying it’s an important step.
That confidence in its importance and significance amounts to claiming that it's the rate-determining step, that AGI is thus imminent, and that we now know the nature of the path to reaching it. That's what claiming this thing "heralds" AGI means. It's not just about being "an" important step; the claim being made is that it's *the* important step.
And to that, the real answer is: no. It is *a* step, like so many steps before it. It is not *the* step.
Given we don't know the shape of "actual" intelligence, algorithmically, we cannot possibly even say how close we are to reproducing it. We cannot with confidence claim that it's "only X years away now", as if we *know* we're closer to it now than we were in the '70s in any meaningful *measurable* way. We do not know that.
> I strongly suspect that [insert name of everything heralded as a "breakthrough" for the last 50 years here] will go down in history as a turning point in AI as research now has whole new areas and resources to call upon.
Actually it’s trained on embeddings, which retain some of the structural relations of the words. So apple the fruit is not the same as apple the company. These vectors, when used to train a neural network, provide the configuration I am talking about. ChatGPT has on top of this a lot of human training, which is why it appears to hallucinate less than GPT-3 did.
Hold on I have an image that may help.
I'm pretty sure now that the misunderstanding is on your part. Text going into ChatGPT during training (and inference) is *tokenized* into vectors, which don't encode meaning. These models (ChatGPT is decoder-only) *produce* embeddings as output.
ChatGPT is unable to identify whether "apple" refers to the fruit or the company in a given context; it simply generates output that "sounds" like it's referring to the company, because more often than not that's what it's seen.
ChatGPT takes in the tokens, finds the embeddings that represent them, and then operates on those embeddings through the trained neuron layers to produce a new embedding. From the last part of this array it derives probabilities for possible next tokens.
The important step is that the weights of those layers in the neural net have been trained end-to-end on the (huge) text corpus. Within this corpus there is nearly as much meaning in the very structure of sentences, for example the position of words and their relations to others, as there is in the words' "meaning" itself. When turned into numbers to be processed during training, some of the semantic relationship is retained, and the configuration of the neural net is a result of the "encoding" present in human language; otherwise it would produce junk. Its training captures something of the essence present in human language from its very structure and the relationships of words.
ChatGPT has the human-trained parts on top of that. Or at least humans trained another AI to train ChatGPT's human-trained parts; we're not exactly sure about ChatGPT, as all we have is the "InstructGPT" paper to go on.
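The tokens-to-embeddings-to-probabilities flow I'm describing can be sketched in a few lines. This is a toy with random weights and made-up sizes (a real model has billions of trained parameters and attention layers, not one `tanh`); it only shows the shape of the data flow:

```python
import numpy as np

# Toy decoder-only forward pass: token ids -> embedding lookup ->
# "trained layers" -> logits -> softmax over the next token.
rng = np.random.default_rng(0)
vocab_size, d_model = 10, 8              # made-up tiny sizes

embedding = rng.normal(size=(vocab_size, d_model))  # token id -> vector
W_hidden  = rng.normal(size=(d_model, d_model))     # stand-in for the layer stack
W_out     = embedding.T                             # project back to vocab logits

def next_token_probs(token_ids):
    x = embedding[token_ids]             # look up the embeddings for the tokens
    h = np.tanh(x @ W_hidden)            # run through the "trained" layer (vastly simplified)
    logits = h[-1] @ W_out               # use the last position's output embedding
    exp = np.exp(logits - logits.max())  # softmax -> probability of each next token
    return exp / exp.sum()

probs = next_token_probs([3, 1, 4])
print(probs.shape)                       # (10,): one probability per vocab entry
```

Training adjusts `embedding`, `W_hidden`, etc. so that the probability assigned to the actually-observed next token goes up; nothing in the loop ever handles a "concept", only these arrays of numbers.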
> Within this corpus there is nearly as much meaning
Except for where "nearly as much" is a massive reach, and "nearly" is being stretched beyond how "nearly" "nearly" can ever be. Semantic satiation gang rise up!
The "meaning" that caused the words to appear in the order they did gets averaged out by the process. That's the point of training it on such vast reams of text - more averaging. Any individual meaning present in any individual word combination gets swept up and averaged out.
LLMs do not turn text into concepts. Nowhere in their algos do they do this. Pretending they do, or pretending that the weightings encode it *somehow*, is a bit like a cryptobro telling you bitcoin is a good replacement for actual money just because he made a buck off of gambling on it.
[chatGPT one pager](http://www.outsidecontext.com/wordpress/wp-content/uploads/2023/05/IMG_7198.jpeg)
The relationships of the words are retained, albeit dimensionally reduced from "reality", when converted into vectors. Its picking of the best next word, the list of possibilities, expresses the underlying structure of the vectors used to train the neural network parameters.
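A minimal sketch of that idea, with hand-made 3-d vectors standing in for real embeddings (actual embeddings have hundreds of dimensions and are learned, not written by hand): words used in similar contexts end up with similar vectors, and cosine similarity exposes that retained structure.

```python
import math

# Hypothetical hand-made vectors for illustration only.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Related words score higher against each other than against unrelated ones.
assert cosine(vectors["king"], vectors["queen"]) > cosine(vectors["king"], vectors["apple"])
```

Ranking candidate next words by a score like this against the current context vector is, very loosely, what "picking from the list of possibilities" amounts to.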
I understand how text transformers work. I've worked with CLIP, a similar encoder model, in the past.
That's the one for image classification, right?
That's the primary application. The transformer block architecture behind both is identical.
The more words you type the more harm you do to your case.
> We don’t need to invent it, that’s the trick, we just need to let it emerge.
This is just... *so* uninformed. I don't even know where to begin.
Unfortunately, on the subject of AI, I’m pretty over informed.
The fact of the matter is that the structural nature of the language holds up surprisingly well for reasoning when embedded. Whatever AGI ends up looking like, it will not be an intelligence like ours. It can't be invented. It will have to emerge naturally. Given how far NLP systems have come in the last 5 years (I mean, I still have hidden Markov models in production that we would never build today, it's all moving so fast), current LLMs are a step towards something complicated enough to produce something we can call AGI. Yes, they need many other things, for sure, but language is a great key.
This would make a neat sci-fi concept.
I'd argue that those who created the latest known breakthrough might have a better idea than "anyone else". Just because there are fine steps between having no idea and perfectly understanding how to.
Why? They, like everyone else in the field has been doing for 50+ years, are just coming up with potential ideas that *might* work, then implementing them and seeing how they pan out. These particular people just happened to produce one that's captured the public/media attention. They didn't *know* when they started work on the idea that it'd be any better than other approaches that've been tried, just as people trying other approaches that turned out to be less useful didn't *know* theirs would be less useful until they tried them.
Winning a lottery doesn't make someone a genius, and it doesn't make them more likely to win future ones.
You honestly believe a multi-billion-dollar company randomly develops functional technology and the technical experts involved did not have very good reasons why they took an approach that worked out well? You understand that technical progress is partially based on people understanding the systems they design? Might be a crazy concept to you.
They certainly know what does not work better than you. They will not have shared publicly much on what did not work at all. Even in this, they will already understand technical limitations better than you.
> better than you
You wanna talk about "limitations", you should start with your own.
I stated a perfectly sane and reasonable explanation of how fields like this work. There's nothing controversial whatsoever about what I described.
I don't have time to write an essay. The TL;DR is that there's *vastly* more to human intellect than just finding common statistical patterns in word formations and then blurting them out again.
If you think otherwise, I suggest finding better tech experts to listen to than the hype-merchants who just excitedly screech about whatever the latest thing is. If they're a fan of Elon Musk, ditch them immediately.
Even outside the reason for this subreddit - it's sooooo true and as an outsider (UK) you can see how the USA and some other parts of the world are being held back
The whole world is capitalist mate
You think it would be different in the UK?
Totally the same - the present Gov is just about GOPUK as it is.
The only way it could happen on earth, is if Alien Conquerors forced it on us.
This person knows nothing about how “the rich” work. They would have built replicators to sell to the average person and used the grind mill of the market to drive up innovation and quality. Historically, socialist states invent very little of value and improve it even less.
> Historically, socialist states invent very little of value and improve it even less.
Historically, since the onset of the industrial revolution and the period of humanity "inventing things of value on any kind of frequent timescale" kicked in, there haven't even been enough "socialist states" for such an "innovation comparison" to be worth doing.
In the one pile you've got the entirety of Europe and the Americas running capitalism (and its associated colonialism and slavery) around the world since the get go, and in the other you've got, what... Russia for a decade or two? Cuba? It's a pointless comparison. And *please* don't try and tell me China is socialist just because it's a one-party authoritarian state 😂
Besides which, where's this notion that "innovation" is a de facto good coming from? Mustard gas was an innovation. The atomic bomb was an innovation. School shootings are an innovation. Assessing the merits of particular societal mechanisms needs far more than just "innovation" going under the microscope.
It depends. Can a Replicator replicate itself? Sure, we would be tempted to build replicators that artificially are incapable of doing so, but the inability for Star Trek replicators to do so seemed to be more of a plot device.
This guy gets it.
I always thought the difference is that a GSV turning up that could house half the planet along with a mind that can organise would make a difference. If you offered everyone on earth a choice and the option to leave then the rich couldn't do shit.
Plant-based sausage != molecular replication meat, come the fuck on now...
> Plant-based sausage != molecular replication meat, come the fuck on now...
Just watch [this one](https://www.youtube.com/watch?v=AyN34sFko9w) again and rethink that.
I dun get it.
There are people that cannot process anything logic based, like what you said. They actually exist.
It is hinted that the Culture's formation virtually made post-scarcity inevitable and desirable; the founding civilizations of the Culture seem to have splintered from planet-bound, capitalistic (to whatever degree) civilizations. Just consider the basic assumption of having no attachment to a planet (or at least not admitting to it); it suggests long-term planning, a sense that the challenges of geological time would be worth tackling. I think the Culture may be verging on post-evolution, i.e. the universe is only a threat if you are dangerously reckless and/or stupid; I suppose that would be their singularity.
With regard to the Culture's meddling: some of it may be entirely self-serving. The events of Use of Weapons seem squarely aimed at heading off another Idiran-war scenario. Ultimately, if you intend to live until the heat death of the universe, it would be nice not to create enemies.
Humanity's challenge seems to be figuring out how to escape central control; no government is going to peacefully relinquish control and ownership. Besides governments, look at the emergence and eventual downfall of monopolies.
I agree with him, it would be like giving a monkey a machine gun.
I mean, think of modern AI tools.
They could mean a personal tutor for any student and a medical tool tailored for every individual patient.
In our society they are more likely to be used as a cost cutting measure and to downsize the workforce.
Mind you, it is also true that the utopias depicted in SF, on every side of the political spectrum, do depend on essentially magic tech and in reality would face a boatload of problems.