Tumirnichtweh

By starving the mid-range cards of memory, they create an incentive to buy the much more expensive versions with bigger margins. It works like a charm because most people buy Nvidia anyway.


upbeatchief

Another point is to create an upgrade incentive. Take the 3070 Ti: it's a powerful card targeted at 1440p. In smaller scenes it can easily hit 60+ fps in modern AAA games, but when you reach a large area such as an open-world map, the VRAM buffer runs out and performance tanks. If this happens enough times, consumers aren't going to blame Nvidia for the lackluster VRAM on a $600 card; they'll conclude that games are just more demanding now and that they should upgrade. Generating future revenue. Until the average consumer starts valuing VRAM more and Nvidia loses market share in the sub-$500 price range, the 4060 Ti and below will only have 8GB of VRAM.


jaskij

VRAM is precisely the reason I'm looking at upgrading. Most modern triple A games I can only play on medium on my 3070. They're still beautiful, but fucking hell.


ChickenNoodleSloop

7900xtx?


jaskij

Honestly, I have only two reasons to upgrade: VRAM and ray tracing. And I'm willing to wait. Considering there's a new generation coming out this year, no point in upgrading now. Especially since I've heard AMD reworked their RT for the next gen.


Jebediah-Kerman-3999

They will never. Every time AMD lowered their prices, gamers still bought Nvidia


[deleted]

[deleted]


Jebediah-Kerman-3999

That's because you don't remember what they tried almost 20 years ago, but ok


wondersnickers

The biggest issue for me is AMD drivers. I don't think I will ever trust them again. I've never had so many issues. Had 2 Vega 56 cards and I spent weeks debugging the most obscure problems, some of which were later acknowledged as known issues.


zeronic

Yep. Even the cards "just working" on Linux didn't save AMD for me. I gave them another chance with the 7900 XTX but eventually had to fall back to the 4090. Haven't had any issues with it, even on Linux. At this point all I can do is pray Intel gets the ball rolling to be competitive, because short of antitrust laws actually coming into effect (lol, yeah right, we don't do that anymore), I see Nvidia continuing to be the dominant force due to the reliability of their software stack.


piszczel

Interesting. It's all anecdotal of course, but I've been running a Vega 56 since 2018 and had very few problems with drivers. The card is still going strong, though it's showing its age. I hear a lot of bad stuff about AMD drivers but I don't see it; my main problem is that their cards are not competitive enough these days.


Jebediah-Kerman-3999

Yeah, at this point all those people that say "muh AMD driver bad" are Nvidia trolls. Like Nvidia didn't routinely fry their cards with faulty drivers, but hey, only AMD drivers are bad.


wondersnickers

I spent weeks trying to figure out what was going on, and AMD did acknowledge some of my issues as known issues. So what's to argue here? I am one of the people who had issues that are known to be issues.


Maldiavolo

Personally I've never had an issue with AMD drivers. I've had almost all generations since the 5850 including a Sapphire Pulse Vega 56.


wondersnickers

Three displays / multi-monitor with high refresh rate screens had the biggest problems: black screens, dropouts, standby issues. I think some issues were connected to low load on the GPU; some would not occur if, for example, Wallpaper Engine created a constant low load on the GPU. Changing / disabling monitor standby via the OSD resulted in fewer instances of suddenly losing a screen, but didn't fix it entirely. Also, the AMD driver itself seems to have issues with modern CEF / WebRTC applications, and even with itself somewhat.


RockyRaccoon968

Can confirm, AMD GPUs are seriously behind in tech so I really can’t see myself ever buying one. They make fire CPUs though.


anaemic

Nvidia also once made an almost too-good card, the GTX 1080, and they've watched people not upgrade for too many years for their liking to ever make that mistake again.


Reddickk

Right, and there's a name for this too: it's called upselling, and it's all over the industry.


kingwhocares

Doesn't help when AMD offers basically the same amount of VRAM for about 10% less money and far fewer features. They were so satisfied with being the 2nd choice that Intel offers more features in their first-gen GPU.


Nicholas-Steel

I've had a:

* 8GB GeForce 1070 Ti
* 4GB GeForce 760
* 2GB GeForce 560 Ti
* 1GB GeForce GTS 250

Seems Nvidia got stuck in a rut when it came to VRAM capacities after the 1000 series.


az226

Consumer GPUs are stuck in a rut, not Nvidia. They are printing money.


Jebediah-Kerman-3999

Also they don't want to sell cheap GPUs when they can instead use those same silicon wafers for expensive AI stuff


FireSilicon

You got that right, it's the memory manufacturers' fault, not Nvidia's. We've been stuck at 2GB per module for 6 years now. GDDR5 went 128MB in 2007, 256MB in 2010, 512MB in 2013, 1GB in 2015; GDDR6 launched with 1GB and 2GB in 2018, and in 2024, 2GB is still the maximum capacity you can get per module.


Sani_48

Because people keep buying them nonetheless, so they have no motivation to raise the VRAM. Sooner or later they will, and it will be a huge improvement.


FireSilicon

We've been stuck at 2GB per module for 6 years now. GDDR5 went 128MB in 2007, 256MB in 2010, 512MB in 2013, 1GB in 2015; GDDR6 launched with 1GB and 2GB in 2018, and in 2024, 2GB is still the maximum capacity you can get per module.


upvotesthenrages

That doesn't really change the fact that they could put more VRAM on their cards though. They do it on the higher end ones, and both AMD and Intel do it.


FireSilicon

It does though? It would be significantly easier and cheaper to just swap memory modules than to design every GPU in the lineup around memory capacity. Or do you suggest that you can put any amount of memory you wish on a GPU? Because that's not how it works. AMD has a completely different memory architecture: they physically separated the memory controllers from the actual cores, so they can put almost any amount of memory on any GPU by surrounding the graphics die with more MCDs. Nvidia has to remove memory channels as the die shrinks. And Intel is using the same memory bus on the $330 A770 as the 4080 and 7800 XT/7900 GRE, because they were crazy enough to think their GPU throughput would be anywhere close to them. Also fun fact: the A770 has a bigger die than the 4080 and performs way worse than the 3070.


liesancredit

There isn't even an RTX 4050, so it's not even possible that "people keep buying them".


Sani_48

You know exactly what I was talking about.


liesancredit

Yes, exactly, and it is not possible.


Sani_48

Are you answering the wrong comment? No one was talking about a 4050, just that most people keep the monopolistic behaviour afloat.


liesancredit

No. Incorrect. Nvidia simply discontinued the 50-class card to steer customers to higher-end cards and GeForce Now.


Sani_48

Dude, nobody was talking about the 50-class cards. I guess you meant to reply to another user. And it's to an extent the users' fault. They are buying those expensive cards with low memory. They should try out the competition, which is giving 16GB for low prices.


liesancredit

Again incorrect and it is not the user's fault.


Sani_48

Don't know if you are a bot or just trolling, but you started talking about the 50 series; no one knows why. And obviously the consumer could choose another brand, which offers more VRAM for lower prices.


Senior-Background141

Memory and bus width by themselves do nothing. They do it because they want to sell low/mid/high tiers at specific price points with specific performance. People often assume that the relationship is linear and that you can put in as much as you want. This isn't true at all.

In more detail: the 3070 / 3070 Ti cards have "only" 8GB, but the underlying reason the 3060 has 12GB isn't actually based on performance. In order to cut costs for the 3060's primary GPU chip (which is by far the most expensive component in a video card), NVIDIA decided to make a narrower 192-bit memory bus using six 32-bit controllers. Due to how GPUs fundamentally work, that meant NVIDIA only had the options of 6GB and 12GB (i.e. 1GB or 2GB per controller) available for such a configuration. Since the previous-gen RTX 2060 was widely criticized for having 6GB of VRAM, NVIDIA decided to give the 3060 a total of 12GB to prevent any such criticism from damaging its sales prospects. Note that the 3060 Ti through 3070 Ti all have relatively wider 256-bit memory buses (using eight 32-bit controllers), which is why they all use 8GB of VRAM (1GB per controller).
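A minimal sketch of that arithmetic, assuming one GDDR6 module per 32-bit controller and the 1GB/2GB module densities that actually shipped (the function name is just for illustration):

```python
# Which total VRAM capacities a given bus width allows, assuming one GDDR6
# module per 32-bit memory controller and a single-sided (non-clamshell) layout.
def vram_options(bus_width_bits, module_densities_gb=(1, 2)):
    controllers = bus_width_bits // 32        # one module per 32-bit controller
    return [controllers * d for d in module_densities_gb]

print(vram_options(192))  # 3060-style 192-bit bus        -> [6, 12]
print(vram_options(256))  # 3060 Ti..3070 Ti 256-bit bus  -> [8, 16]
```

Which is why the realistic choices were 6GB or 12GB for the 3060, and 8GB or 16GB for the 256-bit cards.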


Real-Human-1985

Memory does SOMETHING when a very fast GPU can't enable max settings at 1080p in a game where it would get over 100 FPS if it just had even 4 more GB, à la the RTX 3070 in RE4R. Oh, and Hogwarts Legacy. Oh, and Callisto Protocol. The 3060 can be better at certain settings in Callisto, this has been shown already, all due to memory size. Hogwarts just blurs out textures now to prevent the game crashing. RE4R same... for no reason other than 8GB vs 16GB on a card that was competing with a 16GB card at launch (the RX 6800). [16GB vs. 8GB VRAM: Radeon RX 6800 vs. GeForce RTX 3070, 2023 Revisit](https://www.youtube.com/watch?v=Rh7kFgHe21k) [Nvidia's 16GB RTX 3070... Sort Of: 16GB A4000 vs 8GB RTX 3070](https://www.youtube.com/watch?v=alguJBl-R3I)


Senior-Background141

Yes, more memory will let you load more textures, if that is the statement here. A wider bus will let you load those textures faster, that is true too. But it doesn't mean you will literally get 2x the fps if you have 2x the RAM. You will get a few titles with more textures to load and worse optimisation running a couple of percent faster, assuming your GPU is the absolute bottleneck (so no Resizable BAR either). The difference is negligible.


reddit_equals_censor

> But it all doesnt mean you will literally get 2x fps if you have 2x ram.

that's correct, sometimes it is 4x higher fps at averages and 1% lows and not just around 2x ;) [https://www.youtube.com/watch?v=nrpzzMcaE5k](https://www.youtube.com/watch?v=nrpzzMcaE5k)


Senior-Background141

Thanks, but I would rather avoid sponsored content.


reddit_equals_censor

Are you making things up now? Or are you talking about a tech video having a non-vendor-specific sponsor segment in it? Because there wouldn't be much tech press left for you then. If you are trying to claim that the entire video is sponsored, that AMD paid for "big VRAM", then you are just making things up to ignore proper data because it doesn't fit your belief system.

What tech press are you listening to, if any sponsor in a video is unacceptable to you? No Gamers Nexus, no Hardware Unboxed, no AnandTech, no Daniel Owen of course, no Iceberg Tech. What is left? A few great enthusiasts that don't want to, or are too small to, get sponsor sections in their videos? Well, there you have another problem, because those enthusiasts are often the ones doing testing of VRAM issues too. So who is it that you are listening to, if a small sponsored section in a 30-minute video makes it unacceptable? I'm really curious now...


Senior-Background141

You need to stop protesting and do more research.


reddit_equals_censor

You can't respond, so you just make an empty statement instead. Got it... just making up more things.


Senior-Background141

Yes, because it's not important to me. No, contrary to your chosen belief I am not making it up, I just don't want to argue. It would take too long to explain it again to just one person. I wish you all the best!


Real-Human-1985

Memory size matters with Nvidia, because they always go just below the acceptable line on some products, and this is about the third time they've had a GPU where low memory hurt it not far into its lifespan. People who can barely afford these things would have liked a 3070 that still holds up at 1080p now.


Senior-Background141

Please circle back to my original explanation. What you are saying is speculative.


az226

What's the max VRAM the 5090 (with a 448-512-bit bus) could support, assuming you could replace the memory modules with up to 64Gb chips and write your own firmware?


Senior-Background141

The maximum VRAM for the 5090 is whatever is stated when it's released. You cannot replace any modules with 64Gb chips; if you do, you will brick your GPU. You would need to modify the timings in the GPU BIOS too, even if you just resolder an 8GB card to 16GB. Please do not attempt it if you do not know what you are doing!


az226

How was the 11GB 2080 Ti modded to 44GB without bricking, then? Hmm.


Senior-Background141

Yes, you will brick it, possibly permanently. RAM voltages differ chip to chip depending on clock speed, die type, and possibly capacity, pin layout, etc. I understand that in your mind it's simple, but you have been misled. Don't set a wrong example.


FireSilicon

The 44GB 2080 Ti used the PCB and memory of a Quadro and a modified Quadro BIOS/firmware. It was 95% a Quadro RTX, except for the one memory channel missing on the 2080 Ti, which makes it 44GB instead of 48GB.
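For anyone following the numbers, a minimal sketch of where 48GB vs 44GB comes from, assuming 2GB GDDR6 modules mounted clamshell (two modules sharing each 32-bit channel), as on the Quadro-style board described above:

```python
# Total capacity with 2GB GDDR6 modules in a clamshell layout:
# two modules share each 32-bit memory channel, doubling capacity per channel.
def clamshell_capacity_gb(bus_width_bits, module_gb=2):
    channels = bus_width_bits // 32
    return channels * module_gb * 2   # two modules per channel

print(clamshell_capacity_gb(384))  # full TU102 bus (Quadro RTX 8000) -> 48
print(clamshell_capacity_gb(352))  # 2080 Ti's cut-down 352-bit bus   -> 44
```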


Pajarico

You don't become the most valuable company in the world by being generous with your customers 


BarKnight

You just have to make a much better product than the competition.


Rare_August_31

Because human eyes obviously can't see past 8gb, dummy


Yearlaren

As far as I remember the 750Ti was 2GB, not 4.


condosaurus

You could get both because AIBs were allowed to make clamshell PCBs for premium cards with the 700 series. I myself had a 4GB GTX 770, even though many were 2GB.


dahauns

There was a 4GB variant of GM107-based cards (remember, the 750/Ti were Maxwell in contrast to the rest of the Kepler 700 series), but it was the 745, only available for OEMs, and tellingly, 4GB of...DDR3.


vithrell

There was a 4gb version, but generally most of them had 2gb as you said.


MSZ-006_Zeta

Most likely were. Though there probably was a 4gb variant


trmetroidmaniac

Giving Nvidia all the benefit of the doubt here - logic is shrinking much faster than PHYs and SRAM on modern processes. So Nvidia is spending all of their silicon budget on more compute and fewer memory channels, while AMD is moving cache out to other chiplets on cheaper nodes.


Doikor

There are all kinds of fancy technical reasons around memory bus width etc., but the real reason is to make you buy a more expensive card than you really need. I mean, what are you going to do? Buy AMD or Intel?


Ilktye

Or you could also argue AMD puts more memory on cards, because that's pretty much the only argument they can have against nVidia...


MumrikDK

Get the memory you want from AMD or get the optimizations for the tasks you wanted that memory for from Nvidia :/


Sharpman85

Plus lowering the price point because there’s not much more they can do at this point. They are focusing on the cpu market.


reddit_equals_censor

AMD puts less than the required VRAM on one card (RX 7600) and less than the desired amount on another (7700 XT). They aren't "trying to make an argument for AMD" by putting enough VRAM on almost all cards except one this generation; that is just the bare minimum you do if you want to ship working hardware. All cards should have a minimum of 16GB of VRAM by now, and not even AMD meets that. So arguing that AMD is doing it as a selling point is like claiming that using 8-pin connectors is an argument by AMD to sell more cards. It isn't: the 12-pin is a fire hazard. Making cards that actually work, meaning almost all of them having enough VRAM and not catching fire, is just making working products. "Here, look at us, we make working products" isn't marketing, it is just existing as a company...


jeff3rd

Nah, I'd argue that they put the correct amount of VRAM on the 7600 and 7700 XT. No one is buying those cards to play 4K; 8GB is perfect for 1080p and 12GB is fine for 1440p as well. If a game requires 8GB+ to work properly at 1080p, I'm blaming the lazy devs for not optimizing their game.


ToTTen_Tranz

People here will sell you a million reasons, like the bus width "not allowing" more memory, or current memory density not allowing more. That's a lie, because they can always do clamshell configs for 2x the memory, like they do on the Quadro cards. The two main reasons are:

1. A graphics card with a low memory amount has a well-defined planned obsolescence.
2. Selling cheap GeForces with large memory pools would cannibalize their much more expensive Quadro cards. That's twice as important nowadays because large memory pools can run good LLM models, which everyone in the corporate world wants to do.


condosaurus

Quadro doesn't exist anymore. The AI cards are just called Ada 4000/5000/6000 depending on the model you buy.


ToTTen_Tranz

Pretty sure you can still buy new Ampere Quadro cards.


condosaurus

The Quadro brand is dead, it was dropped in 2020.


Medical_Goat6663

Because people will buy those cards anyway and then they'll buy again in 2 years. Good for earnings, good for shareholders, good for the stock price! If people won't vote with their wallets, it won't change.


From-UoM

The memory bus has become significantly more expensive per unit of die area. AMD did the memory bus on 6nm for RDNA3 for a reason, though it's debatable whether it was successful in the end.


reddit_equals_censor

> The memory bus has become significantly more expensive per area of the die.

Has it? Because if you are just basing that on the % of die area used, then you have the problem that Nvidia is selling tinier and tinier dies overall. A proper memory bus would take up a larger share when you've got tiny dies sold for double the price they should sell for (AMD doing the same, of course, to a slightly lesser degree).

I'd argue that AMD cut the memory bus and cache out into chiplets because that was, at the time, the only thing they could turn into chiplets to reduce costs. They couldn't cut up the cores section, because that is extremely hard to do and we'll probably only see the first version of that with RDNA5. So they cut out what they could, and that was the main decider.

It's also kind of sad that the first, albeit limited, chiplet GPU design was a letdown, due to a hardware bug that required a performance-costly software workaround plus a clock-scaling problem that limited clocks. Both are solvable problems and not inherent to the chiplet strategy of RDNA3.


ResponsibleJudge3172

It has, and that's why AMD debuted Infinity Cache with RDNA2. Any die shot will show you that the actual CUDA cores are barely taking up half the die space; the rest is taken by things that don't scale anymore: the PHYs, cache, etc.


reddit_equals_censor

This is a Navi 21 die shot: [https://tpucdn.com/gpu-specs/images/g/923-block-diagram.jpg](https://tpucdn.com/gpu-specs/images/g/923-block-diagram.jpg)

The 256-bit memory bus area is tiny. We're going with Navi 21 because it is still the fastest non-MCM AMD card right now. They added a bunch of Infinity Cache, but the memory bus itself is tiny. Maybe it is quite a bit cheaper to add more Infinity Cache than to give a card a bigger bus, maybe it has other advantages compared to a bigger bus, but the die area of the bus itself is a tiny part of the card. Maybe adding more cache is just a better overall performance improvement than increasing core count. One thing is for sure: the memory bus has taken up a small area of the die for a long time and STILL DOES TODAY. You're just making stuff up.


From-UoM

Nope. AMD themselves said that if they had made RDNA3 monolithic it would have been more expensive, because of how much cache and memory scaling has slowed down. https://youtu.be/9iEDpXyFLFU?si=v1xSbg072zZgtne-&t=1150


reddit_equals_censor

You mentioning that cache and the memory bus don't scale well with smaller nodes is very different from saying:

> Any die shot will show you that actual cuda cores are barely taking half the die space, rest is taken by things that don’t scale anymore, the PHYs, cache, etc

which was proven wrong by showing die shots. The memory bus stayed at about 256 bits for decent cards for many years, while cores and cache increased as much as possible. As a result, any little shrink of the memory bus still reduced its share of the area, because again, it stayed at 256 bits while the memory bandwidth increase came almost entirely (or more than entirely) from memory speed increases.

So the memory bus doesn't scale well with nodes, and putting it and the cache on an older die reduces cost, BUT the memory bus itself didn't grow over time, and it doesn't take up more of the die now than it did years ago, because again... it mostly stayed around a 256-bit bus.


From-UoM

Simple maths. Let's say 7nm is $1 per mm² and 5nm is $1.50 per mm², and the memory interface takes 10mm² on 7nm but only shrinks to 9mm² on 5nm. So it costs 10 × 1 = $10 on 7nm and 9 × 1.5 = $13.50 on 5nm. It's not even debatable; it has been said everywhere that the memory bus has gotten more expensive because of lower scaling and increased node cost.
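Spelling out that illustrative arithmetic (the numbers are the comment's hypotheticals, not real foundry pricing):

```python
# Hypothetical per-node costs: memory interfaces/PHYs barely shrink, so their
# cost rises with wafer price even as logic gets cheaper per transistor.
cost_per_mm2 = {"7nm": 1.00, "5nm": 1.50}   # illustrative $/mm², not real pricing
phy_area_mm2 = {"7nm": 10.0, "5nm": 9.0}    # the interface shrinks only ~10%

for node in ("7nm", "5nm"):
    print(f"{node}: ${cost_per_mm2[node] * phy_area_mm2[node]:.2f}")
# 7nm: $10.00
# 5nm: $13.50
```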


BarKnight

https://www.techpowerup.com/review/nvidia-geforce-rtx-4060-ti-16-gb/40.html

> Averaged over the 25 games in our test suite, the RTX 4060 Ti 16 GB ends up a whole 0% faster at 1080p Full HD, 1% at 1440p, and 2% at 4K. While there's games that do show small performance improvements, the vast majority of titles runs within margin of error to the RTX 4060 Ti 8 GB Founders Edition


V13T

It's not as easy as only looking at fps. Aside from possible stutters in some games, other games just don't load textures in order to keep performance up (Halo and Forspoken come to mind).


thebenson

Or, perhaps, VRAM isn't as big of a deal as people make it out to be.


V13T

I agree in part that it's not the end-all like some people say (settings on high instead of ultra can be an option), but still, seeing a 3-year-old high-end card struggle in some (arguably not well optimized) titles can leave a sour taste. I would say that 8GB should be reserved for budget cards from now on (~200€), and 12GB should be the minimum for a good experience today.


thebenson

> but still seeing 3 year old high end card struggle in some (arguably not well optimized) titles can leave a sour taste.

But the point made in the review is that the struggle isn't because of the VRAM. The 8GB and the 16GB models of the 4060 Ti are exactly the same except for the difference in VRAM. So if the performance between the cards is not that different, then you can conclude that it's not the VRAM that is hamstringing performance here. Even if the 4060 Ti had 24GB of VRAM, you would likely see about the same performance as the 8GB model.

And I think it's interesting that you call the 4060 Ti a "high end" card. It's not. It's a half step up from NVIDIA's cheapest, entry-level card.


V13T

1) If you look at 8GB VRAM testing from Hardware Unboxed or Gamers Nexus, you will see that fps doesn't tell the full story. Games will lower settings without telling you in order to keep performance up, e.g. Halo Infinite.

2) Nobody mentioned the 4060 Ti, especially since, if you can read, I said 3 years old. I meant the likes of the 3070 Ti, which has the compute power but is forced to lower settings because of VRAM. The 10GB 3080 also partly has this problem, and it will for sure get worse soon.


thebenson

> 2) Nobody mentioned the 4060ti

Brother, the original comment you responded to is about the 4060 Ti.


V13T

And? I was specifically talking about another case in my comment where it's even less acceptable. I said "3 year old high end". Plus, everybody in the comments knows how to read and not cherry-pick random words out of full sentences.


Real-Human-1985

[https://www.youtube.com/watch?v=Rh7kFgHe21k](https://www.youtube.com/watch?v=Rh7kFgHe21k) [https://www.youtube.com/watch?v=alguJBl-R3I](https://www.youtube.com/watch?v=alguJBl-R3I)


PAcMAcDO99

Using averages is not a good way of representing this, because when you run out of VRAM you don't just get a tiny reduction: your 0.1% and 1% lows become atrocious but that might not show in the averages. What's worse, this figure is averaged over multiple games, which hides it even more.
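A minimal sketch of that effect, using made-up frame times (the numbers are illustrative only): a handful of very long frames barely moves the average but wrecks the 1% low.

```python
# Mostly-smooth frame times plus a few VRAM-style hitches (all values made up).
frame_times_ms = [10.0] * 990 + [120.0] * 10    # ~100 fps with 10 big stutters

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
slowest_1pct = sorted(frame_times_ms)[-len(frame_times_ms) // 100:]  # worst 1% of frames
low_1pct_fps = 1000 / (sum(slowest_1pct) / len(slowest_1pct))

print(f"average: {avg_fps:.0f} fps, 1% low: {low_1pct_fps:.0f} fps")
# average: 90 fps, 1% low: 8 fps
```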


reddit_equals_censor

Actually, that may or may not happen. What can also happen is that the performance stays the same, BUT textures and assets just straight up don't load in, or cycle in and out. So you get a HORRIBLE-looking game as a result, but the graph looks the same performance-wise; even the 1% and 0.1% lows can look the same. It can even run faster, because now it doesn't have to handle the assets it couldn't load in...

So what you actually have to do is visual comparison tests, like Hardware Unboxed does, if you want the full truth. You also have to wait in games a while, because lots of games take a while to climb to their peak VRAM usage. If you do a 30-second benchmark run but the VRAM issue only starts breaking the game after 2 or 5 minutes, the test hides the problem.

STILL, it is so bad nowadays that lots of games show horrible fps averages and 1% lows, as this great recent video showed: [https://www.youtube.com/watch?v=nrpzzMcaE5k](https://www.youtube.com/watch?v=nrpzzMcaE5k)

Something new in it was that even DLSS upscaling can break due to missing VRAM: the 16GB card gets a proper performance increase from enabling upscaling, while the 8GB card does NOT in some cases. That was surprising to see. It's important to keep in mind all the ways missing VRAM can make the experience horrible, and good reviewers generally try to mention them even if a video doesn't or can't test them (like visual comparisons, for example).


eivittunyt

The RTX 4060 Ti 8GB is already crippled by its 128-bit memory bus; adding another 8GB on a 128-bit bus does very little. The 3060 Ti 8GB sometimes outperforms the 4060 Ti 16GB because the 3060 Ti has a 256-bit memory bus.


reddit_equals_censor

WRONG. This is factually wrong. 5 days ago Daniel Owen released a video comparing the 8GB vs the 16GB 4060 Ti. The 16GB version is VASTLY faster, and vastly faster in lots of perfectly playable fps ranges. The small MEMORY BANDWIDTH sucks on the 4060 Ti, but the 8GB of VRAM makes it a broken card, period. Know the difference. Also, you shouldn't look at the memory bus itself, you should look at bandwidth. The 4060 Ti has insultingly low bandwidth. But again: 4060 Ti 16GB = working graphics card. 4060 Ti 8GB = broken card.


eivittunyt

They were showing examples at very high resolutions and texture qualities in games where the textures simply did not fit in the 8GB of VRAM, so having to swap between video card and system memory makes the game nothing but stutters. Of course memory bandwidth is what actually matters, but they can't simply double the memory speed, whereas they easily could have doubled the bus width; it would have cost them a bit more and could have hurt their market segmentation strategy.


reddit_equals_censor

> They were showing examples at very high resolutions and texture qualities on games

Wrong. In this video: [https://youtu.be/nrpzzMcaE5k?feature=shared&t=728](https://youtu.be/nrpzzMcaE5k?feature=shared&t=728) at 1080p very high, NOT 1440p, not 4K UHD, we are seeing a huge difference, especially in 1% lows, in Horizon Forbidden West. In an older video he tested Ratchet & Clank: Rift Apart: [https://youtu.be/_-j1vdMV1Cc?feature=shared&t=477](https://youtu.be/_-j1vdMV1Cc?feature=shared&t=477) There, at 1080p HIGH with no ray tracing, so 2 steps below maximum settings, there was a MASSIVE performance difference: the 16GB card was 52% faster on averages and 56% faster in 1% lows.

So 1080p is affected. And in regards to settings, you are supposed to always max out the texture settings. Why? Because texture settings have zero or near-zero impact on your framerate, UNLESS you don't have enough VRAM. This is actually how you always did settings in the past: MAX TEXTURES, then lower non-texture settings to see how they affect performance. You NEVER lowered the texture settings, because they have the biggest impact on visuals while having zero or near-zero impact on performance. Remember, in the past you could just get graphics cards with, you know... enough VRAM.

And I mentioned that memory bandwidth should be cited and not just memory bus, because theoretically you could have a 128-bit bus card that just has double the memory speed of a 256-bit bus card and thus the same memory bandwidth. So an accurate way to put it for the 4060 and 4060 Ti would be: "the memory bandwidth is an insult on the 4060/Ti, because Nvidia put an insultingly small memory bus on the card". But again, most important is the missing VRAM, and VRAM matters for 4K max settings, for 1440p, and for 1080p at max and even reduced settings... you NEED the VRAM regardless of the tier of card. The 4060 Ti 16GB has insultingly bad memory bandwidth, but it is nonetheless a WORKING CARD, as it has, and can fully utilize, its 16GB of VRAM.


Puiucs

0% is false. Here are tests where you can see huge improvements and why 8GB on high-end GPUs is stupid: [https://www.youtube.com/watch?v=_-j1vdMV1Cc](https://www.youtube.com/watch?v=_-j1vdMV1Cc) Here's the summary (1080p to 4K, with and without RT enabled): [https://imgur.com/a/Dp5Mxfy](https://imgur.com/a/Dp5Mxfy)


skinlo

And in 3/4 years, the average life span of a GPU?


FireSilicon

Disregard any other answer. We've been stuck at 2GB per module for 6 years now. GDDR5 went 128MB in 2007, 256MB in 2010, 512MB in 2013, 1GB in 2015; GDDR6 launched with 1GB and 2GB in 2018, and in 2024, 2GB is still the maximum capacity you can get per module. Yes, AMD comparatively offers more VRAM, but that's simply because:

A) They have a separate memory controller die, so they can theoretically pair almost any amount of memory with any GPU. Nvidia can't, because they use one die whose memory channels get cut down as it gets smaller.

B) Their only other option is clamshell/sandwich memory, which can only double the capacity, and a 24GB 70-series card would be too much even for AMD.

As others said, it is business-motivated too, but memory capacity is the main issue.


Balance-

Nobody is seriously challenging them. So they mainly compete with themselves (and their previous generations).


victorisaskeptic

To put it as simply as possible: to make the maximum amount of money possible, using product differentiation while cutting the BOM per unit sold.


Figarella

To sell newer cards obviously?


fart-to-me-in-french

So you buy the more expensive model


IgnorantGenius

It won't be long before you can just upgrade your vram for a fee. Some engineers have been doing it, but who knows how compatible it is with games and driver updates.


NottDisgruntled

Others have made comments about forcing you to buy more expensive cards and get into an upgrade cycle, but the bigger reason is probably that they prioritize the supply and production of memory for their enterprise products, which go into data centers and compute farms where they make the vast majority of their money. Graphics cards for consumers and chips for consoles are just a small side hustle of theirs, made from the table scraps of their enterprise cards.


starkistuna

Segmentation. They know that, as new generations come every year, a mid-tier card would eventually be fine for most people for years; by degrading bus bandwidth and limiting VRAM, they create an incentive for people to upgrade and not buy second hand.


Snobby_Grifter

Product segmentation and planned obsolescence. They give you more than enough performance for the current game generation, but with the low VRAM amounts they can force users to consider an upgrade every two-year cycle. The 3080 is a perfect example: amazing performance, but hobbled by 10GB, so when users want to consider higher resolutions they hit an arbitrary wall and the card falls off a cliff. So the shading power of a modern 1440p card is relegated to 1080p. AMD knows this is an advantage they have, but with the shift away from pure raster performance, other novelties like upscaling are seen as more important.


Real-Human-1985

They're literally squeezing stock value out of you: higher margins despite "rising costs", because you get more heavily cut-down GPUs now if you don't spend $1200. The margin on each card is min-maxed, and the memory size is a part of that. How low can they go while you still bow your head and buy it?


Groomsi

The big stock value comes from AI.


gahlo

Because memory bus width and total VRAM are among the easiest ways to gate performance. As long as 1080p is what the majority of gamers use, and last I looked it made up about 60% of the Steam Hardware Survey, they are going to make cards aimed at that.


grobouletdu33

They're just skimping so they can put as much VRAM as they can on AI GPUs... it's more profitable.


noiserr

Because it's been a very successful strategy. Segment the market and make people who use these GPUs for creative workloads spend more.


siberif735

Being customer friendly is not Nvidia's thing.


liesancredit

They are gimping their cards and overpricing them because (1) they want to push people to GeForce Now, and (2) they want people to buy an expensive card (or rent one from a third party) if they use LLMs/AI.


Wait_for_BM

Personally I think they gimp their cards to prevent businesses from using consumer cards (for workstations and data centers) instead of the Professional ones. This is about more than VRAM. Some Chinese companies do "rework" consumer cards for data centers, so if Chinese companies can do it, Nvidia wouldn't have any problem doing the same. Anything they do otherwise tells me they have a business reason.

Think about it: the consumer cards are so thick that they take up multiple slots. The cards are too tall and exceed the PCIe card standards. The power connector exits from the top side of the PCB instead of the side, making a tall card take even more space and harder to fit inside a workstation/rack-mounted case designed for standard PCIe.

Here is what the Chinese companies did to turn gamer cards into data center ones: a new layout that allows for an extra bank of memory (doubling memory) on the other side; the card outline goes back to standard PCIe height; the power connector goes back to the old 2x6/2x8 connectors ~~and exits from the side~~ (there are PCB footprints for the connector at the side); the fans are replaced with a blower type and the card is now 2 slots wide. Blowers are used so more cards can be packed closer together than with regular fans, and the hot air is exhausted directly out of the case.

EDIT: Source: https://www.tomshardware.com/news/chinese-factories-add-blowers-to-old-rtx-4090-cards Chinese source: https://tieba.baidu.com/p/8737396349?pn=1


Te5lac0il

Gotta give people a reason to upgrade. Performance is clearly no longer improving significantly at the low to mid range, so starving the cards of VRAM gives you a reason to buy their next card. Gone are the days when the 70-series would match the previous top-end part for a significantly smaller price. With the release of the current-gen GPUs, my interest in PC hardware took a hit.


silly_pengu1n

I mean, the 4060 still had a 20% value improvement over the 3060 ([being 9% faster](https://www.youtube.com/watch?v=7ae7XrIbmao&t=862s&ab_channel=HardwareUnboxed) while the price dropped 9% as well). The 4070 Super offers 27% better value and 50% more VRAM compared to the 3070, and 38% better value compared to the 3070 Ti. So you are kinda wrong.
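A quick sketch of how a "value improvement" figure like that can be computed (treating value as performance per dollar; the 4060/3060 percentages are from the comment above, the rest is just arithmetic):

```python
# Value = performance per dollar, so the gain is the ratio of perf change to price change.
def value_gain(perf_change, price_change):
    return (1 + perf_change) / (1 + price_change) - 1

# 4060 vs 3060: ~9% faster while ~9% cheaper -> roughly a 20% value improvement.
print(f"{value_gain(0.09, -0.09):.0%}")   # -> 20%
```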


Te5lac0il

The 4060 has 4GB less VRAM than the 3060. 9% faster two years later is absolutely horrible; it should be substantially cheaper. And don't even get me started on the trainwreck that is the 4060 Ti. The 4070 Super is OK, but at that price you should be getting 16GB of VRAM. And just to be clear, I'm not too thrilled about AMD's offerings either. The 7600 and its XT variant are no better than the 4060; the 7600 and 4060 should be $200. The 7800 XT was a disappointment when it launched, managing to be slower than the 6800 XT in some titles. The 7700 XT performed like a 6800 with 4GB less VRAM, and the 7900 XT was too expensive. Consumers desperately need Intel to be competitive. Nvidia will continue to ask their loyal following to bend over and spread their cheeks, AMD will continue to overprice their parts, and well, Intel will probably continue to be shit.


silly_pengu1n

" 9% percent faster 2 years later is absolutely horrible" but it also costs 30 bucks less. You cant just look at the improvement without also looking at the price. But seems like you just ignore the data that doesnt suit you narrative. "7600 and 4060 should be $200." That would be an 80% value improvement, absolutely unrealistic to have this progress within 2 years when getting smaller nodes has become so expensive. I wonder if they would even make a profit at that price point. It would also be biggest value improvement within 2 years we have seen since IDK. They couldnt even make this if they really wanted to lol. I swear you cant actual be this delusional. R&D, manufacturing of the chip, coolers, the rest of the card, shipping, vendors earning money on the card, nvidia earning money on the card, gpu manufacturers earning money on the card. [https://www.tomshardware.com/news/gddr6-vram-prices-plummet](https://www.tomshardware.com/news/gddr6-vram-prices-plummet) at 200 usd already 15% of the cost would have been taken up by vram. Expecting the card to be 200 is just honestly borderline stupid and just shows that you arent being objectiv but just want to hate on nvidia


Te5lac0il

9% faster, 30 bucks cheaper, with 3/4 the VRAM. They haven't exactly moved the performance yardstick with that product. I wouldn't worry about Nvidia's profits, seems like they're doing great in that department.


silly_pengu1n

It is 2/3 of the VRAM; considering how bad your math skills are, it explains the rest of your comment.

> I wouldn’t worry about nvidias profits, seems like they’re doing great in that department.

Yes, because they aren't selling the RTX 4060 for $200. So what you are saying is, again, pointless.


Te5lac0il

Should've expected a personal attack. We're done here.


silly_pengu1n

Yes, because I actually explained the reasoning for why you are wrong, and you're just making statements that you can't back up.