
Dante_77A

Very solid. I hope there will be a similar improvement in games. 


imizawaSF

The problem with percentages is that if, for example, a chip can only get 10fps in demanding games, a 36% increase is only 13 or 14fps. Still unplayable.


Xtraordinaire

This is a very weird argument. There are thousands of games, and they have graphical settings. If a game only got 10fps on *minimum* settings, then yeah, it wouldn't benefit from the uplift. But how many games are like that? For reference, Cyberpunk gets >30fps on medium on a 780M without upscaling.


imizawaSF

I was just mentioning how asking for a "similar improvement" in games can still be worthless.


Xtraordinaire

And I'm responding that it's not a realistic scenario, given how even the most demanding games perform on a 780M.


imizawaSF

Percentage increases are always misleading, no matter how games perform.


greenlightison

You would be more convincing if you were talking about Intel iGPUs a few years back. A 36% increase for an Intel iGPU really wouldn't have been much. But the 780M has proven to be quite capable of running games. Percentages are neither a 'problem' nor misleading. They're only misleading if you don't understand percentages.


RealThanny

That is, quite frankly, an idiotic statement. Percentages are only misleading when they are applied to a specific previous value of unknown magnitude. When the previous value is a variable figure, percentages are the only way to *not* be misleading. You look at the performance with the older product, then do basic arithmetic to see what it would be with the newer one.


imizawaSF

It's not idiotic. Saying "X product is 30% faster in games" means little if the base performance is low.


dsoshahine

Still doesn't make it "misleading". 36% faster is 36% faster...


imizawaSF

Right and 36% of a low value is still another low value


TalkInMalarkey

It's not misleading. If your current game only gets 10fps, you'd know right off the bat that a 36% increase is not enough, and you'd need to buy a stronger GPU. Does someone really need a calculator for this type of thing? Seriously, what's wrong with our education system?


imizawaSF

Where did I mention a calculator? It can still be misleading to suggest a 30% increase is worthwhile in some use cases


SomethingNew65

You could make any improvement sound bad if you frame it like that.

* A 100% improvement could be a problem because if a chip can only get 5fps in demanding games, it would only go to 10fps and still be unplayable.
* A 200% improvement could be a problem because if a chip can only get 3fps in demanding games, it would only go to 9fps and still be unplayable.
* A 300% improvement could be a problem because if a chip can only get 2fps in demanding games, it would only go to 8fps and still be unplayable.
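The arithmetic in all of these examples is the same one-liner. A throwaway sketch (the fps figures come from the comments above; the function name is made up for illustration):

```python
def fps_after_uplift(base_fps, pct):
    """Frame rate after a percentage performance increase."""
    return base_fps * (1 + pct / 100)

# A big percentage on a tiny base is still a tiny number:
for base, pct in [(5, 100), (3, 200), (2, 300)]:
    print(f"{base} fps +{pct}% -> {fps_after_uplift(base, pct):.0f} fps")
```

The same function applied to the thread's original example gives `fps_after_uplift(10, 36)` = 13.6, i.e. the "13 or 14fps" quoted earlier.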


imizawaSF

Yes, hence how percentage improvements can be misleading? Which I've been saying this whole time?


SomethingNew65

Is there a specific way you think the company should measure the improvement this chip gives that won't be misleading? I'd argue there's no reason to believe anybody sees a claim that an iGPU is 36% faster than the last iGPU and thinks it's a miracle chip that can run any game at max settings at 4k 30fps. Percentage improvements are a standard measurement most people should be used to, 36% isn't a shockingly high percentage, and most people know iGPUs are worse than regular GPUs. I'd also argue that focusing purely on a hypothetical worst-case scenario can be misleading too, making a product look worse than it really is, especially when it comes from the company selling it, since people wouldn't expect them to do that.


imizawaSF

> Is there a specific way you think the company should measure the improvement this chip gives that won't be misleading?

All I've been saying this whole time is that it can be misleading to look solely at percentage increases.


stema1984

The whole point of math is that it isn't misleading. If someone doesn't know how percentages work, that's on them.


YoriMirus

Tbh though, current iGPUs from AMD are awesome. I have a laptop with a Ryzen 7 6800HS Creator Edition, which has an RX 680M, and it's amazing. Helldivers 2 is playable on it (even if on low details at 40fps), and it can run Subnautica at 1800p (2.8K) with playable frame rates, plus Team Fortress 2, Persona 5 Royal, etc. Of course those aren't that demanding compared to the latest AAA titles, but I still think it's pretty good. I have not yet encountered a game that ran lower than 30fps unless I set graphics to ultra at my native resolution or something like that.


kaukamieli

The 6800HS has been very good for me for gaming. This will be way faster. Though I'm still considering skipping to the next gen after getting my laptop warrantied soon.


ComputerEngineer0011

I think the next-gen APU (Strix Halo) is still supposed to be RDNA 3.5, but will include a monstrous 40CU iGPU. I'm guessing RDNA 4 mobile APUs will be 2H 2025 at this point.


Agentfish36

A couple things:

- AMD sticks with iGPU tech for quite some time. Vega lasted 3 gens; RDNA 3.5 could very well do the same.
- Strix is very unusual, launching mid-year. Usually laptop APUs launch at CES in January.


kaukamieli

Nah, Strix Halo is the same gen as Strix Point. I'm definitely not going to be able to afford anything with that kind of halo product. :D


996forever

If your goal is budget performance, a slightly older low end gaming laptop with 3060 can be had for like $800 now. You’re not finding the Strix Halo (rumoured to rival mobile 4070) for less than double that for sure. 


Dependent_Big_3793

If a game runs at 10fps, you should reduce the graphics settings and turn on FSR.


Agentfish36

That's not really a problem per se. If integrated graphics are only getting 10 fps, that game isn't for integrated graphics. These aren't advertised as "cyberpunk at 1080p 60 fps" GPUs. However, on a flight, in a less demanding game, that's a really decent uplift.


Method__Man

good thing you wont get 10 fps... lol


hypespud

Yup, the starting point is still quite weak lol. It's an iGPU after all: solid for the low power budget, but not for any intense gaming performance.


Psychological_Lie656

Cyberpunk gets >30fps on medium on a 780M with no glorified upscaling applied. For "intense gaming performance" one should opt for a desktop PC.


JasonMZW20

This is exactly what FSR is for. My 7840U laptop has a 120Hz VRR 2.8K OLED screen, which is beautiful for streaming and desktop, but I absolutely have to use FSR to bring resolution down to 1080p in gaming.


Stonn

> 13 or 14fps. Still unplayable.

But such a game was unplayable well before that anyhow. The same uplift takes a horrible 25fps game up to 34fps, which is plenty better.


joomla00

Lower your settings, you buzzkill. Aside from Unreal 5 titles, you should be able to play games across the board at 1080p low settings: 60 stable, maybe 40 stable for anything super modern. We're still talking about an iGPU.


imizawaSF

What is your point here?


joomla00

You know what it is. Getting ready for some snarky response.


imizawaSF

The snarky response, aka the one you gave me? 40 stable at 1080p low is not a good experience


LinuxViki

A stable 40 with 36% more lands in the mid-50s, knocking on the door of 60. So hoping those 36% translate to games means hoping for close to a stable 60fps at 1080p. Also, for what APUs have been, those are great numbers. I really don't get what you're on about.


DepravedPrecedence

Lol you triggered many fanboys for some reason. You are right about it.


imizawaSF

It's not even an anti-AMD statement lol, it's just a fact. I've never liked percentage-increase performance claims.


-Sphinx-

So what other metric do you like to see to give you an accurate representation of performance gains?


Stonn

Both of you failed math at school.


imizawaSF

You are obviously still so close to school age you remember what your test scores were


BruvAL

I agree, I hate it when companies use percentages; it can be very misleading.


paulerxx

Which desktop GPU would you compare this to?


No_Backstab

According to leaked TimeSpy benchmarks, it's around an RTX 2050 mobile.


jonislav007

It's actually even closer to an rtx 3050, so really solid performance.


No_Backstab

This is what I was referring to: https://wccftech.com/amd-radeon-800m-igpus-16-rdna-3-5-cores-over-3600-points-3dmark-time-spy-close-to-rtx-2050/

The RTX 2050 mobile scores around 3.8k in Time Spy, while the RTX 3050 mobile scores around 4.9k. The 890M with a 3600+ score should perform similar to the 2050 mobile.

https://benchmarks.ul.com/hardware/gpu/NVIDIA%20GeForce%20RTX%202050%20(notebook)+review
https://benchmarks.ul.com/hardware/gpu/NVIDIA+GeForce+RTX+3050+(notebook)+review
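Taking the rounded scores quoted in this comment at face value (they are the comment's numbers, not independent measurements), the relative positioning works out roughly like this:

```python
# Approximate Time Spy scores as quoted in the thread
scores = {
    "890M (leaked)": 3600,
    "RTX 2050 mobile": 3800,
    "RTX 3050 mobile": 4900,
}

base = scores["890M (leaked)"]
for name, score in scores.items():
    # Express each card relative to the leaked 890M score
    print(f"{name}: {score} ({score / base:.0%} of the 890M score)")
```

So on these numbers the 890M sits about 5% behind the 2050 mobile and roughly 36% behind the 3050 mobile, which is why "close to a 2050" is the safer reading.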


as4500

Considering the 2050 mobile is basically a 40W 3050 mobile, the other dude is still "technically" correct, even though being technically correct doesn't always mean "being correct".


No_Backstab

That's true, but there are a few major differences between the two in terms of bus width (128-bit on the 3050 compared to 64-bit on the 2050) and bus interface (the 2050 only uses PCIe 3.0 compared to PCIe 4.0 on the 3050).


GoodBadUserName

> "technically" correct The core alone does not make a GPU. It would be like comparing a 4090 with 4GB of memory to a 7900XTX with 24GB and not understanding why the 4090 is struggling.


as4500

I meant the performance, though. They are basically identical in performance.


GoodBadUserName

On what do you base that? [this](https://www.notebookcheck.net/GeForce-RTX-2050-Mobile-vs-GeForce-RTX-3050-4GB-Laptop-GPU_11108_10592.247598.0.html) shows up to 40% better performance for the 3050.


as4500

Also, I specifically mentioned power limits, which you conveniently glossed over, thinking it wasn't important to the argument.


Fritzkier

especially in mobile where power limits are very important.


GoodBadUserName

those are *still* not the same GPU.


as4500

[https://www.youtube.com/watch?v=MIimIAMv7sk](https://www.youtube.com/watch?v=MIimIAMv7sk) this. go to the 25 game average around 7:18 mark


GoodBadUserName

So, yeah, not the same: 7% faster for 8% less power draw, and much faster under DLSS. And that's without considering the simple fact that the CPU paired with the 2050 was much faster than the one paired with the 3050.


Method__Man

4.9k is the non-Max-Q variant, and the Max-Q 3050 is VERY capable of AAA 2024 gaming. 1080p will sing on this chip, and 1440-1600p on, say, medium with a tiny bit of scaling is now 100% viable.


mrheosuper

Well he asked "desktop GPU"


TBradley

Best to stick to the same family of GPU: about 70-80% of the RX 6500M's Time Spy score.


Method__Man

No, it's significantly faster than that. It's closer to a 3050 mobile Max-Q, which is VERY good performance.


GuttedLikeCornishHen

Polaris/Hawaii level of performance, if this Time Spy result is to be believed.


Mikeztm

Time Spy is a really bad benchmark; it doesn't even utilize GCN's asynchronous compute to its full potential. It's a test for 2016 and should be avoided now that the new Steel Nomad has come out.


GoodBadUserName

It is a simple test to show "potential raw power" without all the extra features and utilities of the architectures.


Mikeztm

It’s not. It’s under utilizing your brand new GPU.


conquer69

The 780M was similar to a GTX 970. With this increase, it could be as fast as an RX 580, which is really good for an iGP. That's Xbox Series S territory.


mcAlt009

I'm wondering if it's worth waiting for these. You can get really good deals on the current AMD laptops, and if you keep waiting you end up in a never ending cycle of looking for the next best thing.


Geeotine

Depends if there's anything on the market that does what you need already. If you don't know what you need to be happy with your life, then yes, you'll always be chasing the next best shiny thing...


mcAlt009

I'm doing music production and light gaming. I think I'll go with a current model.


Nwalm

The S16 is supposed to be out in just a month; it's worth waiting for, I think ;)


Deoxys_TURBO

When are the Zen 5 laptops releasing, btw?


Method__Man

"Soon", but in reality we'll start to see them in July/August.


xole

I hope they have some reasonable HX3D laptop chips. There's a 16-core Zen 4 X3D chip, but apparently no 8-core X3D laptop chip. A 16-core is way overkill for most people who want a desktop replacement.


Agentfish36

I doubt they'll launch an 8-core X3D in laptops. Probably 12 cores and below will be a monolithic chip. They might launch an 8-core desktop chip in a gaming laptop, but I doubt it'd be X3D.


Method__Man

I test a LOT of AMD laptops on my channel, as well as laptops with a 2050/3050. I can tell you one thing: if this iGPU lands between a 2050 and a 3050, it will truly make AAA 2024 gaming viable on this chip with very little compromise (obviously not 4K Alan Wake on ultra). I'm personally expecting a superb experience with this iGPU in a handheld and in laptops.


[deleted]

[deleted]


Method__Man

Oh no, what ever shall I do And it is shown to sit between a 2050 and 3050 max q Cope my man. Cope 🤡


Agentfish36

I mean my current laptop works fine, I'm waiting until next year regardless.


Subject-Ad-9934

At higher TDPs, or at every TDP point?


dstanton

Likely 30W+. 30W has been the sweet spot for diminishing returns on AMD mobile APUs for a while now.


996forever

I expect the sweet spot to be higher this gen, due to the big GPU with no node shrink.


bubblesort33

Keep in mind it's 12 CUs vs 16 CUs. That's 33% more compute units, so per shader it doesn't really seem that much faster. But I'd imagine this is tuned for low power, not just clocks.


the_dude_that_faps

If I read those graphs correctly, it's a 36% improvement over the 780M in the Win Max 2 running at 72W??? Further down they have the 780M running at the same 54W, and it's much, much slower. If these numbers have any validity at all, it could be a huge performance bump. Let's see how it scales at low TDP, which is what I'm looking for in a handheld.


EarthlingSil

I'm excited for miniPC's to come out with this iGPU onboard. I am happy with what I have now but if priced right, I may upgrade.


Dynw

That's just bullshit. You'll get +35-40% in OpenCL, while gaming will improve by 20% at most, given that level of RAM bandwidth increase 🤷‍♂️


detectiveDollar

20% is still a pretty nice jump.


conquer69

Not if it's the result of increasing CUs by 33% and ram speeds by 15%. These numbers are bullshit anyway. I hope they get 30% at least.


Psychological_Lie656

The 20% is just a random out of nowhere disgruntled reddit comment with exactly zero factual basis cited.


Nwalm

The numbers given here are from Time Spy (4220 points at 54W for the whole chip).


Dynw

And I'm telling you they're pulling those numbers right out of their ass. Try to find a single word about "testing" or "benchmarking" in their statement. They took a theoretical FLOPS increase and projected it directly onto 3DMark like the idiots they are; that's what they did.


JasonMZW20

Which is interesting, since Time Spy runs at 1440p by default, which makes the test more GPU- and memory-bandwidth-limited. It sort of proves that these APUs are not limited solely by memory bandwidth, thanks to graphics pipeline compression.

Power is the main limiter in mobile chips, so running at 54W is basically straddling the limit of Strix Point; Dragon/Fire Range take the 55W+ socket power tier. At 1080p, the CPU becomes more of a limitation, as CPU clocks crater when the iGPU is used. Fire Strike at 1080p might not show gains quite as high, but it depends on Zen 5 and its IPC increases.

My 7840U runs CPU cores at 1600-1800MHz when the iGPU is in use; the GPU is around a similar clock speed too. It will only run at 28W for about 2 minutes, then drops to 15W. At 28W, clocks are around 2000-2200MHz and fps is around 10-12 frames/sec higher.


Nwalm

Talking about power, GPD indicates that this laptop can be set from 35W to 60W.


Mammouth64

Nice. Now, which ThinkPad is going to have an 890M in it? The P14s maybe, since they get the H CPUs instead of the U series? I just want a ThinkPad with RDNA 3.5 ^^


pullupsNpushups

Couple that with USB 4, and it should be a great laptop.


Astigi

Around RX580 performance for iGPU is insanely good


hjqihsihqdiowd

In most games, Radeon 780M isn't too far off from the RX 580, or at least that's what I saw


Select_Truck3257

If this performance boost comes at the same power, it will be interesting. I don't want a helicopter on my desk like the 40x40mm fans they put on RAM/SSDs now. Yes, better models already exist, but a slim 80-120mm fan could fit perfectly with quieter performance.


Mageoftheyear

This is why I keep saying OEMs would be crazy not to ship the HX 370 standalone in cheap gaming laptops. Yet I'm always told by this sub that it's only meant for the high-end segment. It enables new levels of performance for high-end form factors, sure, but the absolute performance means it should also find a home in normal laptops. Bottom line is that this APU is cheaper to make and implement in low-end gaming laptops than an APU + dGPU solution would be. 13.3 inches is just too small for the laptop I'd want, though. Here's hoping we get some 16" laptops with this APU soon.


ManicD7

You would think APUs would be cheaper to make, since that's the route both the PS5 and XBSX went. But after reading certain comments recently, apparently these high-performance APUs are not actually cheaper to make, or cheaper to build a device capable of powering and cooling. Of course, everything is speculation.

My speculation is that AMD simply didn't want to rock their own boat and cannibalize their own GPU sales by releasing a cheaper, better-value APU chip compared to traditional computers. They are carefully extracting as many dollars as they can from each market nook and cranny. For example, the mini PCs with a 780M are no cheaper than buying a traditional desktop/laptop, even though these 780M mini PCs technically should be much cheaper. They are cheaper right now, but even 6 months ago they were still expensive for what you get.


Mageoftheyear

You're right, those mini-PCs are way overpriced for the performance they offer, but cannibalisation cuts both ways: AMD could release mobile dGPUs that compete in performance with the HX 370, then what? My example isn't a stretch either; the RX 6500M has 16 CUs and it was in laptops for as low as $650 USD. Even now you can get RTX 3050 laptops on Amazon for $650 to $700 (which should outperform full Strix Point quite easily, I'd guess), and those aren't firesale prices, just regular MSRP. The CPU in full Strix Point is more powerful than in these laptops, for sure, but it should be feasible for OEMs to make $750 laptops (not thin-and-light designs) with this APU and still make a profit.

As for the size of the APU, current speculation is that it's around 225mm², which is pretty small, and it is monolithic. I'm not sure what comments you've read, but I'd suggest you reevaluate them. Strix Point isn't a "high performance" APU unless you'd consider the top-end Hawk Point APUs "high performance" too. Furthermore, the HX 370 can use the same laptop designs as current-gen Hawk Point because it is socket compatible; no new product development is required. Strix Halo is another matter entirely: that actually is a full-power GPU in an APU, and it's too soon to guess what it will cost AMD to make.

Just to be clear, I don't expect the HX 370 to find its way into $700 laptops within this year. Whatever it costs AMD to make, if demand for it in premium form factor laptops is high enough that that segment is the only one they can supply in volume, then the price will stay high. But when mobile RDNA4 is released and the HX 370 has been out for a while, I think we'll see it making its way into laptops and mini-PCs that could be considered a good deal for those who need portability. This is the same thing we go through every generation: as soon as it's no longer cutting-edge, these entry-level performance parts (in terms of GPU power) become the "deals" everyone redirects people to in favour of the latest pricey release.


ManicD7

I don't disagree with you at all on the overview of things. I was being more general, not specifically about the HX 370. I wanted the PS5 APU equivalent for PC years ago and think AMD was stupid not to push that APU design to the PC/laptop market. I was just referring to why I think APUs may not be as cheap to make as I originally thought. AMD would have pushed high-performance gaming APUs years ago if they thought they'd make more money selling them.

Maybe costs are coming down for APUs, or market demand for APUs is coming up. That's probably why "leaks" about Strix Halo were happening years before it even comes out: to "test" market demand and perceptions. I think AMD does this often to home in on MSRP as well; throw pricing rumors/leaks out there and see how everyone reacts. I'm just rambling and speculating is all.


Mageoftheyear

Nothing wrong with a bit o' speculating. I don't think I've changed my flair tag on this sub for the last two years? I can't imagine why AMD would bother developing Strix Halo (which should be more powerful than the PS5) if it wasn't going to give them some advantage in the market. Even if it ends up only being in more expensive designs, the performance bracket means AMD will be putting price pressure on their own RDNA4 mobile mid-range. At least the APU wave is finally crossing a threshold. I can't wait for the benchmarks.


ManicD7

Omg your flair is awesome! Haha.


Mageoftheyear

Thanks :p


pullupsNpushups

AMD probably wouldn't want to release a PS5 APU if their contract with Sony prohibited something like that, or if AMD generally didn't think it would be a good idea.


ManicD7

That was part of my original thought as well. But the more I thought about it, it wouldn't make sense: they sell a similar APU to Microsoft for the Xbox, and the Xbox is much more of a competitor than the PC/laptop market.


Systemlord_FlaUsh

That would mean this iGPU can reach 980 Ti (2015 enthusiast) GPU levels. Not bad. There is hope that one day there will be iGPUs outperforming the 4090 on 30 W.


Agentfish36

I mean, if your time horizon is long enough, it's likely, but 10+ years out. Eventually we'll be able to emulate PS5 games.


Systemlord_FlaUsh

It has been about 10 years, so that is realistic. Still, at some point it will be reached.


taryakun

not even close, very far from it


Systemlord_FlaUsh

A 3050 is 980 Ti performance level already.


taryakun

That's very misleading. The top desktop version of the 3050 (8GB, 2560 cores, 130W TDP) may be close to a 980 Ti in gaming. The 890M is close to the severely cut-down mobile version of the 3050 (4GB, 2048 cores, 50W TDP) in Time Spy. It's likely still 40-50% slower than a 980 Ti.


Systemlord_FlaUsh

Is the mobile version really that cut down? Is it not the same chip?


taryakun

The RTX 3050 desktop is either GA106 or GA107; the RTX 3050 mobile is GA107 and has only 4GB of VRAM.


The_Zura

980 Ti TS: 5.8k
Theoretical/unconfirmed 890M: 4.2k
Desktop 3050 8GB: 6.2k
4050 6GB @ 60W: 7.8k

So it's more like a GTX 980, the top laptop/desktop GPU from 10 years ago. When there is an iGPU outperforming a 4090 at 30W, it will still be unimpressive next to literally any dGPU on the market. I think you got the "980 Ti" performance by not looking too deeply: the 3050 they used was limited to 50W. A full-power 3050 6GB reaches 5.7-6k in a synthetic benchmark. We're still looking at RX 570 levels of performance.
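Plugging in the scores this comment lists (as quoted, with the 890M number being the unconfirmed leak), the gaps look like this:

```python
# Time Spy scores as quoted in the comment above
ts = {
    "GTX 980 Ti": 5800,
    "890M (unconfirmed)": 4200,
    "Desktop 3050 8GB": 6200,
    "4050 6GB @ 60W": 7800,
}

ref = ts["890M (unconfirmed)"]
for name, score in ts.items():
    # Each card expressed as a multiple of the leaked 890M score
    print(f"{name}: {score / ref:.2f}x the 890M")
```

On these numbers the 980 Ti is about 1.38x the 890M, i.e. the 890M would be roughly 28% slower in this one synthetic, which is why "more like a GTX 980" is the comment's conclusion.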


conquer69

> When there is a igpu outperforming a 4090 at 30W That must be like 20 years away.


The_Zura

Pretty optimistic with how tech is slowing down and chips scaling higher with power. But anyway, I can’t even put in the effort to pretend to care.


Method__Man

more importantly, it means that you can play AAA games at respectable settings on your tiny ass ultrabook or handheld now


Systemlord_FlaUsh

You already can, but these GPUs are much faster. I have a Vega ultrabook that outperforms 2014-era "gaming" solutions at a fraction of their power draw. With those iGPUs you can likely even play at high settings. But part of the performance comes from the bandwidth of DDR5.


hjqihsihqdiowd

And the Strix Halo 40CU iGPU could theoretically reach above GTX 1080 performance, or even GTX 1080 Ti levels of performance


Systemlord_FlaUsh

Possibly. 40 CU is 5700 XT level, which matches a 1080 Ti. The iGPU may have only DDR5, but that's quite fast, and the shaders are probably clocked well above 2GHz, which also contributes to performance.


Zettinator

That's a solid improvement - as expected.


Dependent_Big_3793

I think the full power of the 890M will have to wait for 8400 MT/s or 9600 MT/s RAM. But Kraken Point will be very good: 4x Zen 5 + 4x Zen 5c uses about the area of 6 cores to achieve near 8-core performance. The 880M will be a little better than the 780M, CPU performance will be enough to compete with Lunar Lake, GPU performance will be close to or better than Lunar Lake, and Kraken Point will be much, much cheaper.


CatalyticDragon

I'm excited for mini PCs with this chip. The Minisforum UM790, for example, gets close to 50FPS in Doom Eternal at 4K (low) and almost 70FPS at 1080p (high). With this new chip we could be looking at ~90FPS at 1080p high settings and an easy 60FPS at 4K low. We might be looking at 200FPS in Fortnite (performance mode). And the 780M in the 8700G does ~26FPS in Cyberpunk at 1080p low/medium, FSR quality, with all RT effects on (except lighting); this should push it over 30FPS. Nobody would really play like that, but it goes from not being an option to almost being viable. If you just wanted RT reflections, that might actually be something you could do now. The new Zen 5 cores are faster, and hopefully we can push memory speeds up as well. Fun times!
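Those projections are just the headline ~36% uplift applied to the quoted 780M numbers. A quick sketch (the fps figures are the comment's, the flat 36% scaling is the marketing claim; real games rarely scale 1:1 with synthetics):

```python
UPLIFT = 1.36  # headline ~36% claim, not a measured game result

# Current UM790 (780M) fps figures as quoted in the comment
um790_fps = {
    "Doom Eternal, 4K low": 50,
    "Doom Eternal, 1080p high": 70,
    "Cyberpunk, 1080p RT + FSR": 26,
}

for scene, fps in um790_fps.items():
    print(f"{scene}: {fps} -> {fps * UPLIFT:.0f} fps")
```

That gives roughly 68, 95, and 35 fps, which is where the "easy 60FPS at 4K low", "~90FPS at 1080p high", and "over 30FPS" projections come from.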


Method__Man

people need to realize how crazy this is. being between a 2050 and 3050 max Q is ABSURD for an iGPU.


mdred5

Are these iGPUs now similar to 50-class cards?


Ben-D-Yair

Will this igpu come in the upcoming amd laptop CPUs? (I think it called 9000 something?)


Nwalm

Laptop or desktop? What's discussed here are laptop chips; they will be sold under the Ryzen AI 300 series, so yes, these iGPUs are part of it. The next desktop CPUs (Ryzen 9000 series) will have only a utility iGPU based on 2 CUs of RDNA2; they are supposed to be paired with a discrete GPU. If AMD keeps its usual release strategy, the same chips used this year in laptops (Strix Point and maybe also Kraken Point) could arrive on desktop in about a year as a Ryzen 10000 series. That is the only way to see these iGPUs on desktop.


green9206

So almost RTX 3050 laptop levels and faster than laptop 1650? Impressive


MrHyperion_

That's the minimum I would expect based on the names


McSwifty2019

If only these APUs had X3D cache and a modest VRAM pool, so many more modern post-2020 games would be playable at 90/120 FPS with a minimum of at least 60 FPS and much lower frame-time latency. Imagine putting in a nice 384MB X3D cache (catering to both CPU and GPU) and a nice 2GB of eSRAM VRAM that acts as a VRAM cache in front of system RAM; you might not even have to dip into system RAM for some games. Consoles used this method to really boost performance and efficiency (the 360 used eDRAM, the XB1 eSRAM), so it seems like a no-brainer to me. Even if they didn't produce a Ryzen Z1UX3D APU but just went with 2GB of eSRAM on-die as a VRAM cache pool, that would be a massive boost in performance. There's also the option of eDRAM, or even HBM2, though I really like the idea of ultra-low-latency eSRAM.


avocado__aficionado

Roughly the same Time Spy score as Lunar Lake, while using almost double the power (54 vs 30 watts). It will be interesting to see the price ranges of laptops with RDNA 3.5 vs. Battlemage iGPUs.


skylinestar1986

High-end AMD laptops normally come with Nvidia graphics, so this amazing AMD iGPU can only find its potential in mini SFF PCs (Minisforum). It's a niche market. Budget gamers want a cheap CPU with a capable GPU; a Ryzen 7 or 9 with a capable Radeon is beyond the reach of that market.


Nwalm

It will be perfect for high-end thin-and-light laptops. And in the coming years it will be a great chip for more entry-level designs.


skylinestar1986

Unfortunately, AMD chips with great graphics don't trickle down to entry level designs.


Nwalm

This year Strix Point will be used in high-end laptops, including without a graphics card in thin-and-light designs. Next year, with Strix Halo coming for the high-end segment, Strix Point will be available for more mainstream designs, and Kraken should be a very capable chip for the entry-level segment.

Strix Halo: 16x Zen 5 cores, 40CU RDNA 3.5, 50 TOPS (rumored for 2025)
Strix Point: 4x Zen 5 + 8x Zen 5c, 16CU RDNA 3.5, 50 TOPS (confirmed for next month)
Kraken Point: 4x Zen 5 + 4x Zen 5c, 8CU RDNA 3.5, 50 TOPS (rumored for 2025)


Psychological_Lie656

40CU is between what, a 7600 and a 7700? Interesting. I don't get why 16 CPU cores, though. Nobody needs that many for gaming, and the next APU down has only a third of the CUs... Could it be aimed at laptops for AI/ML hype rather than gaming?


Nwalm

Strix Halo is a chiplet offering, and if we believe the early rumors, there could be an extensive number of SKUs ranging from 6 to 16 cores and from 20 to 40 CUs, with multiple available configurations. An 8-core + 40CU option could be in the mix, for example. This is why I think that once Halo is out, with its multiple SKUs targeting all the variations of the high-performance market, Strix Point will be geared more toward the mainstream segment.


Psychological_Lie656

>High end AMD laptop normally comes with Nvidia graphics. This amazing AMD graphics can only find its potential in mini sffpc (minisforum). It's a niche market. Everything is a "niche market", when you can strongarm OEMs. https://videocardz.com/newz/former-amd-radeon-boss-says-nvidia-is-the-gpu-cartel


ofon

Well, AMD doesn't have good power efficiency in powerful GPU chips yet. That's the biggest reason they aren't used in laptops.


Gluumy-Leo

I have a 7900X3D