Intel's Arc Alchemist GPUs rumored to launch in March, will take on the RTX 3060 and 3070

Polycount

Staff
In context: The PC community has long awaited the launch of Intel's first true dedicated gaming GPUs, and now that day is just around the corner. With next-gen competition from AMD and Nvidia expected to arrive later in the year, an early 2022 launch window for the first generation of Intel's upcoming Arc GPUs (codenamed Alchemist) seems likely. But how will Alchemist compare to the existing competition?

Thanks to new rumors allegedly published on the ExpReview forums, we might finally have an answer to that question. ExpReview, for the unaware, is a Chinese tech news site that emphasizes PC hardware coverage, including reviews, benchmarks, and leaks.

According to the site, Intel's Alchemist architecture is set to launch with several models sometime in March 2022 -- a January release was reportedly planned, but it had to be pushed back by a couple of months. In any case, Q1 is still on the table, apparently.

Intel's planned offerings include three discrete desktop GPUs, and five laptop GPUs (mostly variants of the desktop cards).

The desktop line-up will house the Intel Xe HPG 512 EU, the 384 EU, and the 128 EU.

The 128 EU is rumored to launch with 1024 ALUs, 6GB of VRAM, a 75W TGP, and a 96-bit memory bus. Intel is hoping the 128 EU will be able to take on Nvidia's GTX 1650, but with RT support -- something the 1650 and 1650 Super both lack. Understandably so, given the performance hit that comes with turning such features on. Base clock speeds will probably cap out at 2.5GHz here.

The 512 EU is set to ship with up to 16GB of VRAM, 4096 ALUs, a 256-bit memory bus, a TGP of 225W, and rumored clock speeds also maxing out at around 2.5GHz. The Blue Team is positioning this model as a competitor to Nvidia's RTX 3070 and 3070 Ti.

The 384 EU, on the other hand, will take on the lower-end 3060 and 3060 Ti with a 192-bit memory bus, up to 12GB of VRAM, a TGP of around 200W, and 3072 ALUs.
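For a rough sense of where these rumored parts would land, you can sketch theoretical peak FP32 throughput from the ALU counts and the ~2.5GHz rumored clocks. This is a back-of-the-envelope sketch only: it assumes 2 FLOPs per ALU per clock (one fused multiply-add), which is how peak figures are usually quoted, and says nothing about real-world gaming performance or driver maturity.

```python
# Rough theoretical FP32 throughput for the rumored desktop Alchemist parts.
# Assumes 2 FLOPs per ALU per clock (one FMA); actual game performance
# depends heavily on architecture and drivers, not just this figure.

def peak_tflops(alus: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS = ALUs * clock (GHz) * 2 ops (FMA) / 1000."""
    return alus * clock_ghz * 2 / 1000

rumored_parts = {
    "Xe HPG 512 EU": 4096,
    "Xe HPG 384 EU": 3072,
    "Xe HPG 128 EU": 1024,
}

for name, alus in rumored_parts.items():
    print(f"{name}: ~{peak_tflops(alus, 2.5):.2f} TFLOPS")
```

By this crude measure the 512 EU part lands around 20 TFLOPS, in the same ballpark as the RTX 3070's quoted peak figure, which is consistent with the positioning described above.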

The laptop version of the 128 EU drops the VRAM to a measly 4GB, and reduces power draw to about 30W. The other low-end laptop chip, the 96 EU, downgrades the ALU count to 768, while keeping everything else roughly the same.

Intel's high-end Alchemist laptop GPUs differ from their desktop counterparts primarily in power draw, with lower TGP across the board (up to 150W for the 512 EU and up to 120W for the 384 EU). The mid-range 256 EU is a laptop-only card with 2048 ALUs, 8GB of VRAM, a 128-bit bus, and up to an 80W TGP.

As you can see, Intel is primarily targeting AMD and Nvidia's current-gen cards with its first gaming GPU launch. As such, Blue Team fans will likely need to wait for the company's next GPU architecture -- codenamed "Battlemage" -- for an Intel alternative to AMD's RDNA3 and Nvidia's Lovelace cards.


 
Note to hardware reviewers when it finally arrives... please include encode/decode and content productivity comparisons (Premiere Pro, Resolve/Fusion, OBS etc) alongside the usual gaming FPS benchmarks. Any info on viable alternatives to NVENC & CUDA is much appreciated, as AMD definitely plays second fiddle in this space. An in-depth exploration of DeepLink would be most appreciated. Pretty please Steve! 🙏
 
They should have LHR-like feature from the start if they're serious about gaming (which they probably aren't).

This could be a move from Intel to make bank by selling these cards to miners, since to them it doesn't matter whether the card is actually good.

I wonder how many games will be incompatible or have performance issues on Intel GPUs since nothing is optimized for it, especially older games.
 
It's like the 2nd or 3rd time already that it gets delayed.

Not a good sign, reminds me of the 10nm Intel fiasco.

Not optimistic.
 
It's like the 2nd or 3rd time already that it gets delayed.

Not a good sign, reminds me of the 10nm Intel fiasco.

Not optimistic.

I don't think anyone would be expecting miracles from the first release. But if they can establish a design, manufacturing and distribution pipeline, along with a framework for a relationship with developers, then they're very well positioned to compete in the coming years. Give it 5 - 10 years. All they need is one good design or a slip-up from Nvidia.
 
It's like the 2nd or 3rd time already that it gets delayed.

Not a good sign, reminds me of the 10nm Intel fiasco.

Not optimistic.

I was thinking the same thing, but it may be more of a chip shortage than anything else. That still doesn't explain the delays, as we all expect a paper-type launch with few cards, or they'll be gobbled up by the miners/scalpers.
 
Maybe I'll be able to get one and replace my 980 Ti that's been begging for death.

They'll probably only exist for reviewers like all other tech tho.
 
I guess I'm so desperate by now that I'll probably buy Intel with no second thoughts, even though it's a first-gen, untested card, if it only checks one box: a fxcking decent price. Of course, I highly doubt this, because the whole market is now fixated on miners, and there's still no standardized practice for stopping scalpers more than a year later. I can only pray the market will slowly get saturated by the influx of extra cards, because there's really nothing else to hope for at this point.
 
Too bad they didn't get it out last year; it will be fighting RDNA3 and Lovelace later this year, and Battlemage isn't due until 2023. Still, if it can offer 3070 performance at a decent price, is obtainable, and has good driver support, it will be successful.
 
I doubt INTEL a lot but I have a good feeling they will do very well...........................in the small controlled environment to test the waters of supplying to OEM pre built systems and miners and listening to the feedback driver wise. Then they will go from there.

Baby steps.
 
Before I even consider. I'm waiting on three things.

Benchmarks
Pricing
2nd Gen

---------------------------------------------------
Dear admin for some reason your list features do not function. I tried ordered and unordered. I could only get indent to work.
 
Like any of this matters.....the stock will be gone to miners and scalpers after 10 minutes.
 
I wonder how many games will be incompatible or have performance issues on Intel GPUs since nothing is optimized for it, especially older games.

Since I play a lot of older games and a lot of niche games, some of them being temperamental and require a lot of optimizing, I have a feeling it will be years before Intel GPUs will work for me. Heck, it’s the reason why I still think twice about AMD GPUs. I welcome having more choice though. The more the merrier.
 
I must admit, these upcoming GPUs are the most exciting moment in the recent history of PC hardware. A new GPU player on the scene is something big, even if it's the huge old Intel. I can already see the benchmark bars in front of me, but not quite the results. Kinda hyped, though I personally expect bad availability till at least summer and high prices at launch due to the skewed market situation. Unfortunately, even Intel cannot fight the demand issue in such a short time period. Autumn must be better already, I certainly hope so.
Though, how good is Alchemist for mining? I assume it's pretty much as good as the others. :/
 
The 128 EU card is promising. I've been waiting for years for a 75W low-profile replacement for my 560X, but the 1650 (and, from the leaks, the RX 6400) were both hamstrung with 4GB of VRAM, which is a limitation on the slower 560X at times.

6GB, 1650 Super performance, sub-75W TDP? I'm all in. And if the architecture is good, I'd take the 512 EU card; now that my Vega is dead, the 480 is really long in the tooth.
They should have LHR-like feature from the start if they're serious about gaming (which they probably aren't).

This could be a move from Intel to make bank by selling these cards to miners, since to them it doesn't matter whether the card is actually good.

I wonder how many games will be incompatible or have performance issues on Intel GPUs since nothing is optimized for it, especially older games.
LHR is pointless. You limit one coin, and there are 1000 others that can be mined without limiters; miners have already bypassed Nvidia's LHR anyway.

LHR limiting was a pointless marketing trick that many people have fallen for, for some odd reason.

Since I play a lot of older games and a lot of niche games, some of them being temperamental and require a lot of optimizing, I have a feeling it will be years before Intel GPUs will work for me. Heck, it’s the reason why I still think twice about AMD GPUs. I welcome having more choice though. The more the merrier.
Intel's software support team is larger than all of AMD's combined. If they are at all serious, they could easily hammer any issues out in under a year.

For competition's sake, I hope they are.
 
The 128 EU card is promising. I've been waiting for years for a 75W low-profile replacement for my 560X, but the 1650 (and, from the leaks, the RX 6400) were both hamstrung with 4GB of VRAM, which is a limitation on the slower 560X at times.

6GB, 1650 Super performance, sub-75W TDP? I'm all in. And if the architecture is good, I'd take the 512 EU card; now that my Vega is dead, the 480 is really long in the tooth.

LHR is pointless. You limit one coin, and there are 1000 others that can be mined without limiters; miners have already bypassed Nvidia's LHR anyway.

LHR limiting was a pointless marketing trick that many people have fallen for, for some odd reason.


Intel's software support team is larger than all of AMD's combined. If they are at all serious, they could easily hammer any issues out in under a year.

For competition's sake, I hope they are.
Not only that, but they released a driver that gets around LHR limitations at the same time LHR was pushed out. They should've spat in our faces while they were at it. I didn't know LHR works only for certain coins and not for others.
 
Sure, Intel has a **** ton of 14nm facilities available; they would have been better off designing and releasing cards on 14nm++++ to see how they could perform as a first range. Going straight for the latest node isn't always the best, as it takes a few years for a new node to mature and for yields to get better. Clearly Intel isn't bothered about getting cards into the hands of gamers if they're using TSMC's already oversubscribed wafers.
 
Sure, Intel has a **** ton of 14nm facilities available; they would have been better off designing and releasing cards on 14nm++++ to see how they could perform as a first range. Going straight for the latest node isn't always the best, as it takes a few years for a new node to mature and for yields to get better. Clearly Intel isn't bothered about getting cards into the hands of gamers if they're using TSMC's already oversubscribed wafers.
14nm nodes are being spun down and upgraded for 10nm SuperFin. Intel's fabs have also, if you remember, been running at capacity for years now.

Not to mention, 14nm, even Intel's 14nm, would in no way be competitive with TSMC's 7nm.
 