Onboard Radeon HD 4200 questions

Steelhedgehog

Posts: 17   +0
I recently bought some parts to throw together a new budget computer, but I haven't built one in a while and I realized there were some questions I couldn't answer with Google. Probably because I'm a bit dense, but I digress. The parts I bought are as follows:

CPU: Athlon II X4
Mobo: Biostar A785G3 AM3
RAM: G.Skill 2x2GB
Power supply: OCZ ModXStream Pro

I have two questions concerning the Radeon HD 4200 graphics on the mobo (you can Google the specs).

First, I have an old Radeon X1300 XT 512MB video card. I can't seem to find specs for it that I can compare to the onboard 4200 graphics. Should I use my old video card, or are the onboard graphics superior?

Second, I will be installing Windows 7 32-bit. The max memory share for the 4200 graphics is 512MB. Since 32-bit Windows 7 can't use all 4GB of RAM I am installing anyway, will the 512MB I plan on dedicating to the onboard graphics (if I use it) take away from what is usable by Windows 7? Or will I still have the whole ~3.2GB of RAM available to me?
 
The discrete card would probably have the edge on the integrated graphics in raw gaming benchmarks. Here's a basic graphics hierarchy chart; your 512MB of video RAM does come out of the ~3.2GB of RAM available to (or seen by) the system; and here's a review of the HD 4200 in relation to your motherboard's chipset.
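Just to put rough numbers on the RAM question, here's a quick sketch. The ~3.2GB figure is illustrative; the exact amount visible to 32-bit Windows depends on your board's PCI/MMIO reservations:

```python
# Back-of-the-envelope numbers for 32-bit Windows 7 with a 512MB IGP share.
# The "visible" figure varies by board; 3.2GB is just a typical example.

total_ram_gb = 4.0   # installed RAM
visible_gb   = 3.2   # what 32-bit Windows typically reports; the rest of
                     # the 4GB address space is reserved for PCI/MMIO
igp_share_gb = 0.5   # 512MB UMA share dedicated to the HD 4200

usable_by_windows = visible_gb - igp_share_gb
print(f"RAM left for Windows: ~{usable_by_windows:.1f}GB")  # ~2.7GB
```

In other words, the IGP share subtracts from the ~3.2GB the system can already see, not from the "missing" portion above it.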

If you're after numbers for graphics card comparisons (or release dates, etc.) then this set of tables will provide pretty much everything at a glance (be aware that some of the tables are rather large).

Good luck with the build.
 
Wow, after reviewing that huge chart, I can see the X1300 XT and the HD 4200 really go neck and neck! The reasons I see to go for the HD 4200 are the DirectX 10.1 support and the 4.1 shader model. That, and it has 40 vertex shaders, not 2. So why is the X1300 XT higher up on the hierarchy list you showed me? It seems the HD 4200 is better as far as the numbers are concerned. So what makes the X1300 XT better?

Let's say I play an older game like BioShock: which one is going to have better performance at max settings? And how do you see the FPS of a game while you play it?
 
Last question first... use the FRAPS program (download link is at the bottom of the page).
As for why one is faster than the other, at a guess I'd say memory bandwidth and shader ability. Remember you are dealing with two different architectures: the HD series groups four simple shaders and one complex shader together, which roughly equates to one of the earlier single shader units.
As I mentioned earlier, the difference is in "raw gaming benchmarks" (fps) and doesn't take into account the architectural changes or the better feature set of the later onboard video.
 
Maybe throw in a couple of bucks for a low-to-mid-range graphics card if you're a gamer? BioShock has some awesome graphics, and not being able to turn them up to a decent level feels like a shame.

A GTS 250 should be available now for less than 100 bucks. You might even get lucky and pick up an older card second-hand, if you're happy buying used stuff.


But to directly answer your question, I think the discrete graphics card should be better.
 
I'll take your advice and stick with my graphics card for now, then! Thank you!

I wish I had the budget to be more of a gamer... If I did, I think I'd CrossFire a couple of Radeon 4850s, seeing as they're getting cheap lately. Well, that and get a decent motherboard instead of this cheap one (although it had great reviews for a budget board on Newegg). I really like those "Best graphics card for the money" articles on Tom's Hardware. What do you guys think? Are they pretty accurate?
 
Oh, and sorry about going further off-topic, but I was curious. I was looking at graphics cards (drooling) and I saw two versions of the Radeon 4850. One had only 512MB of memory, but it was GDDR5; the other had 1GB of GDDR3. Which would be better? I don't know a lot about comparing graphics cards on their hardware specs, I never got too much into them until now (that I'm broke, lol).
 
The Tom's guides make pretty valid choices, although they are only released quarterly (I think). I wouldn't let Tom's be your only guide if you are looking for the best card(s) for your purpose. TechSpot, among others, also has an ongoing, regularly updated buyer's guide. TechPowerUp's graphics card reviews feature a large selection of previous-generation cards alongside the newer models, which can make comparisons more comprehensive. They also retest each card in their evaluations to ensure that all cards are using the same applicable drivers, which keeps the benchmarks on a level playing field.

As for GDDR3 versus GDDR5: I think you'll find that most GDDR5 HD 4850 cards are factory overclocked models (and of course were released fairly recently in relation to the GDDR3 models) that were tweaked for more speed to combat Nvidia's upgrading of the 9800 GTX to the GTX+. Also, at this time AMD (ATI) was transitioning all its future cards to GDDR5, which is why GDDR3-equipped cards fell by the wayside.
A 1GB card generally comes into its own once more anti-aliasing (AA) is used in-game and at higher screen resolutions, while faster core and memory speeds (as opposed to a larger memory size) will generally favour less demanding games and/or lesser degrees of eye candy. There's a rough sketch of the memory math below.
Here's a review of the 1GB GDDR3 (stock) vs 512MB GDDR5 (OC).
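To show why AA and resolution eat video memory, here's a deliberately simplified sketch. It only counts the colour and depth render targets (real games also store textures, geometry, and extra buffers, so actual usage is much higher), and the resolutions are just examples:

```python
# Hypothetical render-target footprint: how MSAA multiplies memory use.
# Only colour + depth buffers are counted; textures etc. come on top.

def render_targets_mb(width, height, msaa_samples):
    bytes_per_pixel = 4 + 4          # 32-bit colour + 32-bit depth/stencil
    samples = max(1, msaa_samples)   # MSAA stores one sample set per pixel
    return width * height * bytes_per_pixel * samples / 1024**2

for (w, h), aa in [((1280, 1024), 0), ((1920, 1200), 4), ((1920, 1200), 8)]:
    print(f"{w}x{h} {aa}xAA -> ~{render_targets_mb(w, h, aa):.0f}MB")
# 1280x1024, no AA -> ~10MB
# 1920x1200, 4xAA  -> ~70MB
# 1920x1200, 8xAA  -> ~141MB
```

The jump from ~10MB to ~141MB for the render targets alone is why the 1GB card pulls ahead once you pile on resolution and AA.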
 
I will try to keep it as simple as I can; the main difference is the total bandwidth available and speed. "The main difference between DDR (one size fits all) and GDDR (graphics) is that capacity is not crucial, but performance is. Hence, standard DDR is geared towards enabling as much capacity as possible (with much less emphasis on performance in comparison to GDDR), which is probably why GDDR is sometimes referred to as the 'Ferrari of the bunch'."

GDDR transfers 32-bit data chunks, while conventional DRAM transfers 64-bit chunks, and previous generations of graphics memory (i.e. GDDR2, GDDR3) were to some extent based on the DDR2 SDRAM memory standard, while GDDR5 heads in a relatively new direction.

In recent times manufacturers have started producing 'differential' modules. Differential clock signaling is a method similar to interconnect buses such as HyperTransport, PCI Express, or Intel's QuickPath Interconnect on Core i7. Differential signaling introduces a reference clock that the memory cells follow. Instead of using the ground wire as a passive reference, differential mode enables more precise communication, and exactly this feature is the reason why the available bandwidth is set to grow during the lifetime of GDDR5.

The sheer bandwidth gains from one GDDR generation to another are rather impressive: GDDR3 peaked at 2.4 Gbps per pin, while GDDR4 ended up at 3.2 Gbps.

GDDR5 chips are split into two camps: single-ended chips top out at 6.4 Gbps, while for differential chips the maximum will be 12.8 Gbps. The quick sketch below shows what those per-pin rates mean for a whole card.
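Total card bandwidth is just the per-pin rate multiplied by the bus width. The 256-bit bus below is an assumption for illustration (typical for cards in the HD 4850's class), using the per-pin figures above:

```python
# Peak theoretical bandwidth = per-pin data rate (Gbps) x bus width / 8.
# Bus width of 256 bits is assumed purely for illustration.

def peak_bandwidth_gbs(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

for name, rate in [("GDDR3", 2.4), ("GDDR4", 3.2),
                   ("GDDR5 single-ended", 6.4),
                   ("GDDR5 differential", 12.8)]:
    print(f"{name}: {peak_bandwidth_gbs(rate, 256):.1f} GB/s on a 256-bit bus")
# GDDR3: 76.8 GB/s, GDDR4: 102.4 GB/s,
# GDDR5 single-ended: 204.8 GB/s, GDDR5 differential: 409.6 GB/s
```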

GDDR5 also introduces an error correction protocol based on a progressive algorithm that actually enables more aggressive overclocking. Major changes in internal chip design also include a quarter-data-rate clock, a continuous WRITE clock, CDR-based READ (no read clock/strobe information), DRAM interface training, internal and external VREF, an x16 mode, and various power-saving features.

Now that the lesson on some basic elements of GDDR is over, the question is: what would the performance bump be in reality? What I could find so far averages about 8% at minimum, or upwards.

The remaining points have been covered by DBZ.

Note: I started putting this answer together a while ago (because I am at work, I only finished it just now :p), but I decided to post it nonetheless. Regards
 
Wow, that's all really informative! I can't believe how knowledgeable you guys are! Can I throw a few more Radeon 4200 questions at you?

The motherboard came in today!! And I can overclock my onboard graphics! What is the safest level I can overclock a Radeon HD 4200 IGP to? I can't control voltages, just the MHz of the IGP. Do you think 700 would be okay? And should I worry about heat on an IGP? What's a safe temperature?
 
Not sure I'd suggest overclocking an onboard graphics chip.

Cooling solutions on the northbridge of these sorts of boards are kinda small, them being budget boards and all. And even if they aren't budget boards, they weren't designed with high graphics loads in mind to start with.

I'd be much more comfortable overclocking the X1300 XT, especially if it comes with a fan. If yours is passively cooled, a nearby side-panel fan will help too.

And ATITool should do the job, if your OS is supported. If not, there might be newer programs out there for graphics card overclocking; I haven't done it in a while (not since keeping VRMs cool became a problem with high-end cards using third-party coolers).
 
Just a quick note: don't always assume that a card with "GDDR5" is better than a card with "GDDR3". I've noticed some cards that have GDDR3 on a 256-bit bus while the GDDR5 version only has a 128-bit bus (can't remember which; I was looking into cards a while back when I needed a new one). The reduced bus width makes these almost identical as far as performance goes. There's a quick sketch of that math below.
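Here's the arithmetic behind that, with hypothetical round-number clocks just to show how a wider bus can cancel out a faster memory type:

```python
# Why a 256-bit GDDR3 card can match a 128-bit GDDR5 card.
# Clocks below are made-up round numbers for illustration only.
# bandwidth = effective memory clock (MHz) x bus width / 8.

def bandwidth_gbs(effective_clock_mhz, bus_width_bits):
    return effective_clock_mhz * bus_width_bits / 8 / 1000

gddr3_256bit = bandwidth_gbs(2000, 256)   # 1000MHz GDDR3, 2x data rate
gddr5_128bit = bandwidth_gbs(4000, 128)   # 1000MHz GDDR5, 4x data rate
print(f"GDDR3 @ 256-bit: {gddr3_256bit:.1f} GB/s")  # 64.0 GB/s
print(f"GDDR5 @ 128-bit: {gddr5_128bit:.1f} GB/s")  # 64.0 GB/s
```

Same total bandwidth either way, which is why the spec sheet's memory type alone doesn't tell you much.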
 