Nvidia has opened the doors on its GeForce Experience after letting thousands of users hammer on it over the last month. Announced last April and introduced as a closed beta in December, the PC game optimizer aims to help players get the most out of their machines by automatically adjusting in-game settings for their hardware.
When the initiative was first revealed, Nvidia cited a survey that suggested more than 80% of users play PC games in their default configuration, presumably because they're either intimidated by the myriad of quality settings or they simply don't care to invest the time necessary to find a decent configuration for their particular system.
When the closed beta began last month, the GeForce Experience covered only 32 games, and while that number hasn't grown by much, Nvidia has added nine more titles to its database, including Far Cry 3, MechWarrior Online and Hawken. You'll still need a Fermi or Kepler-based graphics card, though the software now offers limited support for Core 2 Duo and Core 2 Quad processors, which weren't supported before.
Nvidia says other changes since the closed beta include enhanced game detection logic, support for optimizing games played on 2560x1440 displays, better Chinese, Danish and UK English translations, and improvements to client startup, billboard display, game scanning and communication with Nvidia's servers, along with various bug fixes. The company previously outlined its six-step game testing process and we'll list that again:
- We start with expert game testers who play through key levels of the game (indoors, outdoors, multiplayer etc.) to get a feel for the load and how different settings affect quality and performance.
- The game tester identifies an area for automated testing, chosen from a demanding portion of the game. We don't always select the absolute worst case, since it tends to distort the results.
- As part of the game evaluation, the expert game tester identifies an appropriate FPS target. Fast-paced games typically require a higher FPS; slower games can tolerate a lower one. We also define and test against a minimum FPS to minimize stuttering. The average framerate target is typically between 40 and 60 FPS, with a minimum of 25 FPS.
- The most difficult part of OPS is deciding which settings to turn on and which to leave off in a performance-limited setting. This is done by analyzing each setting and assigning it quality and performance weights. The game tester compares how each setting (e.g. shader, texture, shadow) and each quality level (e.g. low, medium, high) affects image quality and performance. These are stored as weights which are fed to the automation algorithm.
- From here on the testing is automated. The GeForce Experience supercomputer tests the game by turning on settings until the FPS target is reached. This is done in order of maximum bang for the buck: settings that provide the most visual benefit for the least stress on the GPU (e.g. texture quality) are turned on first; settings that are performance-intensive but visually subtle (e.g. 8xAA) are enabled last.
- Finally, the GeForce Experience supercomputer goes through and tests thousands of hardware configurations for the given game. Unique settings are generated for each CPU, GPU, and monitor resolution combination.
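The greedy "bang for the buck" selection Nvidia describes can be sketched roughly as follows. This is only an illustration of the idea, not Nvidia's actual code: every setting name, weight and FPS number below is made up, and the real system presumably measures costs per hardware configuration rather than subtracting fixed values.

```python
def optimize_settings(settings, baseline_fps, target_fps):
    """Greedy pass: enable settings in order of visual benefit per FPS cost.

    settings: list of (name, level, quality_weight, fps_cost) tuples,
    where the weights stand in for the tester-assigned values described above.
    """
    # Rank by quality gained per frame-per-second spent: cheap, high-impact
    # settings (e.g. texture quality) sort first; expensive, subtle ones last.
    ranked = sorted(settings, key=lambda s: s[2] / s[3], reverse=True)
    chosen, fps = [], baseline_fps
    for name, level, quality, cost in ranked:
        # Only enable a setting if the average-FPS target still holds.
        if fps - cost >= target_fps:
            chosen.append((name, level))
            fps -= cost
    return chosen, fps

profile, fps = optimize_settings(
    [("textures", "high", 10, 2),   # big visual win, cheap
     ("shadows", "high", 6, 8),
     ("8xAA", "on", 2, 25)],        # visually subtle but expensive
    baseline_fps=90, target_fps=60)
# textures and shadows fit within the 60 FPS budget; 8xAA would drop
# the average below target, so it stays off.
```

A real implementation would also have to honor the separate minimum-FPS constraint from step three, re-measuring worst-case framerate after each setting is enabled rather than tracking a single average.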