Point rating systems mainly exist to give an overall rating of a product for those who are too lazy to read the review carefully.
I'm not terribly interested in how people too lazy to educate themselves should be given special dispensation.
A good example of this is Metacritic, quite a popular site by the way. They just translate review scores to a 0-100 scale and calculate an average. A higher metascore will surely translate to a "better product" in many minds, although their rating system is totally broken.
Their averaged scores are aggregated from sites that range from valid outlets to little more than blogs and poorly disguised marketing. Again, there's no cure for stupid.
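For what it's worth, the aggregation scheme described above — normalizing heterogeneous review scores to a 0-100 scale and taking an average — can be sketched in a few lines. This is an illustrative assumption only; Metacritic's real formula uses unpublished per-outlet weights. The sketch mainly shows why mixing sources of very different quality yields a dubious single number:

```python
# Naive score aggregation sketch (hypothetical, not Metacritic's
# actual weighted formula): convert every review score to 0-100,
# then take an unweighted mean.

def normalize(score, scale_max):
    """Convert a score on a 0..scale_max scale to 0..100."""
    return score / scale_max * 100

def metascore(reviews):
    """reviews: list of (score, scale_max) tuples."""
    normalized = [normalize(s, m) for s, m in reviews]
    return round(sum(normalized) / len(normalized))

# A 4/5 from a blog counts exactly as much as a 90/100 from an
# in-depth review: 80, 90, and 70 average to 80.
reviews = [(4, 5), (90, 100), (7, 10)]
print(metascore(reviews))  # 80
```

Note that an unweighted mean treats every source identically, which is precisely the objection raised above about valid sites being averaged with disguised marketing.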
Exactly. I previously mentioned Metacritic. That site has no use without rating systems.
I'd argue that it doesn't have any use even with a rating system.
I bet most readers check only the review score and skip reading the whole review. The amount of information available today is far more than anyone can handle, so most people don't have time to read long articles extensively. They just jump to the last page and look at the review score, because that gives some kind of overall rating. It's not at all the same as reading the whole article, but it's better than nothing.
So you'd like the score to better cater to people who can't be bothered to spend a few minutes researching their prospective purchase, while also catering to their Twitter/ADHD tendencies by eliminating the need to read the review. I hope I never see an era where enthusiast sites have to dumb down content to Twitter-sized word bites liberally sprinkled with emojis for those whose attention strays after half a dozen words.
Personally, I hope these people just bypass TechSpot and move on to sites fully geared for 2 minute attention spans and a low level of mental acuity.
Not everyone who reads this site is an enthusiast. I know all too well the so-called "computer experts" who think they know things well. After some conversation it becomes clear that they have checked one or two benchmark scores from a single article. Even worse are those who only know that site X gave a good rating to product Y.
I have a system for that. It's called adding to their knowledge base when I can. I'll provide further reading links and maybe a synopsis of the information. I'd prefer to offer information rather than have the site kowtow to the lowest level of interest.
As I mentioned previously, many people don't bother to read the whole article. More perhaps read the last page, even more read the last chapter, and still more check only the overall score. For that last category, a 100/100 rating says this is a must-buy.
So the site should tailor their articles to people who can't be bothered reading said articles, so that these people, who can't be bothered researching their possible hardware buy, will feel... what exactly? A 95/100 is also pretty much a must-buy. You think these people are then going to go back and re-read the entire article just to see if the missing 5% impacts their purchase? They couldn't be bothered in the first instance.
I have no sympathy for anyone who makes a substantial purchase without researching it beforehand. Anyone who relies on snippets from a single review (let alone its final distilled score), blames anyone but themselves for the outcome, AND is too stupid to return the item if dissatisfied really deserves everything they get.
It seems that among 28nm chips, AMD leads in that area.
You missed the point. The AMD lead is less than half the average it posts in other games (aggregated) at 4K. What I posted was the best-case scenario for AMD, bearing in mind 4K's love of high texture fill rates. If I were to choose the middle ground, where Nvidia's TAUs weren't limiting their cards and AMD's cards were likewise unaffected by raster-op inefficiencies at much lower resolutions, the difference is more noticeable.
Even when power consumption was commented on with Fermi, it had very little importance. When Maxwell came, power efficiency was the most important feature ever according to so-called Nvidia fanboys. Before Maxwell it had almost zero importance.
These things are cyclic. Fermi was a compute-centric architecture, and still stands as one of the best archs for compute efficiency. Yes, during Fermi's reign (and GT200's before it) Nvidia fanboys paid no attention to perf/watt. But you know who held it as paramount? AMD fanboys. When Evergreen arrived, perf/watt and perf/mm² (a newly important metric that arrived overnight) were the sole points of interest. As soon as AMD pushed "always on" compute and wattage climbed with the GCN architecture, perf/watt suddenly became irrelevant to AMD fanboys.
It cuts both ways. Always has.
What is AMD's downfall? AMD didn't release a new architecture for 28nm, just modified old ones a bit. Probably because the 20nm parts were cancelled and they thought nobody would buy old-tech cards.
Nope. AMD's R&D couldn't sustain multiple developments and AMD had too many irons in the fire: console development, an expensive-to-run logic layout business (since sold to Synopsys), a poorly thought-out attempt at making a splash in the ARM server architecture market, and very likely a substantial ongoing investment in HBM integration which began at least 5 years ago... not to mention a long-running APU/CPU architecture development.
If you want to distill AMD's woes down to a single point, it is their management's lack of strategic planning and goal setting, and a reliance upon being reactive rather than proactive in the industry. They are too busy trying to imitate those more successful, while putting little thought into how to achieve goals, or into the actual returns on investment and time (see the SeaMicro acquisition for a prime example) once a course of action is embarked upon.
Looking at how many used GTX 980 Ti cards are for sale on the internet right now, even Nvidia owners seem to think 28nm cards were not such a good buy after all.
That is one interpretation, but I don't think it is correct. As a serial upgrader myself, the best time to sell old hardware is just before the new series arrives. You still recoup a reasonable amount of your original purchase cost and can use the funds to offset the new purchase.
Nvidia has fewer features and lower power consumption. Nvidia is better in DX11 software.
Not just DX11 software. If that were solely the case, why did Nvidia need to publicize frame pacing, ShadowPlay, GeForce Experience, and a host of other software that AMD has eagerly tried to adapt to its own uses? I'm guessing that Ansel and MSP will also find themselves with AMD analogues in the not-too-distant future.
Also, AMD has not yet released its latest architecture, and AMD probably has much more manufacturing capacity for new-technology parts than Nvidia has.
I've been hearing a near-constant stream of this marketing since Raja Koduri claimed that Polaris and 14nm were well ahead of Pascal and 16nm...
"We believe we're several months ahead of this transition, especially for the notebook and the mainstream market" said Koduri. "The competition is talking about chips for cars and stuff, but not the mainstream market."
...Yet Nvidia has demonstrated the largest non-Intel GPU in the world on 16nm, with series production underway (and over 4,500 pre-sold at $10K apiece), has the GTX 1080 reviewed and a week from retail availability, the volume-market GTX 1070 basically ready to go (holding it back being an obvious marketing strategy), and the mass-market GP106 due to arrive in a month.