Some advances in AI have been exaggerated

Shawn Knight

The big picture: Nary a day goes by that we don't hear about some revolutionary breakthrough in the field of artificial intelligence. And on the surface, we've got the proof to substantiate those claims – better facial recognition and enhanced photo detection on mobile devices, for example. But are things really progressing at the rate we are led to believe, or even substantially at all?

Davis Blalock, a computer science graduate student at the Massachusetts Institute of Technology (MIT), and some of his colleagues recently compared 81 pruning algorithms – tweaks that make neural networks more efficient. “Fifty papers in,” he said, “it became clear that it wasn’t obvious what the state of the art even was.”
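For a rough idea of what such a tweak looks like, below is a minimal sketch of one common approach, magnitude pruning, written in Python. This is our own illustrative example rather than code from Blalock's comparison; the function name, the toy weight matrix, and the 50 percent sparsity target are assumptions made for the sake of the example.

import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    # Zero out the smallest-magnitude weights until `sparsity` of them are gone.
    magnitudes = np.abs(weights).ravel()
    k = int(sparsity * magnitudes.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(magnitudes, k - 1)[k - 1]   # k-th smallest magnitude
    return weights * (np.abs(weights) > threshold)       # keep only the larger weights

rng = np.random.default_rng(0)
layer = rng.normal(size=(4, 4))        # stand-in for one layer of a network
pruned = magnitude_prune(layer)
print(np.count_nonzero(layer), "->", np.count_nonzero(pruned))   # 16 -> 8

A pruned layer has fewer nonzero weights to store and multiply, which is the efficiency gain pruning methods are chasing; where they typically differ is in how the weights to remove are chosen and how the network is retrained afterward.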

Science Magazine cites several reports to back up the claim. In 2019, for example, a meta-analysis of "information retrieval algorithms used in search engines" found that the high-water mark was actually set in 2009. A paper posted to arXiv in March that looked at loss functions concluded that image retrieval accuracy had not improved since 2006. And in a separate study from last year analyzing the neural network recommendation systems used by media streaming services, researchers found that six out of seven failed to outperform simple, non-neural algorithms developed years earlier.
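To give a concrete sense of what a "simple, non-neural algorithm" can mean in the recommendation setting, here is a minimal, hypothetical item-to-item similarity baseline in Python; the toy interaction matrix and function name are ours, not taken from any of the cited studies.

import numpy as np

def recommend(interactions, user, top_k=2):
    # interactions: rows are users, columns are items, 1 = consumed.
    norms = np.linalg.norm(interactions, axis=0, keepdims=True) + 1e-9
    item_sim = (interactions.T @ interactions) / (norms.T * norms)   # item-item cosine similarity
    scores = item_sim @ interactions[user]       # total similarity to items this user already liked
    scores[interactions[user] > 0] = -np.inf     # never re-recommend something already consumed
    return np.argsort(scores)[::-1][:top_k]

toy = np.array([[1, 1, 0, 0],
                [0, 1, 1, 0],
                [1, 0, 1, 1]], dtype=float)
print(recommend(toy, user=0))                    # item indices ranked for the first user

Baselines of roughly this vintage and complexity were what six of the seven neural systems in that last study failed to beat.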

Part of the problem is that, as Zico Kolter, a computer scientist at Carnegie Mellon University, notes, researchers are more motivated to create a new algorithm and tweak it than to simply tune up an existing one. It is also harder to get a paper out of the latter approach, Kolter added.

It’s not a total wash, however. Even if newer methods aren’t always fundamentally better than older techniques, in some instances, the tweaks can be applied to the legacy approaches to make them better. Blalock said it’s almost like a venture capital portfolio, "where some of the businesses are not really working, but some are working spectacularly well."

Image credit: Andrii Vodolazhskyi, iStock


 
State of the art is this year's model. "Let's not make an easy thing hard." Frank Verley, genetics professor.
 
Haha. They slap the "AI" label on just about anything these days. A computer doesn't do anything we haven't already coded it to do. Kinda like how just about anyone in a field of study gets called a "scientist," or anyone in IT gets called an "engineer." It doesn't mean you are always right or know what you are talking about. :)
 
It is kind of moronic to suggest that we should listen to some post-graduate ranting about the state of AI. Frankly, it is almost offensive to have such unresearched crap here.

AI is developing at an exponential rate, especially with breakthroughs such as nVidia's latest DGX, which can take AI development to a new level.
 
It is kind of moronic to suggest that we should listen to some post-graduate ranting about the state of AI. Frankly, it is almost offensive to have such unresearched crap here.

AI is developing at an exponential rate, especially with breakthroughs such as nVidia's latest DGX, which can take AI development to a new level.

The thing is, the article cites more than one paper that came to the same conclusion.

In addition, as stated in the article, the first paper you criticized [1] isn't alone: other sources cite multiple reports that back up the claim. [2]

[1] https://proceedings.mlsys.org/static/paper_files/mlsys/2020/73-Paper.pdf
[2] https://www.sciencemag.org/news/2020/05/eye-catching-advances-some-ai-fields-are-not-real

Companies like Nvidia developing new uses for AI does not discount any of the sources cited in the article. You also seem to be confused about the subject matter. This article is about AI models and the various processes they go through (like pruning) to achieve an end result. The Nvidia DGX, on the other hand, is hardware and a completely different topic.
 
For the moment, AI (not to be confused with A.I.) means "spyware" more than anything else.
 
The Nvidia DGX's primary purpose is AI. It is THE key hardware for furthering AI development. Different topic, LOL.

Again, this article is about AI models and the various processes they go through (like pruning) to achieve an end result. The Nvidia DGX, on the other hand, is hardware and a completely different topic.

You criticized the software side and then used hardware as an example, as if it disproved the claim that the software isn't improving. I reiterate: you are clearly confused.
 
My phone's camera has an "AI" mode, and all it does when it detects what you're aiming at is bump up the saturation a bit. Figured the whole AI thing is just a marketing term so some zoom zoom thinks it's futuristic. We are far off from the things you hear are possible in theory.
 
Some advances in AI have been exaggerated

Really?!?!? And I thought that was the new marketing mantra for several decades ......
 
State of the art is this year's model. "Let's not make an easy thing hard." Frank Verley, genetics professor.
Problem is "this year's model" is actually just "last year's model," but with different verbiage.

University researchers are incentivized to generate novel papers, not to review or reproduce the work of other researchers, so there is almost no improvement to existing models going on. Instead, you get each researcher giving their own independent take on the same topic. Two researchers read the same statistics paper and independently develop their own algorithms to apply it. Instead of then reading, reviewing, and reproducing the work in each other's papers - potentially improving one or both of their algorithms - they both simply move on, because the university they are affiliated with and the groups that write their grants want to see 'new' instead of 'improved'.

It is kind of moronic to suggest that we should listen to some post-graduate ranting about the state of AI. Frankly, it is almost offensive to have such unresearched crap here.

AI is developing at an exponential rate, especially with breakthroughs such as nVidia's latest DGX, which can take AI development to a new level.
Hardware != Software. nVidia is creating processors to run statistics algorithms (AKA "AI") because the general-purpose architecture of CPUs is hilariously bad at it. Meanwhile, researchers are the ones creating the algorithms that actually run on those processors. This is like bread makers going 'no one has actually made a new dough recipe in a while, despite what most people think' and you responding with 'what are you talking about? They came out with a new, more efficient oven last year!'
 
Problem is "this year's model" is actually just "last year's model," but with different verbiage.

University researchers are incentivized to generate novel papers, not to review or reproduce the work of other researchers, so there is almost no improvement to existing models going on. Instead, you get each researcher giving their own independent take on the same topic. Two researchers read the same statistics paper and independently develop their own algorithms to apply it. Instead of then reading, reviewing, and reproducing the work in each other's papers - potentially improving one or both of their algorithms - they both simply move on, because the university they are affiliated with and the groups that write their grants want to see 'new' instead of 'improved'.

Apply that to the news, video games, etc. And if this year's model is last year's, then MY thesis still rings true.
 
I have long thought the idea of what AI is has been heavily misused, much like 4G and 5G were, in the name of marketing (i.e., selling by lying). At what point does a software program stop being well-written If..Then..Else statements and become true AI?

Even if the code uses probabilities based on known data - such as, in this scenario, "67% of responses were Yes, therefore I reply Yes" - this to me is not AI; it's just well-written code with a lot of data to compare against.

Unless the software is doing the thinking instead of the programmer, to me it's not AI.
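To put a number on that, here is a toy version of exactly that kind of probability lookup - the counts are made up, and the point is only that this is ordinary conditional code, with the thinking done by whoever wrote it:

def majority_answer(yes_count, no_count):
    # Answer with whichever response was historically more common.
    yes_rate = yes_count / (yes_count + no_count)
    if yes_rate > 0.5:         # e.g. 67% of past responses were Yes
        return "Yes"
    return "No"

print(majority_answer(yes_count=67, no_count=33))   # -> Yes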
 
99.9% of 'AI' is just a Regression Model, K-means Cluster Model or a Neural Net model. All these concepts have been around for decades. They are nothing more than mathematical formulae to predict outcomes. True AI (whilst maybe using these for some of its capabilities) is so much more than this and still so far off it's laughable.
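For context, those formulae can be as old and as plain as ordinary least-squares regression, sketched below with made-up numbers purely for illustration:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])           # made-up observations

X = np.column_stack([np.ones_like(x), x])    # add an intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None) # classic closed-form least-squares fit
print("prediction for x = 5:", np.array([1.0, 5.0]) @ beta)

Rebrand the same arithmetic as a "predictive AI model" and it sounds a lot more futuristic than it is.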
 
99.9% of 'AI' is just a Regression Model, K-means Cluster Model or a Neural Net model. All these concepts have been around for decades. They are nothing more than mathematical formulae to predict outcomes. True AI (whilst maybe using these for some of its capabilities) is so much more than this and still so far off it's laughable.

99.9% as predicted by A.I. ?

A.I. modelling is quite accurate when done right

Expert human analysis predicted that there is a 98.5% chance that Windows 10 was created primarily as a weaponized spyware platform

Proper A.I. modelling calculates that the primary purpose for creating Windows 10 as a weaponized spyware platform, without secondary value to the end user was at least 99.973% (but may be higher)

seems accurate enough for me!
 
So A.I. hasn't advanced much past defeating human chess players, where all the known possible moves are programmed. Who knew instilling spatial awareness, efficient wideband aggregation of dynamic data, and making the best cost-benefit decision based on said data would be this hard? Just can't compete with organic technology.
 