Wrestling with AI and the AIpocalypse we should be worried about

Jay Goldberg

Editor's take: Like almost everyone in tech today, we have spent the past year trying to wrap our heads around "AI" – what it is, how it works, and what it means for the industry. We are not sure that we have any good answers, but a few things have become clear. Maybe AGI (artificial general intelligence) will emerge, or we'll see some other major AI breakthrough, but focusing too much on those possibilities risks overlooking the very real – but also very mundane – improvements that transformer networks are already delivering.

Part of the difficulty in writing this piece is that we are stuck in something of a dilemma. On the one hand, we do not want to dismiss the advances of AI. These new systems are important technical achievements, not toys suited only for generating pictures of cute kittens dressed in the style of the Dutch masters contemplating a plate of fruit, as in the picture shown below (generated by Microsoft Copilot). They should not be easily dismissed.

Editor's Note:
Guest author Jonathan Goldberg is the founder of D2D Advisory, a multi-functional consulting firm. Jonathan has developed growth strategies and alliances for companies in the mobile, networking, gaming, and software industries.

On the other hand, the overwhelming majority of the public commentary about AI is nonsense. No one actually doing work in the field today, at least no one we have spoken with, thinks we are on the cusp of artificial general intelligence (AGI). Maybe we are just one breakthrough away, but we cannot find anyone who really believes that is likely. Despite this, the general media is filled with stories that conflate generative AI and AGI, along with every kind of wild, unfounded opinion on what it all means.

Setting aside all the noise, and there is a lot of noise, what we have seen over the past year is the rise of transformer-based neural networks. We have been using probabilistic systems in compute for years; transformers are simply a better, or at least more economical, method for performing that compute.
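For readers who want a concrete sense of what that compute actually is, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer. The dimensions, weights, and inputs are toy values invented for illustration; a real model stacks many such layers with learned weights.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a sequence of token embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project tokens into queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # how relevant each token is to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: a probability distribution per token
    return weights @ V                         # each output vector is a weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                        # toy sizes: 4 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (4, 8): one new vector per token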

This is important because it opens up the problem space that we can tackle with our computers. So far this has largely fallen in the realm of natural language processing and image manipulation. These are important, sometimes even useful, but they apply to what is still a fairly small piece of user experience and applications. Computers that can efficiently process human language will be very useful, but that does not equate to some kind of universal compute breakthrough.

This does not mean that "AI" only provides a small amount of value, but it does mean that much of that value will come in ways that are fairly mundane. We think this value should be broken into two buckets – generative AI experiences and low-level improvements in software.

Take the latter – improvements in software. This sounds boring – it is – but that does not mean it is unimportant. Every major software and Internet company today is bringing transformers into their stacks. For the most part, this will go totally unnoticed by users.

Security companies can make their products a little bit better at detecting threats. CRM systems may get a little better at matching user requests to useful results. Chip companies will improve processor branch prediction by some amount. All of these are tiny gains – 10% or 20% boosts in performance, or reductions in cost. And that is fine; compounded across all the software out there, those gains still add up to tremendous value. For the moment, we think the vast bulk of "AI" gains will come in these unremarkable but useful forms.
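The compounding arithmetic is worth spelling out. The layer names and percentages below are hypothetical, picked only to show how several modest gains stack into a large one:

# Illustrative only: hypothetical 10-20% gains at a few layers of a stack
layer_gains = {
    "branch prediction": 0.10,
    "query matching": 0.15,
    "threat detection": 0.20,
}

total = 1.0
for layer, gain in layer_gains.items():
    total *= 1.0 + gain        # gains multiply when layered

print(f"Compounded end-to-end gain: {total - 1.0:.0%}")   # ~52% from three "tiny" wins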

Generative AI may turn out to be more significant. Maybe. Part of the problem we have today with this field is that much of the tech industry is waiting to see what everyone else will do on this front.

In all their recent public commentary, every major processor company has pointed to Microsoft's upcoming AI update as a major catalyst for adoption of AI semis. We imagine Microsoft may have some really cool features to add to MS Word, PowerPoint and Visual Basic. Sure, go ahead and impress us with AI Excel. But that is a lot of hope to hang on a single company, especially a company like Microsoft that is not well known for delivering great user interfaces.

For their part, Google seems to be a deer in the headlights when it comes to transformers – ironic, given that Google invented them. When it comes down to it, everyone is really waiting for Apple to show us all how to do it right. So far, they have been noticeably quiet about generative AI. Maybe they are as confused as everyone else, or maybe they just do not see the utility yet.

Apple has had neural processors in their phones for years, and they were very quick to add transformer support to their M-series CPUs. It does not seem right to say they are falling behind in AI when maybe they are just lying in wait.

Taking this back to semiconductors, it may be tempting to build big expectations and elaborate scenarios of all the ways in which AI will drive new business. Hence the growing amount of commentary about AI PCs and the market for inference semiconductors. We are not convinced; it is not clear that any of those companies will really be able to build massive markets in these areas.

Instead, we tend to see the advent of transformer-based AI systems in much simpler terms. The rise of transformers largely seems to mean a transfer of influence and value-capture to Nvidia at the expense of Intel in the data center. AMD can carve out its share of this transfer, and maybe Intel can stage the comeback-of-all-comebacks, but for the foreseeable future there is no need to complicate things.

That said, maybe we are getting this all wrong. Maybe there are big gains just hovering out there – some major breakthrough from a research lab or a deca-unicorn, pre-product startup. We will not rule out that possibility. Our point here is just that we are already seeing meaningful gains from transformers and other AI systems. All those below-the-fold improvements in software are already significant, and we should not agonize while waiting for the emergence of something even bigger.

Some would argue that AI is a fad, the next bubble waiting to burst. We are more upbeat than that, but it is worth thinking through what the downside case for AI semis might look like...

We are fairly optimistic about the prospects for AI, albeit in some decidedly mundane places. But we are still in the early days of this transition, with many unknowns. We are aware that there is a strain of thinking among some investors that we are in an "AI bubble"; the hard form of that thesis holds that AI is just a passing fad, and that once the bubble deflates the semis market will revert to the status quo of two years ago.

Somewhere between the extremes – "AI is so powerful it will end the human race" and "AI is a useless toy" – sits a much milder downside case for semiconductors.

As far as we can gauge right now, the consensus seems to hold that the market for AI semis will be modestly additive to overall demand. Companies will still need to spend billions on CPUs and traditional compute, but will now also need AI capabilities, necessitating the purchase of GPUs and accelerators.

At the heart of this case is the market for inference semis. As AI models percolate into widespread usage, the bulk of AI demand will fall in this area – the part that actually makes AI useful to users. There are a few variations within this case. Some CPU demand will disappear in the transition to AI, but not a large share. And investors can debate how much inference will run in the cloud versus at the edge, and who will pay for that capex. But this is essentially the base case: good for Nvidia, with lots of inference market left over for everyone else in a growing market.

The downside case really comes in two forms. The first centers on the size of that inference market. As we have mentioned a few times, it is not clear how much demand there is going to be for inference semis. The most glaring problem is at the edge. As much as users today seem taken with generative AI, willing to pay $20+/month for access to OpenAI's latest, the case for running that generative AI on-device is not clear.

People will pay for OpenAI, but will they really pay an extra dollar to run it on their device rather than in the cloud? How would they even be able to tell the difference? Admittedly, there are legitimate reasons why enterprises would not want to share their data and models with third parties, which would require on-device inference. On the other hand, this seems like a problem solved by a bunch of lawyers and a tightly worded license agreement, which is surely much more affordable than building up a bunch of GPU server racks (if you could even find any to buy).

All of which goes to say that companies like AMD, Intel, and Qualcomm, which are building big expectations for on-device AI, are going to struggle to charge a premium for their AI-ready processors. On their latest earnings call, Qualcomm's CEO framed the case for AI-ready Snapdragon as providing a positive uplift in mix shift, which is a polite way of saying limited price increases for a small subset of products.

The market for cloud inference should be much better, but even here there are questions about the size of the market. What if models shrink enough that they can be run fairly well on CPUs? This is technically possible – the preference for GPUs and accelerators is at heart an economic case – and if a few variables change, CPU inference is probably good enough for many workloads. That would be catastrophic, or at least very bad, for the expectations of all the processor makers.
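To make that economic case concrete, here is a deliberately crude sketch. Every figure in it – hourly hardware costs, token throughput – is a made-up assumption of ours, not measured data; the point is only how easily the conclusion flips when a few variables change:

# Back-of-envelope inference economics. All numbers are hypothetical.
def cost_per_million_tokens(hw_cost_per_hour, tokens_per_second):
    seconds = 1_000_000 / tokens_per_second
    return hw_cost_per_hour * seconds / 3600

gpu = cost_per_million_tokens(hw_cost_per_hour=4.00, tokens_per_second=2000)
cpu = cost_per_million_tokens(hw_cost_per_hour=0.50, tokens_per_second=150)
print(f"GPU: ${gpu:.2f} per 1M tokens")   # ~$0.56 under these assumptions
print(f"CPU: ${cpu:.2f} per 1M tokens")   # ~$0.93 – the GPU wins

# Shrink the model so CPU throughput triples, and the CPU wins instead:
cpu_small = cost_per_million_tokens(hw_cost_per_hour=0.50, tokens_per_second=450)
print(f"CPU, smaller model: ${cpu_small:.2f} per 1M tokens")   # ~$0.31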

Probably the scariest scenario is one in which generative AI fades as a consumer product – useful for programming and authoring catchy spam emails, but little else. This is the true bear case for Nvidia: not some nominal share gains by AMD, but a lack of compelling use cases. This is why we get nervous at the extent to which all the processor makers seem so dependent on Microsoft's upcoming Windows refresh to spark consumer interest in the category.

Ultimately, we think the market for AI semis will continue to grow, driving healthy demand across the industry. Probably not as much as some hope, but far from the worst case painted by the "AI is a fad" camp.

It will take a few more cycles to find the interesting use cases for AI, and there is no reason to think Microsoft is the only company that can innovate here. All of which places us firmly in the middle of expectations – long-term structural demand will grow, but there will be ups and downs before we get there, and probably no post-apocalyptic zombies to worry about.


 
IMO, AI is great in areas where it has made, and has the capability to make, significant advancements: namely, the medical field and materials science. However, if I have to verify its answers in other areas – it gets things wrong in programming, tries to tell people that Australia does not exist, and fumbles general knowledge – then I have absolutely no use for AI. To me, AI getting things wrong places AI squarely in the realm of FAD - and I define FAD as companies chasing profits by trying to be the first to introduce a product, or jumping on a product because "everyone else produces one, why don't we produce one, too?"

Maybe the media has over-emphasized the garbage that AI has produced without a corresponding emphasis on the good things that AI has produced, but that is what the media does these days. The media is out for clicks and often, IMO, throws caution to the wind.

But still, if I have to verify AI answers for accuracy, then how is it any different from any search engine? I cannot see wasting my time with "consumer" AI at this time. It's just not worth it to me, and I have no interest in making doctored images of any sort or the other trivial pursuits that AI enables.
 
As pointed out in the excellent video essay ‘AI doesn’t exist’ on YouTube… AI doesn’t exist. Pattern recognition algorithms trained on various types of data exist. ChatGPT is trained on human language. Various image amalgamation programs are trained on images. Etc. None of these programs have anything at all resembling intelligence. They are simply very good at assembling words into structures that look like sentences, assembling pixels into patterns resembling other pictures, etc.
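A toy illustration of that point: the sketch below "assembles words into structures that look like sentences" from nothing but observed word-pair statistics, with no understanding anywhere. The training text is made up.

import random

# Count which word follows which in a tiny made-up training text
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog").split()
followers = {}
for a, b in zip(corpus, corpus[1:]):
    followers.setdefault(a, []).append(b)

# Generate sentence-shaped text by repeatedly picking an observed follower
random.seed(1)
word, out = "the", ["the"]
for _ in range(10):
    word = random.choice(followers[word])
    out.append(word)
print(" ".join(out))   # plausible-looking, but no meaning or intent behind it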

Thus, the problem of the AIpocalypse is one not of AI, but of science communication, as well as of capitalism, with scammers' and grifters' eternal chase for the next hype train to pump up share prices and defraud people.

These tools (albeit more useful versions of them, trained on smaller, more specific datasets) have existed for around two decades. The current hype train isn’t due to a breakthrough in the type of applications where these tools are applicable. It is due to two flashy non-use cases (text and image generation, respectively) that have received the full force of hype from the people who previously brought us crypto, and who really needed something new to push…
 
AI is unsustainable because the only reason it was invented was to make it easier for parasites to leech off preexisting internet content. Once it becomes more obvious to creatives that their work is being pilfered by and for AI to benefit Big Tech platforms and side hustlers, they will stop posting stuff on the web – kind of like how, during the rise of listicles and Wikipedia, tons of webmasters pulled their websites in protest (like the editors of TruCrime Library).

How sustainable will AI be when artists, writers, photographers, and other contributors either yank or paywall their content? Remember, AI is subject to GIGO: garbage in, garbage out. Short of straight-up copyright theft, at some point there will be practically nothing really valuable left for AI to pilfer.
 
Except it's not actually AI; it is neither self-aware nor independently intelligent. It is a program limited by the if-this-then-that parameters of the programmer. Even an ant has more comprehension of life.


 
I feel that 99% of the public really does not understand what these technologies are, and they are just either spouting nonsense or repeating something they read elsewhere. Today, generative AI is a game-changing tool for certain tasks. The most obvious ones (for me) are helping to write code, correcting grammar in your writing, summarizing complex articles/papers either from input or from its learned internet-based model, and now generating images to convey concepts/ideas.

I think that most people just don't have a need for the tasks that generative AI excels at. These are the tasks that I find invaluable at work, but not in my home life. Most people just play around with ChatGPT with no actual purpose, and thus don't see the value. The toy cases are "neat," but translating that into something useful takes practice. I could write an entire article on just one of the use cases that I find valuable.

Yes, generative AI is fallible, and I need to ALWAYS fix its output. That does not mean it is not useful. For me, in some cases, it can do 80-90% of the boring work (coding, sentence filler) in a fraction of the time that I could. It allows me to focus on the important items where I, as a human, am much better, and my intelligence is of value. I don't need to spend my hours writing all of the annoying syntax for C++; I want to write the algorithm. I don't want to write out the extra text to make perfect sentences and paragraphs; I want to focus on the ideas.

The true value of generative AI is yet to be seen outside the basic use case for the web-style ChatGPT generative AI. There is significant power in the ability to "automatically" convert (aka transform) knowledge from one "language" to another. It allows "linking" of technologies together in a way that we could only dream about before. Is there a killer app for generative AI? I don't know. However, I do know that it will be a critical technology in whatever does come next.

AGI is another discussion for another day. Generative AI is not AGI. It may be a critical component or springboard, but I think true AGI is still a ways off. That, however, does not diminish the power of generative AI now and in the future.
 
You really shouldn’t use it for summaries of papers. There’s absolutely no way to guarantee that the summary actually matches the contents of the paper without errors, and those errors may be significant. Better to just read the author’s summary if you don’t have time to read the whole thing…
 
You really shouldn’t use it for summaries of papers. There’s absolutely no way to guarantee that the summary actually matches the contents of the paper without errors, and those errors may be significant. Better to just read the author’s summary if you don’t have time to read the whole thing…
Yeah, this goes into the 80% where ChatGPT is helpful and the 20% where I still need to work. You need to take everything from ChatGPT with a grain of salt, and you must verify it. That does not negate the value of ChatGPT as a tool. You just need to work with it enough to learn where it excels and where it does not. It does not replace human thought and reasoning.

I can't say enough that it is a tool. Treat it like one, just like spell checking and grammar checking in Word.

Does it write perfect code? Nope. I still need to iteratively work with it to get things just right, but I can achieve more work in less time. That is a win.

Do I always agree with its critique of my writing and the modifications it makes? Nope. I always, always need to go back and edit to get things exactly right, but it really does a good job cleaning up grammar, sentence structure/flow, and negative/harsh tone. Again, when I use it, I achieve substantially better writing, much faster.

 
I think the current AI is good as a better search "engine" and as a producer of things that are not critical and have human oversight. It's a new set of tools for the toolkit, not a mechanic.
 
You really shouldn’t use it for summaries of papers. There’s absolutely no way to guarantee that the summary actually matches the contents of the paper without errors, and those errors may be significant. Better to just read the author’s summary if you don’t have time to read the whole thing…
Depends how deep of a summary you want. And you can always fine-tune your prompt to instruct it not to miss any details. ChatGPT 4 is much better at providing what you want than 3.5, although on some days it feels like OpenAI has swapped the engine behind the scenes, because it spits out text very fast but the quality is lower than it used to be, forcing me to improve my prompts so it does not ignore what was said just a few messages earlier.
 
I’m actually more worried about what targeted narrow AI can do.
I'm increasingly worried about the misuse of AI on social media, especially by groups like the Russian Internet Research Agency, or "troll factories." These used to hire young people, paying them to sway public opinion against most Western leaders and to boost far-right populist politicians, polarizing opinion to weaken those countries, on platforms like YouTube, Twitter, TikTok, Facebook, and Telegram.

My Russian father-in-law, who has lived outside of Russia since 1967 but is still deeply influenced by Russian media, finds it hard to recognize Russian state-sponsored fake narratives for what they are. He's no exception: as far as I can tell from channels like 1420 on YouTube and from Levada Center studies, 75% of Russians hold the same opinions he does. Despite my efforts, my father-in-law is convinced by these trumped-up narratives – about Ukraine's impending attack on Russia in 2022, which Russia supposedly preemptively avoided; about US nuclear warheads in Ukraine; about Ukraine being run by Nazis; about the US directly controlling politics in Ukraine and the EU; about Ukrainians bombing their own people in Donbass for 8 years. He really is struggling to tell AI-generated disinformation apart from actual journalism on YouTube.

This issue is exacerbated by the constant stream of Russian propaganda from official channels and by the rise of new, anonymous, AI-powered YouTube channels that spread the same messages, reinforcing his beliefs about supposed Nazis in Ukraine, US interference in Ukraine and the EU, etc. Although the IRA, known for spreading fake news on Western social media, was officially shut down in July, I believe its operations have merely shifted shape and continue under a different guise.
 
Thank you for the article – it really gives a reasonable perspective on the current AI frenzy.

Nvidia's stock reached a low of $110/share back in the market lows of October 2022... it's now an astonishing $725/share, all of it on talk of AI madness. Is it just a big bubble? Time will tell, but ChatGPT was the app that drove this AI frenzy... was it really that impressive? I tried it as a teacher to see what my students have access to, and realized it will literally make things up if it doesn't know the answer. Not very impressive, and it really reined in my expectations.
 