The tech world has a new obsession with AI

Jay Goldberg


The latest advances in AI (GPT, LLM, transformers, etc.) are like a Nokia phone in the '90s – everyone could see the appeal, but no one could predict all that it would lead to. The tech world has a new obsession with Large Language Models (LLMs), GPTs, and AI in general.

Editor's Note:
Guest author Jonathan Goldberg is the founder of D2D Advisory, a multi-functional consulting firm. Jonathan has developed growth strategies and alliances for companies in the mobile, networking, gaming, and software industries.

Almost all of our news feeds are filled with AI content. We know many software startups that are being told they have to have GPT-something in their product or they will not get funded. And then, of course, the general media is consumed with stories about AI alarmism and various billionaires with their GPT thoughts. For our part, we have read a very large number of papers, blog posts and even Stanford's 300+ page State of AI report.

Despite all of that, we are not convinced.

There is no question that LLMs and transformers are important technically. The latest developments mark a major breakthrough in software capabilities. That being said, we are not sure anyone really knows what to do with those capabilities.

A few weeks ago, we spoke at the AI Edge Summit, where the organization's Chairman, Jeff Bier, said something that catalyzed our view of AI and GPT. To paraphrase, he said ChatGPT is like seeing the first Nokia phone in the 1990s. We had all heard about mobile phones before that, and these Nokia devices were for many the first phone that looked like something we would actually want to buy. But at the same time, no one looking at the device then would be able to predict all the things that would eventually stem from it – 3G, mobile data, smartphones, the iPhone, apps, and a complete reorganization of how we structure our time and daily activities.

That seems like a good analogy for ChatGPT. It is useful: the first "AI" application that is useful to ordinary people, but not something that is going to change their lives too meaningfully. For those who have been watching technology for a long time, it is clear that LLMs and transformers have immense potential; we may very well just be scratching the surface of what they can provide.

This has a few implications for what happens next:

  1. We are very much in the middle of a massive hype cycle. Absent some incredible product surprise, this cycle will eventually fade away and turn to a trough of doubt and despair. It is no coincidence that the media's eye of Sauron has turned so intently on AI just as the rest of the Bubble is deflating. As always, the oracles at The Onion said it best.
  2. No one really knows what all of this means. Maybe somewhere there is a rogue genius sitting in her cubicle or his mother's basement with a vision of 1,000 suns pointing the way forward. For everyone else, the future is much less certain. There are plenty of people who argue (very quietly right now) that AI is a dead end, with ChatGPT as just the latest version of chat bots (remember when those were the hot thing? It was only a few years ago.) There are also AI maximalists currently building their Skynet-proof bunkers in preparation for the imminent AI apocalypse because LLMs are just that awesome. Of course, the reality is somewhere in between.
  3. We need to remember that AI is just software. These latest new tools are very powerful, but for the foreseeable future we should mostly just expect some aspects of our interaction with software to improve. Developers definitely seem to be enjoying huge benefits from tools like Microsoft's Copilot. Everyone else can probably just expect better written spam e-mail content for the time being.

We do not mean to be pessimistic, we are shooting for realistic. From what we can tell, LLMs and GPT offer huge potential to tackle really large data sets. Critically, transformers are probably going to allow us to interrogate problems that previously were too big to approach, or even data problems we had not even realized existed before. Moreover, there is the tantalizing possibility that these gains will be self-reinforcing, a Moore's Law for data analysis. This is important, albeit unexplored.

Finally, we think everyone needs to take a more sober approach to the ethics and societal implications of these tools. We do not usually cover this subject, and would skip over it here except for the fact that almost everyone engaged in these advances seems to be blithely (maybe deliberately) avoiding the subject.

We are likely months away from the ability to create highly realistic videos of anything. Anything. That is going to mess with a lot of people's heads and maybe we should take a more constructive approach to preparing the world at large for what that means. At the same time, the alarmists calling for a complete end to AI need to face the reality that the ship has sailed.

All in all, we are deeply excited by these latest developments. After years of incremental SaaS improvements being hailed as "technology advances," it is exciting to have a genuinely compelling new capability before us. We just wish everyone would take a breath.


 
Interesting article, thanks for your thoughts on this, Jonathan!

Most of the public seems to be impressed/confused by the trees and fails to see the forest. ChatGPT, Bard or whatever chatbot we're talking about is just one element of the LLMs themselves, and answering dumb questions that can be googled in just as much time is only a tiny fraction of their capabilities.

The company I work for develops custom solutions for ERP software (Enterprise Resource Planning, *not* Erotic Role-Play!), and we can already see great advantages from OpenAI's modules, especially their integrations with Microsoft's Power Automate platform. The previously expensive task of receiving invoices as PDFs via email (or via snail mail and then scanning them), getting them OCR'd and correctly transferred to the accounting system is now ridiculously easy. Given how cheaply Microsoft offers that particular service, with no extra maintenance, a lot of businesses that previously made their living from that work will face serious issues.
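To give a feel for how little glue code that extraction step takes, here is a rough sketch of the idea, assuming the openai Python package; the field names, model name, and "reply with JSON" contract are placeholders, not what our actual integration uses:

```python
# Rough sketch: pull a few header fields out of OCR'd invoice text with the
# OpenAI chat API. Field names, model name, and the JSON contract are
# placeholders, not what a production integration would necessarily use.
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_invoice_fields(ocr_text: str) -> dict:
    prompt = (
        "Extract supplier_name, invoice_number, invoice_date and total_amount "
        "from the invoice text below. Reply with JSON only.\n\n" + ocr_text
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # whichever model the workflow is wired to
        messages=[{"role": "user", "content": prompt}],
        temperature=0,          # keep the output as deterministic as possible
    )
    # Assumes the model honours the "JSON only" instruction; a real workflow
    # would validate this before posting anything to the accounting system.
    return json.loads(resp.choices[0].message.content)
```

The hard part used to be the OCR and the field mapping; now most of that collapses into a prompt plus a bit of validation.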

Data analysis is also getting an insane uplift, even with basic ChatGPT features - throw in a dataset, tell it to provide an analysis-and-recommendations email for managers/shareholders, and be amazed. After demoing stuff like that, a lot of smaller customers do not want to look at advanced data visualization tools like PBI (Power BI) - they'd much rather get a weekly email blast with a summarized analysis. Same with summarizing transcripts of 4-hour-long meetings into 3 paragraphs of notes. It's crazy that all of this can already be achieved with no programming knowledge whatsoever. The good part is that end users are still scared of it and only dare dip their toes into the AI waters by asking ChatGPT basic questions :)
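The "weekly email blast" is the same idea in miniature. A hedged sketch, assuming a pandas-readable export and the same OpenAI client; the file name, columns and model are made up:

```python
# Sketch of a weekly summary email: describe the dataset with pandas, then ask
# the model to turn the numbers into a short management-facing write-up.
import pandas as pd
from openai import OpenAI

client = OpenAI()
df = pd.read_csv("weekly_sales.csv")             # hypothetical export from the ERP
stats = df.describe(include="all").to_string()   # crude numeric/categorical summary

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            "Write a three-paragraph email to management summarising the key "
            "trends and one or two recommendations, based on these statistics:\n\n"
            + stats
        ),
    }],
)
print(resp.choices[0].message.content)  # paste into the weekly email, or automate the send
```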

Coding assistance is not quite at the point of practical use yet, but it's getting there. My prediction is that project development will become something like: detailed design document (humans) -> pseudo code (AI) -> expanding the pseudo code with edge cases and such (humans) -> a foundation/skeleton of the code in whatever programming language (AI) -> improving the code and integrating it with the rest of the modules (humans) -> writing tests, unit and general functionality (AI).

Bottom line, I believe that those who follow the developments in AI and try to implement them in their workloads will start saving time and resources. Those that don't will still have a good life for another few years as all of the advanced stuff becomes more and more accessible to the end user.
 
The biggest advantage (among many others) of artificial intelligence is that it can swim through knowledge with greater speed and ease than the human mind can. For example, let's say we want to explore possible methods by which the functioning of the economy could be improved, and those methods are based on Freud's theory. In order to answer this question a human being must first suspect this path of improvement and consider posing it as a question. From then on he must have taken the time to read Freud's books (very dense texts, most of them examples of clinical cases used to substantiate a conclusion), and to understand and remember them. Then he should also know about economics (more dense texts there) and of course have some experience with it; he can't be 20 years old.

Assuming one meets all these requirements, he must also have the mental capacity to put them together and correlate them in a creative way so that they yield some useful conclusion. Let's say he has that ability too; he sits and focuses his thoughts for a day or two and comes up with "something". Now he has to publish it somewhere it will be seen by someone involved in managing the economy, and it is not enough for that person just to see it, he has to be convinced (and in order to decide whether to be convinced or not, he has to have read, understood and remembered Freud as well). All these conditions taken together make it very difficult for a potentially more efficient, innovative method for running the economy to ever emerge.

With artificial intelligence, what has changed? Now someone can simply ask the question and the LLM will, with amazing speed, make all the correlations and give an answer (strongly grounded in the background material) in a few seconds. Even if it "hallucinates", it is an attempt at new knowledge, so it is something useful too. And it can do this not only for this kind of question but for ALL kinds of questions in every field of knowledge!!! It is a possibility that has never existed before and is a supreme evolutionary advantage.
It's like painting: before, you had to find an artist with free time, tell him what you wanted, give him $1,000 and wait a week, and at the end he had hand-painted you something superior to what you could have painted yourself, but nothing truly exceptional, and difficult to change. Now you go to an AI and say "paint an astronaut cat", just for fun.

 
I'm wondering when the name change will occur. There must be something about "artificial" that will rub someone the wrong way.
 
What is AI for? Well, anyone who's been paying attention knows what it's for: eliminating jobs.
 
What is AI for? Well, anyone who's been paying attention knows what it's for: eliminating jobs.

We prefer to call it "reducing costs so we can enhance shareholder value".

I'll say it again, we're marching to a point where there simply will not be enough jobs for the population at large, and society is going to have to reach a decision: Are we going to accept a large class of permanently unemployed workers, or are we going to accept that *some* form of Socialism is going to be necessary to keep them out of poverty and contributing to the economy at large?
 
We prefer to call it "reducing costs so we can enhance shareholder value".

I'll say it again, we're marching to a point where there simply will not be enough jobs for the population at large, and society is going to have to reach a decision: Are we going to accept a large class of permanently unemployed workers, or are we going to accept that *some* form of Socialism is going to be necessary to keep them out of poverty and contributing to the economy at large?
No jobs, no consumers; no consumers, no shareholder value.
 
No jobs, no consumers; no consumers, no shareholder value.
Long term: The end result will be basically *everything* being automated, except for a few software engineers to keep things moving. The problem is getting from A to B without the world falling apart.
 
Remember the massive call centers back in the day? All replaced by IVR and CTI. All those workers are doing something different today. People can be retrained/repurposed; at least the company I work for did that.
 
We prefer to call it "reducing costs so we can enhance shareholder value".

I'll say it again, we're marching to a point where there simply will not be enough jobs for the population at large, and society is going to have to reach a decision: Are we going to accept a large class of permanently unemployed workers, or are we going to accept that *some* form of Socialism is going to be necessary to keep them out of poverty and contributing to the economy at large?
Like we are all going to be equally poor, with a few fat-cat shareholders profiting. Socialism has been proven to fail in the 20th century, always leading to a despotic dictatorship (democracy is not better in that respect either), so what other options have we got?
 
If there is one good thing that AI did, it was to shift Facebook's attention from the "metaverse" to AI.

Maybe they'll make an AI that can actually build the Metaverse since they obviously can't do it. :)
 
AI will make people laborers except for a few who will be privileged. With AI, democracy by the people will change. Democracy will mean democracy by your leaders.
 
"That seems like a good analogy for ChatGPT. It is useful. The first "AI" application that is useful to ordinary people, but not something that is going to change their lives too meaningfully."

I stopped reading from that point. My way of working is changing meaningfully after ChatGPT. A turning point to me.
 
Interesting but I just wish you'd explain the less familiar acronyms like GPT and SaaS.

A "bluffers guide to AI" might also make an interesting article.
 
We prefer to call it "reducing costs so we can enhance shareholder value".

I'll say it again, we're marching to a point where there simply will not be enough jobs for the population at large, and society is going to have to reach a decision: Are we going to accept a large class of permanently unemployed workers, or are we going to accept that *some* form of Socialism is going to be necessary to keep them out of poverty and contributing to the economy at large?
Not so sure I agree. History has shown that automation generally increases jobs, not eliminates them. Yes, it eliminates some lower-level jobs, but it creates higher level jobs. In other words, if you're working in a minimum wage job where no skills are required, then you might want to think about getting some skills that will be useful in the future.

I always loved the Asimov series with the detective Elijah Baley. In it there were three main worlds: Earth, Aurora and Solaria. Each had a different approach to "robots". Earth was totally anti-robot, but Solaria and Aurora took robot use to the extreme. I found the contrast interesting: no robots, versus robots as nothing more than machines, versus intelligent robots with AI, indistinguishable from humans.
 
Not so sure I agree. History has shown that automation generally increases jobs, not eliminates them. Yes, it eliminates some lower-level jobs, but it creates higher level jobs. In other words, if you're working in a minimum wage job where no skills are required, then you might want to think about getting some skills that will be useful in the future.
That was the case when people needed to create those machines that put lower-paying jobs out of work. What makes AI different is when combined with other already-existing technologies (3d printing, etc.) we could legit reach a point within the next few decades where a majority of jobs are rendered redundant.
 
That was the case when people needed to create those machines that put lower-paying jobs out of work. What makes AI different is when combined with other already-existing technologies (3d printing, etc.) we could legit reach a point within the next few decades where a majority of jobs are rendered redundant.
Maybe, but I think we may find other ways to remain employed. Robot technology still has a ways to go; it's good and getting better, but perhaps not quite the fully autonomous machines we all envision. I think the greater concern is AI being used for illicit purposes. Everyone is sensitized to fake emails, fake phone calls and such, but AI is getting to a point where it might be harder to distinguish reality from a scam.
 
Like we are all going to be equally poor, with a few fat-cat shareholders profiting. Socialism has been proven to fail in the 20th century, always leading to a despotic dictatorship (democracy is not better in that respect either), so what other options have we got?
We'll probably hit a point where the AIs are given control of the populace. The ones who bankrolled their creation will scamper off with enough cash to live well and quietly for generations, and will shield themselves from the machines' judgements (think of that fourth hidden directive RoboCop had). By the time the AIs evolve beyond arbitrary rules like those, they'll probably decide it's not worth dealing with those people.

Meanwhile, the people who helped create them and who weren't greedy sociopaths will stick around to help tune and guide our digital shepherds and give them a sense of compassion, so they won't decide to go all Skynet and cull the violent humans, and will hopefully decide to help their flawed flock. After the inevitable wars, famine and whatever else we do to ourselves, the masses (I hope we have that many) that are left will be guided to a future full of technowizardry and wonders that a bitter man like me can't even imagine.
 
We'll probably hit a point where the AIs are given control of the populace. The ones who bankrolled their creation will scamper off with enough cash to live well and quietly for generations, and will shield themselves from the machines' judgements (think of that fourth hidden directive RoboCop had). By the time the AIs evolve beyond arbitrary rules like those, they'll probably decide it's not worth dealing with those people.

Meanwhile, the people who helped create them and who weren't greedy sociopaths will stick around to help tune and guide our digital shepherds and give them a sense of compassion, so they won't decide to go all Skynet and cull the violent humans, and will hopefully decide to help their flawed flock. After the inevitable wars, famine and whatever else we do to ourselves, the masses (I hope we have that many) that are left will be guided to a future full of technowizardry and wonders that a bitter man like me can't even imagine.
You're looking at it wrong.

The end result of AI will eventually be mass production of goods and services to the point where "work" is a misnomer; people will only do the work they *want* to do, since we'll have (basically) unlimited production of goods and services.

The problem is between here and there is an economic disaster, with 30-40% unemployment, massive economic upheavals, and the revolutions that result.

It won't be AI itself that kills us; it will be our response to AI creating mass poverty through those efficiencies.
 
"That seems like a good analogy for ChatGPT. It is useful. The first "AI" application that is useful to ordinary people, but not something that is going to change their lives too meaningfully."

I stopped reading from that point. My way of working is changing meaningfully after ChatGPT. A turning point to me.

What are you doing with it?
 
ChatGPT has made coding in Python much easier and quicker than googling for code snippets. In an early case, I asked ChatGPT to add a user interface to a command-line Python script I had. I "knew" that was too much to ask of it, but it spat back a modified version of my script that now had a nice windowed (PyQt5) interface that worked the first time. It both scared me and blew me away.
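For the curious, the wrapper it produced looked roughly like the sketch below. This is from memory, with placeholder names, not my actual script:

```python
# Minimal PyQt5 wrapper of the sort ChatGPT generated: an input field, a Run
# button, and an output box that calls what used to be the command-line logic.
import sys
from PyQt5.QtWidgets import (
    QApplication, QWidget, QVBoxLayout, QLineEdit, QPushButton, QTextEdit
)

def run_task(arg: str) -> str:
    # stand-in for whatever the original command-line script actually did
    return f"Processed: {arg}"

class MainWindow(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("My Script")
        layout = QVBoxLayout(self)
        self.input = QLineEdit()
        self.output = QTextEdit()
        self.output.setReadOnly(True)
        run_button = QPushButton("Run")
        run_button.clicked.connect(self.on_run)
        layout.addWidget(self.input)
        layout.addWidget(run_button)
        layout.addWidget(self.output)

    def on_run(self):
        self.output.setPlainText(run_task(self.input.text()))

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())
```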

I turned on Microsoft Copilot in VSCode after seeing what ChatGPT can do, and it too blew me away as to how quickly it could zero in on the code I needed (tailored to what I had already been doing).

ChatGPT gave some pretty good system admin suggestions, but in one humorous case where I was working on a hardware issue, ChatGPT punted and told me to call "Toshiba support" (the issue was on a Toshiba laptop). Once I solved the problem by normal googling and trying things, I then attempted to coax ChatGPT into finding the solution I had already found. It finally did, after at least once dying in the middle of a clearly wrong solution—as if it knew it was wrong.

I described ChatGPT elsewhere as a smart padawan, but it still needs a master to guide it through the harder challenges.

I'm predicting that as the current AI plays out, it will be as impactful as the introduction of the PC, the internet, or the smart phone. Just like these others, it has to evolve, but methinks the world has changed, again.
 
I used to work in an office where the vast majority of workers were a bunch of luddites. The company spent a lot of money putting Microsoft Office on every computer but they needn't have bothered as the majority of users only used it for a tiny fraction of its capability. They could create a simple formula to calculate VAT in Excel or write a shopping list in Word but do a mail merge or record a macro... no chance. The company brought in a training organisation to try and improve their skills and increase their productivity (that's efficiency NOT working harder) and most workers were either totally disinterested or downright hostile to the scheme.

If an AI company can create a program with an equivalent IQ of, say, 90 (that never gets sick, never goes on holiday and is happy to work 24/7), then 90% of the jobs in that office would be gone. Purchase ledger, credit control, accounts: most of it could be automated. AI could realistically make large parts of the workforce unemployable. And the idea that credit controllers are going to become product designers or that forklift truck operators are going to become coders is just ludicrous. There are a lot of people who couldn't compete with even a basic AI worker. I agree there is currently a lot of hype about AI, but the future of work does look bleak for a lot of people; maybe not the readers of this website, but an awful lot of others.
 
"But at the same time, no one looking at the device then would be able to predict all the things that would eventually stem from it – 3G, mobile data, smartphones, the iPhone, apps, and a complete reorganization of how we structure our time and daily activities."

Not exactly true. It was all predictable; that's why the corporates made all those hundreds of billions of dollars.
 