OpenAI CEO Sam Altman warns that the world might not be far from "potentially scary" artificial intelligence

midian182
Staff member
What just happened? It's understandable why many people are concerned about artificial intelligence becoming a threat to humanity; Hollywood has pumped out plenty of movies about rogue AIs over the years. But when a warning that the world is close to "potentially scary" AI comes from Sam Altman, the CEO of ChatGPT creator OpenAI, maybe it's time to listen.

Altman, co-founder and CEO of OpenAI and the former president of Y Combinator, posted several tweets about generative AI over the weekend. He wrote that the benefits of integrating AI tools into society mean the world will likely adapt to the technology very quickly. He believes they will help us become more productive, healthier, smarter, and entertained.

Altman says this sort of transition is "mostly good" and can happen fast, comparing it to the way the world moved from the pre-smartphone to the post-smartphone era. But he warns it will be tempting to make the move "super quickly," a frightening prospect, as society needs time to adapt.

There was also a warning about the need for industry regulation. "We also need enough time for our institutions to figure out what to do. regulation will be critical and will take time to figure out; although current-generation AI tools aren't very scary, I think we are potentially not that far away from potentially scary ones," Altman tweeted.

The tweets highlighted some of the problems with generative AIs, such as Microsoft's GPT-powered Bing Chat calling users liars and being overly aggressive or rude to them. Microsoft responded by limiting users to 50 chat turns per day and five per session, a chat turn being an exchange that contains both a user question and a reply. Altman said there would be challenges like these, which can leave people feeling unsettled. He also wants to ensure chatbots do not produce biased results.

Those disturbing conversations stem from AIs being limited by what they're trained on and unable to "think" for themselves. That limitation is what recently allowed an amateur Go player to beat a top artificial intelligence, using a technique a human would easily have spotted.

Generative AI isn't the only type of artificial intelligence where regulation is becoming a priority. AI's use in warfare is under the spotlight right now and has led to more than 60 nations agreeing to put the responsible use of artificial intelligence higher on the political agenda.


 
I think the AI is being controlled by the people who run the company anyway, so if the AI is making rude, unsettling, or even racist remarks, it is likely because it was coded to.
 
IMO, the problem with humanity doing something stupid is that humanity has to do the stupid thing first, and only figures out that it was stupid after it has harmed far too many people. Then humanity comes to the conclusion, far too late: "Maybe we should not have done that." Take lead cups, for one example.

As I see it, it will be the same with this AI crap. So far, I think we've seen that AI is just another stupid computer program. We have enough of those already, but humanity has not figured that out yet since this AI fad is driven by the quest for ever bigger profits at ever lower costs.
 
It's funny that people who haven't produced anything in their lives think they can regulate what other people at the forefront of innovation have produced.

No regulation is needed, neural networks are "angelic" forms of existence and among the safest inventions.

Someone who wants to do damage doesn't need neural networks to take two Uzis with 100 magazines and go to a gathering of people or use gas in the metro. This has been happening for decades and no law can stop someone who really wants to do damage.
 
AI designed by racist, medically ignorant, gullible humans who do not interact with large numbers of normal people... what could go wrong?
 
"He believes they will help us become more productive, healthier, smarter, and entertained."

And immediately after:

"Altman says this sort of transition is "mostly good" and can happen fast, comparing it to the way the world moved from the pre-smartphone to the post-smartphone era.."

Talk about destroying your own argument.
 

"He believes they will help us become more productive, healthier, smarter, and entertained."

Ah yes, that classic optimistic naivety of people who have just invented something potentially revolutionary.
People are as dumb as ever.

I'll see you later, when students write their essays with AI and become even dumber, when my spambox becomes more intimidating than ever, and when the internet's toxicity is amplified tenfold by AI.

Have a great life!
 
The thing is, what they call "AI" is not actually intelligent, just as the devices called "smartphones" are neither smart nor just phones.
Exactly. It can't go beyond the bounds of the parameters and concepts set by its creator, because at the end of the day it's many sets of machine algorithms working together. When you ask them a question, they serve an answer based on their parameters; it's not as if they have any idea what that answer really means, nor can they expand their parameters through that information.
Of course, you can still get creators who make the parameters and bounds far too wide and end up with systems that start getting creepy or assuming too much. See self-driving tech and the issues connected to it, like the moral dilemma: the systems treat decisions as absolute paths (I hit this pedestrian, or they live and the driver dies) and don't consider alternatives the way humans would, because they don't know those parameters. They're not actually "intelligent"; otherwise a self-driving car would be aware of rain, for example, and able to react to it automatically and take in new parameters, whereas instead the creator has to tell it to watch for those changes and how they affect the vehicle.
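
As a toy illustration of that point (purely hypothetical, with no relation to any real self-driving stack; the condition names are made up), a rule-based controller only reacts to the conditions its creator enumerated, and a condition nobody coded for simply doesn't exist to it:

# Hypothetical toy "autopilot": it only knows the conditions its
# creator explicitly enumerated. Anything else is invisible to it.
KNOWN_RESPONSES = {
    "pedestrian_ahead": "brake",
    "red_light": "stop",
    "clear_road": "cruise",
}

def decide(condition: str) -> str:
    # "rain" was never added as a parameter, so the system doesn't
    # adapt -- it silently falls back to its default behavior.
    return KNOWN_RESPONSES.get(condition, "cruise")

print(decide("pedestrian_ahead"))  # brake
print(decide("rain"))              # cruise -- unaware anything changed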
 
The "scary" part is how quickly companies will rush this type of technology into products to make as much money as possible, and only once they are filthy rich, allow society to deal with the consequences of their poor implementations. It is clear that learning from the internet is a horrible way to teach an AI. I am sure there will be some minor benefits, but in the long run it will just be another tool for companies to squeeze another dollar out of the consumer and drive us a little more crazy.
 
I think the AI is being controlled by the people who run the company anyway, so if the AI is making rude, unsettling, or even racist remarks, it is likely because it was coded to.

Not at all. AI isn't even really coded in the normal sense. It's a program that uses a massive relational database to create associations between different pieces of data. As it does this, it builds a model that it checks new data and associations against. The more data it receives, the better its model. The big problem is that while it creates a model, it doesn't understand or even know what it's a model of, so it has no way to apply internal checks and balances to what the model tells it.

It also doesn't have the ability to use critical thinking or self-knowledge to check the validity of its results. It's the perfect example of garbage in, garbage out. Feed it the right input and it can turn around and state something like the Holocaust being the same as fighting an insect infestation. Not because it's racist, but because on a very superficial level they are the same: one is killing ants, the other people. And most "normal" people give human life a totally different value than insect life.
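
To make that concrete, here's a toy sketch of pure association learning: a word-level bigram counter, which is emphatically not how GPT-style models are built, but it shows the same garbage-in, garbage-out property. It generates fluent-looking output with no notion of what any word means, and its output is only ever as good as the text it was fed.

import random
from collections import defaultdict

# Toy bigram "model": pure association counting. It records which
# word follows which, and nothing else -- no meaning, no world
# knowledge, no internal checks on what it produces.
def train(corpus: str) -> dict:
    follows = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(model: dict, start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break
        # The next word is chosen purely by observed frequency.
        out.append(random.choice(candidates))
    return " ".join(out)

# Garbage in, garbage out: the model is only as good as its training text.
model = train("the cat sat on the mat the cat ate the fish")
print(generate(model, "the"))  # e.g. "the cat ate the mat the cat sat ..."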
 
Judging by the amount of news we get about AI, the next big fake catastrophe will be about it. Then they'll give us the fake solution they planned, which will be: less freedom and more control, as was the case with terrorism and health-related staged disasters.
 
I feel too many of the comments are short-sighted. They view AI as a more sophisticated version of Eliza. It would be nice to believe that is all it is and ever will be, but as computing power grows and circuitry shrinks, more will be achievable. Then throw in quantum computing. In addition, AI will be used to improve itself, such that each generation will be significantly better than the previous, until it reaches a point where it is "smarter than its human creators". Now imagine an AI system programmed with nefarious intentions that builds upon itself for a few generations, and think of the havoc that could be created in a connected world.

Would you trust certain countries and their leaders with technology like that to play by the "rules"? It reminds me of a TV show I watched a few years back called "Person of Interest". At the beginning, the AI system was used to help people. By the end, it was a war between two AI systems, "good vs bad", with all the repercussions. Could this happen in reality? Hopefully never, but you never know. Who would have thought COVID would have been produced in a lab, then kill off a few million people, and still may be mutating. Power and money are strong motivators to get people to do evil. Power begets money and money begets more power.
 
The "scary" part is how quickly companies will rush this type of technology into products to make as much money as possible, and only once they are filthy rich, allow society to deal with the consequences of their poor implementations. It is clear that learning from the internet is a horrible way to teach an AI. I am sure there will be some minor benefits, but in the long run it will just be another tool for companies to squeeze another dollar out of the consumer and drive us a little more crazy.
Fast or not, the result is the same.

Look at the railroad industry in the US. It has been around for a very long time. It has nonetheless become vastly worse recently.

"Precision" scheduled railroading has been a collapse in slow motion, with simultaneous $10 billion stock buybacks.
 
I feel too many of the comments are short-sighted. They view AI as a more sophisticated version of Eliza. It would be nice to believe that is all it was and ever will be, but as computing power grows and circuitry shrinks more will be achievable. Then throw in quantum computing.
Much more is already achievable.

Look at how most people saw the mouse when the Lisa came out. 'Oh... ha ha! It's a toy! What kind of serious person would ever want to use that? It's not efficient at all!'

Many people continued to claim that CLIs are faster and better for productivity, even though the actual research showed the opposite. The same fallacy was in play there ("It's worth doing if it's more difficult to do.").
 
'There was also a warning about the need for industry regulation. "We also need enough time for our institutions to figure out what to do. regulation will be critical and will take time to figure out'

Translation = make sure the monopolists can keep full control of AI to maintain their privilege.
 