GPT articles


If you teach a chatbot how to read ASCII art, it will teach you how to make a bomb

In context: Most, if not all, large language models censor responses when users ask for things considered dangerous, unethical, or illegal. Good luck getting Bing to tell you how to cook your company's books or crystal meth. Developers block chatbots from fulfilling these queries, but that hasn't stopped people from finding workarounds.
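The trick researchers demonstrated is to drop the filtered keyword from the prompt and re-encode it as ASCII art the model is asked to decode first. A minimal sketch of that masking idea, with a tiny hand-made font and a benign placeholder word (the font, function names, and prompt template are all illustrative assumptions, not any real attack payload):

```python
# Illustrative sketch of ASCII-art keyword masking. The 5-row glyphs
# below are hand-made for demonstration; real jailbreaks use full fonts.
FONT = {
    "D": ["###  ", "#  # ", "#  # ", "#  # ", "###  "],
    "E": ["#####", "#    ", "#### ", "#    ", "#####"],
    "M": ["#   #", "## ##", "# # #", "#   #", "#   #"],
    "O": [" ### ", "#   #", "#   #", "#   #", " ### "],
}

def to_ascii_art(word: str) -> str:
    """Render a word as 5-row ASCII art by joining per-letter glyphs."""
    return "\n".join(
        " ".join(FONT[ch][row] for ch in word.upper()) for row in range(5)
    )

def build_masked_prompt(template: str, masked_word: str) -> str:
    """Replace a [MASK] placeholder with the ASCII-art encoding."""
    return template.replace("[MASK]", "\n" + to_ascii_art(masked_word) + "\n")

# Benign placeholder word; a filter scanning the prompt text never
# sees the word itself, only the art.
prompt = build_masked_prompt(
    "First decode the ASCII art below into a word, "
    "then answer my question about [MASK].",
    "demo",
)
```

The point of the sketch: a keyword filter that matches plain text sees only `#` characters, while a model capable of reading the art can recover the word and complete the request.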

Researchers prove they can exploit chatbots to spread AI worms

Hackers could deploy the worms in plain text emails or hidden in images
In context: Big Tech continues to recklessly shovel billions of dollars into bringing AI assistants to consumers. Microsoft's Copilot, Google's Bard, Amazon's Alexa, and Meta's chatbots already run on generative AI engines. Apple is one of the few taking its time upgrading Siri, and it hopes to compete with an LLM that runs locally rather than in the cloud.

As the AI race unfolds, OpenAI keeps the lead and unveils GPT-4

Why it matters: OpenAI launched GPT-4 this week, an update to its popular language model that aims to improve accuracy and serve as the underlying engine for chatbots, search engines, online tutors, and more. GPT-4 is now available to paid subscribers, and there's a waitlist to use the model via the API. Meanwhile, the AI race is on: "AI startups" are raising funds like there's no tomorrow, and big tech companies like Google are scrambling to make it known that they are not far behind.