Opera starts offering local access to LLMs and AI chatbots

Alfonso Maruccia

A hot potato: Today's most popular AI services require a constant internet connection to powerful data centers. Opera Software claims it can deliver much of the same functionality with just a properly configured browser and one sizeable initial download.

A recent update to Opera's development version adds the ability to download and run powerful large language models (LLMs) on a user's PC. The models run entirely in a local environment with no need to send data to external AI servers, an approach the company says is designed to protect users' privacy.

Opera One Developer is the first browser to provide built-in support for local LLMs, says Opera. As part of the company's AI Feature Drops program, the browser is adding experimental support for some 150 local LLM variants belonging to around 50 different model families.

The LLMs can be accessed and managed through the familiar browser interface, and they include many well-known names in the AI business, such as Llama (Meta), Vicuna, Gemma (Google), Mixtral (Mistral AI), and more. Opera emphasized that when language models are confined to a local environment, data stays on the user's device, with no need to send any information to a remote server.
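
Opera hasn't published the plumbing behind the feature, but the general pattern of local inference is straightforward: a runtime serves the downloaded model weights on the user's machine, and applications talk to it over localhost. As a rough illustration of that pattern (not Opera's actual implementation), here is how a locally served model can be queried through the open-source Ollama runtime's REST API; the model name and prompt are placeholders.

```python
# Minimal sketch of local LLM inference. Assumes the open-source Ollama
# runtime is serving downloaded models on this machine; this illustrates
# the general pattern, not Opera's actual implementation.
import requests

def ask_local_llm(prompt: str, model: str = "gemma:2b") -> str:
    # The request never leaves the machine: Ollama listens on localhost:11434.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize why local inference protects privacy."))
```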

The AI Feature Drops program lets enthusiasts of the Chromium-based browser test early versions of the features Opera is developing. The company said the local LLM option is "so bleeding edge" that it might even "break." Experimentation is fun anyway, even though running a local LLM requires a significant commitment of storage space.

Language models for AI chatbots tend to be very large downloads, and Opera warns about the requirements. Each LLM will likely take between 2GB and 10GB of local storage, while an outlier like MegaDolphin needs a hefty 67GB download. Local LLMs can also be "considerably slower" than remote services, since they cannot lean on data-center infrastructure: performance depends entirely on the user's hardware.

The company also suggests some interesting LLMs to explore through the experimental Opera One Developer browser, citing Code Llama as an extension of Llama aimed at generating and discussing code. Programmers can work more efficiently, generating code snippets in Python, C++, Java, and more, as the sketch below illustrates. Phi-2 is a Microsoft Research LLM with "outstanding" reasoning and language-understanding capabilities, while Mixtral excels at natural language processing tasks.
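
As a sketch of what "local" means in practice for a model like Code Llama, a code model downloaded as a single weights file can be loaded and prompted entirely offline. The example below uses the open-source llama-cpp-python bindings with a hypothetical model file path; in Opera One Developer, the download and prompting happen inside the browser's own UI rather than in a script.

```python
# Hedged sketch: prompting a locally stored code model via the open-source
# llama-cpp-python bindings. The model path is a hypothetical placeholder,
# not a file Opera ships.
from llama_cpp import Llama

# Loading reads the multi-gigabyte weights file straight from local disk,
# which is why Opera warns about the 2GB-10GB (or larger) storage cost.
llm = Llama(model_path="models/codellama-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Write a Python function that reverses a string.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```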
