If you ask your everyday smartphone voice assistant if it fears ChatGPT, you don’t generally receive an interesting or informative answer. So is that a problem?
Natural language processing (NLP), a field of AI research that led to practical applications like voice assistants and language translators, now seems threatened by the arrival of large language models (LLMs), such as OpenAI’s GPT-4. A recent post on the r/MachineLearning subreddit summarized the sentiment, asking whether others were “witnessing a panic inside NLP orgs of big tech companies?”
Yangfeng Ji, an assistant professor of computer science at the University of Virginia, has observed similar distress among academics and students, and recently tried to quell those fears by pointing out fields of NLP research that LLMs aren’t suited for. “Even if it is not panic, at least we can say the feeling is complicated,” says Ji.
Researchers fear LLMs’ unknowns.
Ji has faith in researchers’ ability to pick up new methods as they appear, but the success of OpenAI’s recent LLMs has thrown a wrench in the works. LLMs are capable of many tasks, but the most successful ones are kept behind closed doors. OpenAI hasn’t detailed the capabilities of GPT-4, its most recent model, and developers can access it only through an application programming interface (API).
“Personally, what’s worse is that we don’t even know whether it is just a language model,” says Ji. He points out that LLM-powered chatbots such as ChatGPT, Bing Chat, and Google Bard produce results that extend beyond what a language model alone can do. They appear to update their capabilities over time and ingest new data from the Internet (often despite the model’s claim, when asked, that it lacks this ability). AI models usually require training to act on new data. “But, if it is a software system with LLM as one core component, then those things can be easily fixed,” says Ji.
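Ji’s point can be illustrated with a minimal sketch: if the product is a system wrapped around the model, a plain search step can splice fresh web results into the prompt before the LLM ever runs, with no retraining. The function names and returned strings below are hypothetical stand-ins, not any vendor’s actual components.

```python
# Illustrative sketch only: a chat "system" with an LLM as one component.
# Both fake_llm and search_web are hypothetical stubs; in a real product
# they would call a model API and a live search index, respectively.

def fake_llm(prompt: str) -> str:
    # Stand-in for the frozen language model: it only sees its prompt.
    return f"Answer based on: {prompt}"

def search_web(query: str) -> list[str]:
    # Stand-in for a retrieval component with access to recent data.
    return ["Recent headline about " + query]

def chat_system(user_query: str) -> str:
    # The wrapper, not the model, supplies fresh information:
    # retrieved snippets are inserted into the prompt at request time.
    snippets = search_web(user_query)
    prompt = "Context:\n" + "\n".join(snippets) + f"\n\nQuestion: {user_query}"
    return fake_llm(prompt)

print(chat_system("GPT-4"))
```

The model itself never changes; the surrounding software decides what the model gets to read, which is one way a chatbot could cite information newer than its training data.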
Bing Chat can respond to prompts with information recently posted online. The methods used to accomplish this are opaque. Microsoft
The opaqueness of LLMs from OpenAI and Google leaves researchers in a bind. These models clearly outperform past NLP research in many tasks, but outsiders are left to guess how they achieve this. Ji describes this as the “mysterious performance gap” between closed-source and open-source models.
Despite this, Ji sees plenty of space for NLP research that falls outside the capabilities of LLMs. He points out how LLMs continue to struggle with ethical concerns that make them unsuitable for some organizations. They’re also difficult to fine-tune and can demonstrate unexpected results. These problems aren’t likely to cause harm when the models are used to brainstorm cake recipes or write an email to a friend, but they “become the major obstacle when people start to treat these systems seriously and use them to do real work.”
Siri is dead. Long live Siri!
The meteoric rise of LLMs is not just academic. Apple, Microsoft, and Amazon invested billions in their voice assistants, each promising an intelligent, voice-activated helper that would grow into a useful companion. The effort hasn’t paid off. Amazon’s recent rounds of layoffs included major cuts to teams working on Alexa, which, according to reports, lost US $10 billion in 2022. Microsoft’s CEO, Satya Nadella, recently called voice assistants “dumb as a rock,” and Cortana is all but abandoned. The team working on Google Assistant is reportedly being reorganized to assist with Bard. Only Apple’s Siri endures, though improvements have slowed to a trickle in recent years.
Just as researchers were caught off guard by the power of LLMs, tech companies were unprepared for their vast range of applications. LLM-powered chatbots accomplish tasks that voice assistants have never been able to pull off (like authoring an email from scratch), and do so with more lifelike and engaging language than the canned responses voice assistants provide.
Cortana was a key component of the Windows ecosystem, but CEO Satya Nadella recently described voice assistants as “dumb as a rock.” Maurizio Pesce
Noah Gift, Founder of Pragmatic AI Labs, sees this as a fundamental shift. “For years there has been a focus in data science on tuning hyperparameters, cleaning data, and essentially focusing on research and technique versus business value, as evidenced with sites like Stack Overflow,” says Gift. “In a recent book I wrote, Practical MLOps, I predicted there would be less data science and more models built by large organizations, and this is largely happening. If you are at a company doing NLP work that hasn’t made it into production, then yes, you are probably rightly very concerned that your work isn’t important anymore.”
But don’t engrave Siri’s tombstone just yet. NLP research remains critical, even if the strategy for implementing it is evolving.
Microsoft’s quick pivot toward AI is an example of this in action. The company’s partnership with OpenAI has led to multiple GPT-powered product announcements, including GitHub Copilot, Bing Chat, and Microsoft 365 Copilot. Microsoft hasn’t announced a new voice assistant yet, but third-party developers have introduced browser plugins that shoehorn this capability into ChatGPT. OpenAI’s official release of ChatGPT plugins, currently available in limited preview, is likely to open the floodgates for custom-tailored voice assistants, among other things.
“If you are at a company doing NLP work that hasn’t made it into production, then yes, you are probably rightly very concerned that your work isn’t important anymore.”
—Noah Gift, Founder of Pragmatic AI Labs
“I don’t believe voice is a dead end at all, and in fact it will dramatically improve as new LLMs move into consumer products,” says Gift. “The key issue with voice initially may have been that these projects were simply not as good as the technology used by OpenAI and other emerging LLM technology providers. I see both text and voice LLM usage creating huge markets for their technology.”
Original Source: https://spectrum.ieee.org/siri-meets-chatgpt-llm-nlp