
The Impact of Artificial Intelligence on Languages

The foremost computing trend today is, without a doubt, artificial intelligence. It's no news that every industry is reimagining how to use technology to transform business. But did you know that the next massive, world-changing developments in AI are expected to revolve around language? That's what this article by Harvard Business Review claims, at least.

So What Is GPT-3?

You may have come across The Guardian’s article about a robot that wrote a text aimed at convincing us humans that robots come in peace. Here’s a fragment:

“I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!”

The Guardian, A robot wrote this entire article. Are you scared yet, human?

Scary, isn’t it? The language technology used to produce the essay is called GPT-3, which stands for Generative Pre-trained Transformer 3. It is the largest and most advanced AI language model in the world to date, and the successor to the wildly successful language-processing AI GPT-2.

Developed by OpenAI, GPT-3 is a deep-learning, autoregressive model for Natural Language Processing (NLP) trained to come up with plausible-sounding text based on a few simple prompts, or even a single sentence.
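
To make "autoregressive" concrete, here is a minimal sketch of prompt-based text generation. Since GPT-3 itself is only reachable through OpenAI's API, the sketch uses Hugging Face's transformers library with GPT-2, GPT-3's openly available predecessor; the prompt and sampling settings are illustrative choices, not anything prescribed by OpenAI.

    # pip install transformers torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # A single-sentence prompt; the model continues it one token at a time,
    # each new token conditioned on everything generated so far (autoregression).
    prompt = "Artificial intelligence will change the way we"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    output = model.generate(
        input_ids,
        max_length=40,          # stop after roughly 40 tokens in total
        do_sample=True,         # sample instead of always taking the likeliest word
        top_k=50,               # sample only from the 50 most likely next tokens
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))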

How Is It Different from Previous Systems?

Previous language-processing models worked on hand-coded rules, statistical techniques, and artificial neural networks. Artificial neural networks can learn from raw data, but they require massive amounts of data to learn (remember we discussed this at the HelloWorld conference in 2019?) and considerable computing power.

GPTs (generative pre-trained transformers) take the neural-network approach a step further. They rely on the transformer architecture, an attention-based mechanism that learns contextual relationships between the words in a text. In other words, given the words of a sentence seen so far, they can predict the next one.
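
To give a feel for what that attention mechanism actually computes, below is a minimal NumPy sketch of scaled dot-product attention, the core operation inside a transformer; the four-word "sentence" and its eight-dimensional embeddings are toy values invented purely for illustration.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (seq_len, d_k) arrays of query, key and value vectors,
        # one row per word in the sentence.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                  # relevance of each word to every other
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax: rows become attention weights
        return weights @ V                               # each word's output blends in its context

    # Toy example: a 4-word "sentence" with 8-dimensional embeddings.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    print(scaled_dot_product_attention(x, x, x).shape)   # -> (4, 8)

Each output row is a context-aware blend of the whole sequence, which is what lets the model weigh every earlier word when predicting the next one.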

OK, Impressive, but Why the Fuss Over It?

When OpenAI released their new paper, Language Models Are Few-Shot Learners, jaws dropped worldwide. The model they introduced (GPT-3) is pre-trained on nearly half a trillion words, with 175 billion parameters, in an unsupervised manner, and can be further fine-tuned to perform specific tasks. It achieves state-of-the-art performance on several NLP benchmarks.
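
"Few-shot" here simply means showing the model a handful of worked examples inside the prompt itself, with no retraining. As a rough sketch, this is approximately what that looked like with the original OpenAI Python client; the engine name and client interface have since changed, and the translation pairs are invented for illustration.

    import openai  # pip install openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    # Three in-context examples, then the task we actually want solved.
    prompt = (
        "English: cheese\nFrench: fromage\n\n"
        "English: house\nFrench: maison\n\n"
        "English: book\nFrench:"
    )

    response = openai.Completion.create(
        engine="davinci",   # the original GPT-3 base engine
        prompt=prompt,
        max_tokens=5,
        temperature=0,      # keep the output as deterministic as possible
    )
    print(response.choices[0].text.strip())  # expected: "livre"

Note that no gradient updates happen here: the examples in the prompt are the only "training" the model gets for the task.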

Researchers were able to prompt the model to produce short stories, songs, press releases, technical manuals, text in the style of particular writers, guitar tabs, and even computer code. The fuss, in short, is because GPT-3 has the potential to advance AI as a whole field.

As explained in the paper by OpenAI:

Humans can generally perform a new language task from only a few examples or from simple instructions — something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches.

OpenAI, Language Models Are Few-Shot Learners

Potential Impact of GPT-3

The discussion about GPT-3 is still in its early stages, but one thing is certain: the model is such an impressive leap, and such a big deal for the machine learning community, that Microsoft, Google, Alibaba, and Facebook are all working on their own versions. What’s more, Microsoft has teamed up with OpenAI to exclusively license the GPT-3 language model.

In light of these developments, the nature of jobs will soon start to change, and companies will need to rethink both their IT resources and their human resources. As these models take over the bundles of routine tasks within current roles, people will be freed up to innovate faster.

Some tasks will be automated, while human productivity in others will be amplified. To quote H. James Wilson and Paul R. Daugherty, authors of the Harvard Business Review article mentioned above:

For example, communications professionals will see the majority of their work tasks involving routine text generation automated, while more critical communications like ad copy and social media messages will be augmented by GPT-3’s ability to help develop lines of thought. Company scientists might use GPT-3 to generate graphs that inform colleagues about the product development pipeline. Meanwhile, to augment basic research and experimentation, they could consult GPT-3 to distill the findings from a specific set of scientific papers. The possibilities across disciplines and industries are limited only by the imagination.

Harvard Business Review, The Next Big Breakthrough in AI Will Be Around Language

Final Remarks

GPT-3 is definitely a significant technological achievement. While it still has issues with generating nonsensical or insensitive text, it has revolutionised how language models tackle domain-specific NLP tasks compared to conventional approaches.

The concerns are there too, however. Could it be misused to create content that looks human-written and spreads hate or discriminatory ideas? Could it lead businesses to cut the jobs of their creative staff? Could it be used for misinformation, spam, phishing, or fraud? It remains to be seen.

Several of these questions were addressed at The Creative Language Conference, so we encourage you to purchase the recordings and enjoy some food for thought on this topic.
