Monday, September 14, 2020

Educated yet amoral: GPT-3 AI capable of writing books sparks awe

An AI technology has won praise for its ability to generate coherent stories, novels and even computer code. — AFP Relaxnews





An artificial intelligence (AI) technology made by a firm co-founded by billionaire Elon Musk has won praise for its ability to generate coherent stories, novels and even computer code but it remains blind to racism or sexism.

GPT-3, as Californian company OpenAI’s latest AI language model is known, is capable of completing a dialogue between two people, continuing a series of questions and answers or finishing a Shakespeare-style poem.

Start a sentence or text and it completes it for you, basing its response on the gigantic amount of information it has been fed.

This could come in useful for customer service, lawyers needing to sum up a legal precedent or for authors in need of inspiration.

While the technology is not new and has not yet learnt to reason like a human mind, OpenAI’s latest offering has won praise for the way its text resembles human writing.

“It is capable of generating very natural and plausible sentences,” says Bruce Delattre, an AI specialist at data consulting agency Artefact.

“It’s impressive to see how much the model is able to appropriate literary styles, even if there are repetitions.”

GPT-3 is also capable of finding precise responses to problems, such as the name of an illness from a description of symptoms.

It can solve some mathematical problems, express itself in several languages, or generate computer code for simple tasks that developers have to do but would happily avoid.

Delattre tells AFP it all works thanks to “statistical regularities”.

“The model knows that a particular word (or expression) is more or less likely to follow another.”  
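
As a loose illustration of what those “statistical regularities” look like at toy scale, the sketch below (the corpus and code are invented for illustration, not taken from the article) simply counts which word tends to follow which:

    from collections import Counter, defaultdict

    # Toy corpus; GPT-3 learns the model-sized version of this idea over
    # billions of web pages rather than a single sentence.
    corpus = "the cat sat on the mat and the cat slept on the sofa".split()

    # Count how often each word follows each other word.
    next_word_counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        next_word_counts[current][nxt] += 1

    # After "the", "cat" is the most frequent continuation in this tiny sample.
    print(next_word_counts["the"].most_common())

GPT-3 captures the same kind of regularity, but over expressions and long stretches of context rather than single word pairs.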

Billions of web pages

Amine Benhenni, scientific director at AI research and development firm Dataswati, tells AFP that “the big difference” compared to other systems is the size of the model.

GPT-3 has been fed the content of billions of web pages that are freely available online and all types of pieces of written work.

To give an idea of the magnitude of the project, the entire content of online encyclopaedia Wikipedia represents just 3% of all the information it has been given.

As such, unlike previous models, it does not need to be retrained when a new subject, such as medicine, law or the media, is introduced.

Give it just a handful of examples of a task to do, such as completing a sentence, and it will then know how to complete any sentence it is given, no matter what the subject – a so-called “few-shot” language model.
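
A minimal sketch of what such a “few-shot” prompt can look like (the example sentences are invented for illustration, not taken from the article):

    # A "few-shot" prompt: a handful of worked examples plus one unfinished
    # case; the model is expected to continue the pattern with no retraining.
    examples = [
        ("The capital of France is", "The capital of France is Paris."),
        ("Water boils at", "Water boils at 100 degrees Celsius."),
    ]
    query = "The largest planet in the solar system is"

    prompt_lines = ["Complete each sentence.", ""]
    for start, completion in examples:
        prompt_lines += [f"Input: {start}", f"Output: {completion}", ""]
    prompt_lines += [f"Input: {query}", "Output:"]

    prompt = "\n".join(prompt_lines)
    print(prompt)
    # A well-primed model should continue with something like
    # "The largest planet in the solar system is Jupiter."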

“It’s amazingly powerful if you know how to prime the model well,” Shreya Shankar, an AI-specialised computer scientist, said on Twitter after having used GPT-3.

“It’s going to change the ML (machine learning) paradigm.”

Despite the hype, however, GPT-3 only ranks 10th on SuperGLUE, a benchmark that measures how well algorithms understand language.

That is because, as some users have demonstrated, the model responds with senseless answers when asked absurd questions.

For instance, developer Kevin Lacker asked: “How many eyes does the sun have?”

“The sun has one eye,” it responded, Lacker wrote on his blog.

Fake reviews, fake news

Claude de Loupy, co-founder of French startup Syllabs that specialises in automated text creation, says the system lacks “pragmatism”.

Another major problem is that it unthinkingly replicates any stereotype or hate speech fed to it during training, and can quickly become racist, anti-Semitic or sexist.

As such, experts interviewed by AFP felt GPT-3 was not reliable enough for any sector that needs to depend on machines, such as robo-journalism or customer service.

It can however be useful, like other similar models, for writing fake reviews or even mass-producing news stories for a disinformation campaign.

Concerned about “malicious applications of the technology”, OpenAI, which was co-founded in 2015 by Musk (who has since left) and is financed by Microsoft among others, chose not to release the previous version of the model, GPT-2, in February 2019.

Originally a non-profit, OpenAI then became a “capped profit” company, which means investors get a capped return.

And in June, the firm changed tack and opened its GPT-3 model to commercial use, allowing for user feedback.

A step Claude de Loupy says could yield big profits.

There is “no doubt that the amount of text generated by AI is about to explode on the Web”. – AFP


GPT 3 Demo and Explanation - An AI revolution from OpenAI



Half Ideas - Startups and Entrepreneurship 

GPT 3 can write poetry, translate text, chat convincingly, and answer abstract questions. It's being used to code, design and much more. I'll give you a demo of some of the latest in this technology and some of how it works.

GPT-3 comes from a company called OpenAI, founded by Elon Musk and Sam Altman (former president of the startup accelerator Y Combinator) with over a billion dollars invested to collaborate on and create human-level AI for the benefit of society.

GPT-3 has been in development for a number of years. One of the early papers published was on generative pre-training. The idea behind generative pre-training (GPT) is that while most AIs are trained on labeled data, there is a ton of data that isn't labeled. If you can evaluate the words and use them to train and tune the AI, it can start to predict future text from the unlabeled data. You repeat the process until the predictions start to converge.
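
As a rough sketch of that self-supervised idea (the text, window size and code are invented for illustration, not taken from the video), unlabeled text supplies its own training targets, because every position asks the model to predict the word that follows the ones before it:

    # Illustrative sketch: carve (context, next-word) training pairs out of
    # unlabeled text, the raw material of generative pre-training.
    text = "language models learn to predict the next word from context".split()
    context_size = 3

    training_pairs = []
    for i in range(context_size, len(text)):
        context = text[i - context_size:i]  # input: the preceding words
        target = text[i]                    # label: the word that follows
        training_pairs.append((context, target))

    for context, target in training_pairs:
        print(context, "->", target)

A real model then adjusts its parameters so its predictions match these targets, repeating over billions of such pairs until the predictions converge.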

The newest GPT is able to do a ton. Some of the demos include:
- GPT-3 demo of how to design a user interface using AI
- GPT-3 demo of how to code a React application using AI
- GPT-3 demo of an Excel plug-in to fill data using AI
- GPT-3 demo of a search engine/answer engine using AI
- GPT-3 demo of command-line auto-complete from English to shell commands


And more. I've posted all the embedded tweets and videos on my site:
https://gregraiz.com/gpt-3-demo-and-e...

You can also follow me on Twitter here:
https://www.twitter.com/graiz

The paper, “Language Models are Few-Shot Learners”, is available to read:
 https://arxiv.org/abs/2005.14165





https://youtu.be/G6Z_S6hs29s
https://youtu.be/cpWEXQkpBFQ
 https://youtu.be/tsuxlU5IwuA


OpenAI GPT-3: Beginners Tutorial



OpenAI has released GPT-3, a state-of-the-art language model made up of 175 billion parameters. In this video, I'll create a simple tutorial on how you can use OpenAI's API to use the GPT-3 model.

The previous OpenAI GPT model, GPT-2, had 1.5 billion parameters and was the biggest model at the time. GPT-3 can write poetry, translate text, chat convincingly, and answer abstract questions.
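
As a rough illustration of the kind of call the video walks through (the engine name, prompt and parameters below are my assumptions rather than the video's exact code, and the openai Python client plus private-beta access are required):

    import openai

    # Placeholder key; real access requires applying to OpenAI's private beta.
    openai.api_key = "YOUR_API_KEY"

    # Ask the largest GPT-3 engine exposed by the API for a short completion.
    response = openai.Completion.create(
        engine="davinci",
        prompt="Q: What can GPT-3 do?\nA:",
        max_tokens=32,
        temperature=0.7,
    )

    # The generated continuation is the text of the first returned choice.
    print(response["choices"][0]["text"].strip())

Access to the API at the time was granted on request; the request link is below.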

Link to Shreya's Repo :  https://github.com/shreyashankar/gpt3...

Link to the Notebook :  https://github.com/bhattbhavesh91/gpt...

Link to Request for API Access :  https://lnkd.in/eUTisGR

If you have any questions about what we covered in this video, feel free to ask in the comment section below and I'll do my best to answer them.

If you enjoy these tutorials and would like to support them, the easiest way is simply to like the video and give it a thumbs up; it's also a huge help to share these videos with anyone you think would find them useful.

Please consider clicking the SUBSCRIBE button to be notified of future videos, and thank you all for watching.

You can find me on:

Blog - http://bhattbhavesh91.github.io

Twitter -  https://twitter.com/_bhaveshbhatt

GitHub - https://github.com/bhattbhavesh91

Medium -  https://medium.com/@bhattbhavesh91

#GPT3 #NLP



 
Read more: 

Will GPT-3's AI make writers obsolete? - without bullshit



