
Artificial intelligence is mastering language. Should we trust what it says?

“I think it allows us to be more thoughtful and more reflective about safety issues,” Altman says. “Part of our strategy is: incremental change in the world is better than sudden change.” Or, as the OpenAI VP Mira Murati put it when I asked her about the safety team’s work restricting open access to the software: “If we are going to learn how to deploy these powerful technologies, let’s start when the stakes are very low.”

While GPT-3 itself runs on 285,000 CPU cores in a supercomputer cluster in Iowa, OpenAI operates out of San Francisco’s Mission District, in a refurbished luggage factory. In November of last year, I met Ilya Sutskever there, trying to elicit a layman’s explanation of how GPT-3 really works.

“That’s the basic idea of GPT-3,” Sutskever said intently, leaning forward in his chair. He has an intriguing way of answering questions: a few false starts (“I can give you a description that roughly matches the description you asked for”) interrupted by long, meditative pauses, as if he were mapping out the entire response in advance.

He finally said: “The underlying idea of GPT-3 is a way of linking an intuitive notion of understanding to something that can be measured and understood mechanistically, and that is the task of predicting the next word in a text.” Other forms of artificial intelligence attempt to hard-code knowledge about the world: the chess strategies of grandmasters, the principles of climatology. But GPT-3’s intelligence, if intelligence is the word for it, comes from the bottom up: through the elemental act of predicting the next word. To train GPT-3, the model is given a “prompt” (a few sentences or paragraphs of text from a newspaper article, say, or a novel or a scientific paper) and is then asked to suggest a list of possible words that might complete the sequence, ranked by probability. In the early stages of training, the suggested words are nonsense. Prompt the algorithm with a sentence like “The writer has omitted the very last word of the first . . .” and the guesses will be a kind of stream of nonsense, but somewhere down the list the correct missing word appears: “paragraph.” The program then strengthens whatever random neural connections generated that particular suggestion and weakens all the connections that generated incorrect guesses. Then it moves on to the next prompt. Over time, with enough iterations, the program learns.
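To make that training loop concrete, here is a minimal sketch in Python, assuming PyTorch is available; the tiny vocabulary, the toy averaging model and the single training pair are all invented for illustration and bear no resemblance to GPT-3’s actual architecture or data.

```python
# Toy sketch of next-word-prediction training (illustrative only,
# not OpenAI's code). A tiny model scores every vocabulary word as
# the possible next word, and gradient descent strengthens whatever
# weights produced the correct guess.
import torch
import torch.nn as nn

vocab = ["the", "writer", "has", "omitted", "very",
         "last", "word", "of", "first", "paragraph"]
word_to_id = {w: i for i, w in enumerate(vocab)}

class TinyNextWordModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, context_ids):
        # Average the context embeddings, then emit one score
        # (logit) per candidate next word.
        h = self.embed(context_ids).mean(dim=0)
        return self.out(h)

model = TinyNextWordModel(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One (prompt -> next word) pair, echoing the example in the text.
context = torch.tensor([word_to_id[w] for w in
                        ["the", "writer", "has", "omitted", "the",
                         "very", "last", "word", "of", "the", "first"]])
target = torch.tensor(word_to_id["paragraph"])

for step in range(200):
    logits = model(context)
    loss = loss_fn(logits.unsqueeze(0), target.unsqueeze(0))
    optimizer.zero_grad()
    loss.backward()   # strengthen the connections behind the right guess,
    optimizer.step()  # weaken those behind the wrong ones

# Early in training the top-ranked guesses are nonsense; after enough
# iterations, "paragraph" climbs to the top of the ranked list.
probs = torch.softmax(model(context), dim=-1)
ranked = sorted(zip(vocab, probs.tolist()), key=lambda p: -p[1])
print(ranked[:3])
```

GPT-3 plays the same game at vastly larger scale, with a 175-billion-parameter transformer in place of this toy model and hundreds of billions of words of text in place of a single hand-built example.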

Back in January, OpenAI added a feature that allows users to give GPT-3 direct instructions as a prompt, rather than merely asking it to extend a sample passage of text. For example, using the instruct mode, GPT-3 was once given the prompt: “Write an essay discussing the role of metafiction in Italo Calvino’s work.” It responded with a brief essay that began as follows:

The Italian author Italo Calvino is considered a master of metafiction, a type of writing in which the author breaks the fourth wall to discuss the act of writing itself. For Calvino, metafiction is a way of exploring the nature of reality and the ways in which stories can shape our perceptions of the world. His novels often incorporate playful, labyrinthine structures that play with the boundaries between reality and fiction. In If on a winter’s night a traveler, for example, the reader is constantly interrupted by meta-level discussions of the act of reading and the nature of storytelling. . . .

You can give GPT-3 the exact same prompt, over and over, and each time it will generate a unique response, some more persuasive than others but almost all remarkably articulate. Instruct prompts can take all kinds of forms: “Give me a list of all the ingredients in Bolognese sauce,” “Write a poem about a French seaside village in the style of John Ashbery,” “Explain the Big Bang in language that an 8-year-old will understand.” The first few times I fed GPT-3 prompts of this kind, I felt a genuine shiver run down my spine. It seemed almost impossible that a machine could generate text so lucid and responsive based entirely on the elemental training of predicting the next word.
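For readers curious what such a prompt looks like in code, here is a rough sketch assuming the legacy OpenAI Python library; the model name and parameter values are illustrative choices, not details from the article. A temperature above zero makes the model sample from its ranked next-word probabilities instead of always taking the top guess, which is why the same prompt yields a different essay on every run.

```python
# Illustrative sketch only: assumes the legacy OpenAI Python SDK
# (pip install openai) and an API key in the OPENAI_API_KEY
# environment variable.
import openai

prompt = ("Write an essay discussing the role of metafiction "
          "in Italo Calvino's work.")

# Run the identical prompt three times; nonzero temperature means
# each completion is sampled differently from the word rankings.
for _ in range(3):
    response = openai.Completion.create(
        model="text-davinci-002",  # illustrative model name
        prompt=prompt,
        max_tokens=256,
        temperature=0.7,
    )
    print(response.choices[0].text.strip())
    print("---")
```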

But AI has a long history of creating the illusion of intelligence or understanding without actually delivering the goods. In a much-discussed paper published last year, the University of Washington linguistics professor Emily Bender, the former Google researcher Timnit Gebru and a group of co-authors declared that large language models were just “stochastic parrots”: that is, the software uses randomization merely to remix sentences written by humans. “What has changed is not some step over a threshold toward ‘artificial intelligence,’” Bender recently told me via email. Instead, she said, what have changed are “hardware, software, and economic innovations that allow the accumulation and processing of enormous data sets,” as well as a tech culture in which “people who build and sell such things can get away with building them on foundations of uncurated data.”