The Economist is a British English-language weekly newspaper, published every Friday in eight editions worldwide. Its editorial offices are in London, and it was founded in September 1843.
It is a general-interest news and commentary publication, with major sections covering business, countries and regions, economics and finance, and science and technology. Its writing is compact and rigorous, precise in its use of language, with a restrained wit that often turns on puns.
The Economist's importance for English exams is self-evident: its articles frequently appear in the reading-comprehension sections of IELTS, TOEFL, SAT, GRE, GMAT, China's postgraduate entrance English exam, CET-4/6, MTI and CATTI.
Today 羚羊君 (official account: aa-acad) shares the first article from the Science and technology section of the August 8th 2020 issue of The Economist: Artificial intelligence: Bit-lit.
The article covers GPT-3, a piece of software developed by the artificial-intelligence laboratory OpenAI. After training on a huge quantity of text, the software learns the statistics of language and uses them to write articles that imitate human prose. But GPT-3 still has shortcomings, such as regurgitating memorised text and its handling of sensitive topics.
Artificial intelligence: Bit-lit
The SEC said, "Musk,/your tweets are a blight./They really could cost you your job,/if you don't stop/all this tweeting at night."/Then Musk cried, "Why?/The tweets I wrote are not mean,/I don't use all-caps/and I'm sure that my tweets are clean."/"But your tweets can move markets/and that's why we're sore./You may be a genius/and a billionaire,/but that doesn't give you the right to be a bore!"
THE PRECEDING lines—describing Tesla and SpaceX founder Elon Musk's troubles with America's financial regulator, the Securities and Exchange Commission (SEC)—are the work of GPT-3, software built by OpenAI, an artificial-intelligence laboratory. GPT-3 is a "language model": after digesting vast quantities of text, it learns, statistically, which words tend to follow which others. Prompt it with "red" and it may well reply "rose"; ask it for "a poem about red roses in the style of Sylvia Plath" and it will produce one.

The results are impressive. In mid-July OpenAI gave selected individuals access to an early version of the software, to explore what it could do. Arram Sabeti, an artist, showed off GPT-3's ability to write short stories, including a hard-boiled detective story starring Harry Potter, comedy sketches, and even poetry (including the poem with which this article opens, titled "Elon Musk by Dr Seuss"). Elliot Turner, an AI researcher and entrepreneur, demonstrated how the model could be used to translate rude messages into politer ones, something that might be useful in many of the more bad-tempered corners of the internet. Human readers struggled to distinguish between news articles written by the machine and those written by people (see chart).
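The "language model" idea—predict the next word from the statistics of a large corpus—can be illustrated with a toy bigram counter. This is a sketch for intuition only: GPT-3 itself is a vastly larger neural network, and none of the names below come from OpenAI's code.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    following = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

# A tiny corpus in which "rose" most often follows "red".
model = train_bigrams("the red rose is red the red rose blooms")
print(predict_next(model, "red"))  # prints "rose"
```

Scaling this idea up—longer contexts instead of a single preceding word, and learned representations instead of raw counts—is, loosely speaking, what modern language models do with their training text.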
Given that OpenAI wants eventually to sell GPT-3, these results are promising. But the program is not perfect. Sometimes it seems to regurgitate snippets of memorised text rather than generating fresh text from scratch. More fundamentally, statistical word-matching is not a substitute for a coherent understanding of the world. GPT-3 often generates grammatically correct text that is nonetheless unmoored from reality, claiming, for instance, that "it takes two rainbows to jump from Hawaii to seventeen". "It doesn't have any internal model of the world—or any world—and so it can't do reasoning that would require such a model," says Melanie Mitchell, a computer scientist at the Santa Fe Institute.
Getting the model to answer questions is a good way to dispel the smoke and mirrors and lay bare its lack of understanding. Michael Nielsen, a researcher with a background in both AI and quantum computing, posted a conversation with GPT-3 in which the program confidently asserted the answer to an important open question to do with the potential power of quantum computers. When Dr Nielsen pressed it to explain its apparent breakthrough, things got worse. With no real understanding of what it was being asked to do, GPT-3 retreated into generic evasiveness, repeating four times the stock phrase "I'm sorry, but I don't have time to explain the underlying reason why not."
There are also things that GPT-3 has learned from the internet that OpenAI must wish it had not. Prompts such as "black", "Jew", "woman" and "gay" often generate racism, anti-Semitism, misogyny and homophobia. That, too, is down to GPT-3's training data, and it makes the model's lapses all the more noteworthy. GPT-2, its predecessor, was released in 2019 with a filter that tried to disguise the problem of regurgitated bigotry by limiting the model's ability to talk about sensitive subjects.

Here, there seems to have been little progress. GPT-3 was released without such a filter, yet it appears just as ready as its predecessor to reproduce unpleasant prejudices (OpenAI added a filter to the new model after this became obvious). It is not clear how much quality control OpenAI applied to GPT-3's training data, but the sheer quantity of text involved would have made any attempt daunting.

It will only get harder in future. Language has overtaken vision as the branch of AI with the biggest appetite for data and computing power, and the returns to scale show no signs of slowing. GPT-3 may well be dethroned by an even more monstrously complex and data-hungry model before long. As the real Dr Seuss once said: "The more that you read, the more things you will know." That lesson, it seems, applies to machines as well as toddlers.
· The world this week: a quick run-through of the week's events
· Leaders: editorials commenting on the week's big stories
· Briefing: an in-depth discussion of one particular hot topic
· Letters: readers' letters commenting on past articles
· Sections: reports on the week's events across the continents and in China, America and Britain
· Business: business news
· Finance and economics: financial and economic news
· Science and technology: science and technology news
· Books and arts: book reviews and discussion of cultural trends
· Economic and financial indicators: business and financial indicators
· Buttonwood: column on finance
· Schumpeter: column on business
· Bartleby: column on work and workplaces
· Bagehot: column on Britain
· Charlemagne: column on Europe
· Lexington: column on America
· Banyan: column on Asia