原文地址:https://www.cnbc.com
原創翻譯:龍騰網 翻譯:飛雪似煬花
正文翻譯:
Stephen Hawking says A.I. could be 'worst event in the history of our civilization'
史蒂芬·霍金說,人工智慧可能會是「我們文明歷史上最糟糕的事件」
The emergence of artificial intelligence (AI) could be the "worst event in the history of our civilization" unless society finds a way to control its development, high-profile physicist Stephen Hawking said Monday.
周一,備受矚目的物理學家史蒂芬·霍金說:人工智慧的出現將會成為「我們文明歷史上最糟糕的事件」,除非社會能夠找到控制它發展的辦法。
He made the comments during a talk at the Web Summit technology conference in Lisbon, Portugal, in which he said, "computers can, in theory, emulate human intelligence, and exceed it."
他是在葡萄牙里斯本舉行的「Web Summit」科技大會上的一次演講中發表這番評論的,他在演講中說:「理論上,計算機能夠模仿人類的智慧,並超越它」。
Hawking talked up the potential of AI to help undo damage done to the natural world, or eradicate poverty and disease, with every aspect of society being "transformed."
霍金談及了人工智慧在幫助消除對自然世界造成的損害或者根除貧困與疾病方面的潛能,通過人工智慧,社會的各個層面都會「得到改變」。
But he admitted the future was uncertain.
但是他承認未來是不確定的。
"Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it," Hawking said during the speech.
霍金在這次演講中說:「成功創造出高效的人工智慧,可能是我們文明歷史上最重大的事件,也可能是最糟糕的事件。我們無從知曉。所以我們無法知道,我們將會得到人工智慧無窮無盡的幫助,還是會遭到它的忽視和排擠,甚至很可能被它毀滅」。
"Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."
「除非我們學會如何防範並避免這些潛在的風險,否則人工智慧可能會是我們文明歷史上最糟糕的事件。它會帶來種種危險,比如威力強大的自主武器,或者少數人用來壓迫多數人的新手段。它還可能給我們的經濟造成巨大的衝擊」。
Hawking explained that to avoid this potential reality, creators of AI need to "employ best practice and effective management."
霍金解釋稱,為了避免這種潛在的情況成為現實,人工智慧的創造者們需要「採用最佳實踐和有效的管理」。
The scientist highlighted some of the legislative work being carried out in Europe, particularly proposals put forward by lawmakers earlier this year to establish new rules around AI and robotics. Members of the European Parliament said European Union-wide rules were needed on the matter.
這位科學家強調了歐洲正在進行的某些立法工作,特別是立法者們在今年早些時候提出的一些提案,這些提案旨在圍繞人工智慧和機器人制定新的規則。歐洲議會的議員們表示,在這一問題上需要歐盟範圍內的統一法規。
Such developments are giving Hawking hope.
這樣的事態發展給霍金帶來了希望。
"I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance," Hawking said.
霍金說:「我是一個樂觀主義者,我相信我們能夠創造出造福世界的人工智慧,它能夠與我們和諧共處。我們只是需要意識到危險的存在,識別它們,採用盡可能好的實踐和管理方法,並提前為其後果做好充分準備」。
It's not the first time the British physicist has warned on the dangers of AI. And he joins a chorus of other major voices in science and technology to speak about their concerns. Tesla and SpaceX CEO Elon Musk recently said that AI could cause a third world war, and even proposed that humans must merge with machines in order to remain relevant in the future.
這不是這位英國物理學家首次對人工智慧的威脅提出警告。科技界的許多其他重要人物也表達過同樣的擔憂,他只是其中之一。特斯拉與SpaceX的執行長埃隆·馬斯克最近說,人工智慧可能引發第三次世界大戰,他甚至提議人類必須與機器融合,才能在未來維持自己的重要地位。
And others have proposed ways to deal with AI. Microsoft founder Bill Gates said robots should face income tax.
其他一些人則提出了應對人工智慧的辦法。微軟創始人比爾·蓋茨說,機器人也應該繳納所得稅。
Some major figures have argued against the doomsday scenarios. Facebook Chief Executive Mark Zuckerberg said he is "really optimistic" about the future of AI.
一些重要人物則反對這種世界末日式的設想。臉書執行長馬克·扎克伯格說,他對人工智慧的未來「真的感到樂觀」。
評論翻譯:
ITT: many who didn't read the article.
本帖現狀:很多人根本沒讀這篇文章。
Hawking simply says he's optimistic and thinks AI is the way to go, but society needs to be ready for its arrival or it could cause a lot of damage. An analogy would be the use of nuclear energy, which was also used as a weapon of mass destruction. Effectively, simply creating AI wouldn't destroy society; it's how humans choose to use the AI, or the mistakes humans fail to see, that could harm society. As for weapons, he isn't saying there will be an AI uprising, but that automated weaponry (which already exists) is a serious risk, just as nuclear bombs are.
霍金只是說他對此持樂觀態度,認為人工智慧是未來的發展方向,但社會需要為它的到來做好準備,否則它可能造成很大的破壞。我們可以類比核能的使用,核能也曾被用來製造大規模殺傷性武器。實際上,單純創造出人工智慧並不會毀滅社會;會傷害社會的,是人類選擇如何使用人工智慧,或者是人類未能察覺的失誤。就武器而言,他並不是說人工智慧會發動叛亂,而是說自動化武器(這種武器已經存在了)是一種嚴重的風險,就像核彈一樣。
The article's title is slightly misleading.
這篇文章的標題有一點誤導性。
So the ever present fear is that if you put the ai on the internet it will copy itself everywhere. Fair enough. But I have a solution.
那麼總是存在的恐懼便是如果你將人工智慧放在網際網路上,它將會四處複製自己。這是很有道理的。但是我有一個解決辦法。
You see, the software to make an AI go has to be massive. Obviously once hitting the singularity it will refine its own code as much as possible, but you can only refine something so far, and sure it's a gamble, but I'm willing to bet that a fully functional, self-aware AI can't be any smaller than a couple of gigabytes. All we have to do is build it somewhere with terribly slow and unreliable internet. If it tries to get out we would have plenty of time to notice and simply pull the plug.
你瞧,要讓一個人工智慧運轉起來,所需的軟體必然非常龐大。顯然,一旦達到奇點,它會儘可能地優化自己的代碼,但優化終歸有極限。當然這是一場賭博,但我願意打賭,一個功能完備、具有自我意識的人工智慧絕不可能小於幾個GB。我們要做的,就是把它建在一個網速極慢且極不穩定的地方。如果它試圖逃出去,我們就有足夠的時間察覺,然後直接拔掉插頭。
That's right gentlemen, I propose we build the ai in Australia, and connect it to the NBN. It's perfect.
沒錯,先生們,我建議我們把這個人工智慧建在澳大利亞,然後把它連上NBN(國家寬帶網)。完美。
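For what it's worth, the arithmetic behind the joke is easy to check. Below is a rough, purely illustrative Python sketch; the "couple of gigabytes" comes from the comment above, while the link speed is an assumption picked only to stand in for "terribly slow internet".

# Back-of-the-envelope: how long would a hypothetical multi-gigabyte AI need
# to copy itself over a slow link? All figures here are illustrative assumptions.
model_size_gb = 2          # "a couple of gigabytes", per the comment above
link_speed_mbps = 5        # assumed slow, unreliable uplink (megabits per second)

size_bits = model_size_gb * 8e9                  # gigabytes -> bits
seconds = size_bits / (link_speed_mbps * 1e6)    # bits / (bits per second)
print(f"~{seconds / 3600:.1f} hours to copy itself out")   # roughly 0.9 hours at 5 Mbps

Even with these numbers it is under an hour, so how much warning the "pull the plug" plan really buys depends entirely on how slow the link actually is.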
Maybe it is not Steven Hawking talking, but his computer.
可能這不是史蒂芬·霍金在講話,而是他的計算機在講話。
Maybe the computer doesn't want a competitor.
也許是這臺計算機不想讓競爭者出現。
You guys make AI sound like deus ex machina, which it’s not.
你們這些人把人工智慧說得像是「機械降神」一樣,但它不是。
Hopefully it finds a gentle way to kill us all.
希望它能夠找到一種溫柔的方式殺死我們所有人。
Assuming it's possible to create a sentient AI... everyone is just skimming over the actual effort and breakthroughs required to get there; for all we know, we may never get there.
假設真的有可能創造出一個有知覺的人工智慧……每個人都忽略了要達到那一步所需要的實際努力和突破,說不定我們永遠也到不了那一步。
Every time Stephen Hawking comes up in an AI conversation I question why people pay so much attention to what he thinks about it.
每當史蒂芬·霍金說一些和人工智慧相關的話,我都要問為什麼人們要對他想什麼這麼關注。
He's a genius sure, but he's a physicist who specializes in cosmology. The connection to AI is tenuous at best.
他當然是個天才,但他是一位專攻宇宙學的物理學家,他與人工智慧的聯繫充其量也只是微乎其微。
Yet every time he says something about AI people just think "hey this guy is a well known genius, guess his opinion is a big deal!"
但是每次他發表對人工智慧的看法時,人們就會想:「嘿,這傢伙是個知名的天才,那他的看法肯定很重要!」
If Stephen Hawking started to warn you about the dangers of gluten would you also be all ears? If Tom Hanks spoke out against post-modernist West German folk-dance would you also take him for his word?
如果史蒂芬·霍金開始警告你麩質的危害,你也會洗耳恭聽嗎?如果湯姆·漢克斯出言反對後現代主義的西德民間舞蹈,你也會把他的話當真嗎?
AI isn't what people think it is. It essentially boils down to a mathematical formula (at its core), and it tries to minimize its output.
人工智慧不是人們想像的那樣。它本質上可以歸結為一個數學公式(就其核心而言),而它所做的就是盡量讓這個公式的輸出最小化。
There is no consciousness. No moral code. It just finds patterns and acts on patterns it's been trained on millions and millions of times. That's why it's dangerous to trust AI: it's not always 100% correct and can have unforeseen results in the end.
它沒有意識,也沒有道德準則。它只是在尋找模式,並按照它被訓練過千百萬次的那些模式行事。這就是為什麼盲目信任人工智慧是危險的:它並不總是百分之百正確,最終可能產生無法預見的結果。
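The "formula it tries to minimize" is usually called a loss function. Here is a minimal, purely illustrative Python sketch of that idea: one made-up dataset, one weight, and a loop that repeatedly nudges the weight so the formula's output shrinks. The data, learning rate, and step count are all invented for illustration; real systems do the same thing with millions of parameters and millions of examples.

# A toy version of "find patterns by minimising a formula": fit y ≈ w * x
# to a hypothetical dataset by gradient descent. Everything here is made up
# purely to illustrate the mechanism the comment above describes.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # hypothetical (x, y) pairs

def loss(w):
    # The formula at the core: mean squared error of the pattern y ≈ w * x.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, eps=1e-6):
    # Numerical gradient: which way should w move to make the loss smaller?
    return (loss(w + eps) - loss(w - eps)) / (2 * eps)

w = 0.0
for _ in range(10_000):      # "trained on it millions and millions of times"
    w -= 0.01 * grad(w)      # take a small step that reduces the loss

print(f"learned weight: {w:.3f}, remaining loss: {loss(w):.4f}")

There is no understanding anywhere in that loop, only a number that makes the formula small, which is why the result can be confidently wrong on inputs that don't resemble the training data: exactly the "unforeseen results" the comment warns about.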