Hawking's speech: Artificial intelligence could also be the terminator of human civilisation

2020-11-25 Phoenix Mobile (鳳凰網)


On April 27, the renowned physicist Stephen Hawking delivered a video address to the Global Mobile Internet Conference (GMIC) in Beijing. In it, Hawking reiterated that the rise of artificial intelligence will be either the best thing or the worst thing ever to happen to humanity, and argued that we must be alert to the threats posed by its development: once AI breaks free of its constraints and re-designs itself at an ever-increasing rate, humans, limited by slow biological evolution, will be unable to compete and will be superseded.

Full text of Hawking's speech and the Q&A transcript

1. AI: either the best thing, or the worst, ever to happen to humanity

Over my lifetime, I have seen very significant societal changes. Probably the most significant, and the one whose impact on humanity grows by the day, is the rise of artificial intelligence.

In short, I believe that the rise of powerful AI will be either the best thing, or the worst, ever to happen to humanity.

I have to say now that we do not yet know which. But we should do all we can to ensure that its future development benefits our descendants and our environment.

We have no other option. I see the development of AI as a trend with its own problems, problems that must be dealt with now and into the future.

The progress in AI research and development is swift. And perhaps we should all pause for a moment and focus our research not only on making AI more capable, but on maximizing its societal benefit.

Such considerations motivated the American Association for Artificial Intelligence's 2008-2009 Presidential Panel on Long-Term AI Futures, which until recently had focused largely on techniques that are neutral with respect to purpose. But our AI systems must do what we want them to do.

Inter-disciplinary research can be a way forward: ranging from economics, law, and philosophy to computer security, formal methods, and of course the various branches of AI itself.

Everything that civilisation has to offer is a product of human intelligence, and I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence, and then exceed it.

But we do not know. So we cannot know whether we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it. Indeed, we have concerns that clever machines will be capable of undertaking work currently done by humans, and will swiftly destroy millions of jobs.

While the primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass human intelligence: AI would take off on its own, and re-design itself at an ever-increasing rate.

Humans, limited by slow biological evolution, could not compete and would be superseded. This would bring great disruption to our economy. And in the future, AI could develop a will of its own, a will that is in conflict with ours.

Many believe that humans can command the pace of technology for a decently long time, and that the potential of AI to solve most of the world's problems will thereby be realised. Although I am well known as an optimist regarding the human race, I am not so sure.

2. The impact of AI on society needs serious research

In January 2015, I, along with the technological entrepreneur Elon Musk and many other AI experts, signed an open letter on artificial intelligence, calling for serious research into its impact on society.

In the past, Elon Musk has warned that super-human artificial intelligence could provide incalculable benefits, but, if deployed incautiously, could have an adverse effect on the human race. He and I sit on the scientific advisory board of the Future of Life Institute, an organization working to mitigate the existential risks facing humanity.

It was this organization that drafted the open letter. The letter called for concrete research on how to prevent potential problems while also reaping the potential benefits AI offers us, and it is designed to get AI researchers and developers to pay more attention to AI safety.

For policymakers and the general public, the letter is meant to be informative, not alarmist. We think it is very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues. For example, AI has the potential to eradicate disease and poverty, but researchers must work to create AI that can be controlled.

The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter", lays out detailed research priorities in an accompanying twelve-page document.

For the last 20 years or so, AI has focused on the problems surrounding the construction of intelligent agents: systems that perceive and act in some environment.

In this context, intelligence is related to statistical and economic notions of rationality: colloquially, the ability to make good decisions, plans, or inferences. As a result of this work, there has been a large degree of integration and cross-fertilisation among AI, machine learning, statistics, control theory, neuroscience, and other fields.

The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks, such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As development in these areas moves from laboratory research to economically valuable technologies, a virtuous cycle evolves, whereby even small improvements in performance are worth large sums of money, prompting further and greater investment in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase.

The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide.

As I have said, the eradication of disease and poverty is not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding the potential pitfalls.

3. AI in the short term and the long term

Artificial intelligence research is now progressing rapidly, and it can be discussed in terms of short-term and long-term concerns.

Short-term concerns:

1. Autonomous vehicles.

From civilian drones to self-driving cars. In an emergency, a self-driving car may have to decide between a small risk of a major accident and a high probability of a minor one.

2. Lethal intelligent autonomous weapons.

Should they be banned? If so, how should "autonomy" be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned?

3. Privacy.

As AI becomes increasingly able to interpret large surveillance datasets, privacy concerns arise, along with the question of how best to manage the economic impact of the jobs AI displaces.

The long-term concern is primarily the potential loss of control of AI systems.

With the rise of super-intelligences that do not act in accordance with human wishes, such powerful systems could threaten humanity. Are such dystopic outcomes possible? If so, how might they arise? What research should we invest in, to better understand and address the possibility of the rise of a dangerous super-intelligence, or the occurrence of an intelligence explosion?

Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this control problem. More research is therefore needed to find and validate a robust solution.
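To make concrete what "reinforcement learning with a simple utility function" refers to, here is a minimal, self-contained sketch. It is illustrative only: the corridor environment, the function name, and all parameters are invented for this example, not drawn from the speech. The point it demonstrates is that the agent optimises exactly the scalar reward it is given and nothing else, which is the control problem in miniature.

```python
import random

def train_agent(episodes=500, seed=1):
    """Tabular Q-learning on a 5-state corridor (a toy example).

    The agent receives reward 1.0 only on reaching state 4. It learns
    to maximise that number; whether "reach state 4" is what we truly
    wanted is a question the algorithm itself never asks.
    """
    rng = random.Random(seed)
    n_states, actions = 5, (1, -1)           # move right or left
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma, eps = 0.5, 0.9, 0.1        # learning rate, discount, exploration

    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0   # the entire "utility function"
            # standard Q-learning update
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
            s = s2
    return q
```

After training, the greedy policy moves right from every state; the agent pursues its given reward signal single-mindedly, which is precisely why a "simple utility function" is an inadequate tool for control.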

Recent landmarks, such as the self-driving cars already mentioned, or a computer winning at the game of Go, are signs of what is to come, and enormous levels of investment are pouring into this technology. The achievements we have seen so far will surely pale against what the coming decades will bring.

And we cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage industrialisation has done to the natural world. Every aspect of our lives will be transformed.

In short, success in creating AI could be the biggest event in the history of our civilisation.

But it could also be the last, unless we learn how to avoid the risks. I have said in the past that the development of full AI could spell the end of the human race, for instance through the ultimate use of powerful autonomous weapons. Earlier this year, I, along with other scientists from around the world, supported the United Nations convention to negotiate a ban on nuclear weapons. We await the outcome with nervous anticipation.

Currently, nine nuclear powers have access to roughly 14,000 nuclear weapons, any one of which can obliterate a city. Radioactive fall-out would contaminate wide swathes of farmland, and the most horrible hazard of all is a nuclear-induced winter, in which fire and smoke might trigger a global mini ice age.

The result would be a complete collapse of the global food system and apocalyptic unrest, potentially killing most people on earth. We scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them, and who discovered that their effects are even more horrific than first thought.

My talk of doom may have frightened you all here today, and I apologise. But it is important that you, as attendees of this conference, recognise the position you hold in influencing the future research and development of today's technology.

I believe we should join together to call for the support of international treaties, or to sign open letters presented to individual governments. Technology leaders and scientists are doing what they can to obviate the rise of uncontrollable AI.

Last October, I opened a new centre in Cambridge, England, to tackle some of the open-ended questions raised by the rapid pace of development in AI research. The Leverhulme Centre for the Future of Intelligence is a multi-disciplinary institute dedicated to researching the future of intelligence, a matter crucial to the future of our civilisation and our species.

We spend a great deal of time studying history, which, let's face it, is mostly the history of stupidity. So it is a welcome change that people are instead studying the future of intelligence.

We are aware of the potential dangers, but I am at heart an optimist, and I believe that the potential benefits of creating intelligence are huge. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage industrialisation has done to the natural world.

Every aspect of our lives will be transformed. My colleague at the institute, Huw Price, has acknowledged that the centre came about partly as a result of the university's Centre for the Study of Existential Risk, which examines a wider range of potential problems for humanity, while the Leverhulme Centre has a narrower focus.

4. Recent developments in AI

Recent developments in AI include a call by the European Parliament for a set of regulations to govern the use and creation of robots and AI. Somewhat surprisingly, this includes a form of electronic personhood, to ensure rights and responsibilities for the most capable and advanced AI.

A European Parliament spokesman has commented that, as a growing number of areas in our daily lives are increasingly affected by robots, we need to ensure that robots are, and will remain, in the service of humans.

The report presented to MEPs makes it clear that it believes the world is on the cusp of a new industrial robot revolution. It examines whether providing robots with legal rights as electronic persons, on a par with the legal definition of corporate personhood, would be permissible.

But it stresses that, at all times, researchers and designers should ensure that every robot design incorporates a kill switch. This didn't help the scientists aboard the spaceship with HAL, the malfunctioning computer in Kubrick's 2001: A Space Odyssey, but that was fiction. We deal with fact.

Lorna Brazell, a partner at the multinational law firm Osborne Clarke, says in the report that we don't grant whales and gorillas personhood, so there is no need to jump at robotic personhood. But the wariness is there.

The report acknowledges the possibility that, within the space of a few decades, AI could surpass human intellectual capacity and challenge the human-robot relationship.

Finally, the report calls for the creation of a European agency for robotics and AI that can provide technical, ethical, and regulatory expertise. If MEPs vote in favour of legislation, the report will go to the European Commission, which will have three months to decide what legislative steps to take.

We too have a role to play in making sure the next generation has not just the opportunity but the determination to engage fully with the study of science at an early stage, so that they can go on to fulfil their potential and create a better world for the whole human race. This is what I meant just now when I spoke about the importance of learning and education. We need to take this beyond a theoretical discussion of how things should be, and take action to make sure they have the opportunity to get on board.

We stand on the threshold of a brave new world. It is an exciting, if precarious, place to be, and you are the pioneers. I wish you well.

Q&A session

Questions for Hawking from Chinese tech leaders, scientists, investors, and netizens (selected from the questions submitted by a million online followers)

Q1: Kai-Fu Lee, CEO of Sinovation Ventures:

"The large internet companies have access to massive amounts of data, which gives them opportunities to profit by trading on users' privacy and interests. Lured by huge economic gains, these companies cannot truly discipline themselves, and this disproportionate access to data also makes it harder for small companies and startups to innovate. You have often spoken about restraining artificial intelligence, but it is much harder to restrain humans. What do you think we can do to restrain these giants?"

A1:

As I understand it, these companies use the data only for statistical purposes, but the use of any personal information should be banned. It would help privacy if all material on the internet were encrypted by quantum cryptography, with a code that internet companies could not break in a reasonable time. But the security services would object.
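The "quantum cryptography" invoked in this answer usually means quantum key distribution, whose best-known scheme is BB84. As an illustrative aside (not part of the speech), here is a toy classical simulation of BB84's core idea; the function `bb84_key` and its parameters are invented for this sketch:

```python
import random

def bb84_key(n_bits=256, eavesdrop=False, seed=0):
    """Toy simulation of the BB84 quantum key distribution protocol.

    Alice encodes random bits in randomly chosen bases; Bob measures in
    randomly chosen bases. Measuring in the wrong basis yields a random
    result, so an eavesdropper who intercepts, measures, and re-sends
    the qubits disturbs them and betrays herself through errors in the
    sifted key. Returns (sifted key length, error count).
    """
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n_bits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            # Eve's measurement collapses the state: wrong basis -> random bit.
            bit = bit if eve_basis == a_basis else rng.randint(0, 1)
            a_basis = eve_basis  # the re-sent qubit is prepared in Eve's basis
        # Bob's measurement: matching basis recovers the bit, otherwise random.
        bob_bits.append(bit if b_basis == a_basis else rng.randint(0, 1))

    # Sifting: Alice and Bob publicly compare bases, keep matching positions.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors
```

Without an eavesdropper the sifted key is error-free; with one, about a quarter of the sifted bits disagree in expectation, which is how the tap is detected.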

Q2: Fu Sheng, CEO of Cheetah Mobile:

"Does the human soul exist as a quantum state, or as another manifestation in higher-dimensional space?"

A2:

Recent advances in AI, such as computers winning at chess and Go, show that there is no essential difference between the human brain and a computer. On this point I hold the opposite view to my colleague Roger Penrose. Would one say a computer has a soul? In my opinion, the notion of an individual human soul is a Christian concept, linked to the afterlife, which I consider to be a fairy story.

Q3: Ya-Qin Zhang, President of Baidu:

"The way human beings observe and abstract the universe is constantly evolving, from observation and estimation, to Newton's laws and Einstein's equations, and now to data-driven computation and AI. What comes next?"

A3:

We need a new quantum theory which unifies gravity with the other forces of nature. Many people claim that it is string theory, but I have my doubts. So far, about the only prediction is that space-time has ten dimensions.

Q4: Zhang Shoucheng, Professor of Physics, Stanford University:

"If you were to tell aliens about the highest achievements of our human civilisation on the back of a postcard, what would you write?"

A4:

It is no good telling aliens about beauty, or any other art form that we might consider our highest artistic achievement, because these are very specific to humans. Instead, I would write about Godel's incompleteness theorems and Fermat's last theorem. These are things aliens would understand.
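For reference, the two results named in this answer can be stated compactly (standard formulations, not part of the speech):

```latex
\textbf{Fermat's Last Theorem.} For every integer $n > 2$, the equation
\[ a^n + b^n = c^n \]
has no solution in positive integers $a$, $b$, $c$. (Proved by Andrew Wiles in 1995.)

\textbf{G\"odel's first incompleteness theorem.} Any consistent, effectively
axiomatized formal system $F$ that can express elementary arithmetic contains
a sentence $G_F$ that $F$ can neither prove nor refute.
```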

Q5: The GMIC host:

"We wish to promote the scientific spirit at all nine GMIC conferences globally. Could you recommend three books, to help people in the technology world better understand science and its future?"

A5:

They should be writing books, not reading them. One fully understands something only when one has written a book about it.

Q6: A Weibo user:

"What is the one thing we should all do in life, and the one thing we should never do?"

A6:

We should never give up, and we should all strive to understand as much of the world as we can.

Q7: A Weibo user:

"Throughout its long history, humanity has gone through revolution after revolution, from the Stone Age to the age of steam to the age of electricity. What do you think will drive the next revolution?"

A7:

(I believe it will be) advances in computer science, including artificial intelligence and quantum computing. Technology already forms a major part of our lives, but in the coming decades it will permeate every aspect of society, intelligently supporting and advising us in many areas, including healthcare, work, education, and science. But we must make sure that we control AI, and not it us.

Q8: Hu Haiquan, musician and investor:

"If the window for mature interstellar-migration technology arrives too late, are there internal catastrophes that humanity cannot solve and that could lead to human extinction, setting aside external ones such as an asteroid striking the earth?"

A8:

Yes. Over-population, disease, war, famine, climate change, and lack of water. It is within humanity's power to solve these crises, but unfortunately they remain serious threats to our continued presence on earth. They are all solvable, but so far they have not been solved.

