Whether you think artificial intelligence will save the world or end it, you have Geoffrey Hinton to thank. Hinton has been called the Godfather of AI, a British computer scientist whose controversial ideas helped make advanced artificial intelligence possible and so changed the world. Hinton believes that AI will do enormous good, but tonight he has a warning. He says that AI systems may be more intelligent than we know, and there's a chance the machines could take over. Which made us ask the question: Does humanity know what it's doing?
"No, um, I think we're moving into a period when, for the first time ever, we may have things more intelligent than us." “不,嗯,我认为我们正在进入一个时期,这是有史以来的第一次,我们可能拥有比我们更聪明的事物。”
You believe they can understand? 你相信它们能理解吗?
"Yes." “是的。”
You believe they are intelligent? 你认为它们是聪明的吗?
"Yes." “是的。”
You believe these systems have experiences of their own and can make decisions based on those experiences, in the same sense as people do? 你认为这些系统拥有自己的经验,并且可以根据这些经验做出决定,就像人类一样吗?
"Yes." “是的。”
Are they conscious? 它们有意识吗?
"I think they probably don't have much self-awareness at present, so in that sense I don't think they're conscious." “我认为它们目前可能没有太多的自我意识,所以从这个意义上讲,我不认为它们有意识。”
Will they have self-awareness, consciousness? 它们会有自我意识、意识吗?
"I, oh yes, I think they will in time, and so human beings will be the second most intelligent beings on the planet." “我,哦是的,我认为它们最终会有,那样人类将成为地球上第二聪明的生物。”
Yeah. Geoffrey Hinton told us the artificial intelligence he set in motion was an accident, born of a failure. In the 1970s at the University of Edinburgh, he dreamed of simulating a neural network on a computer, simply as a tool for what he was really studying: the human brain. But back then, almost no one thought software could mimic the brain. His PhD advisor told him to drop it before it ruined his career. Hinton says he failed to figure out the human mind, but the long pursuit led to an artificial version.
"It took much, much longer than I expected, it took like 50 years before it worked well, but in the end, it did work well." “这花了比
我预期的要长得多的时间,大概花了50年才运作良好,但最终,它确实运作得很好。”
At what point did you realize that you were right about neural networks, and most everyone else was wrong?
"I always thought I was right."
In 2019, Hinton and collaborators Yann LeCun, on the left, and Yoshua Bengio won the Turing Award, the Nobel Prize of computing. To understand how their work on artificial neural networks helped machines learn to learn, let us take you to a game.
"Look at that, oh my goodness." “看看那个,哦,天哪。”
This is Google's AI lab in London, which we first showed you this past April. Jeffrey Hinton wasn't involved in this soccer project, but these robots are a great example of machine learning. The thing to understand is that the robots were not programmed to play soccer; they were told to score. They had to learn how on their own. 这是我们今年四月首次向您展示的谷歌位于伦敦的AI实验室。杰弗里·辛顿并没有参与这个足球项目,但这些机器人是机器学习的一个很好的例子。需要理解的是,这些机器人并没有被编程来踢足球;它们被告知要进球。它们必须自己学习如何做到这一点。
"Oh, go in." “哦,进去。”
In general, here's how AI does it. Hinton and his collaborators created software in layers, with each layer handling part of the problem. That's the so-called neural network. But this is the key: when, for example, the robot scores, a message is sent back down through all of the layers that says, "That pathway was right." Likewise, when an answer is wrong, that message goes down through the network. So, correct connections get stronger, wrong connections get weaker, and by trial and error, the machine teaches itself.
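For readers who want to see that mechanism concretely, here is a minimal sketch in plain Python with numpy. It is an illustration of the general technique the paragraph describes, an error signal sent back down through the layers, and not the robots' actual training code; the toy task (XOR), network size, and learning rate are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A classic toy task: learn XOR, which a single layer of connections cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of connections, initialized randomly.
w1 = rng.normal(scale=0.5, size=(2, 8))   # input  -> hidden
w2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: activity flows up through the layers.
    h = sigmoid(X @ w1)
    out = sigmoid(h @ w2)

    # Backward pass: the error "message" is sent back down through
    # all of the layers, assigning blame to each connection.
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ w2.T) * h * (1 - h)

    # Correct connections get stronger, wrong connections get weaker.
    w2 -= 0.5 * h.T @ d_out
    w1 -= 0.5 * X.T @ d_hid

print(out.round(2).ravel())  # typically converges toward [0, 1, 1, 0]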
"You think these AI systems are better at learning than the human mind?" “你认为这些AI系统比人类的大脑更擅长学习吗?”
"I think they may be, yes. And at present, they're quite a lot smaller. So even the biggest chatbots only have about a trillion connections in them. The human brain has about 100 trillion. And yet, in the trillion connections in a chatbot, it knows far more than you do in your 100 trillion connections, which suggests it's got a much better way of getting knowledge into those connections." “我认为他们可能是,是的。而且目前,他们还小得多。所以即使是最大的聊天机器人,也只有大约一万亿个连接。人类大脑大约有100万亿个。然而,在聊天机器人的一万亿个连接中,它所知道的远远超过你在100万亿个连接中所知道的,这表明它有更好的方式将知识融入这些连接。”
A much better way of getting knowledge that isn't fully understood. 一种获取知识的更好方式,但尚未完全理解。
"We have a very good idea of, sort of, roughly what it's doing, but as soon as it gets really complicated, we don't actually know what's going on anymore than we know what's going on in your brain." “我们对它在做什么有一个非常好的大致了解,但一旦事情变得非常复杂,我们实际上并不比了解你的大脑更了解它在做什么。”
What do you mean we don't know exactly how it works? It was designed by people. 你是说我们不确切地知道它是如何工作的吗?它是由人设计的。
"No, it wasn't. What we did was, we designed the learning algorithm. That's a bit like designing the principle of evolution. But when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things, but we don't really understand exactly how they do those things." “不,不是这样的。我们所做的是,我们设计了学习算法。这有点像设计进化原理。但当这个学习算法与数据交互时,它产生了擅长做事情的复杂神经网络,但我们并不真正理解它们是如何做到这些的。”
What are the implications of these systems autonomously writing their own computer code and executing their own computer code?
"That's a serious worry, right? So, one of the ways in which these systems might escape control is by writing their own computer code to modify themselves. And that's something we need to seriously worry about."
What do you say to someone who might argue, if the systems become malevolent, just turn them off?
"They will be able to manipulate people, right? And these will be very good at convincing people because they'll have learned from all the novels that were ever written, all the books by Makavelli, all the political connives. They'll know all that stuff, they'll know how to do it." “它们将能够操纵人,对吧?而且这些系统会非常擅长说服人,因为它们已经从所有曾经写过的小说、所有马基雅维利的书籍、所有政治阴谋中学到了东西。它们会知道所有这些东西,会知道怎么做。”
Know-how of the human kind runs in Geoffrey Hinton's family. His ancestors include mathematician George Boole, who invented the basis of computing, and George Everest, who surveyed India and got that mountain named after him. But as a boy, Hinton himself could never climb the peak of expectations raised by a domineering father.
"Every morning when I went to school, he'd actually say to me as I walked down the driveway, 'Get in their pitching, and maybe when you're twice as old as me, you'll be half as good.'" “每天早上我去上学的时候,他实际上会在我走下车道时对我说,‘去那儿努力吧,也许当你年龄是我两倍时,你会有我一半的好。’”
Dad was an authority on beetles.
"He knew a lot more about beetles than he knew about people."
Did you feel that as a child?
"A bit, yes."
"When he died, we went to his study at the university, and the walls were lined with boxes of papers on different kinds of beetle. And just near the door, there was a slightly smaller box that simply said 'not insects.' And that's where he had all the things about the family."
Today, at 75, Hinton recently retired after what he calls 10 happy years at Google. Now, he's professor emeritus at the University of Toronto, and he happened to mention he has more academic citations than his father. Some of his research led to chatbots like Google's Bard, which we met last spring.
"Confounding, absolutely confounding." “令人困惑,绝对令人困惑。”
We asked Bard to write a story from six words: "For sale, baby shoes, never worn." 我们请巴德用六个词写一个故事:“出售,婴儿鞋,未曾穿过。”
"Holy cow." “哇哦。”
"The shoes were a gift from my wife, but we never had a baby." “这些鞋是我妻子送的礼物,但我们从未有过孩子。”
Bard created a deeply human tale of a man whose wife could not conceive, and a stranger who accepted the shoes to heal the pain after her miscarriage. 巴德创造了一个深刻的人性故事,讲述了一个男人的妻子不能怀孕,以及一个接受这双鞋来治愈她流产后痛苦的陌生人。
"I am rarely speechless. I don't know what to make of this." “我很少无言以对。我不知道该如何理解这个。”
Chatbots are said to be language models that just predict the next most likely word based on probability.
"You'll hear people saying things like, 'They're just doing autocomplete, they're just trying to pred the next word,' and 'they're just using statistics.' Well, it's true they're just trying to predict the next word, but if you think about it, to predict the next word, you have to understand the sentences. So the idea they're just predicting the next word so they're not intelligent is crazy. You have to be really intelligent to predict the next word really accurately."
“你会听到人们说,‘他们只是在做自动补全,只是在试图预测下一个词’,‘他们只是在使用统计学。’其实,他们确实在试图预测下一个词,但如果你仔细想想,要预测下一个词,你必须理解这些句子。所以说他们只是在预测下一个词,因此他们不聪明,这种想法是疯狂的。你必须非常聪明才能非常准确地预测下一个词。”
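As a contrast, here is what the dismissive "just statistics" view literally describes: a lookup table of word-following counts, sketched in Python over a made-up toy corpus. A table like this can "predict the next word" in the trivial sense; Hinton's point is that doing it accurately on open-ended text demands far more than this.

```python
from collections import Counter, defaultdict

# A made-up toy corpus standing in for real training text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Tally how often each word followed each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, if any was seen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("sat"))  # 'on' -- pure frequency, no understanding
```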
To prove it, Hinton showed us a test he devised for ChatGPT-4, the chatbot from a company called OpenAI. It was sort of reassuring to see a Turing Award winner mistype and blame the computer.
"Oh, damn, this thing, we're going to go back and start again."
“哦,该死,这东西,我们要回头重新开始。”
That's okay. Hinton's test was a riddle about house painting. An answer would demand reasoning and planning. This is what he typed into ChatGPT-4:
"The rooms in my house are painted white, or blue, or yellow, and yellow paint fades to white within a year. In two years' time, I'd like all the rooms to be white. What should I do?"
“我家的房间被漆成了白色、蓝色或黄色,而黄色的油漆会在一年内褪成白色。两年后,我希望所有房间都是白色的。我该怎么做?”
The answer began in one second. GPT-4 advised: The rooms painted in blue need to be repainted. The rooms painted in yellow don't need to be repainted because they would fade to white before the deadline.
"And oh, I didn't even think of that."
It warned: If you paint the yellow rooms white, there's a risk the color might be off when the yellow fades. Besides, it advised, you'd be wasting resources painting rooms that were going to fade to white anyway.
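The riddle's logic is small enough to write down explicitly. This sketch (my own encoding, not anything GPT-4 produced) just makes the reasoning steps checkable: blue must be repainted, yellow fades to white before the two-year deadline, and white is already done.

```python
DEADLINE_YEARS = 2
YELLOW_FADE_YEARS = 1  # "yellow paint fades to white within a year"

def plan(color: str) -> str:
    if color == "white":
        return "leave as is"
    if color == "yellow" and YELLOW_FADE_YEARS <= DEADLINE_YEARS:
        return "leave as is: it fades to white before the deadline"
    return "repaint white"  # blue never fades on its own

for color in ("white", "blue", "yellow"):
    print(color, "->", plan(color))
```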
"You believe that Chat GPT-4 understands?"
“你相信Chat GPT-4懂了吗?”
"I believe it definitely understands, yes. And in five years' time, I think in 5 years' time, it may well be able to reason better than us."
“我相
信它确实理解了,是的。而且在五年后,我认为到那时,它可能会比我们更能合理推理。”
Reasoning that, he says, is leading to AI's risks and great benefits.
"So, an obvious area where there's huge benefits is healthcare. AI is already comparable with radiologists at understanding what's going on in medical images. It's going to be very good at designing drugs; it already is designing drugs. So that's an area where it's almost entirely going to do good. I like that area."
“所以,一个明显有巨大好处的领域是医疗保健。AI在理解医学图像中的情况方面已经可以媲美放射科医师。它在设计药物方面将非常出色;它已经在设计药物了。所以那是一个几乎完全会带来好处的领域。我喜欢那个领域。”
The risks are what?
"Well, the risks are having a whole class of people who are unemployed and not valued much because what they used to do is now done by machines."
Other immediate risks he worries about include fake news, unintended bias in employment and policing, and autonomous battlefield robots.
"What is a path forward that ensures safety?"
“确保安全的前进道路是什么?”
"I don't know. I can't see a path that guarantees safety. We're entering a period of great uncertainty where we're dealing with things we've never dealt with before. Normally, the first time you deal with something totally novel, you get it wrong, and we can't afford to get it wrong with these things."
Can't afford to get it wrong, why?
"Well, because they might take over."
Take over from humanity?
"Yes, that's a possibility. I'm not saying it will happen. If we could stop them ever wanting to, that would be great. But it's not clear we can stop them ever wanting to."
Geoffrey Hinton told us he has no regrets because of AI's potential for good. But he says now is the moment to run experiments, to understand AI, for governments to impose regulations, and for a world treaty to ban the use of military robots.
He reminded us of Robert Oppenheimer, who, after inventing the atomic bomb, campaigned against the hydrogen bomb, a man who changed the world and found the world beyond his control.
"It may be we look back and see this as a kind of turning point, when humanity had to make the decision about whether to develop these things further and what to do to protect themselves if they did."
"Um, I don't know. I think my main message is there's enormous uncertainty about what's going to happen next. These things do understand, and because they understand, we need to think hard about what's going to happen next. And we just don't know."