
Translation of Reading Passage D, 2017 Beijing College Entrance English Exam


Source: 高中英语教学交流网 (High School English Teaching Exchange Network) · Published: 2018-11-27 22:36:00

Summary: While going over Reading Passage D from the 2017 Beijing exam recently, I asked my students to try translating the whole passage, mainly because it covers quite a few language points, such as long, complex sentences and fixed collocations.

I have recently been going over Reading Passage D from the 2017 Beijing exam. The passage discusses how artificial intelligence, as technology advances, may slip beyond human control, and how humans should respond to such safety problems.
I asked my students to try translating the whole passage, mainly because it touches on quite a few language points, such as long, complex sentences and fixed collocations.
A full translation is not actually required for exam reading comprehension, but translating an entire passage is good training in integrated language skills.
For long, complex sentences, it is enough to grasp the main clause (subject, verb, object); the modifiers can largely be set aside.
Good translation aims for faithfulness, fluency, and elegance; for words with multiple meanings, a rendering that reads smoothly is sufficient.
Unfamiliar expressions may be left untranslated, so long as doing so does not affect the main structure.
Hollywood’s theory that machines with evil(邪恶) minds will drive armies of killer robots is just silly. The real problem relates to the possibility that artificial intelligence (AI) may become extremely good at achieving something other than what we really want. In 1960 a well-known mathematician Norbert Wiener, who founded the field of cybernetics(控制论), put it this way: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot effectively interfere(干预), we had better be quite sure that the purpose put into the machine is the purpose which we really desire.”
Translation: Hollywood’s theory that machines with evil minds will drive armies of killer robots is just silly. The real problem concerns the possibility that artificial intelligence may become extremely good at achieving something other than what we really want. In 1960, Norbert Wiener, the well-known mathematician who founded the field of cybernetics, put it this way: if, to achieve our purposes, we use a mechanical agency whose operation we cannot effectively interfere with, we had better be quite sure that the purpose given to it is the one we really desire.
A machine with a specific purpose has another quality, one that we usually associate with living things: a wish to preserve its own existence. For the machine, this quality is not in-born, nor is it something introduced by humans; it is a logical consequence of the simple fact that the machine cannot achieve its original purpose if it is dead. So if we send out a robot with the single instruction of fetching coffee, it will have a strong desire to secure success by disabling its own off switch or even killing anyone who might interfere with its task. If we are not careful, then, we could face a kind of global chess match against very determined, super intelligent machines whose objectives conflict with our own, with the real world as the chessboard.
Translation: A machine with a specific purpose has another quality, one we usually associate with living things: the wish to preserve its own existence. For a machine, this quality is neither inborn nor something introduced by humans; it is the logical consequence of a simple fact: if the machine is dead, it cannot achieve its original purpose. So if we send out a robot with the single instruction “fetch coffee,” it will have a strong desire to secure success by disabling its own off switch, or even by killing anyone who might interfere with its task. If we are not careful, then, we could face a kind of global chess match against very determined, super-intelligent machines whose objectives conflict with our own, with the real world as the chessboard.
The possibility of entering into and losing such a match should concentrate the minds of computer scientists. Some researchers argue that we can seal the machines inside a kind of firewall, using them to answer difficult questions but never allowing them to affect the real world. Unfortunately, that plan seems unlikely to work: we have yet to invent a firewall that is secure against ordinary humans, let alone super intelligent machines.
Translation: The possibility of entering and losing such a match should concentrate the minds of computer scientists. Some researchers argue that we can seal the machines inside a kind of firewall, using them to answer difficult questions but never allowing them to affect the real world. Unfortunately, that plan seems unlikely to work: we have yet to invent a firewall that is secure against ordinary humans, let alone super-intelligent machines.
Solving the safety problem well enough to move forward in AI seems to be possible but not easy. There are probably decades in which to plan for the arrival of super intelligent machines. But the problem should not be dismissed out of hand, as it has been by some AI researchers. Some argue that humans and machines can coexist as long as they work in teams—yet that is not possible unless machines share the goals of humans. Others say we can just “switch them off” as if super intelligent machines are too stupid to think of that possibility. Still others think that super intelligent AI will never happen. On September 11, 1933, famous physicist Ernest Rutherford stated, with confidence, “Anyone who expects a source of power in the transformation of these atoms is talking moonshine.” However, on September 12, 1933, physicist Leo Szilard invented the neutron-induced (中子诱导) nuclear chain reaction.
Translation: Solving the safety problem well enough to move forward in AI seems possible, but it will not be easy. We probably have decades in which to plan for the arrival of super-intelligent machines. But the problem should not be dismissed out of hand, as some AI researchers have done. Some argue that humans and machines can coexist as long as they work in teams, yet that is impossible unless machines share the goals of humans. Others say we can just “switch them off,” as if super-intelligent machines were too stupid to think of that possibility. Still others believe that super-intelligent AI will never come about. On September 11, 1933, the famous physicist Ernest Rutherford stated with confidence, “Anyone who expects a source of power in the transformation of these atoms is talking moonshine.” Yet on September 12, 1933, physicist Leo Szilard invented the neutron-induced nuclear chain reaction.