This year marks exactly two centuries since the publication of Frankenstein; or, The Modern Prometheus, by Mary Shelley. Even before the invention of the electric light bulb, 31. the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come.
Today the rapid growth of artificial intelligence (AI) raises fundamental questions: “What is intelligence, identity, or consciousness? What makes humans human?”
What is being called artificial general intelligence, machines that would imitate the way humans think, continues to elude scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted in popular sci-fi TV series such as “Westworld” and “Humans”.
32. Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist. “We are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.”
But that doesn’t mean crucial ethical issues involving AI aren’t at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. AI “vision” today is not nearly as sophisticated as that of humans. 33. And to anticipate every imaginable driving situation is a difficult programming problem.
Whenever decisions are based on masses of data, “you quickly get into a lot of ethical questions,” notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI. Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring.
On June 7 Google pledged not to “design or deploy AI” that would cause “overall harm,” or to develop AI-directed weapons or use AI for surveillance that would violate international norms. It also pledged not to deploy AI whose use would violate international laws or human rights.
34. While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be explainable, transparent, and fair.
To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity’s highest values? Only then will they be useful servants and not Frankenstein’s out-of-control monster.
31. Mary Shelley’s novel Frankenstein is mentioned because it _______.
A. fascinates AI scientists all over the world.
B. has remained popular for as long as 200 years.
C. involves some concerns raised by AI today.
D. has sparked serious ethical controversies.
32. In David Eagleman’s opinion, our current knowledge of consciousness _______.
A. helps explain artificial intelligence.
B. can be misleading to robot making.
C. inspires popular sci-fi TV series.
D. is too limited for us to reproduce it.
33. The solution to the ethical issues brought by autonomous vehicles _______.
A. can hardly ever be found.
B. is still beyond our capacity.
C. causes little public concern.
D. has aroused much curiosity.
34. The author’s attitude toward Google’s pledge is one of _______.
A. affirmation.
B. skepticism.
C. contempt.
D. respect.
35. Which of the following would be the best title for the text?
A. AI’s Future: In the Hands of Tech Giants
B. Frankenstein, the Novel Predicting the Age of AI
C. The Conscience of AI: Complex But Inevitable
D. AI Shall Be Killers Once Out of Control
Answers: 31–35 CDBAC