“SHOULD WE AUTOMATE away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart...and replace us? Should we risk loss of control of our civilisation?” These questions were asked last month in an open letter from the Future of Life Institute, an NGO. It called for a six-month “pause” in the creation of the most advanced forms of artificial intelligence (AI), and was signed by tech luminaries including Elon Musk. It is the most prominent example yet of how rapid progress in AI has sparked anxiety about the potential dangers of the technology.
In particular, new “large language models” (LLMs)—the sort that powers ChatGPT, a chatbot made by OpenAI, a startup—have surprised even their creators with their unexpected talents as they have been scaled up. Such “emergent” abilities include everything from solving logic puzzles and writing computer code to identifying films from plot summaries written in emoji.
These models stand to transform humans’ relationship with computers, knowledge and even with themselves. Proponents of AI argue for its potential to solve big problems by developing new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power. To others, the fact that AIs’ capabilities are already outrunning their creators’ understanding risks bringing to life the science-fiction disaster scenario of the machine that outsmarts its inventor, often with fatal consequences.
This bubbling mixture of excitement and fear makes it hard to weigh the opportunities and risks. But lessons can be learned from other industries, and from past technological shifts. So what has changed to make AI so much more capable? How scared should you be? And what should governments do?
In a special Science section, we explore the workings of LLMs and their future direction. The first wave of modern AI systems, which emerged a decade ago, relied on carefully labelled training data. Once exposed to a sufficient number of labelled examples, they could learn to do things like recognise images or transcribe speech. Today’s systems do not require pre-labelling, and as a result can be trained using much larger data sets taken from online sources. LLMs can, in effect, be trained on the entire internet—which explains their capabilities, good and bad.
Excerpted from The Economist: How to worry wisely about AI
1. fulfilling: making you feel satisfied because it gives a sense of meaning or purpose
e.g. a fulfilling experience
2. luminary: an eminent or authoritative figure in a field
e.g. ...the political opinions of such luminaries as Sartre or de Beauvoir.
3. start-up: a newly established company
e.g. Gold gave an example — an energy startup company called Scottish Bioenergy.
4. scale up: to increase in size, amount or extent
e.g. Since then, Wellcome has been scaling up production to prepare for clinical trials.
Note: This article was independently written by the author 233网校-chenjing and may not be reproduced without the copyright holder's consent.