2020年CATTI口译三级练习资料:机器人能理解喜怒哀乐吗
Can a robot read your emotions? Apple, Google, Facebook and other technology companies seem to think so. They are collectively spending billions of dollars to build emotion-reading devices that can interact meaningfully (and profitably) with humans using artificial intelligence.
机器人能读懂你的情绪吗?苹果(Apple)、谷歌(Google)、Facebook等科技公司的答案似乎是肯定的。它们共斥资数十亿美元用于研发能读懂情绪的设备,让设备利用人工智能与人类进行有意义(并可带来利润)的互动。
These companies are banking on a belief about emotions that has held sway for more than 100 years: smiles, scowls and other facial movements are worldwide expressions of certain emotions, built in from birth. But is that belief correct? Scientists have tested it across the world. They use photographs of posed faces (pouts, smiles), each accompanied by a list of emotion words (sad, surprised, happy and so on) and ask people to pick the word that best matches the face. Sometimes they tell people a story about an emotion and ask them to choose between posed faces.
这些企业正寄望于一种流行了100多年的有关情绪的看法:微笑、怒容和其他面部活动都是在表达某种情绪,这是与生俱来的,而且是全球相通的。但这种看法正确吗?科学家在全球各地进行了实验。他们使用摆出表情的面部照片(噘嘴、微笑),每张照片都附有一组描述情绪的词汇(悲伤、惊讶、高兴等等),然后要求实验对象选出与面部表情最匹配的词汇。有时,他们会讲述一个有关情绪的故事,然后让实验对象在摆出的不同面部表情中做出选择。
Westerners choose the expected word about 85 per cent of the time. The rate is lower in eastern cultures, but overall it is enough to claim that widened eyes, wrinkled noses and other facial movements are universal expressions of emotion. The studies have been so well replicated that universal emotions seem to be bulletproof scientific fact, like the law of gravity, which would be good news for robots and their creators.
西方人约有85%的几率选择预期词汇。在东方文化中这一比例较低,但总的来说,这足以让人宣称睁大眼睛、皱起鼻子和其他面部动作都是全球通用的情绪表达方式。这些研究被反复验证,结果都一样,以至于通用情绪似乎成了无懈可击的科学事实,就像万有引力定律一样,这对机器人及其创造者来说是个好消息。
But if you tweak these emotion-matching experiments slightly, the evidence for universal expressions dissolves. Simply remove the lists of emotion words, and let subjects label each photo or sound with any emotion word they know. In these experiments, US subjects identify the expected emotion in photos less than 50 per cent of the time. For subjects in remote cultures with little western contact, the results differ even more.
然而,如果你稍微调整一下这些情绪匹配实验,表情具有普适性的证据就消失了。只需去掉情绪词汇列表,让实验对象用他们知道的任何情绪词汇来描述每张照片或每段声音。在这些实验中,美国实验对象识别出预期情绪的几率不到50%;对于与西方接触不多的偏远文化中的实验对象而言,结果差异更大。
Overall, we found that these and other sorts of emotion-matching experiments, which have supplied the primary evidence for universal emotions, actually teach the expected answers to participants in a subtle way that escaped notice for decades — like an unintentional cheat sheet. In reality, you’re not “reading” faces and voices. The surrounding situation, which provides subtle cues, and your experiences in similar situations, are what allow you to see faces and voices as emotional.
总的来说,我们发现,这些实验以及其他各类情绪匹配实验(它们提供了支持情绪普适性的主要证据)实际上以一种微妙的方式把预期答案教给了实验参与者,而这一点几十年来一直未被注意到——就像一份无意间提供的小抄。在现实中,你并没有在“阅读”面部和声音。提供细微线索的周围情境以及你在类似情境中的经验,才让你把面部活动和声音视为情绪的表达。
A knitted brow may mean someone is angry, but in other contexts it means they are thinking, or squinting in bright light. Your brain processes this so quickly that the other person’s face and voice seem to speak for themselves. A hypothetical emotion-reading robot would need tremendous knowledge and context to guess someone’s emotional experiences.
双眉紧锁可能意味着一个人生气了,但在其他情境下,这可能意味着他们在思考,或是因为光线刺眼而眯着眼。你的大脑处理这些信息的速度极快,以至于别人的面部表情和声音似乎不言自明。假想中能读懂情绪的机器人需要海量的知识和情境信息,才能猜测一个人的情绪体验。
So where did the idea of universal emotions come from? Most scientists point to Charles Darwin’s The Expression of the Emotions in Man and Animals (1872) for proof that facial expressions are universal products of natural selection. In fact, Darwin never made that claim. The myth was started in the 1920s by a psychologist, Floyd Allport, whose evolutionary spin job was attributed to Darwin, thus launching nearly a century of misguided beliefs.
那么通用情绪的观点从何而来?多数科学家举出查尔斯·达尔文(Charles Darwin)1872年的著作《人与动物的情绪表达》(The Expression of the Emotions in Man and Animals)作为证据,证明面部表情是自然选择的通用产物。实际上,达尔文从未这么说过。这一误传始于上世纪20年代的心理学家弗洛伊德·奥尔波特(Floyd Allport),他加上的进化论解读被归到了达尔文名下,由此开启了近一个世纪的错误观念。
Will robots become sophisticated enough to take away jobs that require knowledge of feelings, such as a salesperson or a nurse? I think it’s unlikely any time soon. You can probably build a robot that could learn a person’s facial movements in context over a long time. It is far more difficult to generalise across all people in all cultures, even for simple head movements. People in some cultures shake their head side to side to mean “yes” or nod to mean “no”. Pity the robot that gets those movements backwards. Pity even more the human who depends on that robot.
机器人会不会变得足够先进,以至于夺走销售人员或护士等需要理解情绪的工作?我认为这不太可能很快发生。你或许可以制造一台机器人,让它经过长期学习,在具体情境中读懂某一个人的面部动作。但要将其推广到所有文化中的所有人就困难得多,即便是简单的头部动作也是如此。在一些文化中,人们摇头表示“是”,点头表示“不”。把这些动作弄反的机器人很可怜,而依赖这种机器人的人就更可怜了。
Nevertheless, tech companies are pursuing emotion-reading devices, despite the dubious scientific basis. There is no universal expression of any emotion for a robot to detect. Instead, variety is the norm.
尽管科学基础可疑,科技公司仍在寻求研发能读懂情绪的设备。任何情绪都没有可供机器人识别的通用表达方式,多样性才是常态。