The ancient Chinese board game Go was invented long before there was any writing to record its rules

Guest  2023-12-21

Question    The ancient Chinese board game Go was invented long before there was any writing to record its rules. A game from the impossibly distant past has now brought us closer to a moment that once seemed part of an impossibly distant future: a time when machines are cleverer than we are. For years, Go was considered the last redoubt against the march of computers. Machines might win at chess, draughts, Othello, Monopoly, bridge and poker. Go, though, was different. The game requires intuition, strategising and character reading, along with vast numbers of moves and permutations. It was invented to teach people balance and patience, qualities unique to human intelligence.
   This week a computer called AlphaGo defeated the world’s best player of Go. It did so by "learning" the game, crunching through 30 million positions from recorded matches, reacting and anticipating. It evolved as a player and taught itself. That single game of Go marks a milestone on the road to "technological singularity", the moment when artificial intelligence becomes capable of self-improvement and learns faster than humans can control or understand. Fear of the super-intelligent, over-mighty machine is embedded in our psyche. Technological advance brings with it the anxiety that the machines will eventually threaten humanity, a dread underpinned by the attribution to machines of our own evolutionary instinct to survive at the expense of lesser species.
   Artificial intelligence is advancing in ways that were once the preserve of science fiction. Scientists are competing to build robot footballers, with a prediction that would once have sounded barmy: "By the middle of the 21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, against the winner of the most recent World Cup." Pepper, an affectionate humanoid robot, was unveiled last year. It is designed to "make people happy" by reading human emotions using a 3D depth sensor and lasers which analyse the facial expressions and voice tones of the people around it.
   Robot comes from the Czech robota, meaning forced labour. Machines are increasingly working with humans. They even make financial decisions: one Hong Kong firm recently appointed an algorithm to its board, with an equal vote on investment decisions. Entrenched in our culture is the idea that when Man overreaches himself by playing God, he faces disaster. In Mary Shelley’s Frankenstein, the monster made by man is an offence against religion and nature that turns on its creator. Its alternative title was The Modern Prometheus: a reference to the figure from Greek mythology who was punished for displaying arrogance towards the gods. It is a short step from Frankenstein to HAL, the softly spoken computer in 2001: A Space Odyssey, which also turns on its human masters. The film Ex Machina is the latest expression of that terror.
   Underlying this staple sci-fi plot is the assumption that a machine with comparable or greater abilities than ours will inevitably become an enemy. The theory goes that a robot will eventually take over and throw off the "forced labour" reflected in its name. Yet machines do only what they are created to do, and no robot could be built that shares our evolutionary biology. For AlphaGo to represent a danger, it would have to know that it had won, and to like winning. As drone technology shows, intelligent machines can be programmed to endanger humans. All inventions can be turned to nefarious ends, and the advance of artificial intelligence requires human intelligence to frame a set of robotic ethics. While the machines do not need regulation, the people who invent and use them do.
   In 1942, the great science fiction writer Isaac Asimov drew up three laws governing robot behaviour: 1. Never harm a human being through action or inaction; 2. Obey human orders (subject to rule 1); 3. A robot must protect its own existence (subject to rules 1 and 2). It is no accident that Asimov’s code was drawn up at a time when unfettered power, based on superior technology, was causing untold suffering across the world. He later added a fourth: that robots should not allow humanity in general to come to harm. To Asimov’s rules might be added requirements to comply with existing international law and human rights, to design robots in a way that makes their function clear and apparent, and to ensure that human beings bear direct legal responsibility for robot behaviour.
   A code of ethics for roboticists would be complex, but no harder to frame than the regulations governing existing relationships between man and machines: speed limits, safety rules, arms treaties. Drawing up the new robot laws would require patience, foresight and adaptability: the very human qualities to be instilled when Go was invented.

The sentence "For AlphaGo to represent a danger, it would have to know that it had won, and to like winning" (para. 5) implies that ______.

Options A. it would soon be a real danger to humanity
B. it would never become a real danger to mankind
C. it would only be a danger when it has human competitiveness
D. it would be invincible when it shares most of human evolutionary biology

Answer C

Explanation The sentence follows the passage’s claim that machines do only what they are created to do and lack our evolutionary drive to survive and to win. AlphaGo would pose a danger only if it knew it had won and cared about winning, that is, only if it had human-like competitiveness, so C is correct.