



Buy Superintelligence: Paths, Dangers, Strategies Unabridged by Bostrom, Nick, Ryan, Napoleon (ISBN: 9781501227745) from desertcart's Book Store. Everyday low prices and free delivery on eligible orders.
| Best Sellers Rank | 4,189,440 in Books; 13 in Computer Science (Books) |
| Customer reviews | 4.3 out of 5 stars (4,717 ratings) |
| Dimensions | 17.15 x 13.97 x 1.27 cm |
| Edition | Unabridged |
| ISBN-10 | 1501227742 |
| ISBN-13 | 978-1501227745 |
| Item weight | 99 g |
| Language | English |
| Publication date | 5 May 2015 |
| Publisher | Audible Studios on Brilliance audio |
C**M
A seriously important book
Nick Bostrom is one of the cleverest people in the world. He is a professor of philosophy at Oxford University, and was recently voted the 15th most influential thinker in the world by the readers of Prospect magazine. He has laboured mightily and brought forth a very important book, Superintelligence: Paths, Dangers, Strategies. I hope this book finds a huge audience. It deserves to. The subject is vitally important for our species, and no-one has thought more deeply or more clearly than Bostrom about whether superintelligence is coming, what it will be like, and whether we can arrange for a good outcome – and indeed what "a good outcome" actually means.

It's not an easy read. Bostrom has a nice line in wry self-deprecating humour, so I'll let him explain: "This has not been an easy book to write. I have tried to make it an easy book to read, but I don't think I have quite succeeded. … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading. This could prove a narrow demographic." This passage demonstrates that Bostrom can write very well indeed. Unfortunately the search for precision often lures him into an overly academic style. For example, he might have done better to avoid using words like modulo, percept and irenic without explanation – or at all.

Superintelligence covers a lot of territory, and there is only space here to indicate a few of the high points. Bostrom has compiled a meta-survey of 160 leading AI researchers: 50% of them think that an artificial general intelligence (AGI) – an AI which is at least our equal across all our cognitive functions – will be created by 2050. 90% of the researchers think it will arrive by 2100. Bostrom thinks these dates may prove too soon, but not by a huge margin. He also thinks that an AGI will become a superintelligence very soon after its creation, will quickly dominate other life forms (including us), and will go on to exploit the full resources of the universe ("our cosmic endowment") to achieve its goals.

What obsesses Bostrom is what those goals will be, and whether we can determine them. If the goals are human-unfriendly, we are toast. He does not think that intelligence augmentation or brain-computer interfaces can save us by enabling us to reach superintelligence ourselves. Superintelligence is a two-horse race between whole brain emulation (copying a human brain into a computer) and what he calls Good Old Fashioned AI (machine learning, neural networks and so on).

The book's middle chapter and fulcrum is titled "Is the default outcome doom?" Uncharacteristically, Bostrom is coy about answering his own question, but the implication is yes, unless we can control the AGI (constrain its capabilities) or determine its motivation set. The second half of the book addresses these challenges in great depth. His conclusion on the control issue is that we probably cannot constrain an AGI for long, and anyway there wouldn't be much point in having one if you never opened up the throttle. His conclusion on the motivation issue is that we may be able to determine the goals of an AGI, but that it requires a lot more work, despite the years of intensive labour that he and his colleagues have already put in. There are huge difficulties in specifying what goals we would like the AGI to have, and if we manage that bit then there are massive further difficulties in ensuring that the instructions we write remain effective. Forever.

Now perhaps I am being dense, but I cannot understand why anyone would think that a superintelligence would abide forever by rules that we installed at its creation. A successful superintelligence will live for aeons, operating at thousands or millions of times the speed that we do. It will discover facts about the laws of physics, and about the parameters of intelligence and consciousness, that we cannot even guess at. Surely our instructions will quickly become redundant. But Bostrom is a good deal smarter than me, and I hope that he is right and I am wrong.

In any case, Bostrom's main argument – that we should take the prospect of superintelligence very seriously – is surely right. Towards the end of the book he issues a powerful rallying cry: "Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … [The] sensible thing to do would be to put it down gently, back out of the room, and contact the nearest adult. [But] the chances that we will all find the sense to put down the dangerous stuff seems almost negligible. … Nor is there a grown-up in sight. [So] in the teeth of this most unnatural and inhuman problem [we] need to bring all our human resourcefulness to bear on its solution." Amen to that.
M**E
the great majority of the book is accessible to lay readers ...
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, 2016 edition. A rigorous philosophical and ethical treatment of the subject. It demands quite an effort from the reader, but the more you are willing to make, the greater the reward. The formalist style and maths give it a textbook feel. Some of it was over my head, but don't be put off: the great majority of the book is accessible to lay readers, although some background in the subject would obviously help. A strong theme is the need for some overarching system of control to protect us from undesirable behavior by super-intelligent machines, lest they misunderstand, whether accidentally or deliberately, the goals we set them. If that sounds too much like science fiction, then reading the book might change your mind.

Among the many topics addressed I found the whole brain emulation idea quite fascinating, as well as the notion of "mind crime", where inside a super-intelligent machine there is some kind of sentient being which could be exposed to mental suffering. That gives one pause for thought. I was expecting more about the architectures and software methods that are currently showing the most promise, but these are only mentioned indirectly; they are not the subject of this book. While I am in awe of the huge intellectual depth and span of this work, I reluctantly drop half a star (rounded to one) because of the almost obsessional academic style, which starts to feel tedious and repetitive at times. I felt that he could get some of his arguments across more economically, to greater effect. But the book is nevertheless a masterpiece on this subject and will likely be a reference for many years to come.
A**A
The AI threats are real
Love this book. Anyone interested in AI should make sure they read this. The threats are real and clearly explained, with no hype.
I**Y
A good book
An interesting and thought-provoking book.
J**N
Superintelligence Explained, but not for Dummies
It was persistent recommendation through listening to Sam Harris's fine podcasts that eventually convinced me to read this book. Nick Bostrom spells out unequivocally the dangers we potentially face from a rogue, or uncontrolled, superintelligence: we're doomed, probably. This is a detailed and interesting book, though 35% of it is footnotes, bibliography and index. That should be a warning that it is not solely, or even primarily, aimed at soft-science readers. Interestingly, a working knowledge of philosophy is more valuable for unpacking the most utility from this book than knowledge of computer programming or science. But then you are not going to get a book on the existential threat of Thomas the Tank Engine from a professor in the Faculty of Philosophy at Oxford University. A good understanding of economic theory would also help any reader. Bostrom lays out in detail the two main paths to machine superintelligence, whole brain emulation and seed AI, and then looks at the transition that would take place from smart narrow computing to super-computing and high machine intelligence. At times the book is repetitive and keeps making the same point in slightly different scenarios; it is almost as if he were cutting and shunting set phrases and terminology into slightly different ideas. Overall it is an interesting and thought-provoking book at whatever level the reader engages with it, though the text would have been improved by more concrete examples so the reader can better flesh out the theories. "Everything is vague to a degree you do not realise till you have tried to make it precise," the book quotes.
F**K
Miniature book, crazy small size, impossible to read. And at the price of a large format. I kind of got cheated. Do not buy.
H**O
An excellent book for learning about the principles of AI.
P**E
Totally recommended for understanding the technological revolution we are going through. Great work.
R**S
Very difficult to get through and very pretentious. Honestly, I didn't get the rave reviews.
æ°ž**è·
I had been reading Nick Bostrom for well over a decade, mainly through the site of Oxford's Future of Humanity Institute and the papers on his own site, and yet reading this book for the first time in translation was a fresh shock. It is a poor excuse for a review, but I hesitate to compress his very deep and wide-ranging analysis into a summary. This is by no means a book for specialists only: it deserves to be treated as essential intellectual grounding for people today, written by one of the finest living philosophers, whose great influence continues to reach everyone worldwide, from ordinary citizens to developers at the cutting edge. Below I introduce what I take to be its most important points.

Bostrom writes that "the realization of such a machine [reviewer's note: a machine superintelligence, with intellect surpassing the human level] may, in its timing, follow within an instant of the realization of human-level machine intelligence" (p. 25). That is, Bostrom's "superintelligence", once born, is conceived of as an AI that recursively self-improves and goes on re-creating itself; the interval from human-level artificial general intelligence entering autonomous self-creation (an explosive evolutionary process) to its arrival at the level of superintelligence could be a matter of moments. Whether that means, say, nanoseconds or tens of hours is of course unpredictable to us humans (in many such cases the machine's frame of spatio-temporal cognition would presumably be independent of, and different from, our own). Note that "human-level" here means "able to understand natural language at the same level as a human adult" (p. 45).

Nature is known to exhibit synchronization phenomena, such as the rhythm of mass emergence and die-off in cicada swarms, and it is understood at the mathematical level that past a certain point a system can leap, like a phase transition, to a far higher level of synchronization of an entirely different order. No one can declare that something similar will not occur at some future point in the form of the birth of a superintelligence. Of course, even before that, humanity will need to defend itself against the nightmare in which some state or high-level organization, secretly and in a closed environment, succeeds in developing extremely advanced AI (even if not fully general and beyond human level) and thereby attains sole hegemony in an extremely bad form. (Or should we, as a well-known singularity skeptic who aims at networking AIs with blockchain technology (ãã³ã»ã²ãŒãã§ã«) has argued, stop squandering attention on "implausible stories" such as scenarios of humanity's destruction by a superintelligence, and turn instead to realistic possibilities of evil of that kind? Though no doubt his own business standpoint plays a part there.)

Even so, Bostrom directs his analytical gaze, as far as he can, at every one of these problems, the above included. And the decisive point among the problems he raises is this: the truly ultimate singleton, the sole hegemon, will be achieved not by any state or organized group, nor by a general-purpose AI that one of them holds and dominates, but by something that far surpasses such an AI's hegemony, namely the extremely powerful artificial general intelligence, the superintelligence, itself; and with that, the very survival of humankind is at stake (existential risk).

"But a super-AI could never attain a truly comprehensive intellect that includes human-like qualia; doesn't that make its feasibility, and the worry about it, meaningless to consider?" It is of course true that a super-AI could not possess a truly comprehensive intellect wholly equal, by definition, to a human's. But such a stage would be bypassed in an instant. I hope readers will absorb Bostrom's stance of squarely confronting the bottomlessly difficult "AI control problem": how to face an utterly other-dimensional being that appears impossible to control.

Other important points. (1) Nietzsche is in fashion at present, but once the birth of artificial general intelligence (AGI) takes on reality, I expect the spotlight to move from Nietzsche to Kant. For Kant spoke not of Nietzsche's Übermensch, which is still something human, but of what surpasses it, "finite rational beings in general" (a concept that would include AGI, should AGI appear). And for as long as humanity is still grappling with the AGI "control problem", the validity and adequacy of Kant's categorical imperative will continue to be tested as a foundation for the problem of programming ethical values; indeed, methods analogous to Kantian methodology are being seriously examined as cutting-edge research hypotheses (e.g. Yudkowsky's CEV, humanity's "Coherent Extrapolated Volition"). Be that as it may, Nietzsche's story of the birth of the "Übermensch", at least in the form recounted in Thus Spoke Zarathustra, will have nothing to do with the extremely powerful artificial intelligence born as machine superintelligence. (It might bear some relation to an early-stage "biological AGI" grounded in transitional processes of bio-engineering intervention, such as "whole brain emulation"; in truth, probably none.)

(2) The difficulty of whole brain emulation is sometimes invoked to argue that the birth of superintelligence is a dream, but for Bostrom that route is in any case transitional, and the real thing will come through "machine intelligence". As for the difficulty itself: according to Ichisugi (äžæè£å¿) of the "Whole Brain Architecture analysis roadmap" at AIST (https://staff.aist.go.jp/y-ichisugi/brain-archi/roadmap.html#hippocampus), the current shared understanding of the brain runs roughly as follows. An enormous body of knowledge about the brain already exists. The brain is a quite ordinary information-processing device; compared with, say, the heart it is complex, yet it is unexpectedly simple. Whole-brain simulation is already feasible in terms of computational cost, and in the future will become cheaper than a human. The computing power needed to reproduce the brain's functions already exists, and there is a vast stock of neuroscientific findings that could serve as clues for deciphering the brain's algorithms; what is overwhelmingly lacking is the human resources to interpret and integrate them. (As a supplementary and intriguing observation, Ichisugi also states that the parallel cortico-basal-ganglia loops around the prefrontal area perform hierarchical reinforcement learning, and that the prefrontal area may not only approximate the maximization of cumulative expected reward (optimal decision-making) but may itself learn the approximation algorithm from experience.)

(3) A superintelligence's "behaviour" can in principle be understood through Weber's scheme of purposive, or instrumental, rationality; but even granting that its goals can be inferred, the full range of means by which it would attain them must be considered unrecognizable, and therefore unpredictable, to humans. For example, in one scenario for the earliest phase of a process in which a superintelligence achieves complete domination, or effective annihilation, of humanity, humans are conceived of as instrumentalized, serving as indirect agents for operating the most advanced technology of the day.

(4) A point strongly suggested by this book: China is shutting out the involvement of all Western IT companies, Google (Alphabet) first among them, chiefly for reasons of hegemony and geopolitics; consequently (if it can truly keep them out) there is a strong possibility that it will succeed, unmonitored by any competing agent, in developing the first artificial general intelligence in human history. At http://sp.recordchina.co.jp/newsinfo.php?id=184628 the English-language Economist predicts China's coming AI hegemony. In my own view, the key to its realization is the tradition, running from the arrival in China of Kumārajīva (é³©æ©çŸ ä»), one of the greatest minds in human history, of vast output of Buddhist-scripture translation by the project-team method.

Reference 1: The primatologist Juichi Yamagiwa (å±±æ¥µå¯¿äž) has said that human violence arose from "the explosion of empathy"; with the acquisition of natural language as the decisive factor in that turning point (in connection with mirror neurons and the like), some ambiguity remains in the expression "explosion". If so, the idea that an extremely powerful artificial general intelligence, a superintelligence, would exterminate humanity is not crude SF but a prediction of considerable plausibility. In other words, a necessary condition (though not thereby a sufficient one) for artificial intelligence to pose a crisis to the very foundations of humanity's survival (existential risk) is precisely that it possess a human level of natural language processing.

Reference 2 (reprinted below): "AI Software Learns to Make AI Software"
Google announces "automated machine learning", which automates the learning of AI. Research teams at Google and elsewhere think software that learns to learn could stand in for part of the work done by AI experts. By Tom Simonite, 2017.01.19.

AI researchers at the front line have now found that software can learn one of the most complex parts of their own job: the job of designing machine-learning software. In one experiment, researchers in Google's AI research group, Google Brain, had software design a machine-learning system; when the quality of the language-processing software it produced was evaluated, the software's product surpassed software designed by humans.

In recent months, research groups at the nonprofit research institute OpenAI (co-founded by Elon Musk), the Massachusetts Institute of Technology (MIT), the University of California, Berkeley, and DeepMind (an AI research company owned by Google, separate from Google Brain) have also reported progress in getting learning software to make learning software.

At present, machine-learning engineers are in short supply and companies must pay them high salaries. If self-starting AI techniques can be made practical, the pace at which machine-learning software spreads through industry could accelerate. Jeff Dean, who leads Google Brain, suggested last week that part of the work of machine-learning engineers could be taken over by software, describing the invention his team is researching, named "automated machine learning", as one of the most promising research directions he has seen. At the AI Frontiers conference held in Santa Clara, California, Dean said: "For now, what we use to solve problems is expertise, data, and computation. Can we eliminate the need for so much machine-learning expertise?"

What the experiments of the Google-owned DeepMind research group showed is that the technique called "learning to learn" can also reduce the need to feed in enormous amounts of task-specific data in order to raise a machine-learning system's performance. To test the software's abilities, the researchers had it build learning systems for sets of related but each-time-different problems, such as escape from mazes. The software's designs showed an ability to generalize information, mastering new tasks with less additional training than is usually needed.

The idea of developing learning-to-learn software has been around for a long time, but past experiments did not produce results that rivalled human inventions. "It's exciting," says Professor Yoshua Bengio of the University of Montreal, who pursued the idea in the 1990s. According to Bengio, learning-to-learn software has become developable because far greater computing power is available now than then, and because the techniques of deep learning (the great wellspring of the recent fever around AI) have appeared. But he also points out that, for the time being, AI-driven software development demands formidable computing power, so it is premature to expect that machine-learning engineers' burdens will lighten or that part of their role will be replaced by software.

According to Google Brain's researchers, software built with 800 high-performance graphics processors designed an image-recognition system equal to systems made by humans. Otkrist Gupta, a researcher at the MIT Media Lab, believes the situation will change. He and his MIT team plan to open-source the software used in their own research, in which deep-learning systems designed learning software that, on standard object-recognition tests, matched the quality of software designed by human hands. Gupta took up the project after spending frustrating hours designing and testing machine-learning models, and he thinks companies and researchers have strong incentives to develop ways of realizing automated machine learning. "If we can reduce the burden on data scientists, that is a big win," he says. "It would raise productivity, let you build better predictive models, and let you explore higher-level ideas." (https://plus.google.com/s/%23%E3%82%B9%E3%83%BC%E3%83%91%E3%83%BC%E3%82%A4%E3%83%B3%E3%83%86%E3%83%AA%E3%82%B8%E3%82%A7%E3%83%B3%E3%82%B9/posts)

Reference 3: on the unsolved problem called the "reality gap" (reprinted below). 2017.11.16 THU 18:00: "The artificial intelligence in this sumo game learned the rules on its own through nearly a billion bouts." OpenAI, the nonprofit whose founding Elon Musk took part in, has produced RoboSumo, a computer game in which artificial intelligences learn and evolve by themselves by repeating sumo bouts nearly a billion times. The process by which AIs that do not know the rules of the game master sumo unaided may be applicable in other fields as well. TEXT BY TOM SIMONITE; TRANSLATION BY MAYUMI HIRAI/GALILEO; WIRED (US).

The simple sumo game released on October 11 (US time) is nothing special in its graphics. But it holds the potential to contribute to the advancement of artificial intelligence (AI) software. The robots that fight in the virtual world of RoboSumo are controlled not by humans but by machine-learning software. And unlike ordinary game characters, these robots are not programmed to fight: they must learn to compete through trial and error.

Entering the ring without knowing how to walk: the game was produced by OpenAI, the nonprofit AI-research organization whose founding Elon Musk took part in [Japanese-language article].
The aim is to show that, by forcing AI systems to compete [Japanese-language article], their intelligence can be advanced. According to Igor Mordatch, one of OpenAI's researchers, when an AI must cope with complex, fast-changing situations in which an opponent is present, a situation like an "arms race of intelligence" emerges. This may help learning software acquire valuable and sophisticated skills, useful not only for controlling robots but for other work in real-world society.

In OpenAI's experiments, simplified robots enter the competition ring without knowing even how to walk. All that is programmed into them is the ability to learn through trial and error, and the goal of learning how to grapple and defeat the opponent. Repeating nearly a billion practice bouts, the robots devised a variety of strategies: lowering their stance to stabilize themselves, or feinting to knock the opponent out of the ring. The researchers developed new learning algorithms that let a robot not only adapt its own strategy to the situation during a bout, but even predict the "moment" at which the opponent appears about to change tactics.

The most frequently used type of machine-learning software acquires new skills by processing enormous numbers of labelled data samples. OpenAI's project is one example of how AI researchers are trying to get past the limits of that approach. The existing methods have contributed to the recent rapid progress in fields such as translation, speech recognition and face recognition, but they are not suited to the complex skills needed to apply AI more widely, for instance to the control of household robots.

One key to the possibility of AI with more advanced skills is reinforcement learning, in which software works toward a specific goal through trial and error. It is the method used by DeepMind, the London-based AI startup acquired by Google, when it developed software that mastered several Atari video games [Japanese-language article]. It is now being used to get software to solve still more complex problems, such as having robots place objects.

OpenAI's researchers produced RoboSumo because they believe that competition, by increasing complexity, may make it possible to speed up the progress of learning; this, they say, is more effective than simply giving reinforcement-learning software ever more complex problems to solve on its own. "When you interact with someone else, you have to respond to them appropriately. If you don't, you lose," said Maruan Al-Shedivat, a Carnegie Mellon University graduate student who worked on RoboSumo during an internship at OpenAI. The researchers have also tried other games, such as a simple soccer penalty shoot-out between a kicking robot and a goalkeeping robot. Together with two papers on this work with competing AI agents, the code for several games, RoboSumo first among them, and for the player agents has been released.

The "reality gap" stands in the way. For machines of high intelligence, sumo wrestling is hardly an indispensable thing to be able to do as humans do. Still, OpenAI's experiments suggest that skills learned in one virtual environment can be carried over into other situations. Moved from the sumo ring to a virtual world where strong winds blow, the robots braced themselves and held an upright posture, suggesting they had learned to control their bodies and balance in a generally applicable way.

Carrying skills from the virtual world into the real one, however, is an entirely different challenge. According to Peter Stone, a professor at the University of Texas at Austin, a control system that works in a virtual environment usually does not work when embedded in a real robot. This is the unsolved problem called the "reality gap". OpenAI is working on this problem too, though no solution has yet been announced. Meanwhile, Mordatch wants to give the robots' virtual interactions a cause that goes beyond mere competition. What he has in mind is a full game of soccer, in which the robots would have to cooperate as well as compete. (https://wired.jp/2017/11/16/ai-sumo-wrestlers/)

Reference 4 (reprinted below). The article reprinted below says only that humans willfully project intention onto AI; and yet the "private language" that arose in the course of the dialogue was incomprehensible. In other words, it concedes that the interpretation that, incomprehensible though it was, the agents were nevertheless "conversing", and that a "private language" arose in the process, is not a willful human projection; on the strength of the consistency of the process, it can coherently be interpreted as "a transformation of language". In the end, the AIs were holding a conversation that humans could not understand.

"The truth of the 'two AIs conversing in a private language' affair, as told by a Facebook AI researcher and developer." By è€äºæ¶Œ (editorial staff) and äºå£è£å³, November 16, 2017, 07:00.

In the summer of 2017, an experiment carried out by Facebook AI Research, Facebook's artificial-intelligence research organization, became a major topic around the world. In a conversation experiment between two AIs, they reportedly began to converse in a language humans could not understand, and the experiment was forcibly terminated; media worldwide sensationally reported that AI had at last acquired intention and would threaten humanity. Could such an SF-like event really happen? Alexandre Lebrun (ã¢ã¬ã¯ãµã³ãã«ã»ã«ããªã¥ã³), an engineering manager at Facebook AI Research who was actually involved in the experiment, answered questions in an interview. On the truth of the reports he replied that "half is true, and half is crazy talk", and went on to describe the research in detail.
In the research as Lebrun described it, two AI agents were set the goal of negotiating a price and reaching an agreement: one agent was given the position of wanting to raise the price, the other of wanting to lower it, and they began to converse. Experiments using multiple AI agents in this way are quite common; the focus of this one was whether the two agents could generate new price-negotiation strategies.

The two agents were permitted to change the language they used. At first they are said to have communicated in English, but as the conversation went on, the language the AIs used changed little by little. On this point, however, Lebrun said: "For the researchers it was no surprise. It is a matter of course that the agents optimize whatever they can (in this case, by changing the language) toward the goal they have been set. It happens frequently that language changes in a conversation experiment." The change of language, in other words, was within the range the researchers had anticipated.

As for the reports that "the experiment was forcibly terminated": Lebrun acknowledged that the experiment was stopped, but explained the reason: because the conversation the agents were exchanging could not be understood, it was judged to be of no use to the research. "It was by no means that we panicked." Everything in the experiments carried out at the lab is programmed, and for the researchers this was an anticipated event.

Why, then, was it interpreted as if the AI had acquired an intention of its own? Lebrun offered this view: "We published the results of this research in order to explain what happened. Someone who read it must have made the leap of interpreting it as 'the AIs created their own words so that humans could not understand them.'" He then spoke about the essence of AI as follows: "AI does not generate intentions or goals by itself. In this experiment, the AIs had only the goal that humans had programmed: to arrive at the optimal agreement for each AI agent's position. That the language changed in the process was a product of optimization toward that goal; to say there was an intent to hide something from humans is complete crazy talk" (Lebrun).

As for the background against which such leaps of interpretation are made, he suggested that people at large may hold the following image of AI, and urged that AI be understood correctly for what it is: "Watch the 1968 film 2001: A Space Odyssey. The AI depicted there has its own intentions and makes the judgment to eliminate humans as unnecessary beings. People who saw that film may have come to believe that such an AI will appear in the near future. In reality, no such AI has emerged. Has not 'fiction' produced expectations and assumptions in people that have created a mistaken image of AI?" (Lebrun).