So here’s one way even a superintelligent, general-purpose AI might be predictably restrained from, you know, destroying us all on some inscrutable whim:
What if the best possible processes at the core of intelligence – for answering questions like "given what I know about the world, what new information can I conclude?" and "how can I manipulate my environment to get from A to B?" – are conceptually simple? It might even be that existing algorithms – e.g. decision trees, theorem provers, and the classification and correlation-measuring engines used in machine learning today – are close to optimal, in the correctness of their results if not necessarily their speed.
One of the reasons I think so is that these algorithms tend to be direct approaches to their problems. Want to find out how likely you are to like The Kinks given that you like the Rolling Stones and The Who but not Fleetwood…
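To make the "direct approach" concrete, here is a minimal sketch of correlation-based collaborative filtering of the kind gestured at above. The listeners, like-scores, and exact weighting scheme are all invented for illustration; the only real machinery is a plain Pearson correlation used as a similarity measure.

```python
from math import sqrt

# Invented like-scores (0 to 1) for a handful of listeners.
ratings = {
    "u1": {"Stones": 1.0, "Who": 1.0, "Kinks": 0.9},
    "u2": {"Stones": 0.9, "Who": 0.8, "Kinks": 1.0},
    "u3": {"Stones": 0.2, "Who": 0.1, "Kinks": 0.3},
    "u4": {"Stones": 1.0, "Who": 0.9, "Kinks": 0.8},
}

def pearson(xs, ys):
    """Plain Pearson correlation: the 'direct' similarity measure."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def predict(target, known):
    """Score `target` as a similarity-weighted average of the
    bands this listener has already rated."""
    num = den = 0.0
    for band, score in known.items():
        xs = [r[band] for r in ratings.values()]
        ys = [r[target] for r in ratings.values()]
        sim = pearson(xs, ys)
        num += sim * score
        den += abs(sim)
    return num / den if den else 0.0

# A new listener who likes the Stones and The Who:
print(predict("Kinks", {"Stones": 1.0, "Who": 0.9}))
```

There is no hidden insight here: the algorithm attacks the question head-on – "do people who like those bands also like this one?" – which is exactly the kind of conceptual simplicity the argument is pointing at.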