Making The World Computable



A Platonic Search Engine?
By Sally Morem

Plato believed mathematics was “out there” in the world, not merely in the minds of mathematicians. So does Stephen Wolfram. He has spent his professional life translating Plato’s belief in the Eternal Forms into his own belief that the universe is composed of an inconceivably large number of computations.

If this idea sounds familiar to you, perhaps you’ve read something about the work of the noted physicist Edward Fredkin. Fredkin stipulated that universal processes are discrete, that is, digital. He further stipulated that any discrete process is by definition computable. So far, so good. But then he took his analysis one step further, a radical step. He claimed that our universe is actually an artifact produced by programming on a computer, although he declined to specify exactly who or what was doing the programming.

Stephen Wolfram has never taken that last step, but he is in fundamental agreement with Fredkin on the digital nature of the universe. Wolfram is most noted for his long-term work on cellular automata (CA). A CA is made up essentially of tiny elements of software, or cells, each with simple instructions on how to change its state in certain very definite ways in response to changes in its neighbors’ states. As each “generation” of computation goes by, we see the cells winking on and off on the screen in patterns of amazing complexity.

Wolfram’s work was influenced by a number of different people, but none more than the English mathematician Alan Turing. In the 1930s, Turing constructed a mathematical concept known as the “Turing Machine.” It’s an abstract computational device, a kind of thought experiment intended to explore the notion of computation. What does it mean when you compute something? What does the process entail? Turing described a bare-bones version of computation involving a simple device that moves along a paper strip, makes a mark or not according to a very simple memory circuit that translates neighboring marks on the paper into instructions, and then moves to the next section of paper, again in response to already existing marks. Each step in this process becomes one computation.

If you combine a number of Turing Machines into a larger machine, you could, in theory, program them to duplicate the work of any computer. Since they are so much simpler and slower, duplicating the work of, say, a laptop would take a very long time, but the work would get done eventually. This ability to mimic any operation of any computer is why mathematicians call the Turing Machine a universal computer. By describing computation in such a fundamental way, Turing helped trigger the computer revolution that began during World War II and continues at accelerating speed to revolutionize life in the 21st century. The relationship between the Turing Machine and the computations going on in CA is obvious.

Wolfram took the insights he gained working on CA to develop Mathematica, a software package that permits a scientist to visually map an enormous amount of data onto a large variety of appropriate graphs. In so doing, the scientist gains a much more intuitive sense, from many different angles, of what the data is trying to say.

And then came Wolfram’s enormous doorstop of a book, “A New Kind of Science” (NKS), in which he put his hard-earned knowledge from CA and Mathematica to the test. In it, he worked out his own version of Fredkin’s astonishing hypothesis on the nature of the universe. Essentially, he said that the universe is a computer, built up of a rich medium of innumerable tiny computers that act very much like the cells found in CA. Massively parallel processing with a vengeance.
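The cell-update scheme described above can be made concrete in a few lines of code. Here is a minimal sketch (my own illustration, not Wolfram’s code) of a one-dimensional “elementary” cellular automaton, using Rule 30, one of the rules Wolfram studied closely in NKS:

```python
# A one-dimensional "elementary" cellular automaton: each cell's next
# state depends only on (left neighbor, itself, right neighbor).
# The 8-bit rule number encodes the update table; Rule 30 famously
# produces complex, seemingly random patterns from a single seed cell.
def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start with a single live cell and print a few generations.
row = [0] * 31
row[15] = 1
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Each printed line is one “generation”; even from this tiny grid, the characteristic irregular triangle of Rule 30 begins to emerge.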
Rudy Rucker, in his recent article on Wolfram, summarizes the idea nicely: “Given the world’s apparent complexity, it seems counterintuitive that the world could be based on simple rules. But computer scientists like Wolfram have amply demonstrated that simple rules can in fact generate complex behavior. An example: a simple rule describes how billiard balls bounce off each other, but if you set a bunch of balls in motion, the resulting patterns are quite intricate.”

For those of you familiar with chaos theory and fractals, this description should sound familiar. This is precisely how fractals are grown: run an extremely simple equation, plug the answer back into the equation, and repeat the procedure over and over again, thousands, perhaps even millions of times. Chaotic systems such as weather systems, coastlines, and stock market prices are modeled using similar equations, and the results, plotted on a graph, are very fractal-like in nature.

Universal computation would be an even more fecund explanation of the way things are if it could be demonstrated that interactions of cells at this lowest level of computation generate higher levels of complexity, somewhat analogous to what we see from a bird’s-eye view as massive numbers of CA cells interact. Perhaps an advanced version of CA using Mathematica could generate models of interacting processes at ever higher levels of organization, up from those described by physics, through chemistry, biology, and neurology, all the way to the social sciences. If, say, a form of cellular automata could generate an analog of life (artificial life, for instance), this achievement would offer strong circumstantial evidence for the existence of primal tiny computers at the very bottom of it all, acting as seeds for everything around us.

But Stephen Wolfram didn’t stop at Mathematica. We will soon have the use of Wolfram/Alpha, due to be launched in May of 2009. Rucker included quotes from a recent telephone interview with Wolfram on his latest brainchild. Here’s Wolfram’s cogent summation: “Wolfram/Alpha isn’t really a search engine, because we compute the answers, and we discover new truths.
If anything, you might call it a platonic search engine, unearthing eternal truths that may never have been written down before.”

Wolfram/Alpha is a search engine that is not a search engine. Real search engines such as Google respond to a query by displaying lists of web pages whose text matches the words in the query. Wolfram/Alpha, in contrast, actually carries out computations based on data it finds on the Internet. Wolfram is hoping it will help everyone compute numerous things of interest in their lives. Even though the software instructs you to type your question into a highlighted box, just as you would when using Google, Wolfram/Alpha actually breaks your question down into its linguistic parts, figures out what you mean, applies that analysis by running calculations based on data found on a number of appropriate web sites, and generates an answer to your query. And this next claim may seem very eerie to you: your answer may never have existed anywhere in the universe, not even in the Library of Congress, until Wolfram/Alpha generated it.

Stephen Wolfram is Isaiah Berlin’s proverbial hedgehog. The ancient Greek poet Archilochus originally said, “The fox knows many things, but the hedgehog knows one big thing.” Berlin quoted Archilochus to great effect in his famous essay on the intellectual hedgehog and fox. As we study Wolfram’s work, we see that his life bears out Archilochus’s saying. Wolfram does indeed know one big thing. This quote was also pulled from Rucker’s article: “‘Most of us only have one idea,’ he remarks. ‘My idea is to make the world computable. Mathematica was about finding the simplest primitive computations, and designing a system where humans could hook these computations together to create patterns of scientific interest. NKS was about the notion that we can start with primitive computations and not bring in humans at all. If you do a brute search over the space of all possible computations, you can find ones that are rich enough to produce the natural-looking kinds of patterns that you want. And Wolfram|Alpha is about how we might build the edifice of human knowledge from simple primitive computational rules.’”

Here is where we get an inkling of how Wolfram/Alpha may impact our accelerating technological trends, if (and I do emphasize if) it works as advertised.
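The compute-rather-than-retrieve idea can be sketched in miniature. This is a toy illustration of my own, nothing like the real Wolfram/Alpha pipeline, and the stored figures are made up; it only shows the shape of the idea: parse the query into parts, map them onto a computation, and run it.

```python
import re

# Toy sketch of "compute the answer, don't look it up": break the query
# into linguistic parts, map them onto a computation, and run it.
# The figures are invented for illustration; the real Wolfram/Alpha
# pipeline is, of course, vastly more sophisticated.
DATA = {
    "population of france": 67_000_000,   # illustrative, not real data
    "population of germany": 84_000_000,  # illustrative, not real data
}

def answer(query):
    q = query.lower().strip("? ")
    m = re.match(r"(.+) plus (.+)", q)
    if m:
        # The sum below may never have been written down anywhere --
        # it is computed fresh from the two stored facts.
        return DATA[m.group(1)] + DATA[m.group(2)]
    return DATA.get(q)

print(answer("Population of France plus population of Germany?"))  # prints 151000000
```

The point of the sketch is the last line: the combined figure is not stored anywhere in `DATA`; it comes into existence only when the query is asked.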
Imagine that over the years millions of people ask Wolfram/Alpha to calculate things for them, things that might never have been calculable before. For example:

  • A detailed description of weather conditions in your home town over the past 100 years.
  • The number of individual sales of anything and everything in the United States for the last year, for the last month, or for one day during that month.
  • A detailed measurement of the Grand Canyon rendered in its fractal dimensions.
  • For people interested in the technological Singularity (that point at which unimaginable advances have sped up so much that they are taking place within hours, even minutes), the rate at which technological development is advancing now.

The possibilities are truly endless.

Those interested in the artificial intelligence Singularity (that point at which an incredibly advanced, thinking, self-aware AI comes into being) may be interested in the actual workings of Wolfram/Alpha. As I said above, it uses linguistic software to analyze and break down the user’s query into computable parts. Instead of having to compile an enormous linguistic database ahead of time, as a number of AI researchers are doing now, Wolfram/Alpha would let the users do the grunt work by supplying queries to the system. Wolfram described an ordinary short phrase in a Google query as an example of linguistic “deep structure,” without all the frills added by surface grammar. He also asserts that the deep structure evident in such phrases makes them computable.

Picture this if you will: Wolfram/Alpha is receiving queries and generating answers over a number of years. As it does so, the system compiles an enormous database of them, of usages and meanings as used and meant by real, ordinary human beings. We may well imagine the system engaging in an enormous amount of cross-referencing, growing an intricate web of meaning. As this linguistic database grows from the bottom up, it will constitute a treasure trove of computable usages, usable, in turn, by AI researchers.
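The kind of bottom-up cross-referencing imagined here can be sketched very simply. This is my own toy illustration (the function names and sample queries are invented), showing how a growing log of real human queries could be mined into a crude web of word associations:

```python
from collections import defaultdict

# Toy sketch of a bottom-up linguistic database: every incoming query
# enriches a cross-reference recording which words real users pair
# together, growing an intricate web of usage over time.
index = defaultdict(set)

def record_query(query):
    words = query.lower().split()
    for w in words:
        index[w].update(x for x in words if x != w)

# A few invented sample queries standing in for millions of real ones.
for q in ["weather in duluth", "population of duluth", "weather in paris"]:
    record_query(q)

print(sorted(index["duluth"]))  # prints ['in', 'of', 'population', 'weather']
```

Even this crude index already encodes usage: “duluth” has been paired with both weather and population queries, a fragment of the cross-referenced web of meaning the essay imagines.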

Furthermore, imagine all these users asking even more interesting questions based on previously generated answers. Think of this bootstrapping process as an enormously long and complex dialogue between millions of humans and rapidly advancing software. If this happens, Wolfram/Alpha and its users will comprise a tight loop of teaching and learning: humans teaching Wolfram/Alpha linguistic meanings while in turn learning from the answers they receive.

And as Wolfram/Alpha’s computing power increases through its embodiment in more and more advanced hardware and a vast projected growth in the Internet, and as the algorithms themselves improve while compiling a progressively larger, richer, more accurate data set, imagine the answers getting better and better, perhaps even eventually approaching the level imagined by advocates of AI for passing a Turing test, that point at which the AI’s responses can no longer be distinguished from those offered by a human.

This is not exactly the manner in which AI researchers thought a true AI would come into being. Top-down research programs still remain the mainstay of that field. But I suspect this may offer a far more promising path, if it comes about.

How big a boost will Wolfram/Alpha provide for the future development of the sciences, for technical inventions and adaptations, and for new and improved ordinary, everyday life? Would such an innovation catalyze even faster acceleration than what would have occurred without it? If this technology grows powerful enough, it may well play a major role in our advancement toward the Singularity.

Sources

Here is a well thought out summary of Edward Fredkin’s work:
http://www.bottomlayer.com/bottom/finite-all.html

Here is a fine encyclopedia entry on Turing Machines:
http://plato.stanford.edu/entries/turing-machine/

Here is Wolfram/Alpha’s home page. It is launching May 2009:
http://www.wolframalpha.com/

Here is the original announcement of the existence of Wolfram/Alpha, posted on Wolfram’s personal blog:
http://blog.wolfram.com/2009/03/05/wolframalpha-is-coming/

Here is a review found on a New York Times web site blog, “Better Search Doesn’t Mean Beating Google,” by Saul Hansell:
http://bits.blogs.nytimes.com/2009/03/09/better-search-doesnt-mean-beating-google/

Here is the original article written by Rudy Rucker and published in H+ Magazine:
http://hplusmagazine.com/media/sw_alphapodcast.mp3

Here is the audio interview of Stephen Wolfram conducted by science and science fiction writer Rudy Rucker. It is a phone conversation, so the audio quality is not the best:
http://hplusmagazine.com/media/sw_alphapodcast.mp3

Here is where you can find Isaiah Berlin’s famous essay at Amazon:
http://www.amazon.com/Hedgehog-Fox-Essay-Tolstoys-History/dp/1566630193

Here, a fan posted an essay by Jorge Luis Borges, “The Analytical Language of John Wilkins.” Wolfram tells us during his interview with Rucker that this essay made a very big impression on him, as did the work of the early Encyclopedists of the 17th and 18th centuries:
http://www.alamut.com/subj/artiface/language/johnWilkins.html

Here is a sample of Rudy Rucker’s nonfiction book, “The Lifebox, the Seashell and the Soul.” It deals with a number of insights Rucker held in common with Wolfram’s work. It’s a PDF file:
http://www.rudyrucker.com/lifebox/lifeboxsample.pdf
