Restrictions on biological adaptation in language evolution

Posted 3 Mar 2009 at 13:01 UTC by sye

Language acquisition and processing are governed by genetic constraints. A crucial unresolved question is how far these genetic constraints have coevolved with language, perhaps resulting in a highly specialized and species-specific language "module," and how much language acquisition and processing redeploy preexisting cognitive machinery. In the present work, we explored the circumstances under which genes encoding language-specific properties could have coevolved with language itself. We present a theoretical model, implemented in computer simulations, of key aspects of the interaction of genes and language. Our results show that genes for language could have coevolved only with highly stable aspects of the linguistic environment; a rapidly changing linguistic environment does not provide a stable target for natural selection. Thus, a biological endowment could not coevolve with properties of language that began as learned cultural conventions, because cultural conventions change much more rapidly than genes. We argue that this rules out the possibility that arbitrary properties of language, including abstract syntactic principles governing phrase structure, case marking, and agreement, have been built into a "language module" by natural selection. The genetic basis of human language acquisition and processing did not coevolve with language, but primarily predates the emergence of language. As suggested by Darwin, the fit between language and its underlying mechanisms arose because language has evolved to fit the human brain, rather than the reverse.


* Baldwin effect * coevolution * cultural evolution * language acquisition

By the same token, have computing languages evolved to take full advantage of computing hardware and human social networks, not the reverse?

Natural language vs Programming Languages, posted 3 Mar 2009 at 14:27 UTC by StevenRainwater » (Master)

While not specifically addressing your question of the evolution of programming languages, there was an interesting discussion of similarities and differences in the two on Meta-Filter last year: Computer languages considered in linguistic contexts?

Evolved?, posted 3 Mar 2009 at 14:41 UTC by redi » (Master)

Although the idea is interesting, programming languages are more consciously designed, and they have gone through far fewer generations than natural languages, so I'm not sure how applicable it is to talk about them evolving (yet).

language involves a lot more than biology, posted 3 Mar 2009 at 17:53 UTC by ta0kira » (Apprentice)

By the same token, have computing languages evolved to take full advantage of computing hardware and human social networks, not the reverse?
The idea of digital computing came out of the era of cognitivism: the speculation that a human mind functions as we conceptualize computers to function. A computer, therefore, is an intentional analogy of the human mind. Whether or not there are any direct correlations has been greatly disputed (and largely defeated) in the half-century since.

The conclusion of the abstract strongly resembles the founding principle of cognitive emergence (in the context of cognitive science), which essentially states that cognition, and therefore language, arises from physical experience, not because they are innate or composed of built-in modules (as proposed by Jerry Fodor). This directly refutes the idea of cognitivism, which is largely the basis for AI.

One can view a programming language as a closed grammatical system which facilitates an open lexical system; this idea directly corresponds to the theory of language proposed by Leonard Talmy. In programming, however, the lexical system expands within its own context; expression of a symbol in one program provides it with validity in the code it's used in, but in no way does it hold a general definition used by all code.*
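A minimal sketch of this closed-grammar/open-lexicon idea (the token pattern and the single statement rule here are invented for illustration, not taken from any real language):

```python
import re

# Toy "closed grammar": the only rule is  stmt := IDENT '=' NUMBER ';'
# The lexicon is open: any identifier matching the token pattern is valid,
# whether or not it has ever been seen before.
TOKEN = re.compile(r"\s*(?:(?P<ident>[A-Za-z_]\w*)|(?P<num>\d+)|(?P<sym>[=;]))")

def parse_stmt(src):
    """Return (identifier, value) if src matches the statement rule."""
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError("bad token at position %d" % pos)
        tokens.append((m.lastgroup, m.group(m.lastgroup)))
        pos = m.end()
    kinds = [k for k, _ in tokens]
    if kinds != ["ident", "sym", "num", "sym"] or tokens[1][1] != "=" or tokens[3][1] != ";":
        raise SyntaxError("does not match  stmt := IDENT '=' NUMBER ';'")
    return tokens[0][1], int(tokens[2][1])

# An entirely novel symbol is exactly as grammatical as a familiar one:
print(parse_stmt("x = 42;"))        # ('x', 42)
print(parse_stmt("zqfmgb = 7;"))    # ('zqfmgb', 7)
```

The grammar is closed (a single production), but the set of valid identifiers is unbounded; a coding standard can constrain naming socially without any change to the parser.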

Natural language is used with the intention that others will be able to understand it based on cultural interpretations. The actual code written in a programming language is understood by others because they know the grammar and what it implies about the relationships between the symbols used, but this is normally because comments and symbol names are chosen to reflect their purposes in a natural language. Natural language evolves among social groups, both grammatically and lexically, and unintentionally for the most part. Programming languages evolve as grammars through intentional decisions and acts by groups. The lexical systems used might evolve through coding standards; however, using an entirely novel system of symbol nomenclature in no way invalidates the use of the programming language, so long as it complies with the grammatical rules. Granted, one can form sentences of pure gibberish that have meaning to the user; however, the primary recipient of natural-language communication isn't its user.

As a final point, natural language evolves differently than programming languages because communicators are also receivers of the same format of information, whereas communicators of programming languages don't receive unique feedback from the computer in the form of code. Natural languages, therefore, evolve in a homogeneous undirected network. Programming languages evolve among these same networks; however, the primary communication using a programming language is unidirectional toward the machine. Although different programmers can communicate with each other using code, the symbol names used gain their relevance in the context of the code; they haven't evolved with the language.*

If any evolution takes place in programming languages, it's a result of an ongoing social consensus of each programmer's evolved ability to interact with a machine via code. The machines themselves evolve based on how technology can be used to fulfill the requirements of programming. By extension, this means programming languages don't evolve to fit the mind of the computer; they evolve to fill the generalized needs of programmers, who find it best to collaborate using the social structure already involved in the evolution of natural language.

Kevin Barry

*An obvious counterpoint to make here is that standard libraries use standardized names, but remember that those names don't casually evolve into others slightly varying in spelling, although they might in meaning. These meanings generally don't vary by locality, nor by culture.

Evolving, posted 3 Mar 2009 at 18:31 UTC by zanee » (Observer)

Programming languages are just a representation of language as it exists now; so when I think about programming languages evolving, I think we'll see more symbolic languages that grow as human languages themselves evolve. That's partly due to society's attention span, but also partly due to hardware limitations.

Parallels to language and knowledge and DNA systems, posted 3 Mar 2009 at 21:29 UTC by lkcl » (Master)

in speaking with a research professor involved in the human genome project, i learned that they are beginning to appreciate that languages, meta-languages and meta-meta-languages have evolved on top of DNA.

the parallels between DNA "computing" and Silicon-based "computing" are therefore startling.

DNA's CGAT bases are the equivalent of binary 0s and 1s.

then you get genes, which are like the equivalent of char and ints.

then you get gene sequences, which are like assembly-instructions (my knowledge of DNA starts to get a little fuzzy here)

then the researchers have discovered that "parsers" are being created, in DNA form, which are able to understand the "assembly sequences".

and parsers on top of parsers.

my guess is that at some point they will find implementations of compression algorithms!

the parallels with computing - the deployment of syntactical analysers on top of grammar rules on top of lexers on top of tokenisers - are clear as daylight.
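a minimal sketch of that stacking, with a lexer layer feeding a grammar layer (the mini-language of digits, '+' and '*' is invented purely for illustration):

```python
# two stacked "parsers": a lexer mapping characters to tokens, and a
# grammar layer mapping tokens to a value.  each layer only understands
# the layer directly below it.

def lex(src):
    """layer 1: characters -> tokens."""
    out, num = [], ""
    for ch in src:
        if ch.isdigit():
            num += ch
        else:
            if num:
                out.append(("NUM", int(num)))
                num = ""
            if ch in "+*":
                out.append(("OP", ch))
    if num:
        out.append(("NUM", int(num)))
    return out

def evaluate(tokens):
    """layer 2: tokens -> value, for the grammar
    expr := term ('+' term)* ,  term := NUM ('*' NUM)* ."""
    total, product = 0, 1
    for kind, val in tokens:
        if kind == "NUM":
            product *= val
        elif val == "+":
            total += product
            product = 1
    return total + product

print(evaluate(lex("2*3+4")))   # 10
```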

the parallels with natural language are also clear: after all, a parser's job is to take concepts in one space and map them onto another one, providing a means to understand the relationship between concepts at the "same level" (e.g. grammar) and those of the levels "below" (grammar plus lexicon plus tokens).

sometimes it is easier to have "language" as being a sequential expression of tokens, with "time" being the means to differentiate between the tokens - thus we have "spoken" language.

sometimes the medium allows several tokens to be expressed at once, as in music, or as in a VLIW CPU architecture or a SIMD instruction set.

the bottom line is that ultimately there really isn't any significant difference between what DNA does, what language is (in written and spoken form), what memes are, and what computers are, which isn't all that surprising when you think about it because we are, after all, sharing the same Operating System: our universe.

more rain for your parade, posted 4 Mar 2009 at 04:08 UTC by ta0kira » (Apprentice)

my guess is that at some point they will find implementations of compression algorithms!
If you think about it, everything about how the mind works has to do with compression. In order for higher-order processes to take place the mind has to draw abstractions from flawed and inconsistent input, which inevitably requires the loss of information. These higher-order processes also decompress the abstracted information in that they provide feedback to the lower-order areas. For example, your attention might be unintentionally drawn to a stimulus recognized from somewhere else.

Something to be wary of is the charisma of analogy. Extensive analogy is one sign of a flawed argument, although that doesn't necessarily imply intent to deceive (or to argue, for that matter). No matter how well an analogy holds up, it cannot serve as evidence in and of itself. The problem is that an analogy can line up extremely nicely on a point-for-point basis; however, this is often used to justify extending one half based on fact, then transposing that to the other half as proof of fact. In other words, there are probably equally many ways in which biology doesn't mirror computers. Those points might be construed as implementation details, but remember, many a theory has fallen because the "minor details" cannot be definitively connected (e.g. "the symbol grounding problem").

Programming languages are just a representation of language as it exist now; so when I think about programming languages evolving I think we'll see more symbolic languages that grow as human languages themselves evolve. Partly do to societies attention span but also partly do to hardware limitations.
One additional point I'd like to make, along the lines of my previous comment, is a generalized comparison of language evolution over the last 40 years. I expect these points to support themselves; therefore, I'll just throw them out there as-is.

Compare the evolution of the English language to that of programming languages over the last 40 years. Specifically, I'll point out that English has changed little in its grammatical rules; the changes are largely modifications of vocabulary, most notably the addition of technological terms. Programming languages, on the other hand, have evolved grammatically. As I pointed out before, programming languages don't show demonstrable evolution in vocabulary (other than refinement of standards, which is more of an equilibrium process); therefore, I wouldn't even say programming languages have evolved in parallel to natural language.

An obscure counterpoint is the possibility that new natural languages have arisen in unknown parts of the world. For example, the invention of a new language by deaf children in Nicaragua (not the best article about it). This obviously can't be correlated to the evolution of programming languages; programming languages are carefully created by those who already know what the language needs to do.

I don't mean to rain on everyone's parade, and I did like the analogies brought up by lkcl; however, someone does need to restore the scientific premise of the topic at hand. Although I do admit, the implications brought up thus far are very fascinating.

Kevin Barry

In the beginning of A.I research, posted 4 Mar 2009 at 10:08 UTC by sye » (Journeyer)

From 1955's 'A proposal for the dartmouth summer research project on artificial intelligence'

The following are some aspects of the artificial intelligence problem:

1. Automatic Computers

If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.

2. How Can a Computer be Programmed to Use a Language

It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others. This idea has never been very precisely formulated nor have examples been worked out.

3. Neuron Nets

How can a set of (hypothetical) neurons be arranged so as to form concepts. Considerable theoretical and experimental work has been done on this problem by Uttley, Rashevsky and his group, Farley and Clark, Pitts and McCulloch, Minsky, Rochester and Holland, and others. Partial results have been obtained but the problem needs more theoretical work.

4. Theory of the Size of a Calculation

If we are given a well-defined problem (one for which it is possible to test mechanically whether or not a proposed answer is a valid answer) one way of solving it is to try all possible answers in order. This method is inefficient, and to exclude it one must have some criterion for efficiency of calculation. Some consideration will show that to get a measure of the efficiency of a calculation it is necessary to have on hand a method of measuring the complexity of calculating devices which in turn can be done if one has a theory of the complexity of functions. Some partial results on this problem have been obtained by Shannon, and also by McCarthy.

5. Self-Improvement

Probably a truly intelligent machine will carry out activities which may best be described as self-improvement. Some schemes for doing this have been proposed and are worth further study. It seems likely that this question can be studied abstractly as well.

6. Abstractions

A number of types of ``abstraction'' can be distinctly defined and several others less distinctly. A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.

7. Randomness and Creativity

A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch includes controlled randomness in otherwise orderly thinking.

From wikipedia "Nathan Rochester"

In 1955, IBM organized a group to study pattern recognition, information theory and switching circuit theory, headed by Rochester.[1] Among other projects, the group simulated the behaviour of abstract neural networks on an IBM 704 computer.[2] That summer John McCarthy, a young Dartmouth College mathematician, was also working at IBM. He and Marvin Minsky had begun to talk seriously about the idea of intelligent machines. They approached Rochester and Claude Shannon with a proposal for a conference on the subject. With the support of the two senior scientists, they secured $7,000.00 from the Rockefeller Foundation to fund a conference in the summer of 1956.[3] The meeting, now known as the Dartmouth Conference, is widely considered the "birth of artificial intelligence."[4]

Rochester continued to supervise artificial intelligence projects at IBM, including Arthur Samuel's checkers program, Herbert Gelernter's Geometry Theorem Prover and Alex Bernstein's chess program.[5] In 1958, he was a visiting professor at MIT, where he helped McCarthy with the development of the Lisp programming language.[6]

The artificial intelligence programs developed at IBM began to generate a great deal of publicity and were featured in articles in both Scientific American and The New York Times. IBM shareholders began to pressure Thomas J. Watson, the president of IBM, to explain why research dollars were being used for such "frivolous matters." In addition, IBM's marketing people had begun to notice that customers were frightened of the idea of "electronic brains" and "thinking machines".[5] An internal report prepared around 1960 recommended that IBM end broad support for AI[6] and so the company ended its AI program and began to aggressively spread the message that "computers can only do what they were told."[5]

Thought Experiments, posted 4 Mar 2009 at 11:26 UTC by badvogato » (Master)

'Disturbing the Universe' by Freeman Dyson

Author's Preface

The physicist Leo Szilard once announced to his friend Hans Bethe that he was thinking of keeping a diary: "I don't intend to publish it; I am merely going to record the facts for the information of God." "Don't you think God knows the facts?" Bethe asked. "Yes," said Szilard. "He knows the facts, but He does not know this version of the facts."

18. Thought Experiments

"The scientific worker of the future will more and more resemble the lonely figure of Daedalus as he becomes conscious of his ghastly mission and proud of it." Of all the scientists I have known, the one who came closest in character to Haldane's Daedalus was not a biologist but a mathematician, by name John von Neumann. To those who knew von Neumann only through his outward appearance, rotund and jovial, it may seem ludicrously inappropriate to compare him with Daedalus. But those who knew him personally, this man who consciously and deliberately set mankind moving along the road that led us into the age of computers, will understand that from a psychological point of view Haldane's portrait of him was extraordinarily prophetic.

During the Second World War, von Neumann worked with great enthusiasm as a consultant to Los Alamos on the design of the atomic bomb. But even then he understood that nuclear energy was not the main theme in man's future. In 1946 he happened to meet his old friend Gleb Wataghin, who had spent the war years in Brazil. "Hello, Johnny," said Wataghin. "I suppose you are not interested in mathematics any more. I hear you are now thinking about nothing but bombs." "That is quite wrong," said von Neumann. "I am thinking about something much more important than bombs. I am thinking about computers."

In September 1948 von Neumann gave a lecture entitled "The General and Logical Theory of Automata," which is reprinted in Volume 5 of his collected works. The lecture is still fresh and readable. Because he spoke in general terms, there is very little in it that is dated. Von Neumann's automata are a conceptual generalization of the electronic computers whose revolutionary implications he was the first to see. An automaton is any piece of machinery whose behavior can be precisely defined in strict mathematical terms. Von Neumann's concern was to lay foundations for a theory of the design and functioning of such machines, which would be applicable to machines far more complex and sophisticated than any we have yet built. He believed that from this theory we could learn not only how to build more capable machines, but also how to understand better the design and functioning of living organisms.

Von Neumann did not live long enough to bring his theory of automata into existence. He did live long enough to see his insight into the functioning of living organisms brilliantly confirmed by the biologists. The main theme of his 1948 lecture is an abstract analysis of the structure of an automaton which is of sufficient complexity to have the power of reproducing itself. He shows that a self-reproducing automaton must have four separate components with the following functions. Component A is an automatic factory, an automaton which collects raw materials and processes them into an output specified by a written instruction which must be supplied from the outside. Component B is a duplicator, an automaton which takes a written instruction and copies it. Component C is a controller, an automaton which is hooked up to both A and B. When C is given an instruction, it first passes the instruction to B for duplication, then passes it to A for action, and finally supplies the copied instruction to the output of A while keeping the original for itself. Component D is a written instruction containing the complete specifications which cause A to manufacture the combined system, A plus B plus C. Von Neumann's analysis showed that a structure of this kind was logically necessary and sufficient for a self-reproducing automaton, and he conjectured that it must also exist in living cells. Five years later Crick and Watson discovered the structure of DNA, and now every child learns in high school the biological identification of von Neumann's four components. D is the genetic materials, RNA and DNA; A is the ribosomes; B is the enzymes RNA and DNA polymerase; and C is the repressor and derepressor control molecules and other items whose functioning is still imperfectly understood. So far as we know, the basic design of every microorganism larger than a virus is precisely as von Neumann said it should be. 
Viruses are not self-reproducing in von Neumann's sense since they borrow the ribosomes from the cells which they invade.
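Von Neumann's four components can be sketched as a toy simulation. The function names below and the string standing in for the "written instruction" are purely illustrative assumptions, not his formalism:

```python
# A toy run of von Neumann's four-component scheme.
INSTRUCTION = "build A+B+C"          # component D: description of the machinery

def factory(instruction):
    """Component A: turn an instruction into new machinery (here, a dict)."""
    return {"machinery": instruction.split(" ", 1)[1]}

def duplicator(instruction):
    """Component B: copy the instruction verbatim."""
    return str(instruction)

def controller(instruction):
    """Component C: duplicate first, then build, then attach the copy."""
    copy = duplicator(instruction)    # B copies D
    offspring = factory(instruction)  # A builds A+B+C from the original
    offspring["instruction"] = copy   # the copy becomes the offspring's D
    return offspring

child = controller(INSTRUCTION)
grandchild = controller(child["instruction"])  # the offspring reproduces too
assert child == grandchild
```

The point of the structure survives even in this caricature: because the controller hands the offspring an unmodified copy of the instruction, reproduction can continue indefinitely.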

Von Neumann's first main conclusion was that self-reproducing automata with these characteristics can in principle be built. His second main conclusion, derived from the work of the mathematician Turing, is less well known and goes deeper into the heart of the problem of automation. He showed that there exists in theory a universal automaton, that is to say a machine of a certain definite size and complication, which, if you give it the correct written instruction, will do anything that any other machine can do. So beyond a certain point, you don't need to make your machine any bigger or more complicated to get more complicated jobs done. All you need is to give it longer and more elaborate instructions. You can also make the universal automaton self-reproducing by including it within the factory unit (item A) in the self-reproducing system which I already described. Von Neumann believed that the possibility of a universal automaton was ultimately responsible for the possibility of indefinitely continued biological evolution. In evolving from simpler to more complex organisms you do not have to redesign the basic biochemical machinery as you go along. You have only to modify and extend the genetic instructions. Everything we have learned about evolution since 1948 tends to confirm that von Neumann was right.
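The universality argument can likewise be sketched: a fixed interpreter whose power grows only with the length of its instructions, never with its own size. The two-opcode instruction set is an invented illustration, not Turing's or von Neumann's construction:

```python
def run(program, x):
    """A fixed "universal" machine: interpret a list of (op, operand)
    pairs over a single register and return the result."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

# The same fixed machine does different jobs given different instructions:
double = [("mul", 2)]
affine = [("mul", 3), ("add", 1)]
print(run(double, 5))   # 10
print(run(affine, 5))   # 16
```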

As we move into the twenty-first century we shall find von Neumann's analysis increasingly relevant to artificial automata as well as to living cells. Also, as we understand more about biology, we shall find the distinction between electronic and biological technology becoming increasingly blurred. So I pose the problem: Suppose we learn how to construct and program a useful and more or less universal self-reproducing automaton. What does this do to us on the intellectual level? What does it do in particular to the principles of economics, or to our ideas about ecology and social organization?

I shall try to answer these questions by means of a series of thought experiments. A thought experiment is an imaginary experiment which is used to illuminate a theoretical idea. It is a device invented by physicists; the purpose is to concoct an imaginary situation in which the logical contradictions or absurdities inherent in some proposed theory are revealed as clearly as possible. As theories become more sophisticated, the thought experiment becomes more and more useful as a tool for weeding out bad theories and for reaching a profound understanding of good ones. When a thought experiment shows that generally accepted ideas are logically self-contradictory, it is called a "paradox." A large part of the progress of physics during this century has resulted from the discovery of paradoxes and their use as a critique of theory. A thought experiment is often more illuminating than a real experiment, besides being a great deal cheaper. The design of thought experiments in physics has become a form of art in which Einstein was the supreme master. A thought experiment is an entirely different thing from a prediction. The situations that I shall describe are not intended as predictions of things that will actually happen. They are idealized models of developments with which we shall have to come to terms intellectually before we can hope to handle them practically.

My first thought experiment is not my own invention. The basic idea of it was published in an article in Scientific American twenty years ago by the mathematician Edward Moore. The article was called "Artificial Living Plants." The thought experiment begins with the launching of a flat-bottomed boat from an inconspicuous shipyard belonging to the RUR Company on the northwest coast of Australia. RUR stands for "Rossum's Universal Robots," a company with a long and distinguished history. The boat moves slowly out to sea and out of sight. A month later, somewhere in the Indian Ocean, two boats appear where one was before. The original boat carried a miniature factory with all the necessary equipment, plus a computer program which enables it to construct a complete replica of itself. The replica contains everything that was in the original boat, including the factory and a copy of the computer program. The construction materials are mainly carbon, oxygen, hydrogen and nitrogen, obtained from air and water and converted into high-strength plastics by the energy of sunlight. Metallic parts are mainly constructed of magnesium, which occurs in high abundance in sea water. Other elements, which occur in low abundance, are used more sparingly as required. The boats are called "artificial plants" because they imitate with machines and computers the life-cycle of the microscopic plants which float in the surface layers of the ocean. It is easy to calculate that after one year there will be a thousand boats, after two years a million, after three years a billion, and so on. It is a population explosion running at a rate several hundred times faster than our own.
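The calculation is indeed easy; a quick sketch (the doubling-time arithmetic here is an annotation, not Dyson's):

```python
import math

# Dyson's figures (a thousand boats after one year, a million after two,
# a billion after three) imply a growth factor of 1000 per year.

def boats_after(years, doubling_time_months):
    """Exponential growth from one boat at the given doubling time."""
    return 2 ** (12 * years / doubling_time_months)

# A factor of 1000/year corresponds to a doubling time of about 1.2 months,
# close to the "a month later, two boats" rate in the story:
doubling_time = 12 / math.log2(1000)          # ~1.204 months
print(round(boats_after(1, doubling_time)))   # ~1000
print(round(boats_after(3, doubling_time)))   # ~1000000000
```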

The RUR Company did not launch this boat with its expensive cargo just for fun. In addition to the automatic factory, each boat carries a large tank which it gradually fills with fresh water separated by solar energy from the sea. It is also prepared to use rain water as a bonus when available. The RUR Company has established a number of pumping stations at convenient places around the coast of Australia, each equipped with a radio beacon. Any boat with a full cargo of fresh water is programmed to proceed to the nearest pumping station, where it is quickly pumped dry and sent on its way. After three years, when the boats are dispersed over all the earth's oceans, the RUR Company invites all maritime cities in need of pure water to make use of its services. Up and down the coasts of California and Africa and Peru, pumping stations are built and royalties flow into the coffers of the RUR Company. Deserts begin to bloom—but I think we have heard that phrase before, in connection with nuclear energy. Where is the snag this time?

There are two obvious snags in this thought experiment. The first is the economic snag. The RUR boats may provide us with a free supply of pure water, but it still costs money to use it. Just pumping fresh water onto a desert does not create a garden. In most of the desert areas of the world, even an abundance of fresh water will not rapidly produce wealth. To use the water one needs aqueducts, pumps, pipes, houses and farms, skilled farmers and engineers, all the commodities which will still grow with a doubling time measured in decades rather than in months. The second and more basic snag of the RUR project is the ecological snag. The artificial plants have no natural predators. In the third year of its operation, the RUR Company is involved in lawsuits with several shipping companies whose traffic the RUR boats are impeding. In the fifth year, the RUR boats are spread thick over the surface of almost all the earth's oceans. In the sixth year, the coasts of every continent are piled high with wreckage of RUR boats destroyed in ocean storms or in collisions. By this time, it is clear to everybody that the RUR project is an ecological disaster, and further experiments with artificial plants are prohibited by international agreement. But fortunately for me, the prohibition does not extend to thought experiments.

The details of my second thought experiment are partly taken from a story by the science fiction writer Isaac Asimov. We have the planet Mars, a large piece of real estate, completely lacking in economic value because it lacks two essential things, liquid water and warmth. Circling around the planet Saturn is the satellite Enceladus. Enceladus has a mass equal to five percent of the earth's oceans, and a density rather smaller than the density of ice. It is allowable for the purposes of a thought experiment to assume that it is composed of dirty ice and snow, with dirt of a suitable chemical composition to serve as construction material for self-reproducing automata.

The thought experiment begins with a rocket, carrying a small but highly sophisticated payload, launched from the Earth and quietly proceeding on its way to Enceladus. The payload contains an automaton capable of reproducing itself out of the materials available on Enceladus, using as energy source the feeble light of the far-distant sun. The automaton is programmed to produce progeny that are miniature solar sailboats, each carrying a wide, thin sail with which it can navigate in space, using the pressure of sunlight. The sailboats are launched into space from the surface of Enceladus by a simple machine resembling a catapult. Gravity on Enceladus is weak enough so that only a gentle push is needed for the launching. Each sailboat carries into space a small block of ice from Enceladus. The sole purpose of the sailboats is to deliver their cargo of ice safely to Mars. They have a long way to go. First they must use their sails and the weak pressure of sunlight to fight their way uphill against the gravity of Saturn. Once they are free of Saturn, the rest of their way is downhill, sliding down the slope of the Sun's gravity to their rendezvous with Mars.

For some years after the landing of the rocket on Enceladus, the multiplication of automata is invisible from Earth. Then the cloud of little sailboats begins to spiral slowly outward from Enceladus's orbit. As seen from the Earth, Saturn appears to grow a new ring about twice as large as the old rings. After another period of years, the outer edge of the new ring extends far out to a place where the gravitational effects of Saturn and the sun are equal. The sailboats slowly come to a halt there and begin to spill out in a long stream, falling free toward the sun.

A few years later, the nighttime sky of Mars begins to glow bright with an incessant sparkle of small meteors. The infall continues day and night, only more visibly at night. Day and night the sky is warm. Soft warm breezes blow over the land, and slowly warmth penetrates into the frozen ground. A little later, it rains on Mars for the first time in a billion years. It does not take long for oceans to begin to grow. There is enough ice on Enceladus to keep the Martian climate warm for ten thousand years and to make the Martian deserts bloom. Let us then leave the conclusion of the experiment to the writers of science fiction, and see whether we can learn from it some general principles that are valid in the real world. The result of the experiment is a genuine paradox. The paradox lies in the fact that a finite piece of hardware, which we may build for a modest price once we understand how to do it, produces an infinite payoff, or at least a payoff that is absurdly large by human standards. We seem here to be getting something for nothing, whereas a great deal of hard experience with practical problems has taught us that everything has to be paid for at a stiff price. The paradox forces us to consider the question, whether the development of self-reproducing automata can enable us to override the conventional wisdom of economists and sociologists. I do not know the answer to this question. But I think it is safe to predict that this will be one of the central concerns of human society in the twenty-first century. It is not too soon to begin thinking about it.

Let me illustrate the question with a third thought experiment. One of the by-products of the Enceladus project is a small self-reproducing automaton well adapted to function in terrestrial deserts. It builds itself mainly out of silicon and aluminum which it can extract from ordinary rocks wherever it happens to be. It can extract from the driest desert air sufficient moisture for its internal needs. Its source of energy is sunlight. Its output is electricity, which it produces at moderate efficiency, together with transmission lines to deliver the electricity wherever you happen to need it. There is bitter debate in Congress over licensing this machine to proliferate over our Western states. The progeny of one machine can easily produce ten times the present total power output of the United States, but nobody can claim that it enhances the beauty of the desert landscape. In the end the debate is won by the antipollution lobby. Both of the alternative sources of power, fossil fuels and nuclear energy, are by this time running into severe pollution problems. Quite apart from the chemical and radioactive pollution which they cause, new power plants of both kinds are adding to the burden of waste heat, which becomes increasingly destructive to the environment. In contrast to all this, the rock-eating automaton generates no waste heat at all. It merely uses the energy that would otherwise heat the desert air and converts some of it into useful form. It also creates no smog and no radioactivity. Legislation is finally passed authorizing the automaton to multiply, with the proviso that each machine shall retain a memory of the original landscape at its site, and if for any reason the site is abandoned the machine is programmed to restore it to its original appearance.

My third thought experiment is again degenerating into fiction, so I will leave it at this point. It appears to avoid the ecological snag that the RUR boats ran into. It raises several new questions that we have to consider. If solar energy is so abundant and so free from problems of pollution, why are we not already using it on a large scale? The answer is simply that capital costs are too high. The self-reproducing automaton seems to be able to side-step the problem of capital. Once you have the prototype machine, the land and the sunshine, the rest comes free. The rock-eater, if it can be made to work at all, overcomes the economic obstacles which hitherto blocked the large-scale use of solar energy.

Does this idea make sense as a practical program for the twenty-first century? One of the unknown quantities which will determine the practicality of such ideas is the generation time of a self-reproducing automaton, the time that it takes on the average for a population of automata to double. If the generation time is twenty years, comparable with a human generation, then the automata do not change dramatically the conditions of human society. In this case they can multiply and produce new wealth only at about the same rate to which we are accustomed in our normal industrial growth. If the generation time is one year, the situation is different. A single machine then produces a progeny of a million in twenty years, a billion in thirty years, and the economic basis of society can be changed in one human generation. If the generation time is a month, the nature of the problem is again drastically altered. We could then cheerfully contemplate demolishing our industries or our cities and rebuilding them in pleasanter ways within a period of a few years.
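The arithmetic behind these figures is just repeated doubling. A quick illustrative sketch (Python; the function name and numbers are mine, not the essay's) reproduces them:

```python
# Population of self-reproducing automata after `years`, starting from
# a single machine whose population doubles every `gen_time` years.
def population(years, gen_time):
    return 2 ** (years // gen_time)

# With a one-year generation time, as in the text:
print(population(20, 1))   # 1048576 -- "a million in twenty years"
print(population(30, 1))   # 1073741824 -- "a billion in thirty years"

# With a twenty-year generation time, growth stays on a human scale:
print(population(20, 20))  # 2
```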

It is difficult to find a logical basis for guessing what the generation time might be for automata of the kind which I postulated for my three experiments. The only direct evidence comes from biology. We know that bacteria and protozoa, the simplest truly self-reproducing organisms, have generation times of a few hours or days. At the second main level of biological organization, a higher organism such as a bird has a generation time on the order of a year. At the third level of biological organization, represented precariously by the single species Homo sapiens, we have a generation time of twenty years. Roughly speaking, we may say that a biochemical automaton can reproduce itself in a day, a higher central nervous system in a year, a cultural tradition in twenty years. With which of these three levels of organization should our artificial automata be compared?

Von Neumann in his 1948 lecture spoke mainly about automata of the logically simplest kind, reproducing themselves by direct duplication. For these automata he postulated a structure appropriate to a single-celled organism. He pictured them as independent units, swimming in a bath of raw materials and paying no attention to one another. This lowest level of organization is adequate for my first experiment but not for my second and third. It is not enough for automata to multiply on Enceladus like bugs on a rotten apple. To produce the effects which I described in the second and third experiments, automata must propagate and differentiate in a controlled way, like cells of a higher organism. The fully developed population of machines must be as well coordinated as the cells of a bird. There must be automata with specialized functions corresponding to muscle, liver and nerve cell. There must be high-quality sense organs, and a central battery of computers performing the functions of a brain.

At the present time the mechanisms of cell differentiation and growth regulation in higher organisms are quite unknown. Perhaps a good way to understand these mechanisms would be to continue von Neumann's abstract analysis of self-reproducing automata, going beyond the unicellular level. We should try to analyze the minimum number of conceptual components which an automaton must contain in order to serve as the germ cell of a higher organism. It must contain the instructions for building every one of its descendants, together with a sophisticated switching system which ensures that descendants of many different kinds multiply and function in a coordinated fashion. I have not seriously tried to carry through such an analysis. Perhaps, now that von Neumann is dead, we shall not be clever enough to complete the analysis by logical reasoning, but will instead have to wait for the experimental embryologists to find out how Nature solved the problem.

My fourth thought experiment is merely a generalized version of the third. After its success with the rock-eating automaton in the United States, the RUR Company places on the market an industrial development kit, designed for the needs of developing countries. For a small down payment, a country can buy an egg machine which will mature within a few years into a complete system of basic industries together with the associated transportation and communication networks. The thing is custom made to suit the specifications of the purchaser. The vendor's guarantee is conditional only on the purchaser's excluding human population from the construction area during the period of growth of the system. After the system is complete, the purchaser is free to interfere with its operation or to modify it as he sees fit.

Another successful venture of the RUR Company is the urban renewal kit. When a city finds itself in bad shape aesthetically or economically, it needs only to assemble a group of architects and town planners to work out a design for its rebuilding. The urban renewal kit will then be programmed to do the job for a fixed fee.

I do not pretend to know what the possibility of such rapid development of industries and reconstruction of cities would do to human values and institutions. On the negative side, the inhuman scale and speed of these operations would still further alienate the majority of the population from the minority which controls the machinery. Urban renewal would remain a hateful thing to people whose homes were displaced by it. On the positive side, the new technology would make most of our present-day economic problems disappear. The majority of the population would not need to concern themselves with the production and distribution of material goods. Most people would be glad to leave economic worries to the computer technicians and would find more amusing ways to spend their time. Again on the positive side, the industrial development kit would rapidly abolish the distinction between developed and developing countries. We would then all alike be living in the postindustrial society.

What would the postindustrial society be like to live in? Haldane in his Daedalus tried to describe it:

Synthetic food will substitute the flower-garden and the factory for the dunghill and the slaughterhouse, and make the city at last self-sufficient.

There's many a strong farmer whose heart would break in two
If he could see the townland that we are riding to.
Boughs have their fruit and blossom at all times of the year,
Rivers are running over with red beer and brown beer,
An old man plays the bagpipes in a golden and silver wood,
Queens, their eyes blue like the ice, are dancing in a crowd.

This is a poetic vision, not a sociological analysis. But I doubt whether anybody can yet do better than Haldane did in 1924 in imagining the human aspects of the postindustrial scene.

YARV -> Ruby, posted 4 Mar 2009 at 12:50 UTC by sye » (Journeyer)

What is this thing about building a virtual machine for one particular programming language?

Ruby Linguistics:

UTIYAMA Masao, posted 4 Mar 2009 at 14:54 UTC by sye » (Journeyer)

内山将夫 UTIYAMA Masao

He is a senior researcher at the National Institute of Information and Communications Technology (NICT), working on natural language processing. His main interest is in devising models of natural language and building practical applications from them; or, conversely, starting from a practical application, devising a natural-language model suited to it, and building the application from that.

Masao Utiyama is a senior researcher of the National Institute of Information and Communications Technology, Japan. His main research field is natural language processing. He completed his doctoral dissertation at the University of Tsukuba in 1997. His current research interests are exploring models of natural languages and their practical applications.

(I cannot find a counterpart to the English version's sentence "He completed his doctoral dissertation at the University of Tsukuba in 1997" anywhere in the Japanese version...)

please explain "Thought Experiments", posted 4 Mar 2009 at 21:18 UTC by ta0kira » (Apprentice)

badvogato, can you provide some sort of interpretation or explanation of that huge block of text you posted? There must be something you wanted us to get out of it since the author didn't write it in response to our discussion. Thanks.

Kevin Barry

re: please explain "Thought Experiments", posted 4 Mar 2009 at 21:33 UTC by zanee » (Observer)

ta0kira, meet badvogato; you may as well sit back and be entertained by the hilarity of it.

Q & A, posted 4 Mar 2009 at 22:30 UTC by badvogato » (Master)

robogato: Are you hiding the 'Lost and Found' customer service counter from me? My articles have been lost since March 13, 2008!

To the other Question, My 'The thought experiments' was meant as a data stream for repeating what Andrey Markov was doing to the first 20,000 words of Pushkin's Eugene Onegin.
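For anyone unfamiliar with the reference: Markov's 1913 study tabulated vowel/consonant transitions over the opening of Eugene Onegin, and the core computation reduces to counting bigram transitions. A minimal sketch in Python (the sample string is a toy stand-in, not Pushkin's text):

```python
from collections import Counter

def transition_counts(text):
    """Count vowel/consonant transition pairs, in the spirit of
    Markov's study of Eugene Onegin (done there over Russian letters)."""
    vowels = set("aeiou")
    classes = ["V" if c in vowels else "C"
               for c in text.lower() if c.isalpha()]
    return Counter(zip(classes, classes[1:]))

# Toy stand-in; the real study used the opening of the Russian text.
print(transition_counts("My uncle has most honest principles"))
```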

What was the question again?

Re: Q & A, posted 4 Mar 2009 at 23:01 UTC by robogato » (Master)

Robogato has kicked the cron job and the per-user article indices are now updated.

RE: Q & A, posted 4 Mar 2009 at 23:09 UTC by ta0kira » (Apprentice)

To the other Question, My 'The thought experiments' was meant as a data stream for repeating what Andrey Markov was doing to the first 20,000 words of Pushkin's Eugene Onegin.
What was that, a filibuster? Why did you choose to "data stream" it into this article? If you'd written all of that and it seemed relevant to what we're talking about, then I'd read it, but since it looks like an out-of-context cut-and-paste, I just skipped right over it. The purpose of a citation is to cite something, not to include it.

Kevin Barry

RE: Evolved?, posted 4 Mar 2009 at 23:58 UTC by ta0kira » (Apprentice)

Although the idea is interesting, programming languages are more consciously designed, and have gone through far fewer generations compared to natural languages, so I'm not sure how applicable it is to talk about them evolving (yet)
Regarding this and the analogies posted by lkcl, you could say that programming languages are evolving to higher orders to distance ideas from the hardware itself, much as the mind employs higher-order processes that would not be possible if a direct link to the body were apparent. In this way, programming languages might be evolving as the mind has in relation to their respective hardware. This isn't necessarily an analogy if computers are thought of as an extension of the human mind (augmented intelligence).

What I haven't decided is whether this is an equilibrium process or a co-evolution between machines and the technological mind. Numerous educators (and others) have pointed out how different the minds of children are now from what they were before the PC revolution. Unfortunately, it strikes me as a re-specialization of the human mind away from living in the natural world, thereby reinforcing dependence on technology. The plasticity of the mind far outpaces biological evolution, but the minds of humans advance technology, and with it the biological requirements of humans.

So my question is, is there a resting state for this process? Maybe some day we'll marginalize survivability in the search for the meaning of life, at which point the meaning of life will be to perpetuate the technological machine we built to support human existence.

Ok, that's depressing.

I have one correction to make. I cited Talmy in my first comment. That was a mis-remembrance on my part; Talmy subdivided grammar. What I mentioned was just linguistic theory.

Kevin Barry

A & Q, posted 5 Mar 2009 at 01:26 UTC by badvogato » (Master)

Robogato: thank you.

ta0kira: your compression algorithm is flawed, although it is not entirely immoral.

Q: How do you tell the difference between sipping | vomiting | omitting?

Counting Crows: 'You can't count on me', posted 6 Mar 2009 at 11:59 UTC by badvogato » (Master)

If you want to be free, one thing you need to know that you can't count on me.

self-improvement, posted 8 Mar 2009 at 15:44 UTC by lkcl » (Master)

Probably a truly intelligent machine will carry out activities which may best be described as self-improvement.

well DUH! :)

consciousness _requires_ the ability to model "self" within the internal information.

self-improvement can therefore be a much-automated feedback loop to optimise the amount of processing spent on tasks.

"genetic algorithming" gone mad, and done in near-real-time, and done under the control of the "device" itself, would seem to be a logical way forward.
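The "genetic algorithming" feedback loop lkcl describes amounts to mutate, score, select, repeat. A toy sketch of that loop (Python; the function names, parameters, and the one-max fitness are all invented for illustration, not drawn from any real system):

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=100):
    """Toy genetic loop: keep the fittest half, mutate them, repeat."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]   # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genome_len)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Example: maximize the number of 1-bits ("one-max" problem).
best = evolve(fitness=sum)
print(best)
```

The elitist step (survivors always carried over) is what makes the loop a monotone self-improvement process: the best genome found so far is never lost.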

self-sacrifice, posted 8 Mar 2009 at 18:06 UTC by badvogato » (Master)

I believe the highest value of self-improvement is self-sacrifice: daring to lay down one's life for a higher order and for good. Most of our computers and most humans die of natural causes. Self-improvement for an entire species must be different. It lies in the self-sacrificial act for the survival and salvation/subversion of the whole.

I watched this movie 'duet'. It is an example of good programming, IMHO.

The poet and the monkey, posted 13 Mar 2009 at 07:47 UTC by sye » (Journeyer)

His wife, a plum tree denying his touch.
He could not bear the glow
of crimson blossoms against a blue sky

above a sea more blue than he thought was possible.
Buddha said to the monkey, if you can
traverse my palm in a single somersault

I will set you free.
He loved women so much
gave up his own son for adoption

because the boy was not a girl.
The monkey could cross whole continents
in the flicker of an eyelash. A waterfall concealed

the entrance of his kingdom

His wife helped him bring from China
a female fan of his poetry. Beneath shadows

of moonlit pines, he slipped into her bed.
His dream of ancient scholarship came true:
plum tree and nightingale, wife and concubine.

He was complete. Poems flooded. Parades
of beetles. Translucent dreams of hyacinths.
Storms that revived primeval armies

slicing the world with liquid swords.
Death, the ultimate flower
blazed scarlet in a black river.

The monkey found himself in a desert
with a solitary pink obelisk.

The plum tree read the mind

of the nightingale, told her to escape.
The poet loved the pensive bird more than his life.
The monkey shouted for Buddha, the desert

vanished. Buddha held a finger before the monkey.
The poet hacked his wife with an ax,
then hung himself from a pine tree.

-- By WANG Yun, published in Issue #10, Sept. 2008, of BloodLotus, an online literary journal

I have some reservations about the 'proper' title of this poem. I would expect that anybody with as fictitious an entitlement as 'the poet' should not commit so violent an act as to 'hack his wife with an ax'. Yet Edgar Allan Poe's 'The Black Cat' was full of such imagery, was it not? So we are all descended from mockery, as monkeys in human hide?

Suddenly I realized Yun's writing is the true story of the famed Chinese poet Gu Cheng.

We are so much alike. Two poisonous snakes betraying each other's treasure.

Below are excerpts from Anne-Marie Brady's paper 'Dead in Exile: The Life and Death of Gu Cheng and Xie Ye':

What can we learn from the deaths of Gu Cheng and Xie Ye? And what does their tragic fate tell us about the state of China’s creative world? Gu Cheng and Xie Ye were both products and victims of the last fifty years of China’s history. Their most formative years were spent under the destructive influence of Mao Zedong’s Great Proletarian Cultural Revolution. Xie Ye responded to the distressing experience of her early years by withdrawing into herself, while presenting a cheerful visage to the world. She accustomed herself to suppressing her feelings and had a passive outlook on life. Though an intelligent and talented woman, her greatest desire was to have enough money to pay for the needs of her family and to be able to live with her son. Gu Cheng’s response to the destruction of his early years was to reject the outside world and retreat into a world of fantasy. He was a disturbed and unhappy young man, first attempting suicide in 1973 and showing violent tendencies early on in his life. Gu’s poetry became popular in China in the late 1970s and early 1980s, as a result of his escapism and focus on the imaginary world. After the ravages of Mao’s political campaigns, China’s poetry readers longed for release from the prison of political struggle and economic hardship. Gu’s poetry offered a window to another world. It was this new perspective which eventually attracted both the vituperation of the Communist authorities and the interest of international scholars of Chinese literature. In 1987 Gu Cheng and Xie Ye left China for the West, the symbol of freedom and a better way of life. After giving lectures in various European countries, the couple arrived in New Zealand to begin a teaching post at Auckland University. In the eyes of their friends and admirers, Gu Cheng was a success: he was well-known both in China and amongst those in the Western world who read Chinese literature, he had succeeded in settling in a Western country and buying a house of his own. 
Gu Cheng was free to compose whatever he pleased, and free to live as he pleased. Yet Gu was alienated from his new country and produced nothing of note in this time. Life was a constant struggle for Gu Cheng and Xie Ye in New Zealand, both economically and psychologically. They had achieved their dream, but found the dream was empty. They were dead in exile. When a crisis came in their lives, it was much harder to avoid than if they had been living in a familiar environment.

The life and death of Gu Cheng and Xie Ye are symbolic of the crisis in modern Chinese culture. The choices that Gu Cheng faced as a writer living in modern China - to remain silent, to produce what the government approves of, produce what it does not approve of and suffer the consequences, or go abroad - are typical. Gu Cheng and Xie Ye’s alienation both from their home land and their adopted country is also not unique. A study which looks honestly at Gu Cheng’s life and the milieu in which he was shaped, Communist China, needs to be written. It must analyse his life and work without stooping to preference and hagiography. Through understanding the meaning of the fate of Gu Cheng and Xie Ye we will come to a deeper understanding of the malaise affecting China’s proud two thousand year old cultural history in the modern era.

If I have any regrets in this life, it is this. I don’t have much to regret, but if someone asks me, I will say it again. I regret this: I left my island, my home, my destiny. I should die there. I should believe in nothing, want nothing, like a mad tree that does not move no matter how powerful the storm is. It stands there until it is broken. It won’t float around in the sea or in dirty mud. -- Gu Cheng

No-one can transcend the prison of lies
No-one knows who is a corpse
When the sun explodes
Sleeves are empty
Everywhere is a foreign land
Death gives no refuge.

Language, posted 16 Apr 2009 at 02:33 UTC by ekashp » (Journeyer)

I've studied communications and programming languages for over 30 years now. I can't communicate with her, though I've tried so hard for so many years.

At least the computers seem to understand me.

God bless all.

- Eugene

ekashp, you need to listen to the master that is ME, posted 18 Mar 2010 at 11:41 UTC by badvogato » (Master)

no reply is expected.
