Speaker: Olaf Witkowski
Title: Cognitive Life As Information Flows
Location: WeWork South, New York, NY, 8th floor conference room
Abstract: Information is found all across the domain of physics, seemingly retaining all its properties regardless of the media in which it is instantiated. Substrate-independence and interoperability made possible symbolic representations such as the genetic code, allowing life to develop upon them. The next transition closed the loop by producing organisms increasingly aware of their environment. This eventually led to human life, capable of learning the underlying principles that created it, with the invention of language and science.
I focus my research on collective cognition, which one can see as the informational software to life's physical hardware. If life can be formulated computationally as the search for sources of free energy in an environment in order to maintain its own existence, then cognition is better understood as finding efficient encodings and algorithms to make this search likely to succeed. The clef de voûte in my work is to consider cognitive flows as the abstract computation of life, with the purpose of making the unlikely likely for the sake of its preservation.
Traditional top-down approaches to cognition infamously introduce black boxes that fail to explain underlying mechanisms and lack sufficient detail to validate models. Instead, I propose a fully bottom-up model to characterize the pathways leading artificial organisms to develop cognitive capabilities, allowing for a rigorous mathematical framing of the "invisible reality" of cognitive life in the universe.
Presentation: Olaf works at the Institute for Advanced Study in Princeton and at the Earth-Life Science Institute in Tokyo, Japan, on the origin of life, the emergence of intelligence, and artificial life, using agent-based modeling, biology and cognitive science, machine learning and neural networks, and information, game, and complexity theory.
Olaf used videos and computer animations throughout his presentation to illustrate his ideas. He began with DNA replication and repair as an example of information processing in biological life. In artificial life, he demonstrated a computer program called the Game of Life. This is a grid with rules that determine whether a specific square on the grid is on or off (black or white) based on the status of the adjacent squares. He outlined four rules for an illustrative program and showed that the sequence creates “gliders”, patterns that give the appearance of movement across the grid. There are also “glider guns”, from which gliders originate, that emerge as the sequence progresses over time. Patterns can become more robust, and you can watch what new patterns emerge from perturbations added to the program. He also showed examples of programs that can make duplicates of patterns, as in replication, and programs that can simulate locomotion in two or three dimensions by making some of the squares “squishable”. The models can simulate chemistry, molecules, or dividing cells. Similar technology has been used to create computer games and graphics.
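The Game of Life dynamics described above can be sketched in a few lines of code. This is a generic illustration of Conway's rules (not Olaf's own software): a cell survives with two or three live neighbors, a dead cell is born with exactly three, and the famous "glider" reappears shifted diagonally every four generations.

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation. `live` is a set of (x, y) live cells."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        # Birth: a dead cell with exactly 3 live neighbors comes alive.
        # Survival: a live cell with 2 or 3 live neighbors stays alive.
        if n == 3 or (n == 2 and cell in live)
    }

# A glider; after 4 generations the same shape reappears shifted by (1, 1),
# which is the "appearance of movement" over the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # True: the glider has "moved" diagonally
```

Representing the board as a sparse set of live cells keeps the grid unbounded, which is convenient for watching gliders travel indefinitely.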
He said he would discuss three concerns: I) Life and Intelligence II) Distributed Intelligence III) Future of Intelligence.
If you look at a monkey you can determine whether the monkey has intelligence. In fact, monkeys appear to be quite intelligent, as seen in a video he showed of a monkey tapping numbers on a computer screen. He then showed a fascinating video of an octopus freeing itself from inside a glass jar by turning the jar while holding the lid still. Recently, DeepMind published an algorithm for playing the centuries-old board game of Go. This algorithm was unique in that once the rules of the game were programmed, the machine taught itself the strategy and could defeat the best human Go masters as well as previous Go-playing machines that had learned by observing humans playing Go.
He then looked at Life as Computation. He described three phases of this computation: A) Perception, used to predict and innovate; B) Action, used to transform the unlikely into the likely; and C) Result, self-preservation of the entity in the given environment. As an example, he mentioned E. coli bacteria, which have the capacity to react to their environment by searching for chemical gradients and moving toward food sources. Is this a form of intelligence? In life and in artificial life, Darwinian evolution can be modeled by adding perturbations to a sequence in the program that result in some members surviving and others not. The beauty of computer modeling is that you can measure the effects of each perturbation on self-preservation. In the computer simulation that he showed, only one quarter of the entities survived regardless of the perturbation.
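The E. coli example maps cleanly onto the perception-action-result loop. The following is a hypothetical sketch (not from the talk) of run-and-tumble chemotaxis: the agent senses whether the chemical concentration improved since its last step (perception), keeps its heading if so and tumbles to a random direction if not (action), and thereby tends to climb the food gradient (self-preservation).

```python
import random

def concentration(x, y):
    # Food source at the origin; concentration falls off with distance.
    return 1.0 / (1.0 + x * x + y * y)

def run_and_tumble(steps=2000, rng=None):
    """Simple E. coli-style gradient climber on a 2D plane."""
    rng = rng or random.Random(1)
    x, y = 10.0, 10.0                      # start far from the food source
    dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    last = concentration(x, y)
    for _ in range(steps):
        x, y = x + dx * 0.1, y + dy * 0.1  # "run" in the current direction
        now = concentration(x, y)
        if now < last:                     # perception: did things get worse?
            # "tumble": pick a new random direction
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        last = now
    return x, y

x, y = run_and_tumble()
# With this seed the agent drifts toward the origin, so the final
# concentration exceeds the starting concentration at (10, 10).
print(concentration(x, y) > concentration(10.0, 10.0))
```

Because the only "memory" is the last concentration reading, this agent illustrates how a very small amount of information processing suffices for gradient-seeking behavior.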
He then moved on to a discussion of Distributed Information, introducing the topic with a photo of Claude E. Shannon, the father of information theory. The human brain is an example of distributed information, as each neuron provides input and output for other connected neurons to process information. Other examples include the transmission of information in a beehive or an ant colony using pheromones, or the way a flock of starlings can keep an ordered formation for migration and for avoiding predators based only on each bird keeping the same position relative to its neighbors in the flock. Shannon described methods of transferring information, for example compressing and then re-expanding data transmitted over a telephone line, and fathered the field of information dynamics. Olaf referred to his own papers, published in 2011 and in 2015, and a paper currently in preparation. In these publications, a source (information) goes to a transmitter and becomes a signal in transmission (a message). Noise is introduced in the transmission process. This becomes a received signal when it reaches a receiver at another destination (the reconstructed message). In Olaf’s work, he looks at what happens when the sender and receiver are different neural networks. Once again, by running the simulations on a computer, he can measure and evaluate the effects of any perturbations in the system.
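Shannon's source-transmitter-noise-receiver picture can be made concrete with a textbook example (a generic illustration, not Olaf's model): a binary symmetric channel that flips each transmitted bit with probability p. Shannon's capacity formula C = 1 - H(p) bounds how much information survives per bit sent.

```python
import math
import random

def binary_entropy(p):
    """H(p) in bits; zero when p is 0 or 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def transmit(bits, flip_prob, rng):
    """Noisy channel: each bit is flipped independently with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

rng = random.Random(0)
message = [rng.randint(0, 1) for _ in range(10_000)]   # source
received = transmit(message, flip_prob=0.1, rng=rng)   # signal + noise

errors = sum(m != r for m, r in zip(message, received))
capacity = 1 - binary_entropy(0.1)   # ≈ 0.531 bits per channel use
print(errors / len(message))         # empirically close to the 0.1 flip rate
print(round(capacity, 3))
```

Replacing the fixed flip rule with two trained neural networks at the sender and receiver ends is, in spirit, the setting the talk describes, and the same error-counting measurement applies.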
As for the Future of Intelligence, the third topic he is discussing, he concludes we will need to befriend each other to make the best use of information. In ant colonies, sometimes the ants follow pheromone signals into a death spiral. In biology, some parasites (such as Schistosoma) can have controlling influence on many other species as the worm passes through its life cycle in varying hosts. There are technologies that may reduce our own capabilities as we come to rely on them. Smart phones may be one such example as we rely less on our own memories. But there are other technologies that can expand our intelligence and can lead to unexpected progress. The introduction of the abacus was one such example. One goal of YHouse is to expand our collective intelligence, not to shrink it.
In conclusion, he has discussed Life as collective computation. He has shown that computer simulations can be better than experimenting with living entities. Introducing constructive misunderstanding (in the form of noise and perturbations in simulations) can help us to expand and not shrink our intelligence.
We opened the discussion for questions here.
Q: A woman had heard of bacteria that send electric signals to other bacteria to communicate. Is he aware of that?
A: He looks only at bits on computers. He measures how you can use information to optimize your own survival.
Q: Life without consciousness is not life. Consciousness may not be computerized. What is your opinion?
A: Looking carefully into the Game of Life, even in these systems you can see forms that act as Agents. Some of these agents can observe the behavior of other agents. If such an agent can observe itself (receive data about itself just as it processes data about its neighbors), is that consciousness? He can see what it is to be an agent by looking at the equations of data in and data out.
Q: The systems he has shown are low entropy. But the higher the intelligence, the higher the entropy.
A: Olaf knows the literature on this – the Maximum Entropy Principle – and thinks although it starts from a nice simple idea, it has not brought any helpful answer to the debate. One must start with simpler models to understand the nature of how intelligence naturally emerges in systems.
Q: You cannot discuss evolution without discussing the environment. To see if consciousness is high or low entropy you must look at the environment.
A: I’m not sure we get the right answers by looking at entropy.
Q: What is the lower end of the rules needed for self-replication?
A: Probably under 1 megabyte for the 3D replication model shown earlier, and under 1 kilobyte for the simpler ones. These sizes, though, are highly dependent on the physical rules of the simulation being considered, as these may themselves hard-code high-level macros, making it much shorter to code complex functions.
Q: You defined life as computation. What distinguishes animate from inanimate in computation?
A: The short answer is nobody really knows. There is a long debate in the origin of life literature. Some start from the premise of life needing to be chemical, which seems wrong. One definition is given by Piet Hut, founder of YHouse: Autonomous agents in complex systems. There is no definite answer for now, but we are finding new ways of identifying living systems’ properties, and demonstrating the necessary and sufficient conditions for their emergence.
Q: Is the foundation of life self-preservation?
A: The question is like a mathematical one. You start with a conjecture and work with that to get a theorem.
Q: Regarding awareness in artificial life: have you seen systems that became self-aware?
A: The short answer is no, as very little is known or agreed upon about what awareness means. But he could see that happening as agents get information about themselves and can predict their own future states. This may have meta variables, like awareness of anger or the identification of other emotions in general, which in turn can influence behavior.
Q: In your work with Guttenberg in preparation, you mentioned noise may lead to a grammar that allows you to summarize the received signal with fewer bits. This is by lumping energies to form sentence structures. Is the emergent grammar very dependent on the noise? If you remove the noise, will the grammar change?
A: We haven’t tried that but it would be interesting.
Q: Gliders are not really moving, but replicating at a distance.
A: Yes.
Q: What is the purpose of this work?
A: Well, for one thing I find it fun. Also, given life, it would be a pity not to try to understand life as best we can. Science involves advancing collective knowledge and collective intelligence. Some people in the field do this research with the purpose of uploading their persona to a replicable machine-based framework, as they see this as a possibility within the time frame currently available to humans.
Q: Can you distinguish agent from environment?
A: There exist mathematically well-defined concepts such as Autonomy. Can you predict your future state? That is agency.
Q: Can the environment be an agent?
A: Absolutely, environment can be an agent.
Q: Information can be disembodied and exist outside the material world. Can life be disembodied?
A: Yes. Eric Smith speaks of life as the biosphere. Information in Shannon’s theory is not localized in space, but space corresponds to information. Information is always About something. In this body of work it is best to be careful to study Relevant Information – roughly, information about something you care about.
Q: Can a life form transition from one medium to another?
A: Yes. The internet was created by us, but we may not be the same life on-line.
Q: That is not the same as downloading yourself to a computer.
A: One can imagine scenarios in which when life originated it was made of matter now considered inorganic, before transforming to the medium used by the first living cells, life as we know it everywhere on Earth.
Q: It sounds like an organism must be embodied.
A: All we know about scientifically is matter and energy.
We ended the discussion at this time.
Respectfully,
Michael J. Solomon, MD