On Thursdays at noon, YHouse holds a lunch meeting at the Institute for Advanced Study in Princeton. The format is a 15-minute informal talk by a speaker, followed by a longer open-ended discussion among the participants, triggered by, but not necessarily confined to, the topic of the talk. To share these discussions, I am posting a synopsis of the weekly meetings.
Speaker: Olaf Witkowski, Earth-Life Science Institute, Tokyo Institute of Technology
Title: Characterizing Cognition as Information Flows
Abstract: (By the Speaker) “Information’s substrate-independence and interoperability property makes possible symbolic representations such as the genetic code, the basis upon which life was able to develop, eventually leading to human societies’ complex cognitive capabilities, such as language, science and technology. In this talk, Dr. Witkowski will argue cognition to be the informational software to life’s physical hardware. If life can be formulated computationally as the search for sources of free energy in an environment in order to maintain its own existence, then cognition is better understood as finding efficient encodings and algorithms to make this search likely to succeed. Cognition then becomes the “abstract computation of life”, with the purpose to make the unlikely likely for the sake of survival. We will show that it can be quantified by well-known as well as new computational tools at the intersection of artificial life, information theory and machine learning.”
Present: Olaf Witkowski, Piet Hut, Ed Turner, Brian Cantwell Smith, Ayako Fukui, Monica Manolescu, Yuko Ishihara, Nicolaas Rupke, Sean Sakamoto, Susan Schneider, David Fergusson, Erik Persson, Naoki Yajima, Liza Solomonova, Roberto Tottoli, Andreas Losch, Giuliano Mori, Fabien Montcher and Michael Solomon (by speaker phone).
Summary:
Olaf opened by saying he would be talking about Cognition from the angle of Information Flows. His interest in the topic began with Language – “How do you connect two minds?” – and with his background in computer science. Information Theory began with the work of Claude Shannon in the 1940s (working at Bell Labs, at the Institute for Advanced Study, and at MIT).
Olaf defined Information operationally: when you look at something (box A) and at something else (box B), Information is how much box A allows you to predict about box B – in formal terms, the mutual information between the two. Olaf said Entropy by itself does not measure information. (In his work on transmitting and compressing data, Shannon defined Shannon Entropy, a measure of the uncertainty of a message based on the number of possible states the data could take.) Information also has Grounding – mutually shared experience and assumptions: “The Difference that makes a Difference”.
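To make the two-box picture concrete, here is a small self-contained illustration (my own toy example, not from the talk) that measures, in bits, how much observing box A lets you predict box B, using Shannon entropy and mutual information:

```python
# Toy sketch: information as prediction between two "boxes" (sequences).
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy, in bits, of an observed sequence."""
    n = len(xs)
    return -sum((c / n) * log2(c / n) for c in Counter(xs).values())

def mutual_information(a, b):
    """I(A;B) = H(A) + H(B) - H(A,B): bits that A tells you about B."""
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

a = [0, 1, 0, 1, 0, 1, 0, 1]
b = [1, 0, 1, 0, 1, 0, 1, 0]  # always the opposite of a
c = [0, 0, 1, 1, 0, 0, 1, 1]  # statistically independent of a

print(mutual_information(a, b))  # 1.0 -- box A fully predicts box B
print(mutual_information(a, c))  # 0.0 -- box A says nothing about box C
```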
Information has meaning and can be used to do something. For example, if you are a gazelle on the savannah and I give you ten bits about the temperature on Mars, you don’t care; but if I give you ten bits about the lion behind you, that is relevant and has meaning. Information has value and can be quantified. In his computer simulations of Life (and sometimes of chemistry), Olaf can define such boxes and track how information is transmitted over time. He argued that in biology it is important to relate this information to survival.
In Tokyo, Olaf looks at the origin of life and of cognition. Life is difficult to define, but Cognition even more so (see the speaker’s abstract above). One aspect of life is self-reproduction. In computer science, a Quine is a program whose execution produces its own source code as output. DNA analogously carries both a code to function and a code to replicate, and some Quines can likewise correct errors in their own replication. Memes are analogous to genes in that memes carry cultural ideas that can be transmitted, and, as in biology, elements of Culture are replicated with corrections. Even when oral traditions are transmitted from generation to generation, the value of the information is preserved: what is saved is not just information but relevant information.
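For readers unfamiliar with quines, here is a minimal Python example (my illustration, not the speaker’s code). Running it prints exactly its own source, the software analogue of self-reproduction:

```python
# A minimal quine: executing this program outputs its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```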
So, when did Life become Cognitive? The Earth formed about 4.6 billion years ago, and life appeared within roughly its first billion years. Some believe cognition appeared when Reflexivity appeared – when two systems mutually affect each other – allowing the capacity to respond to environmental factors. Information coded in cells – the mechanisms for energy production, for cellular functions, and for reproduction – has persisted from the first cells to the present.
Olaf sees not a single emergence but many emergences of cognition. Looking at biology in computational terms, one can identify transitions from chemistry, to single cells with the ability to move in response to signals, to multicellular organisms where intercellular communication provides value for the group. Computational bio-modeling allows you to track how much one thing affects another in the system and to identify behavioral clusters (a sketch of one such measure appears below); this is one way Information in Biology has been understood. Organisms internalize algorithms that allow some information to disappear while preserving other information. Cybernetics shows us how to extract this data from systems to predict what will happen, and the same questions can be viewed from Philosophy as well. At present Olaf is focusing on Reflexivity to understand Cognition using computational bio-modeling.
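As a concrete, simplified sketch of “tracking how much one thing affects another in the system”: the snippet below estimates transfer entropy between two binary time series, a standard measure of directed information flow. It is my own toy illustration of the kind of tool Olaf described, not his actual code:

```python
# Transfer entropy TE(X -> Y): how many bits the past of X adds to
# predicting the next value of Y, beyond what Y's own past provides.
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    triples = Counter()   # counts of (y_next, y_prev, x_prev)
    pairs_yy = Counter()  # counts of (y_next, y_prev)
    pairs_yx = Counter()  # counts of (y_prev, x_prev)
    singles = Counter()   # counts of y_prev
    n = len(x) - 1
    for t in range(n):
        triples[(y[t + 1], y[t], x[t])] += 1
        pairs_yy[(y[t + 1], y[t])] += 1
        pairs_yx[(y[t], x[t])] += 1
        singles[y[t]] += 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pairs_yx[(y0, x0)]            # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += (c / n) * log2(p_cond_full / p_cond_self)
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(10000)]
y = [0] + x[:-1]  # y copies x with a one-step delay

print(transfer_entropy(x, y))  # ~1 bit: information flows from x to y
print(transfer_entropy(y, x))  # ~0 bits: nothing flows back
```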
Discussion began with:
Q: Can you define Life as Information?
A: Olaf replied there is a danger in definitions. You might define life as chemistry that reproduces itself, but he offers Information Theory as a tool rather than as a definition. He emphasized the important point that Information is Substrate-Independent.
Q: Is DNA replication the key?
A: Olaf answered, No. Robustness of the system is the key. (Robust systems “maintain their state and functions against external and internal perturbations”, which is essential for a biological system to survive.) Mutualism is also key – Can systems help one another by transmitting information?
Q: A reference was made to Hume and information transmission.
A: Olaf looks at emergence through random changes. Shake the box and see what happens.
Q: How do you determine what information is significant for prediction?
A: Information can be measured and valued. For example, there are common threads in diverse religious traditions. Valuable information leads to self-preservation.
Q: With regard to the lion behind the gazelle, if information is about something, how can you quantify the information?
A: Information is not relevant in itself; it must be relevant for something. You can see how knowing about the lion leads to survival and preservation of the pattern.
Q: Ed pointed out that Information is Observer dependent. It is hard to find mathematical laws that are observer independent. So the value of information changes.
A: This is a good point. The value of information is observer-dependent, and information theory can account for that dependence.
Q: Are predictions good because they are True or because they are Probable?
A: Good because they predicted what has already happened. Olaf can rewind and replay the tape and get different outcomes; you have already seen the outcomes when you quantify. This is not the same as machine learning, where you look at probability.
Q: Shannon’s Information Theory is objective. If the entropy of a data set is twelve bits, you can encode it in twelve bits.
A: If you arrange the boxes in your system in some way, you can manipulate the system objectively. The system is Finite and Deterministic. The “entropy” reflects the number of states the system can be in – for equally likely states, it is the base-2 logarithm of that number – so every state is limited and can be measured as bearing a definite amount of information.
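As a quick worked example of that last point (my arithmetic, not from the discussion): a finite system with 4096 equally likely states carries log2(4096) = 12 bits, which is exactly the sense of the twelve-bit question above.

```python
# Entropy of a uniform distribution over N states is log2(N) bits.
from math import log2
print(log2(4096))  # 12.0
```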
Q: Susan asked, “Are you working within Shannon’s framework or some other theory?”
A: Olaf uses Shannon’s but Shannon is not always clear, so he extends Shannon’s ideas such as entropy to add robustness.
Q: Ed asked, “There are simple life forms that have sensation but no representation that could be considered cognition. So, is cognition purposeful?”
A: Olaf looks at the role of cognition in life. Specifically, he looks at the maximum rate at which molecular machines can process information to use the information for self-preservation.
Q: Regarding the “Difference that makes a Difference”: how do you define when information becomes useful? Differences cannot be determined in advance.
A: Olaf agreed.
Q: Michael suggested that it can be dangerous to assign Purpose to discussions of evolution. The mechanisms of evolution use random changes in individuals within a specific environmental niche to select for preservation of the gene pools of populations. Purpose is not necessarily inherent.
A: Aboutness, rather than purpose, is the better term; aboutness refers to efficiency, which differs from what purpose implies.
Q: Are you measuring the amount of information or the content of the information?
A: Olaf agreed that both matter: you must consider the amount of information and also the quality relevant for a specific meaning, i.e. viability.
Q: But content may not be measurable objectively.
A: That is an interesting thought, but Olaf thinks you may be able to measure content on a Meta-level. This is related to how you can compress information, as the sketch below illustrates.
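The compression remark can be made tangible with a standard trick (my example, not Olaf’s method): the compressed size of a message is a rough, objective proxy for its information content, so redundant content shrinks far more than random-looking content:

```python
# Compressed size as a rough proxy for information content.
import os
import zlib

structured = b"lion " * 200    # 1000 highly redundant bytes
random_ish = os.urandom(1000)  # 1000 incompressible bytes

print(len(zlib.compress(structured)))  # tiny: the pattern compresses away
print(len(zlib.compress(random_ish)))  # ~1000+: no pattern to exploit
```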
Q: Susan referred to the Theory of Thought Content in philosophy which demarcates the content of the system from the amount of information.
A: Relevance depends on how the information is Grounded, i.e. how you share context, and that can be quantified.
Q: Ed suggested there is nothing compelling about survival as a basis for information value, rather than, say, beauty.
A: Olaf cares about the persistence of patterns and of identities – he is not the same self he was ten years ago. He monitors how patterns persist and change over time in response to perturbations of the system.
Q: Shannon used the Ratio of Dependence. How the information is compressed depends on how much you know about the system.
A: You can get information from observing systems. Olaf’s PhD thesis was about observing birds to get useful information.
The presentation and discussion ended here.
Respectfully,
Michael Solomon