Synopsis of Cognition Lunch Talk by Michael Solomon

Here is a summary by Michael Solomon of his talk last week at the YHouse-IAS Cognition Lunch Salon.

YHouse is organizing a series of informal Lunch Salon events at the Institute for Advanced Study, at 1 Einstein Drive, Princeton, NJ. The events were held on Thursdays during September, October and November 2016, and will continue during February and March 2017. They typically feature a short talk followed by ample discussion time. More information about these talks can be found on the Cognition Lunch Salon page.

Taking part in the meeting were: Ayako Fukui, David Fergusson, Ed Turner, Gene Miller, Michael Solomon, Monica Manolescu, Olaf Witkowski, Piet Hut, Susan Schneider and Yuko Ishihara.


“After we introduced ourselves, Ed gave a brief summary of NASA’s announcement yesterday of the discovery of 7 planets in the TRAPPIST-1 exoplanetary system. The star is at the minimal mass to support hydrogen fusion and thus be classified as a star. The seven planets orbiting it are all roughly the size of Earth, are composed mostly of rock rather than gas, and receive stellar radiation ranging from about four times what Earth receives from our Sun, for the innermost planet, to about one fourth, for the outermost. At least three of the planets could have liquid water. The planets are so close together that one would be seen from an adjacent planet as up to about twice the size of the Moon as seen from Earth, and their masses cause perturbations in the orbits of the other planets. Ed cautioned us to remain skeptical of news reports of possible life on these planets, as there are no such data. He did note that the relatively small distance of 40 light years from us suggests that there are many more similar planetary systems in the galaxy. He confirmed that all of this was derived from examination of light as the planets passed in front of the star and from the star’s motion induced by the orbiting planets.

Michael Solomon then delivered a talk entitled, “What is the Moral Standing of Machines That Can Think?” He began by noting that a role of Bioethics is to anticipate future issues and to plan ahead. Don’t wait for someone to clone a sheep before considering the implications of genetic engineering. The coming singularity, when AI exceeds human capability, is one such issue. He suggested that Something has Moral Standing when Its Interests must be considered as having Intrinsic Worth. Unpacking that, ‘Interests’ involves a sense of self and what is good for it; ‘Intrinsic’ means valued as an end in itself; and ‘Worth’ means not only Value but a requirement of Moral consideration. To clarify he asked, “What would you save from a fire, the Mona Lisa or an infant?” If you saved the painting, it would not be because the painting’s continued existence is beneficial to the painting. Would you save the Mona Lisa or a bird? He went on to discuss what Moral Standing Provides, and suggested one answer would be the four principles described by Beauchamp and Childress: Autonomy, Beneficence, Non-Maleficence, and Justice. What Provides Moral Standing could be being Alive, or being potentially alive, as in fetuses. We recognize greater degrees of obligation to those that resemble ourselves, as in cute puppies but not lobsters, and to those we recognize as our family or race or culture but not to “Others”. Additional grounds for Moral Standing might be Sentience. For Utilitarians, the grounds would be the capacity to suffer. Can a Machine suffer? Many would argue that Sophisticated Cognitive Capacity, i.e. a sense of enduring self, or the capacity to care, demands Moral Standing. Additional grounds might include the capacity to develop sophisticated cognitive capacities, or being a member of a species with cognitive capacity (do you lose standing if you develop dementia or become vegetative?).
For many, family ties as in kinship or community ties as in religious communities are grounds, but we are all members of multiple communities and families. Does standing require being Natural? Do we owe standing to harmony or beauty, as in the rain forest? Considering Sentience, he discussed sensation (sensory organs) and perception (visual or auditory cortex processing), but focused on the HeteroModal Cortex: brain areas that receive input from unimodal areas and from other heteromodal areas to create a coherent model of the world. Anton’s syndrome, the Rubber Hand illusion, and phantom limbs are possible examples of the activity of the heteromodal cortex. If consciousness is an emergent process and is the result of information processing by neural activity, then there is reason to believe that machines and AI could become conscious. But what the quality of that subjective experience would be like for the machine (What is it like to be a bat?) is unknown. According to Maurice Conti, we are living in the Augmented Age, when human capabilities will be enhanced by Computational systems that help you Think, by robotic systems that help you Make, and by Digital nervous systems that connect you to the world. Perhaps the combination of the intuitive capabilities of humans with the computational abilities of machines could be more valuable, productive, and adaptable than either alone. Perhaps the future of evolution will not be biologic or non-biologic, but cyborg. Returning to Ethics, he noted that determinations based on Big Data are Biased in many ways. Some of these ways we can identify, such as recognizing that using zip codes is a surrogate for racial and socio-economic factors. But how machines learn remains a black box. We cannot know or even discover how Deep Learning systems reach a conclusion. Can we delegate ethics decisions to a process biased in ways we cannot even identify?
In the words of Zeynep Tufekci, “We cannot outsource our moral responsibilities to machines. Artificial Intelligence does not give us a ‘get out of ethics free’ card.” So, if a Role of Bio-Ethics is to try to anticipate the future and to develop ways to apply fundamental ethical concepts that can persist and remain relevant, then the coming Singularity, when A.I. surpasses human capability, will provide some real challenges ahead.

The ensuing discussion began with Gene Miller suggesting that the premise of the talk rested on a poorly defined question: “What is thinking?” If we cannot agree on what thinking is, then we cannot define machines that think. No response was offered (or expected).
Susan Schneider emphasized that on the issue of outsourcing moral responsibility, we may not be able to understand explanations for determinations made by A.I. even if explanations are offered. Rather than A.I. solving problems humans had not been able to solve but that became clear once a solution was demonstrated, the process by which the solution was reached might be incomprehensible to us.
Ed Turner noted that it seemed that most of the concerns we have for present human ethics issues can be applied to A.I. But determining what obligations we may have towards machines with “sophisticated cognitive capacities” may not be equivalent to obligations we have towards other people.
Piet Hut felt that Cyborgs would almost surely be the future. He shared his recent experience of getting a smart phone and succumbing to the seduction of checking it much more often than he considered necessary. Michael noted that while no one is concerned about professional athletes wearing contact lenses, we consider performance enhancing drugs very differently. Using erythropoietin to increase oxygen carrying capacity is not allowed, but training at high altitude is fine. There are strict rules in track and field for the allowed length of prostheses for amputees based on torso size, yet disabled athletes are now competing with able-bodied runners in Olympic track events.
Piet also said that while considering the nature of consciousness and related questions that may not be answerable at present is interesting for the first ten or so discussions, repeating the same arguments after the fortieth time is much less interesting. He hopes that whatever informal discussions occur once there is a YHouse coffee or wine bar can avoid those repetitions.
Olaf Witkowski, referring to his study of origins of cognition, noted that the use of Writing made us all cyborgs and without written language science and learning would not have been possible. He also offered that many in the A.I. communities see the computations and information processing of A.I. as thinking.
Susan raised the question of how we will treat the “Naturals”, like the Amish, when augmentations become the norm. Consider having to choose from a menu in a cosmetic neural implant store in the future the way we order from Starbucks now. She also predicted that there will be a plurality of neural realities when we can choose to enhance memory or even obtain a novel sensory input, like the electric field sensors that some fish have but humans lack. How will the political process accommodate this new diversity?
Ed reminded us that there are still Hunter Gatherer societies on the planet, and not everyone will have access to these so-called enhancements. Ed also noted that we have some present uses for machines precisely because they lack moral standing. Robots are being used to clean up the damage in nuclear reactors in Japan, or to defuse bombs, or to clear mine fields because the robots cannot be harmed.
David Fergusson suggested an alternative way to consider the saving-the-Mona-Lisa-or-an-infant example. Would you save the Mona Lisa or your bicycle? He suggested there could be a distinct ethics for artifacts, or for inanimate objects that are still worthy of respect. Equally important, he pointed out the issue of crossover: if you treat your machines without respect, you may be more likely to treat friends or others the same way. The consequences for ourselves of not acknowledging moral standing may be more significant than the effects on those entities we deny respect.
Olaf Witkowski suggested that there might be ways to categorize values as quantified computations based on the number of bits of data involved. He did not elaborate, but may have been referring to IIT, Integrated Information Theory, which (to my very limited understanding) proposes equations by which degrees of consciousness may be attributed to a machine based on the number of bits processed in a given time. It is not clear that IIT offers a measure useful for ethics determinations.
When asked how he might reconcile the existence of a Soul or Atman, or some other form of larger consciousness to which he referred in his Rubin Museum talk, with the purely scientific biases of materialists, Piet responded that the problem is trying to fit new information into the existing framework. Instead we will need to fit the existing information into a new framework.
Ed Turner asked whether there is generally consensus when difficult clinical ethics questions reach our committee or other ethics boards. Michael replied that his most common concern is that we reach consensus more often than we should and that this reflects a limit of diversity. I am sure that if we presented some of the same cases to a committee in Missouri we would often get very different recommendations.
The presentation and discussion ended at this point, but it is my hope that the inclusion of ethics in our curriculum will add a dimension worth considering and that the presentation will be a small step in that direction.”

Respectfully submitted,
Michael Solomon, MD

