We are now able to engineer machines to perform feats that, only a few short years ago, were thought to be very distant possibilities in an imagined future. Self-driving vehicles, medical advances that outstrip the diagnostic abilities of the most able and experienced physicians, and robots capable of accomplishing tasks of great complexity are some examples. These futuristic achievements resulted from a breakthrough in how to program computers to perform tasks.
Computers are programmed using algorithms, which are simply formulas that define the organization and order for systematically performing the operations needed to execute a task. These can, of course, become very complex as the tasks become more complex, but they are typically also rigid; once programmed, the sequencing cannot change unless the programmers make modifications. The learning, that is, altering the organization based on feedback from the results, is done by the programmers.
All this changed with the advent of “machine intelligence”, where learning occurs within the machine itself. The algorithms responsible for machine learning are not completely rigid; they can self-modify based on the results of their actions. Exposure to many, many situations creates many, many different outcomes that provide feedback, generating iterative adjustments (learning) that refine and perfect the performance. Machines become experts, capable of discriminations and decisions that can surpass the best human experts.1
The human brain is the model for how machine learning is programmed. The brain is composed of billions of neurons knitted together in complex networks. Each neuron operates like an on-off switch; it is either “on”, firing an electrical impulse, or “off”. Firing occurs when the electrical potential between neurons reaches a critical value, generating a spark that jumps the gap between them. This becomes a link in a neural pathway that is part of an incomprehensibly vast web of networks. The networks are constantly changing as circumstances change. Habits are established neural pathways that are activated when we confront familiar circumstances. Learning occurs when feedback from familiar situations is sufficiently different from what was expected, prompting alteration of the response, changing the electrical potentials between neurons, and thus changing the neural networks.
Machine-learning networks are composed of silicon, rather than neural, on-off switches, and the networks are very simple, not infinitely complex; but in both, feedback changes the firing potentials between switches, which alters the networks, which alters the responses.2 Simple, individual components capable of only the most elementary and inflexible on-off responses, when combined into complex networks of coordinated action, give rise to a system capable of solving impossibly complex tasks, and of self-correcting as it goes. Thus occurs a promethean leap from silicon and neurons to intelligence and mind.3
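The feedback loop described above can be sketched in a few lines of Python. The example below is a deliberately minimal illustration (a single “perceptron” unit, not one of the deep networks discussed here): the unit fires when its weighted inputs cross a threshold, and feedback from its errors nudges the weights until its responses are correct. All names and numbers are illustrative choices, not taken from the text.

```python
def fires(weights, bias, inputs):
    """The unit is 'on' (1) when the weighted input crosses the threshold, else 'off' (0)."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(examples, epochs=20, rate=0.1):
    """Iteratively adjust weights from feedback (the classic perceptron rule)."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            # Feedback: the difference between the expected and actual response.
            error = target - fires(weights, bias, inputs)
            # Learning: nudge the connection strengths in the direction of the error.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Repeated exposure to examples of the logical AND function.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(examples)
print([fires(w, b, x) for x, _ in examples])  # prints [0, 0, 0, 1]
```

No programmer specifies the final weights; they emerge from the feedback loop, which is the essence of the shift from rigid algorithms to machine learning.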
What is this? It is not a sand hill. It is a termite mound. It is also housing for, and integral to, a mind; a hive mind. Each individual termite can perform simple functions, certainly more complex than an on-off switch, but quite limited in flexibility and function. The biology of the termite (reflexes, nervous system, exoskeleton, etc.) constrains the scope and functioning of individuals but, most importantly, also encompasses the ability to communicate and cooperate with other termites. This is a critical component for survival, for like the on-off switches of computers and brains, individuals become part of networks of collaborative action, which give rise to a hive mind.
This mind is capable of intelligent actions, as evidenced in the termite mound itself. The structure is among the largest constructed by any non-human species. It acts as a huge lung, allowing the entire colony to inhale oxygen and exhale carbon dioxide; houses underground cultivated gardens and specialized chambers; and is under continual alteration to adjust to changes in weather and humidity, keeping a constant livable environment for the inhabitants. Single individuals are incapable of learning and lack memory. A hive is capable of both, in very complex ways: foraging widely for food and bringing it back to the hive, adjusting to changes in the environment, developing creative solutions to the problems encountered.4 The hive is made possible by the individual’s biological capacity to establish collaborative networks. The survival of the individual is dependent on the survival of the hive.
What is this? It is not a metal and glass hill. It is a human mound. It is also housing for, and integral to, a mind; a hive mind. Each individual is certainly more complex than an on-off switch or a termite. Each possesses a mind capable of intelligent, creative actions and adaptive responses. Despite individual sophistication, however, they cannot survive independent of the hive.5 Biology (reflexes, nervous system, endoskeleton, etc.) constrains the scope and adaptability of individuals, but, most importantly, also encompasses the ability to communicate and cooperate with others. This is critical for survival, and like the on-off “switches” of computers and brains and the biology of termites, allows individuals to become part of networks of collaborative action that give rise to a hive mind. The survival of the individual is dependent on the survival of the hive. One becomes the many. The many protect the one.
The hive mind, that is, the collective capacity to understand and undertake projects that allow the human hive to adapt to changing demands and conditions and thus ensure the welfare of the collective, is beyond what any one of us could possibly conceive or execute. Its workings also typically are hidden from view, in the background, as we attend to the foreground that preoccupies our daily lives. We drive to the market, unaware and unappreciative that every single act is made possible through the hive mind.6 What single individual could build a car from scratch, “scratch” here meaning produce even a simple screw needed for the task? Indeed, the human hive mind not only encompasses the hum and buzz of the living, but also resonates with the deeper register of the hum and buzz of the long past: those who learned to make metal from dirt, the physics of the screw, and the machine tools to make a screw, for example.
The Heroic Individual
We Americans are especially blind to the humming significance of the hive mind, as our model of the heroic individual pervades all aspects of our life, from economics, to politics, to psychotherapy. Certainly, individual initiative, determination, intelligence, and adaptability are important attributes that can contribute to our individual accomplishments and fate. Often, however, the model also includes the assumption that the individual is pitted against the world (the collective “they”); that our fate is entirely in our own hands and we are solely responsible for our success or failure; and that the collective is a barrier to achieving success.7
The Heroic Ones and the Many
Crises that threaten the hive, such as pandemics, most forcefully reveal the limitations of any individual, however able, to survive on their own. Our collective welfare and survival, and our individual welfare and survival, are inseparable. And the most heroic individuals are those who are ready to sacrifice their welfare, even their lives, for the collective: health care workers, police, and firefighters, to name but a few. We use the term “heroic” only for those who sacrifice themselves for the greater good. We understand, at a primitive level, that striving that benefits only ourselves is not heroic. It may be admirable, encompassing individual pluck and initiative, but it is not “heroic”. One becomes the many. The many protect the one. The heroic ones protect the many.
- Very significant limitations of machine learning are often lost in the thrall of what can be accomplished. One problem is that what the machine learns is determined and constrained by the examples chosen to prompt learning, so the machine perpetuates the assumptions, biases, and prejudices of the human programmers. The ethics of machine learning is another, related, vitally important issue; questions about whether and when the use of machine intelligence violates fundamental ethical values of justice, individual liberty, and collective welfare must be decided by humans, not machines. The bottom line: we cannot escape ourselves and our imperfections, even as we strive for “perfection”; we program our imperfections into the machines! For a great overview of these issues see: https://uascience.org/lectures/the-promise-and-peril-of-artificial-intelligence/
- These silicon networks are called Deep Neural Networks, underscoring their being modeled after actual biological neural networks.
- Intelligence and mind are not located in the silicon or neurons; they are emergent properties of the networks. And memory is not located in the switches, but in the space between.
- See “Busy Bodies”, by Srinivasan, New Yorker, September 17, 2018. Also, for a fascinating lecture on “distributed intelligence” and a graphic demonstration of how individual termites, lacking memory or learning, contribute through collaboration to a collective ability to learn and remember, see: https://uascience.org/lectures/can-intelligence-be-measured//
- Should you doubt this, consider how long an infant would survive on its own.
- See Pandemic Exposure.
- Ayn Rand is the most influential proponent of this hyper-individualism, and her philosophy has profoundly shaped our politics, economic theory, and assumptions about personal responsibility.