You've stumbled upon a blog about simulation, complexity, economics, evolution and whatever other shiny thoughts currently occupy my attention. I'm a compulsive note taker, both graphical and textual, from science podcasts, magazines, journals, books, videos and whatever else I may encounter. Posts will come and go to prevent this page from becoming too stale.
Additional scribblings are freely available on Apple Books: Notes from High-Performance Driving Schools is a distillation of many years of classroom instruction at car club driving events. On Cooperation is my attempt at a true hypertext ebook. Selected topics (shown in purple) from this ebook have been added to this blog.
© 2002-24, G. Allen Pugh under a Creative Commons license.
NetLogo software by Uri Wilensky. Accordion script by jQuery.
Adapted from a model popularized by Brian Hayes in American Scientist. Agents meet randomly to wager a small amount on the outcome of a coin flip. In this simple model the distribution of agent wealth becomes increasingly skewed, asymptotically leading to all the wealth being held by one agent.
Two methods of taxation (and subsequent wealth redistribution) are examined. One is to set aside a percentage of each transaction between agents - a sort of excise or sales tax. The second is to tax those with wealth above some threshold value, somewhat like a property tax. Wealth redistribution can be done in two ways: equally divided among all agents, or with a larger share going to poor agents.
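As a rough illustration of the dynamic - not the working NetLogo model linked below - here is a minimal Python sketch of the wager-and-tax loop. The stake size, the flat transaction-tax rate, and the equal-share rebate are illustrative assumptions.

```python
import random

def simulate(n_agents=200, rounds=20_000, stake=1.0, tax_rate=0.0, seed=0):
    """Random pairwise coin-flip wagers, with an optional per-transaction tax
    that is redistributed equally among all agents."""
    rng = random.Random(seed)
    wealth = [100.0] * n_agents
    for _ in range(rounds):
        a, b = rng.sample(range(n_agents), 2)
        bet = min(stake, wealth[a], wealth[b])      # cannot wager more than you have
        winner, loser = (a, b) if rng.random() < 0.5 else (b, a)
        tax = tax_rate * bet
        wealth[winner] += bet - tax
        wealth[loser] -= bet
        rebate = tax / n_agents                     # equal redistribution
        wealth = [w + rebate for w in wealth]
    return sorted(wealth)

untaxed = simulate(tax_rate=0.0)
taxed = simulate(tax_rate=0.1)
print("share held by richest 10%:",
      round(sum(untaxed[-20:]) / sum(untaxed), 2), "untaxed vs",
      round(sum(taxed[-20:]) / sum(taxed), 2), "taxed")
```

With the tax switched on, the top-decile share should come out noticeably lower, which is the redistributive effect the model is meant to explore.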
You can find the working model here, including further details and references.
Even before C. Northcote Parkinson published his law (work expands to fill the time available for its completion), we suspected as much. Klimek, Hanel, and Thurner posted an article on arXiv which details some simulated results confirming these suspicions (Parkinson's Law Quantified: Three Investigations on Bureaucratic Inefficiency, 2008). These ideas inspired a simulation.
A network may be used to model a committee of arbitrary size, with each node representing a participant. Each node was connected to exactly two others (chosen at random). These connections were not bidirectional. For example, participant A might consult the opinions of participants C and G when modifying its own opinion, but might in turn influence the opinions of two different participants. Participant states (opinions) were initially set to either 0 or 1, and with each round each participant would change its opinion to match the majority of its subgroup. Participant states were updated in sequence at each round.
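A rough Python reconstruction of the committee described above. The 100-round cutoff comes from the text; the assumption that each participant's subgroup includes its own opinion (which avoids ties in a group of three) is mine.

```python
import random

def committee_converges(size, max_rounds=100, rng=None):
    """Each participant consults two randomly chosen others and adopts the
    majority opinion of that three-member subgroup (self plus two sources)."""
    rng = rng or random.Random()
    sources = [rng.sample([j for j in range(size) if j != i], 2) for i in range(size)]
    opinion = [rng.randint(0, 1) for _ in range(size)]
    for _ in range(max_rounds):
        for i in range(size):                       # opinions updated in sequence
            votes = opinion[i] + opinion[sources[i][0]] + opinion[sources[i][1]]
            opinion[i] = 1 if votes >= 2 else 0
        if len(set(opinion)) == 1:                  # unanimous consensus reached
            return True
    return False

for size in (5, 10, 20, 40):
    rng = random.Random(42)
    failures = sum(not committee_converges(size, rng=rng) for _ in range(200))
    print(f"committee of {size:2d}: failed to converge in {failures / 2:.0f}% of runs")
```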
As the size of the committee grows, the likelihood it will fail to reach a unanimous consensus (within 100 rounds) grows rapidly. The implication is that it is wise to keep committees small and to reconstitute them should they go more than a few rounds without a conclusion.
Recently, Wu, Wang and Evans have demonstrated another difference in the size of teams: that large scientific teams develop and small teams disrupt.
The famous bell-shaped (normal) curve formulated by Gauss is based on the assumption that all of the variables are independent (uncorrelated). But information flow in networks is neither uniform nor instantaneous, so the normal distribution is generally inappropriate. Instead, distributions describing networks often have longer tails. So the likelihood of an extreme event is much greater.
In The (mis)Behavior of Markets, Mandelbrot and Hudson observed that market deviations do not follow the normal curve, and in fact the spread is considerably wider. They go on to describe some fractal-based statistical measures to describe this behavior. Mark Buchanan (Why economic theory is out of whack, New Scientist, July 2008) describes the work of Sornette and Harras, which points to a possible mechanism. The model described assumes that agents are not fully rational (as demanded by traditional economic theory) and base their actions on a combination of rational and network approaches. That is, they invest using information available from markets and the behavior of those with whom they network.
In 2004, Farmer proposed (What really causes large price changes?) the source of variability was due to the structure of a market's limit order book. Many orders for stock are specified with a limit. Consider a stock currently selling for $13. A sell order might be placed for 10 shares when the price rises to $14, or a bid order placed to buy when the price drops to $12. The problem is that orders do not occupy every possible slot on the price continuum. A bid order for 100 shares might have to be filled with 40 shares at $13 (the most available) and 50 shares at $14, and finally 10 shares at $17, yielding a mean share price of $13.90. These discontinuities in price can yield huge swings in markets.
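The fill arithmetic is easy to reproduce. A small sketch, using the hypothetical book from the example above:

```python
def average_fill_price(book, quantity):
    """Walk up the ask side of a limit order book and fill a market buy order,
    returning the volume-weighted average price paid."""
    remaining, cost = quantity, 0.0
    for price, available in book:                  # book is sorted by price
        take = min(remaining, available)
        cost += take * price
        remaining -= take
        if remaining == 0:
            return cost / quantity
    raise ValueError("not enough depth to fill the order")

# The sparse book from the example: 40 shares offered at $13, 50 at $14, 10 at $17.
book = [(13.0, 40), (14.0, 50), (17.0, 10)]
print(average_fill_price(book, 100))   # -> 13.9
```

The gaps in the book are what produce the jump from a $13 quote to a $13.90 average fill.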
In 2008, Bouchaud proposed (Wealth condensation in a simple model of economy) severe market swings were due to agents following one another rather than external information (news).
On the display of the distribution of wealth
The traditional tool for displaying the distribution of wealth is the histogram. In this format, wealth is typically displayed along the horizontal axis and the frequency (number of occurrences) along the vertical. The histogram suffers from shortcomings. One is that wealth can take on an enormous range of values; another is that any power-law characteristics will not be clearly evident. As an alternative, Lorenz proposed a curve in which cumulative wealth is assigned to the vertical scale and cumulative population to the horizontal, both shown as percentages.
A straight line connecting all the percentage points of equal value would indicate a perfectly uniform distribution of wealth; thus any deviation from this line becomes a measure of wealth inequity. This inequity is quantified by calculating the Gini coefficient, which is the area between the uniform line and the Lorenz curve, scaled to the range 0-1.
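Both constructions are a few lines of code. A sketch, with the Gini computed as twice the area between the equality line and the Lorenz curve (the trapezoid rule stands in for the exact area):

```python
def lorenz_curve(wealth):
    """Return (cumulative population share, cumulative wealth share) points."""
    w = sorted(wealth)
    total, n = sum(w), len(w)
    running, points = 0.0, [(0.0, 0.0)]
    for i, value in enumerate(w, start=1):
        running += value
        points.append((i / n, running / total))
    return points

def gini(wealth):
    """Gini coefficient: twice the area between the equality line and the Lorenz curve."""
    pts = lorenz_curve(wealth)
    area = sum((x1 - x0) * (y0 + y1) / 2           # trapezoid rule under the curve
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
    return 1.0 - 2.0 * area

print(round(gini([1, 1, 1, 1]), 3))      # 0.0   perfectly uniform
print(round(gini([0, 0, 0, 100]), 3))    # 0.75  highly concentrated
```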
In 1990, Stephen Jay Gould proposed that if life's clock were restarted, the result would be very different. Repeatability is nearly impossible (or unpredictability the rule). This view became dominant within biology.
Jonathan Losos suggests an alternative: *adaptive convergent evolution*, or species living in similar environments evolving similar solutions. While evolution cannot be replayed, the idea of predictability may be tested by examining isolated populations.
So which model are we to believe? Losos has evidence.
The bleeding starts as a tiny trickle. You clean it up, bandage it. Then glance down at your clothes. A couple of spots. Irritating, but you've got some new, portable spot remover you've been meaning to try. So you dab it on and return to coding.
An hour later you notice the band-aid is soaked. Dammit. You get it stopped, but it takes several minutes this time. Then change your pants and set the soiled ones in cold water, just like your mom taught you decades ago. You spend a moment thinking about her. Then back to coding.
But the second band-aid is already bright red. You half-remember something about bright red blood being arterial as you head for another, bigger band-aid. At least your clean pants remain unblemished, but your shirt wasn't so fortunate. To hell with it, coding is calling. After thirty minutes, you manage to stop the bleeding.
Then you remember you were very busy this morning and may have accidentally taken a second dose of blood thinner. Dammit.
A hurried Internet search reveals that's dumb but likely okay. Time to return to coding. Your compulsiveness is consuming. But enjoyable.
You don't even get to start. Blood has formed a little rivulet down your pants, across your shoe and onto the carpet. You run to the bathroom only to find blood is very slippery when confronted with tile flooring. The inevitable fall opens up the wound further. You sit amazed by your stupidity, the complete lack of pain and the rapidity of the flow. Get up. Time to call 911.
After slipping once or twice more, the phone is finally in your hands. And very slippery too. While attempting to dial, you notice your vision darkening.
Vision narrows rapidly. Then disappears. You're on the floor now but don't recall falling. You feel chilled in spite of the warm blood all around you.
Dammit.
Imagine a grain farmer. She has enough wheat to exchange a portion for other needs like shoes, but then has to find a cobbler who needs grain and is willing to trade. And then must repeat the process for every needed product. This is not an efficient process. Cash remedies that.
David Graeber convincingly argues the chronological order of economic invention started with credit systems …basically IOUs, then money, and finally barter (the exact opposite of the way it's generally taught). In essence, the basis of an economy is debt. As Richard Dawkins put it, "Money is a formal token of delayed reciprocal altruism" or more simply, a promise.
Money serves three primary functions: the familiar use as a medium of exchange, a store of value (savings) and an accounting unit. On the latter, Mitchell-Innes' credit theory of money asserts it is a form of accounting, not a commodity. It measures debt.
Cash has taken on various forms, from beads to minted coins and paper, to lines of credit. Most of these, directly or indirectly, are now controlled by the state through a central bank. That is changing. Welcome to the Internet, mobile devices and the blockchain.
Mobile applications enable the creation of new forms of cash. Some leak private information more than others, but many are more secure than giving a card to a restaurant server who promptly disappears for five minutes.
Many of the new digital currency applications (digital wallets) will exist on smartphones, with the additional security implemented in hardware, such as biometrics. Already, there are near field communication systems to pay at checkout and apps written by financial institutions. With everyone scrambling to join the new market, there is bound to be a severe winnowing. It should be fascinating to watch.
But the invention with the greatest potential is the blockchain. It has at least two interesting characteristics - it can serve as the basis for a form of digital currency independent of governmental or other authority (decentralized), and it promises security through transparency (open source).
Blockchains are a type of public ledger based on peer-to-peer computer networks. The ledgers are stored as cryptographically linked (hashed) files, so anyone can tell if a change has been made. They are the underpinnings of bitcoin and several other currencies under development.
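A toy sketch of the hash-chaining idea - nothing like a production blockchain (no signatures, no consensus), just enough to show why tampering is detectable. It assumes only Python's standard hashlib and json modules.

```python
import hashlib, json, time

def make_block(data, previous_hash):
    """Each block records its payload, a timestamp, and the hash of its predecessor;
    the block's own hash therefore depends on the entire chain before it."""
    block = {"data": data, "time": time.time(), "prev": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_intact(chain):
    """Recompute every hash and check every back-link."""
    for prev, block in zip(chain, chain[1:]):
        body = {k: block[k] for k in ("data", "time", "prev")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev["hash"] or block["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", "0")]
for payment in ("alice pays bob 5", "bob pays carol 2"):
    chain.append(make_block(payment, chain[-1]["hash"]))

print(chain_is_intact(chain))            # True
chain[1]["data"] = "alice pays bob 500"
print(chain_is_intact(chain))            # False - tampering is detectable
```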
Stochastic simulations produce experimental results, and so should be conducted under proper experimental design protocols. One of the great advantages of simulation experiments is that they are generally very inexpensive to perform, and so one can afford to be generous with the number of experiments conducted.
From the statistician's perspective, each simulation run is a trial. In general, multiple trials, or replications, with alternate random number streams, should be performed with each change made in the model. But how many?
Replication is an important tool, and one that makes intuitive sense: the more data you have, the greater confidence in a prediction. Consider a system (illustrated at right) that can be in any of six states, such as a machine tool with two adjustments - one with three settings, the other with two settings. These adjustments are often termed factors, and the combination termed treatments. Assume that all the treatments produce identical results on average - except for one clearly superior treatment, and all the treatments produce results with the same variance. The problem is that it is not possible to know which treatment produces the best result without measurement.
Statistical analysis allows one to infer the nature of the signal by examining the data. Note that the single sample indicates that the second treatment setting is the best choice, which is clearly not accurate. Replicating the experiment will increase the likelihood of a correct decision. This is because increasing the sample size reduces the noise in proportion to the square root of the sample size. The graph at left shows the results of a simulation study of the effects replication has on the likelihood of making the correct choice (confidence). Curves are given for three levels of imbalance (difference) between the best treatment and the others (1, ½, and ¼ standard deviations).
The graph exhibits several important characteristics: (1) replication improves confidence that the correct choice was made; (2) the marginal benefit of each additional replication declines, and; (3) the smaller the difference in treatment effects, the more replications are required to achieve a reasonable level of confidence. For example, if the best treatment is only a quarter of a standard deviation better than the other five, it would require in excess of one hundred replications to achieve ninety percent confidence that one is making the correct choice.
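The replication study is straightforward to reproduce in outline. A hedged sketch: six treatments as in the example, the best one better by `delta` standard deviations, unit-variance noise, and a Monte Carlo count of my own choosing.

```python
import random

def confidence(delta, replications, treatments=6, experiments=2_000, seed=0):
    """Probability that the sample mean of the truly best treatment
    is the largest, given `replications` observations per treatment."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(experiments):
        means = []
        for t in range(treatments):
            true_mean = delta if t == 0 else 0.0     # treatment 0 is the best one
            total = sum(rng.gauss(true_mean, 1.0) for _ in range(replications))
            means.append(total / replications)
        if means.index(max(means)) == 0:             # did we pick the right one?
            correct += 1
    return correct / experiments

for delta in (1.0, 0.5, 0.25):
    print(delta, [round(confidence(delta, r), 2) for r in (1, 5, 25, 100)])
```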
This piece was inspired by Winning the Accuracy Game (Hugh Gauch, American Scientist, V94, 2006).
The emergence of complex entities from simple components is so unlikely as to be presumed impossible. Instead such entities are built from a hierarchy of increasingly complex but stable components. Examples include modular manufacturing and biological evolution. Jacob Bronowski termed this stratified stability.
Perhaps the best illustration of this phenomenon is Herbert Simon's parable of two watchmakers (Hora and Tempus), introduced in The Sciences of the Artificial. Each makes watches of 1000 pieces, and each has to begin assembly anew if interrupted. Tempus' procedure is to assemble all 1000 pieces in a single sequence, while Hora's design calls for 100 subassemblies of 10 pieces each. The expected cost of an interruption is far greater for Tempus.
Simon's parable can be modeled with straightforward algebra and point probabilities, showing that Hora's approach, even while requiring more assembly operations, yields thousands more completed watches even with small probabilities of interruption. Expanding the analysis to include simulation allows one to tinker with ideas such as subassemblies slowly decaying into their constituent components over time.
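A simulation along those lines is only a few lines of Python. This sketch budgets a fixed number of assembly steps, interrupts each step with an illustrative 1% probability, and (as a simplifying assumption) ignores the final joining of Hora's subassemblies.

```python
import random

def watches_completed(module_size, total_pieces=1000, steps=5_000_000,
                      p_interrupt=0.01, seed=0):
    """Count finished 1000-piece watches within a fixed budget of assembly steps.
    An interruption loses only the partially built module (the whole watch,
    when module_size == total_pieces)."""
    rng = random.Random(seed)
    finished, in_module, modules_done = 0, 0, 0
    modules_per_watch = total_pieces // module_size
    for _ in range(steps):
        if rng.random() < p_interrupt:
            in_module = 0                            # drop the unfinished module
            continue
        in_module += 1
        if in_module == module_size:                 # module complete
            in_module, modules_done = 0, modules_done + 1
            if modules_done == modules_per_watch:    # watch complete
                modules_done, finished = 0, finished + 1
    return finished

print("Tempus:", watches_completed(module_size=1000))   # one monolithic assembly
print("Hora:  ", watches_completed(module_size=10))     # 10-piece subassemblies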
Existence at the molecular scale is very unlike anything we experience. Gravitational force is infinitesimal; electromagnetic forces and viscosity hold sway instead. In life's preferred temperature range, molecular movement is continuous and unrelenting.
The vaguely mechanical fitting together of proteins and their ilk, normally a feature endearing to engineers, occurs amid incomprehensible complexity. The constant jostling of organic compounds by energetic water molecules, randomly and rapidly exposing a new face on which to try new bindings, might appear inefficient. But it works better and faster than anything humans have been able to come up with.
Please visit Drew Berry's website and watch his TED talk. I found the Chaplinesque, bipedal gait of the microtubule walkers especially delightful.
Simple organisms can pour more of their acquired energy into reproduction than complex organisms. So one might expect they would evolve faster. But this is seldom true.
A NetLogo simulation was built to explore the idea. The model consists of three types of organisms: grass that provides energy, and simple & complex bugs. Grass grows randomly without regard to bugs. Bugs can evolve to gain more energy from grass. The complex bugs nearly always won. The model may be found here; you're welcome to try it and peruse it for further detail.
Note that life's tendency toward increasing complexity does not imply there is some sort of goal.
Imagine agents wandering about a landscape, each with a secret, invisible strategy for interacting with other agents they may encounter. In addition, each agent has a characteristic that is visible to other agents - such as color. Importantly, strategy and color are completely independent of one another. That is, no agent could guess at another's strategy by observing their color. This is the environment modeled by Ross Hammond and Robert Axelrod in The Evolution of Ethnocentrism (Journal of Conflict Resolution, v50, n6, Dec 06, pp 1-11).
There are 4 strategies: cooperate with all; cooperate with none; cooperate if same color, and; cooperate if different color.
Astoundingly, Hammond and Axelrod found the strategy of cooperation only with those agents of the same color evolved as dominant. This, in spite of the fact that color and strategy are statistically independent! This startling result begs further investigation.
No effort was made to precisely duplicate the Hammond-Axelrod model. Instead, a simple NetLogo model was constructed using similar criteria. When agents interact, they are rewarded in accordance with their strategies. If both agents cooperate, they each receive a reward of $5. If both defect, they both lose $1. If one defects and the other does not, the defecting agent receives $3 and the cooperating agent loses $1. (Note this is similar to the famous prisoner's dilemma model.) Learning occurs through reproduction of identical offspring after sufficient wealth has been accumulated. Thus, strategies are spread genetically as opposed to memetically. However, these may be considered functionally equivalent, as successful strategies will spread throughout the agent population.
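For concreteness, here is a sketch of just the interaction payoffs described above; the rest of the lifecycle (movement, reproduction, death, mutation) is omitted, and the dictionary-based agents are purely illustrative.

```python
def cooperates(agent, partner):
    """Does `agent` cooperate with `partner` under its (hidden) strategy?"""
    return {"all":       True,
            "none":      False,
            "same":      agent["color"] == partner["color"],
            "different": agent["color"] != partner["color"]}[agent["strategy"]]

def interact(a, b):
    """Apply the payoff scheme: mutual cooperation $5 each, mutual defection
    -$1 each, lone defector +$3 while the cooperator loses $1."""
    ca, cb = cooperates(a, b), cooperates(b, a)
    if ca and cb:
        a["wealth"] += 5; b["wealth"] += 5
    elif not ca and not cb:
        a["wealth"] -= 1; b["wealth"] -= 1
    elif ca:                                   # a cooperated, b defected
        a["wealth"] -= 1; b["wealth"] += 3
    else:                                      # a defected, b cooperated
        a["wealth"] += 3; b["wealth"] -= 1

a = {"color": "red",  "strategy": "same", "wealth": 0}
b = {"color": "blue", "strategy": "all",  "wealth": 0}
interact(a, b)
print(a["wealth"], b["wealth"])    # 3 -1 : a defected on a different color
```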
Initial explorations indicated the cooperate-with-all strategy nearly always dominated, in contrast to reported results. More worrisome, the model often evolved into agents of a single color, which means the strategies of cooperate-with-all and cooperate-if-same are equivalent. These results occurred with 3, 4, 5 or even 6 different agent colors. Also, agent populations would often bloom out of control. These problems were addressed by denying reproduction if there was local crowding and by greatly increasing the environment size. Finally, global variables were added to allow for rare mutations, agent death, and greatly constrained agent movement. This last modification proved critical. When agent movement was severely constrained, the model behaved as reported, converging to the strategy of cooperating only with those of the same color, with clustering of like agents (in fairness, it should be noted the published model allowed for very little agent movement). However, when agents were allowed greater freedom to roam, the cooperate-with-all strategy dominated.
You have just been hired to manage the shipping room for Bertha’s internet cookie boutique. Your first task is to determine if your predecessor had hired an appropriate number of employees. Fortunately, before retiring, he had compiled some valuable data about demand.
The system seems straightforward. Workers are given an order sheet, then walk around the stacks selecting the mix of cookies specified by the customer, packaging everything into a box to be picked up by the shipper at the end of the day. The data indicates an average of 350 orders are received every day, and that a single worker requires about 12 minutes to process an order - or should be able to process 35 orders in a 7-hour day. There is some variability in the order size, but with such large numbers it should all even out just fine.
Ten workers should be able to handle the load, but the shipping room currently has 13 - and they seem busy much of the time. After observing the system at work for a week, you decide the data is indeed accurate and dismiss 3 workers.
Three days later orders are beginning to stack up. You recheck your figures and call an emergency meeting with your workers. What went wrong? You simply failed to take into account system variability. Welcome to queueing systems theory (or, why waiting lines always bite you). The main problem here is that average values were used in a deterministic analysis.
Using arcane mathematical constructs like Markov chains, one can use queueing theory to estimate what would have happened when our manager decided to dismiss some workers. Or one can simulate. Either way, the result in such simple queueing systems is often the same - you can have your workers busy all the time, or you can keep the waiting lines to reasonable length. Not both.
The model for this system has 3 components. The first is the arrival process, in this case orders for boxes of cookies. The second is a queue where the arrivals wait to be processed. The final part is service (workers filling boxes). Imagine how the system behaves when variability is added. Some of the time the service process will run faster than arrivals, and the workers will be able to catch up on orders or, if there are none, simply lie about. The important thing to note is that the workers cannot "work ahead." They are forced to remain idle if the queue is empty. However, when the arrival process is running faster it can simply dump orders into the queue. It is never idle.
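A crude day-level sketch of the cookie-room arithmetic - not a full queueing simulation, just unfinished work carried forward from day to day. The Gaussian order counts and exponential service times are assumptions; the means (350 orders, 12 minutes, 7-hour days) come from the story.

```python
import math, random

def simulate_days(workers, days=250, mean_orders=350, mean_minutes=12.0,
                  minutes_per_day=7 * 60, seed=1):
    """Return the (approximate) order backlog at the end of each day."""
    rng = random.Random(seed)
    backlog, history = 0.0, []
    for _ in range(days):
        arrivals = max(0, round(rng.gauss(mean_orders, math.sqrt(mean_orders))))
        work = sum(rng.expovariate(1.0 / mean_minutes) for _ in range(arrivals))
        capacity = workers * minutes_per_day            # worker-minutes available
        backlog = max(0.0, backlog + work - capacity)   # unfinished work carries over
        history.append(backlog / mean_minutes)          # express roughly in orders
    return history

for w in (10, 13):
    hist = simulate_days(w)
    print(f"{w} workers: backlog after {len(hist)} days is roughly {hist[-1]:.0f} orders")
```

With ten workers the capacity exactly matches average demand, so the backlog wanders upward like a reflected random walk; with thirteen it hugs zero. That is the variability penalty the new manager ignored.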
There are a host of interesting results derived from queueing theory (and yes, it is spelled with five vowels). In addition to the aforementioned necessity to accept either occasionally idle processes or long queues when variability is present, one of the more useful results has to do with multiple waiting lines. Many commercial establishments are arranged with a queue for each checkout station, not realizing the average time customers spend waiting in line can be significantly reduced if a single queue were employed. Alas, a single queue is longer (but moves much faster), and so is not often appealing to customers.
How did energy flow contribute to life's beginning?
Thermodynamic systems can be classified as open or closed. Closed systems eventually decay as their entropy maximizes …like a hot cup of coffee gradually cooling to room temperature. Boring. But open systems can be far more interesting. They allow for an external source of energy and a sink for disposing of it.
Jeremy England has proposed that in such an open system, entities located between source and sink will gradually evolve the means to transfer as much energy as possible. The energy from the source is generally structured (perhaps with a characteristic wavelength), so elements in the system use part of that energy to structure themselves to transfer energy from source to sink more efficiently.
And the more such efficient structures there are, the greater the total energy dissipated. So dissipation-driven adaptation of matter may be a driver of complexity growth that eventually led to self-replication and life.
Nick Lane's well-argued hypothesis is that life probably started not in a warm little pond but in alkaline hydrothermal vents well beneath the waves. He also argues that energetics should be strongly considered in any theory of the evolution of life.
Here at least two simple forms emerged: bacteria and archaea - the prokaryotes. These single-celled creatures are still the most numerous on earth. Sometime later, a merger (endosymbiotic event) occurred: a host archaeal cell absorbed a bacterium (which eventually became the mitochondrion) - thus forming the first complex cell, or eukaryote.
Eukaryotes generally are much larger and much more complex than prokaryotes. They could become that way because of energy (in the form of ATP) made available by mitochondria.
Mitochondria have since evolved into miniature power plants. Like most of life, they are wonderfully complex. But the essence of their structure is the convoluted internal membrane, which allows a very large electrical potential to develop.
So I understand you are interested in a home Mr. Cat?
Perhaps.
Would you consider retiring from the adventures of outside life?
Maybe.
Would you agree to be declawed?
No.
Do you like to trip up old folks like me?
Their problem.
Do you feel comfortable with the Copenhagen interpretation?
It's wrong. I'll never tell you why.
Do you listen to humans rambling on about the nature of reality?
No. But I like to walk on keyboards.
Well, I think we'll get along fine.
I'll have the staff prepare a list of demands.
When I was a child, I began to consider suicide as an alternative to a prolonged and painful death. To explain, my friends were elderly with various infirmities often leading to a painful and demeaning end. It seemed a reasonable response. Such a final act of defiance was appealing to the mind of a child. It still is.
But we often linger. Determined to perceive each ignominy as just another insult to be overcome. Until compensation fills our waking moments. Evolution made us temporary objects, experimentally inserted between the genome and the environment. Each generation, winners are declared (simply, crudely) as those reproduced most frequently. We temps are left to expire. Life is a terminal illness: we do not wear out but are obeying a simple engineering directive determined by evolution.
In multicellular creatures only a few lucky cells get to contribute genetically to the next generation: the germ cells. The remainder are somatic. Somatic cells constitute the body that maximizes the spread of their germ cell brethren. We are soma. But our big brains have begun to realize the situation …and to investigate possibilities for avoiding it. The time allotment evolution has granted us soma is finite …and our consciousness has grown sophisticated enough to know it.
Plant and animal breeders may have inadvertently launched the opening salvo in the looming somatic (transhumanist inspired) war with Darwinian evolution. And gene editing will accelerate this process by orders of magnitude. Will we simply usurp our future developmental direction or freeze it in place so that the wealthy and powerful can become nearly immortal overlords?
The modern synthesis (genetic variation, natural selection, Mendelian inheritance) was later enhanced by the concept of genetic drift. Genetic drift, first promoted by Sewall Wright, simply demonstrates how random neutral mutations may grow or shrink in prevalence in a population. Or more simply put, randomness in evolution.
Random genetic drift is a mechanism of evolution that results in fixation or elimination of alleles independently of natural selection. - Larry Moran
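Drift is easy to see in a toy Wright-Fisher style sketch: a selectively neutral allele's frequency wanders by sampling accident alone until it is either fixed or lost. The population size and starting frequency below are arbitrary.

```python
import random

def drift_until_fixed(pop_size=100, start_freq=0.5, rng=None):
    """Follow a selectively neutral allele until it is fixed (freq 1) or lost (freq 0).
    Each generation is a binomial resampling of the previous one."""
    rng = rng or random.Random()
    count = int(start_freq * pop_size)
    generations = 0
    while 0 < count < pop_size:
        freq = count / pop_size
        count = sum(rng.random() < freq for _ in range(pop_size))  # resample alleles
        generations += 1
    return count == pop_size, generations

rng = random.Random(7)
outcomes = [drift_until_fixed(rng=rng) for _ in range(200)]
fixed = sum(f for f, _ in outcomes)
mean_time = sum(g for _, g in outcomes) / len(outcomes)
print(f"fixed in {fixed}/200 runs; mean time to fixation or loss {mean_time:.0f} generations")
```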
About three decades later, Motoo Kimura greatly expanded and refined the idea, which he called neutral evolution. He asserted that genetic drift contributed far more to biodiversity than natural selection, the previously favored driver of variability.
First thing you have to know: the revolution is over. Neutral and nearly neutral theory won. The neutral theory states that most of the variation found in evolutionary lineages is a product of random genetic drift. Nearly neutral theory is an expansion of that idea that basically says that even slightly advantageous or deleterious mutations will escape selection — they’ll be overwhelmed by effects dependent on population size. This does not in any way imply that selection is unimportant, but only that most molecular differences will not be a product of adaptive, selective changes. - PZ Myers
In 1979 Steve Hubbell published some surprising findings. He noticed that the actual distribution of tree species was closer to random than natural selection models would indicate. The unified neutral theory had arrived.
Life's creatures seem to increase in complexity over time, and simple random mutation has always seemed a weak explanation for this ability to innovate. Andreas Wagner and colleagues may have filled the gap with their proposal that the way life searches for new solutions itself increases complexity. Using computer simulation, they found that the library of possibilities can be reasonably covered by simple single mutations (a heretofore unknown and completely unexpected result), and moreover that neighborhoods within this space are diverse.
The keys are twofold. First, proteins are much more complex than required to perform their function, meaning a random change probably has no effect. Second, there are many proteins which can perform the identical function. The first leads to diverse neighborhoods, the second to the ability to search over vast areas.
So complexity leads to robustness, and robustness leads to innovation.
As with most everything in a healthy science, the theories are not universally accepted. See details here.
Robert Reich does such a great job explaining inequality that I can only offer two minor pieces of supporting evidence from simulation, along with a few choice quotes from various authors.
So much for the illusion of meritocracy.
While inequality is persistent and pervasive, one can nonetheless ask, is it moral? The political commentator Matt Miller argues in his Two Percent Solution that how one answers that question is a key distinguishing feature of the Left-Right divide. - Eric Beinhocker
If investments grow faster than the rest of the economy, inequality will increase. - Thomas Piketty.
The role of inequity in society is grossly underestimated. Inequity is not good for your health, basically. - Frans de Waal
The bottom half of the country has been shut out from income growth for 40 years. - The New York Times
Today the trend to greater equality of incomes which characterised the postwar period has been reversed. Inequality is now rising rapidly. Contrary to the rising-tide hypothesis, the rising tide has only lifted the large yachts, and many of the smaller boats have been left dashed on the rocks. This is partly because the extraordinary growth in top incomes has coincided with an economic slowdown. - Joseph Stiglitz
Despite the moral assurance and personal flattery that meritocracy offers to the successful, it ought to be abandoned both as a belief about how the world works and as a general social ideal. It’s false, and believing in it encourages selfishness, discrimination and indifference to the plight of the unfortunate. - Clifton Mark
When mammals mate, the expected contribution to the offspring's genome of each parent is 50%. Unexpectedly, it turns out females' contribution to the species is far greater. It seems more females successfully mate than males.
The DNA studies on how today's human population is descended from twice as many women as men have been the most requested sources from my earlier talks on this. The work is by Jason Wilder and his colleagues. - Roy F. Baumeister
MOOCs (massive open online courses) aren't all that new. I recall teaching my first decades ago. They weren't as open either, as the hosting institution often collected tuition and the students had to have access to specialized equipment often lodged in companies or libraries.
In contrast, territorial maps have been around for centuries. But they aren't the kind I'm referring to. Mind maps are my preferred method of taking notes, and those are about the same age. They seem to capture the interrelatedness of concepts better than the linear form of traditional prose.
Nowadays, with the infirmities of age making themselves more apparent daily, mind maps are my sole means of collecting and preserving the information offered in MOOCs. And to my great delight, there are multiple mind map applications on the iPad from which to choose. My favorites are OmniGraffle and MindNode.
This post was prompted by an offhand comment in the MOOC I recently watched. Offered by Numenta, it is a course that elucidates the basics of hierarchical temporal memory (HTM), and I enjoyed it immensely. In the section introducing grid cells and mapping, instructor Matt Taylor muses that the mapping function used within the neocortex might be extended to ideas.
Maybe that's why I enjoy rendering lectures into mind maps so much.
So what might a mind map taken from an online course look like? Here's a sample from the excellent Agent-based modeling by Bill Rand on Complexity Explorer in 2016.
A simple NetLogo model about ants eating cookies, reproducing & dying.
Discrete-event simulation is a powerful tool for modeling and analyzing production, manufacturing, and similar enterprise systems. Competition yields products and processes which are increasingly complex yet possess shorter life cycles. Simulation can be less expensive and faster than experimenting directly with an existing system. Simulation also allows for change earlier in the cycle of system deployment. As a rule, the cost of change increases exponentially as a system design progresses from concept, through design and fabrication, and finally to deployment.
A simulation is nothing more than a model of a process. The greater the depth and accuracy of the model, the more useful it will prove to be. The most important factor in the construction of a model is information. Once this information has been collected, the assembly of the model can proceed. If the information is faulty, so too is the model, and any results extracted from it. Models may be used to develop decision and control systems, predict behavior, optimize, or simply to gain a deeper insight into system behavior.
Four schemes for classifying simulation models
There are probably as many ways to categorize models as there are models. However, some grouping schemes have proven useful. One is to classify a model as physical or virtual. Physical models are generally built to scale and are used extensively in research (like wind tunnels) and as production prototypes (chemical processing plants). Virtual models include mathematical, dimensional (computer aided drafting and finite element methods), and software representations.
Another scheme is to consider variability. Deterministic models do not account for randomness; stochastic (random) models do. Simulations which contain random variables are often referred to as Monte Carlo simulations. A third scheme is to classify a model as discrete or continuous (or even combined). Continuous models march across time in constant, identical steps. This is worthwhile in modeling systems that behave according to a closely controlled schedule, where the time between events is fixed (or can be safely modeled as such). An example might be an inventory system where receipts and shipments are reconciled at the end of each day. Continuous models do not preclude variation (the daily order quantity might be quite variable). In discrete-event models, time does not flow uniformly, but leaps from event to event. Discrete events are very common in manufacturing. Models are occasionally described with state variables. An example of a discrete system state variable is the number of customers waiting for service, while an example of a continuous system state variable is the level of fluid in a tank.
Is randomness (or noise) ever good?
System variability (randomness) may come from several sources, including: ignorance (inability to find a pattern); precision (inability to measure well enough); ignorance of, and sensitivity to, initial conditions (chaos), and; speed (inability to calculate anywhere near real time).
Computers are engineered to behave deterministically, but random numbers are required for stochastic simulation, and the only known sources of truly random numbers are quantum processes like radioactive decay. The pseudo-random number (PRN) algorithm was invented to allow computers to generate a stream of values that at least appear random. For example, an algorithm might start by selecting a number between 0 and 12 (the seed) - say 4. After the initial choice, the algorithm produces each new PRN by multiplying the preceding number by 17, dividing the result by 13, and taking the remainder, yielding the sequence 3, 12, 9, 10, 1, and on to 4 again, whereupon the cycle is repeated. Such algorithms are clearly not random, since they quickly fall into a pattern. Fortunately, selecting the seed and other values carefully can result in a very, very long sequence before repeating.
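The miniature generator described above, written out (it is a linear congruential generator in the small; real PRNGs use far larger multipliers and moduli):

```python
def toy_prng(seed, multiplier=17, modulus=13, count=10):
    """x_next = (x * 17) mod 13 - the miniature generator from the text."""
    values, x = [], seed
    for _ in range(count):
        x = (x * multiplier) % modulus
        values.append(x)
    return values

print(toy_prng(4, count=6))   # [3, 12, 9, 10, 1, 4] - then the cycle repeats
```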
Entities are the objects of the model. They represent the parts, orders, or customers that flow through the simulation. Processes are the procedures that act upon the entities. Processes typically include anything from telephone operators to machining stations. There may be any number of entities and processes in a simulation. The characteristics or properties of entities are termed attributes. Attributes that might be assigned include priority, the time at which some incident occurred to the entity, or even color.
Events are occurrences that happen as the model runs over time. Events may happen to entities or processes. They are an important feature in simulations and allow construction of complex models. Many simulation languages also allow for the management of resources that are consumed or utilized during the course of a simulation run. Resources can even be made to flow through the model in the same manner as entities.
There are two other key constructs that are necessary in the construction of a robust and accurate model. These are messages, which allow for the passing of information throughout the model, and decisions. Decisions permit choices to be made based upon attributes, resources, entities, events, or flow branching within the model.
At the core of most discrete event simulation languages lies a master software routine called the executive. In simplest terms, it consists of a simple logical sequence that monitors and manipulates a small database. The typical logic of an executive routine is illustrated in the diagram below.
Consider a simple single-queue, single-server system. This system has only two event types to model (not counting the trivial event of ending the run when the simulated termination time is reached). These are an arrival event (a new customer enters the system) and an end-of-service event (a customer exits the system). One might use a start-of-service event as well, but there is no real need, as the end-of-service event completely defines it. Each event type has its own logic subprogram. Below is an outline view of these programs.
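A compact sketch of such an executive and its two event subprograms. The structure - a clock, a future-events list, and one branch per event type - follows the description above; the exponential interarrival and service times are assumptions.

```python
import heapq, random

def run(sim_time=1000.0, mean_interarrival=1.0, mean_service=0.8, seed=1):
    rng = random.Random(seed)
    future_events = []                                  # (time, kind) heap
    heapq.heappush(future_events, (rng.expovariate(1 / mean_interarrival), "arrival"))
    clock, queue, busy, served, waits = 0.0, [], False, 0, []

    while future_events:
        clock, kind = heapq.heappop(future_events)      # executive: advance to next event
        if clock > sim_time:
            break
        if kind == "arrival":                           # arrival event subprogram
            if busy:
                queue.append(clock)                     # remember when the customer arrived
            else:
                busy = True
                waits.append(0.0)
                heapq.heappush(future_events, (clock + rng.expovariate(1 / mean_service), "done"))
            heapq.heappush(future_events, (clock + rng.expovariate(1 / mean_interarrival), "arrival"))
        else:                                           # end-of-service event subprogram
            served += 1
            if queue:
                waits.append(clock - queue.pop(0))      # next customer begins service
                heapq.heappush(future_events, (clock + rng.expovariate(1 / mean_service), "done"))
            else:
                busy = False
    return served, sum(waits) / len(waits)

served, mean_wait = run()
print(served, "customers served, mean wait", round(mean_wait, 2))
```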
There are several important concerns that must be resolved before a simulation model may be accepted. The first is verification. The computer model must be a robust and accurate representation of the system model. This is accomplished during construction and development of the model.
The second concern is validation. A simulation is considered valid if model performance mirrors that of the actual system. That is, the model and the actual system must exhibit functionally identical behavior. For this reason it is important to simulate existing systems before experimenting with modified systems. If the simulation model does not reasonably approximate the current system, confidence in suggestions made for improvements will wane. Schruben [1980] proposed a Turing test to validate a model. He presented simulated and actual data from an automobile component factory. When a group of managers, engineers, and workers could not determine which data sets were simulated, the model was readily accepted.
A third concern is initialization bias. Many simulations begin with queues empty. This may be perfectly appropriate. For example, customers are seldom willing to wait in a bank lobby overnight so that tellers can complete their service the following morning. Parts on an assembly line seldom exhibit such qualms. There are numerous statistical tests for detecting and contending with initialization bias in a simulation model. However, very often simple visual inspection will suffice to detect bias. Two additional straightforward methods are to calculate moving averages or to delay collecting statistics until the simulation has stabilized. Extend has a provision for the latter approach.
The last concern to be addressed here is the selection of termination criteria. Unless the particulars of the model indicate otherwise, it is generally useful to run simulations for as long as possible. This will help wash out initialization bias and narrow statistical confidence bounds on results. If the model does not allow this, as in the case of job shops, where nearly everything changes daily, then the simulation should be repeated several times to achieve the same objective.
Again, inspection of graphs of the output variables can yield some feeling about system stability. If the value of the output variable appears to be converging toward a constant, then the simulation has most likely been run long enough. From a statistical perspective, the error of a simulation estimate shrinks in proportion to the reciprocal of the square root of the number of trials.
A related concept is experimental replication. That is, how many runs should be made? Simulations are often dependent upon initial conditions and will achieve an alternate stable state, or not achieve one at all, given a very small difference in initial conditions. Multiple runs are required to explore this phenomenon, and statistical experimental design methods may be employed to analyze the results. In general, multiple runs, with alternate random number streams, should be performed with each change made in the model.
At long last, someone has provided a rigorous underpinning to evolution. And in doing so, manages to flatter it. The argument rests on the second law of thermodynamics, which predicts that entropy - or loosely, disorder - will nearly always increase. (Actually, entropy might be more accurately expressed as the difference between the total amount of energy in a system and the amount of useful work that can be extracted, often expressed as a function of temperature.)
Thermodynamic systems can be classified as open or closed. Closed systems eventually decay as their entropy maximizes …like a hot cup of coffee gradually cooling to room temperature. Boring. But open systems can be far more interesting. They allow for an external source of energy and a sink for disposing of it.
Jeremy England has proposed that in such an open system, entities located between source and sink will gradually evolve the means to transfer as much energy as possible. The energy from the source is generally structured (perhaps with a characteristic wavelength), so elements in the system use part of that energy to structure themselves to transfer energy from source to sink more efficiently.
And the more such efficient structures there are, the greater the total energy dissipated. So dissipation-driven adaptation of matter may be a driver of complexity growth that eventually led to self-replication and life.
More than any other, this form of simulation has developed an inviting interface style, which means non-experts may generate and modify models.
The general idea is straightforward. Agents roam around an environment and may interact with each other and the environment. Agents may be heterogeneous and stochastic.
Often believed to be more rigorous, the alternative modeling technique based upon differential equations is actually quite limited in application. To cite two examples: (1) the mean value of a variable is often used rather than the full statistical distribution, and (2) agents may form networks, which equation-based models cannot easily represent.
Agent-based models have proven themselves useful in areas of economics, biology, genetics, social science and many more.
Although I did not know it then, 99% of my statistical training in grad school was as a frequentist. The name Bayes was only mentioned occasionally, late at night, in hushed tones, almost conspiratorially. Then quickly forgotten.
No one admitted to being a Bayesian. At least publicly. Processing a priori opinion was deemed unscientific. The professional risk and the formidable intellectual shadow of the frequentists were too intimidating. Reflecting back, I unthinkingly accepted this was the way the world was. And that seemed fine, as statistics was just a tool one had to master. Now, being old and retired, I don't have to care what some journal editor or academic administrator thinks.
Consider the fundamental methodology. Tasked with evaluating the accuracy of a hypothesis, you gather empirical evidence (likely noisy data), find the proportion of events that match the hypothesis, and proceed through a series of calculations to determine if the data supports the hypothesis with a predetermined probability.
Bayesian analysis is much easier conceptually. You believe something is true with a prior confidence (for example, there is a 50% chance the earth is round). Then, gathering evidence, your confidence incrementally changes. So simple. So logical. And there seems to be a growing consensus this is how our brain operates as well.
However, there is a major caveat. Having a reasoned opinion (prior belief) does not exempt you from carefully gathering enough empirical evidence to validate it.
The pie chart at right depicts the belief that one of three possibilities is the cause of some effect. Evidence is added in support of one of the potential causes by clicking on one of the three corresponding buttons. With each click, the chart is updated to reflect the new data using the assumption that 90% of the time the observation (click) is accurate.
Note the original belief is spread evenly among the three possibilities (equal priors), which frequentists consider the rule's greatest weakness. But the initial beliefs may be rapidly washed out with the accumulation of evidence. This formulation of the problem yields the expected result that beliefs reflect sampled inputs - if ten of each type are entered, beliefs are equal. However, this procedure appears to neglect data quantity - the change in belief due to a single entry remains the same no matter how many entries have preceded it.
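The arithmetic behind the widget is just Bayes' rule applied once per click. A sketch, with three equally weighted causes and the 90% observation accuracy from the text:

```python
def update(beliefs, observed, accuracy=0.9):
    """One Bayes-rule step: the observed cause gets likelihood `accuracy`,
    the other causes split the remaining probability evenly."""
    others = (1 - accuracy) / (len(beliefs) - 1)
    posterior = {cause: prior * (accuracy if cause == observed else others)
                 for cause, prior in beliefs.items()}
    total = sum(posterior.values())
    return {cause: p / total for cause, p in posterior.items()}   # renormalize

beliefs = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}      # equal priors
for click in ["A", "A", "B", "A"]:                  # a sequence of observations
    beliefs = update(beliefs, click)
print({cause: round(p, 3) for cause, p in beliefs.items()})
```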
For a fascinating perspective on the history of the theorem, see The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy by Sharon Bertsch McGrayne.
Two wonderful, and strongly contrasting, articles have appeared recently that highlight the Jekyll and Hyde nature of our electronic devices. Andy Clark is convinced the human mind has a heightened ability to incorporate external information storage into itself, and employs it in ways that have enhanced our success as a species, reports Larissa MacFarquhar. Madeleine Bunting, meanwhile, is greatly concerned with our communication tools robbing us of the ability to focus and interact deeply. Of course, the delightful/frightful irony is that they have become one and the same device. The ultimate incorporation of mind extension and distraction?
Our technology has evolved to resemble its very flawed creators.
The smartphone may be the poster child of all human artifice. As with all tools, it began life free of intent - just another neutral extension of ourselves, but with staggering amounts of information from sources outside ourselves. It was too good to last. Creators of malicious software stumbled upon the value of your data to advertisers, and the world forever changed. Now our wonderful tool has become a semi-intelligent device, serving the attention miners. Preying on human weakness. Our data - collected contemptuously, without a care for our privacy - effortlessly stolen.
Now even our thoughts and hopes and fears are relentlessly spied upon, our politics carefully nudged in a direction away from any sense of a shared culture. But to simply disconnect is to disengage, which is equally problematic for democracy. Better to resist the data gatherers like Facebook and Google, investigate the veracity of sources, question why a corporation would provide "free" services, actually engage with people.
If only we could restore our wondrous devices to their original intent.
The ability to make decisions on our own seems so obvious that it remains unquestioned over entire lifetimes. The exercise of free will appears to be foundational to concepts like political freedom and the justice system. Regarding the latter, conscious premeditation (or malice aforethought) seems to serve as a substitute for free will. But the concepts are not quite the same.
Some argue that free will is an illusion, albeit one profoundly ingrained. Anil Seth has suggested not only is free will an illusion, but one in direct conflict with one of the cherished ideas of metaphysics: causation. Free will requires action without causation. Free will assumes intention & agency - and the self thinks it is the cause.
The BookLab recently did a podcast comparing two excellent books on this topic.
Natural selection acts upon biological individuals through their genes. Some have argued that natural selection may also act upon groups of individuals. Jerry Coyne and others have argued vigorously and, I believe, persuasively this is very unlikely.
Sam Bowles suggests group selection has never cleared the hurdle of widespread adoption by biologists for one primary reason - it is extremely difficult to find a case in nature where genetic inheritance is less powerful than altruism, except for kin selection. He also noted that cooperation may confer benefits to all participating individuals (mutualism, which can be motivated by self-interest), or not (altruism). Altruism may also flourish where a high likelihood of reciprocation exists.
But there is another evolutionary system that contributes to the growth of cooperation: culture. Culture can change much faster than genetically driven (biological) systems and has co-evolved with them. It also is completely dependent on their existence. It is within cultures that group selection occurs. And this may account for the rise of the type of altruistic behavior in which individuals sacrifice for unrelated others.
Group selection is relevant (essential) for social behavior including economics - Yaneer Bar-Yam (Twitter, 20 January 2019)
There isn't a meaningful conflict here. Group selection doesn't occur in biological evolution, but must in the evolution of social groups.
In 2003, Nick Bostrom proposed we might be living as entities in a computer simulation. This argument renders free will, reality and everything else to mere illusion. As one might expect, bushel baskets of hand wringing have been expended. But no one has proven it incorrect. Many choose to ignore it.
Philip Ball offers a complete and wonderfully lucid discourse on the entire issue.
Whether or not the universe is a simulation, it seems our brain might use simulation as an internal tool. Michael Graziano suggests there are multiple predictions (simulation models) constantly vying for our attention.
(From my archives).
Few tools are as useful as simulation for exploring and visualizing various game-theoretic models - especially complex biological models.
Consider a wolf-sheep model based upon the Lotka-Volterra mathematical formulation. The population of sheep in the next generation will change based upon their current number and the number of wolves. Likewise, the number of wolves depends upon the current population and the number of sheep. And either population may become low enough to disappear. An excellent, interactive NetLogo model may be found here.
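For readers who prefer code to equations, here is a bare-bones discrete-time (Euler) version of the Lotka-Volterra equations; the coefficients are illustrative and are not taken from the NetLogo model.

```python
def lotka_volterra(sheep=80.0, wolves=20.0, steps=2000, dt=0.01,
                   birth=1.0, predation=0.02, efficiency=0.01, death=0.5):
    """Euler-step the classic predator-prey equations:
       dS/dt = birth*S - predation*S*W
       dW/dt = efficiency*S*W - death*W"""
    history = []
    for _ in range(steps):
        ds = (birth * sheep - predation * sheep * wolves) * dt
        dw = (efficiency * sheep * wolves - death * wolves) * dt
        sheep, wolves = max(sheep + ds, 0.0), max(wolves + dw, 0.0)
        history.append((sheep, wolves))
    return history

trajectory = lotka_volterra()
print("final populations:", tuple(round(x, 1) for x in trajectory[-1]))
```

Plotting the trajectory shows the familiar out-of-phase oscillation of prey and predator counts around the model's equilibrium.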
It has taken seven decades for me to understand I'm not an artist. Even after a stroke left me without a functioning right side years ago.
Although the final results were uniformly and predictably disappointing, the effort was often enjoyable. So I kept at it. And at it. And …
But yesterday I decided to stop. There will occasionally be diagrams, but no further attempts at the impossible. Enough of this insanity.
So I have begun to populate my apartment walls with the art work of others. The first piece arrives in a few days.
This post concerns the work of Iain McGilchrist, the author of The Master and His Emissary and The Divided Brain and the Search for Meaning. Other sources include Sam Harris' Making Sense podcast and the video documentary The Divided Brain. I believe what Dr. McGilchrist has achieved is remarkable and deserves your critical attention. Please remain cognizant however, that the paragraphs below are my interpretations of his ideas.
Animals must pay attention to dual things to survive. The first is access to food, requiring rapid identification and acquisition. The second is a constant awareness of possible predators. The solution nature seems to have arrived upon is to devote separate parts of the brain to each task, operating in parallel. In the case of mammals, especially humans, these are the brain's hemispheres.
There are striking differences in the way the hemispheres view the world, albeit through the same array of sensory inputs. Here's but a sampling: the left brain contains the keys to language; the left tries to manipulate objects while the right tries to understand them and relate to them as a whole; the left's way of thinking is reductionist and mechanistic while the right's is holistic and tolerant; the left breaks the world into unrelated parts in order to categorize, and cannot understand humor, relationships, movement, metaphor, insight, irony or the whole; the left hemisphere retains anger and is subject to identity and trigger words.
“I sometimes think of the right hemisphere as what enables Schrödinger's cat to remain on reprieve, and the left hemisphere as what makes it either alive or dead when you open the box. It collapses the infinite web of interconnected possibilities into a point-like certainty for the purposes of our interaction with the world.” - Iain McGilchrist
Left hemisphere thinking is taking control of the world. The social ramifications are enormous - making us act as if we all had right hemisphere damage.
About a decade ago Jeff Hawkins introduced us to his ideas of the structure of the cerebral cortex. He subsequently founded Numenta and established the HTM (hierarchical temporal memory) theory of brain function (2014), lately subsumed by the thousand brains theory of intelligence (2017). Both primarily deal with the cerebral cortex, the physically dominant structure sitting atop the brain and about the size of a napkin.
In his most recent book, he expands this work, including his idea of reference frames.
Snuggled amongst the fissures of our left temporal lobe lies a remarkable capability — the center of our innate ability to name and categorize the natural world.
All animals have a finite array of sensors and information-processing skills with which to make their way around their environment. These limited abilities yield a unique perception of the world — their umwelt.
While humans have used tools to build artificial sensors to enhance our umwelt, they are imperfect, incomplete and generally still rely on our limited information processing ability.
Carol Yoon summarizes it well:
These movements led to what some said was the rightful, scientific goal of taxonomy - an evolutionary map that could reveal the history of life. But in doing so they separated science from a basic, innate part of us - our umwelt.
William of Ockham's phrase "Plurality must never be posited without necessity" - or, as Albert Einstein put it, "Everything should be as simple as possible, but not simpler" - is the rationale driving this essay.
As we became increasingly social, the brain evolved several capabilities for coping with this altered environment. Among those were the theory of mind and some notion of self. It is a small step to apply those to ourselves, eventually resulting in consciousness.
“If the brain were so simple we could understand it, we would be so simple we couldn’t.” - Emerson Pugh, The Biological Origin of Human Values.
António Damásio thinks consciousness (subjectivity) is responsible for creativity, love, …even awareness of our own existence.
Theories of consciousness abound. Two primary contenders are Bernard Baars’ global workspace theory and Giulio Tononi’s integrated information theory. Both suggest that our immediate conscious attention is chosen from a constantly updated variety offered up by the brain.
Two elegant explanatory models (both supported by evidence) have recently come to light.
Iain McGilchrist investigates left/right brain phenomena. He believes the normal balance between the brain hemispheres is becoming undone. There are striking differences in the way the hemispheres view the world, albeit through the same array of sensory inputs. Here's but a sampling: the left brain contains the keys to language; the left tries to manipulate objects while the right tries to understand them and relate to them as a whole; the left's way of thinking is reductionist and mechanistic while the right's is holistic and tolerant; the left breaks the world into unrelated parts in order to categorize, and cannot understand humor, relationships, movement, metaphor, insight, irony or the whole; the left hemisphere retains anger and is subject to identity and trigger words.
McGilchrist is the author of The Master and His Emissary and The Divided Brain and the Search for Meaning. Other sources include Sam Harris' Making Sense podcast and the video documentary The Divided Brain. I believe what Dr. McGilchrist has achieved is remarkable and deserves your critical attention. Please remain cognizant, however, that the paragraphs below are my interpretations of his ideas.
Animals must pay attention to two things to survive. The first is access to food, requiring rapid identification and acquisition. The second is a constant awareness of possible predators. The solution nature seems to have arrived at is to devote separate parts of the brain to each task, operating in parallel. In the case of mammals, especially humans, these are the brain's hemispheres.
Left hemisphere thinking is taking control of the world. The social ramifications are enormous - making us act as if we all had right hemisphere damage.
“I sometimes think of the right hemisphere as what enables Schrödinger's cat to remain on reprieve, and the left hemisphere as what makes it either alive or dead when you open the box. It collapses the infinite web of interconnected possibilities into a point-like certainty for the purposes of our interaction with the world.” - Iain McGilchrist
But my preference is for an even simpler model.
Michael Graziano suggests consciousness arose from a simple merger of at least two characteristics known to be present in the brain: a somewhat flexible sense of self, and a theory of mind. It is relatively easy to understand how the prediction of predator (or prey) behavior would be a great evolutionary advantage. Such an ability has been shown to exist. It is called theory of mind (or, more simply stated, mind reading).
It is also known that the brain maintains one (or more) models of the self. These models may incorporate tools and sensory device information. Consciousness may arise when the two abilities interact.
Daniel Dennett describes consciousness as an illusion …a sort of user interface of the mind …in that a good interface does not indicate the complexity of what lies beneath.
For the most part, consciousness remains a passive observer, pretending to be a player in the world, exploiting the illusion of free will as evidence. But then why would evolution allow it to exist? Perhaps because it is built from existing parts that have proven valuable, and it may itself be valuable in long-term planning and social interaction.
Philosophical zombies are identical to humans in every way, except that qualia do not exist for them. Qualia are subjective (conscious) experiences. Reminds me of the Borg.
Finally, Nick Chater describes the mind as being created moment by moment, an improvisation of recalled analogous precedents and sensory inputs with no real depth, no dedication to truth, no permanent self, no beliefs. Quite simple indeed. Perhaps too simple?
A complex system is characterized by a connected network of entities. Each entity must be capable of changing state based upon communication with connected entities. Typical requirements for complex systems to arise are: many parts that change state depending upon signals; many parts connected in a network; and parts that self-organize (fall into place automatically). Characteristics of these systems might include behaviors that are more than the sum of the parts (non-linearity or emergence - not reducible), the ability to become parts of larger systems, growth or decay at exponential rates (primarily due to feedback loops), and adaptation to local conditions without any top-down guidance.
Often the behavior of the system is much richer than the behaviors of individual parts. While this is a descriptive definition, Melanie Mitchell notes it is dependent upon the definition of complexity, which is itself ill-defined.
Reductionism, the bedrock of scientific thought, fails to cope with one of the most important ideas to come along in the last century - emergence.
Cosma Shalizi defines emergent properties as ones which arise from the interactions of the lower-level entities, but which the latter themselves do not display.
Is emergence a phase transition? Sometimes, as when a collection of units exhibits behaviors that are unpredictable based on our knowledge of their individual characteristics. Emergence is the universe's way of producing novelty.
John Miller believes reductionism, the dominant force in science, takes a top-down approach, while complex systems do the opposite. Reduction gives us little insight into construction. And it is in construction that complexity abounds.
As a result, computer simulation becomes a preferred tool for analysis.
Likewise, George Ellis believes there is a disconnect within reductionist thinking: the primary paradigm of science is incompatible with complex systems. Bottom up construction leads to top down causality. Robert Laughlin asserts the age of emergence (collective behavior) has supplanted the age of reductionism.
Due to the rampant infestation of non-linearities, and the bottom up construction, complex systems do not lend themselves to traditional reductionist analysis.
It has been argued cooperation is as important as competition in evolution. Cooperative creatures often conserve energy, thus making it available to replicate and evolve faster compared to those who don't, effectively reducing the intensity of selection. Cooperation usually implies intent, but in this monograph it takes on a broader meaning including collaboration and even concurrence.
Steven Johnson managed to break emergence into a set of key characteristics centered around principles and feedback. The principles include more-is-different, ignorance is useful, encourage random encounters, look for patterns in the signs, and pay attention to neighbors. Feedback may be positive, leading to exponential growth, or negative, leading to aversion. Agent & system rules can alter feedback.
Lee Dugatkin is among a growing group of scientists responsible for the advancements in our understanding of cooperation. Especially enlightening are his descriptions of the Allee effect (the positive correlation between population density and individual fitness of a population or species), Hamilton's rule for when altruism is beneficial (when the genetic relatedness between actor & recipient times the fitness benefit to the recipient exceeds the fitness cost to the actor), and the strongly contrasting emphases on competition (Huxley) and cooperation (Kropotkin).
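To make the direction of the inequality concrete, here is a minimal sketch of Hamilton's rule in Python (the numbers are purely illustrative, not from Dugatkin):

```python
def altruism_favored(r, b, c):
    """Hamilton's rule: altruism is favored by selection when relatedness (r)
    times the benefit to the recipient (b) exceeds the cost to the actor (c)."""
    return r * b > c

# Illustrative values only: helping a full sibling (r = 0.5) vs a first cousin (r = 0.125)
print(altruism_favored(r=0.5, b=3.0, c=1.0))    # True:  0.5 * 3 > 1
print(altruism_favored(r=0.125, b=3.0, c=1.0))  # False: 0.375 < 1
```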
David Krakauer argues there is a range of complexities the various life forms might gravitate to - from profoundly simple to staggeringly complex. Each has advantages. Recall the Darwinian imperative may be stated as those who reproduce most efficiently and quickly generally win, so it follows logically that creatures would dispense with as much complexity as possible.
For example, viruses don't even possess the machinery to replicate and have to steal it from others (which is the reason many don't consider them alive).
Why would complex creatures ever evolve? Uncertainty. If the environment is unpredictable, there may be significant advantages to carrying additional complexity. And one of the many ways to achieve complexity is through cooperation.
Really interesting behaviors are often unpredictable. Networked agents that seem unsophisticated when acting alone sometimes exhibit striking behaviors as a group. Such behaviors are said to be emergent.
Emergent systems won't reveal themselves to reductionist analysis. Fortunately, there are useful tools for modeling, analysis, and visualization of such systems: agent-based computer simulation (ABM) and network science being foremost among them.
Agent-based modeling (ABM) is a type of computer simulation. Each agent's behavior is encoded into a few simple rules that are influenced by other connected agents and the environment.
William Rand notes that ABM has several advantages over more traditional equation-based models, which generally assume agent homogeneity, are temporally continuous, usually require aggregate knowledge, and are top-down (reductionist).
NetLogo is a graphical agent-based simulation language that allows entities (agents) to interact with one another, and it is wonderfully easy to use. It is the perfect environment in which to model cooperative systems. But the NetLogo models are not required to understand cooperative systems. They simply enrich the experience. Simulation is an especially useful method for studying complex systems composed of many agents repeatedly interacting in a non-linear way. Such systems often exhibit emergent behavior which cannot be predicted using the traditional analytic methods of decomposition.
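To give a feel for how little machinery an agent-based model actually needs, here is a generic sketch in Python (my own toy example, not one of the NetLogo models referenced on this site): an "idea" spreads through a population by random pairwise encounters, and the characteristic slow-then-explosive-then-saturating adoption curve emerges from nothing more than that rule.

```python
import random

N, STEPS = 200, 2000
agents = [False] * N        # False = has not yet heard the idea
agents[0] = True            # seed a single adopter

adopted = []
for _ in range(STEPS):
    a, b = random.sample(range(N), 2)   # two agents meet at random
    if agents[a] or agents[b]:          # the idea passes on contact
        agents[a] = agents[b] = True
    adopted.append(sum(agents))

# Adoption count every 250 encounters - the S-shaped curve appears on its own.
print(adopted[::250])
```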
Networks are a most useful tool for modeling complex systems due to their two basic elements of construction: nodes (vertices, agents or entities) and links (connections or edges). Connections are generally far more important than nodes.
As a network becomes larger (more nodes), the number of potential connections grows much faster than the number of nodes.
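The count is easy to verify: with n nodes there are n(n-1)/2 potential (undirected) connections, which grows roughly as the square of the number of nodes.

```python
def potential_links(n):
    # each of the n nodes can pair with (n - 1) others; divide by 2 to avoid double counting
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, potential_links(n))    # 45, 4950, 499500
```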
Laszlo Barabasi notes that network science is powerful because structure often determines function — understanding the content of an object is not enough; the topology determines how that content manifests.
Herbert Spencer coined the phrase survival of the fittest as an alternative to Darwin's somewhat gentler term natural selection. It certainly sounds harsher - especially when applied to economics, as Spencer was wont to do. The phrase has since evolved to represent cutthroat competition.
However there exists another mechanism in the evolution of species that, many have argued, is every bit as powerful: cooperation. The varieties of cooperation - the algorithms (the methods, or recipes) are the subject of this monograph.
Cooperation, by definition, can only occur between two or more agents - agents with enough sophistication to be able to sense the state of the agents to which they are connected. The connectedness of a network often determines the degree of cooperation, and the behavior of the system as a whole.
Nature is far more cooperative and symbiotic than we understand because we have this tendency to just look at things by abstracting tiny little sections of the complexity out of the living whole, the dynamic transformation that we're part of.
An example of interspecies cooperation is the microbiome. At one time the assortment of single-celled creatures residing in the gut were thought to enhance the digestive system and little else. But rapidly increasing evidence indicates the microbiome is more intimately integrated into the lives of multicellular creatures than first believed.
Every large living entity on earth seems to attract a collection of microbes. They are usually not pathogenic and often helpful, even crucial, to the host in some way. Examples include the gut microbiome, the root (rhizosphere) microbiome, and skin flora. Often the relationship between host & microbiome is so entangled that many consider the combination as one entity.
Evolution may be the most powerful process in the universe. Though undirected, it has moulded life into ever more complex entities, including creatures that can now contemplate the process.
In 1995, John Maynard Smith and Eörs Szathmáry compiled a list of major transitions in evolution. They selected seven candidates: the genetic code, the formation of cells, chromosomes, eukaryotes (complex cells), multicellularity, society (human and eusocial), and language (constituting a second line of information inheritance). Many of these transitions required cooperation.
Stuart West, et al, offer an updated view, focused on what constitutes an individual.
Addy Pross notes the standard Darwinian view that life replicates and therefore evolves should be expressed: some replicating things evolve and therefore become living — a simple, but profound, difference.
Emergence, cooperation, and mutation are the great creative forces in the evolution of complex systems — competition and selection discard the nonviable and unlucky.
Copying errors during replication (mutation) can lead to three outcomes: no effect (neutral); non-viability (sterility, immediate or eventual death), or; improved function. Error-free copying essentially stops evolution as there will be no variability in the next generation. Large mutations are generally disastrous, but small mutations often survive, allowing selection something to act upon. So some error is necessary, but too much will eventually produce too few viable offspring — an error catastrophe. The allowable amount of error diminishes exponentially with genome length. Jim Rutt notes that an error catastrophe occurs when reproductive fidelity isn't sufficiently high; severe resource constraints would have a similar effect.
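A rough back-of-the-envelope illustration of why tolerable error shrinks with genome length (my own toy calculation, not from the sources above): if each base copies incorrectly with independent probability u, the chance of a perfect copy of a genome of length L is (1 - u)^L.

```python
def perfect_copy_prob(u, L):
    # probability that every one of L bases copies correctly,
    # assuming an independent per-base error rate u (a toy model)
    return (1 - u) ** L

for L in (1_000, 1_000_000, 1_000_000_000):
    print(f"L = {L:>13,}  P(perfect copy) = {perfect_copy_prob(1e-6, L):.3g}")
```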
Adaptation is an optimization dynamics transferring information from the environment into the agent reducing uncertainty about states of the world — Krakauer & Rockmore
The competitive exclusion principle states that species competing for a limited resource cannot coexist at constant population values. There can be only one winner. The process of species adaptation can occur in many ways. An organism may move to an adjacent resource to avoid direct competition, or find a way to cheat, or migrate, or stumble upon a new resource, or enter into a symbiotic relationship with another.
Selection (the environment of threats and resources) acts at multiple levels (genes, individuals and groups).
Michael Levin notes that planaria avoid the Weismann barrier (children do not inherit mutations acquired by parents) by reproducing simply by tearing themselves in two and regrowing the missing halves. This leaves a complete mess in the DNA, as every cell's mutations, accumulated over hundreds of millions of years, are inherited. Yet it results in an animal that is immortal, excellent at regeneration with perfect fidelity, and resistant to cancer. Why does the animal with the worst genome imaginable have the best fidelity? He speculates there are twin mechanisms for producing successful organisms - genetic and bioelectrical - and if one is weak, the other has to be stronger. Part of the evidence comes from tadpoles with "Picasso faces" - tadpoles that have their eyes and other facial features surgically moved from their natural positions. Yet they develop into frogs with normal faces. He suspects bio-electric gradients are guiding these processes.
W. Brian Arthur has proposed Darwin's theory of natural selection may be successfully applied to the evolution of technology with an additional proviso. Instead of merely allowing a set of entities to compete for survival, some entities may combine (or be combined), effectively becoming new entities in the pool. But this is a form of cooperation (in the broad sense used in this monograph). And likely will add system complexity in doing so.
Some have tried to define life by listing the characteristics of living things, such as metabolism, homeostasis, digital inheritance, adaptation, reproduction, etc – but the list soon becomes impossible to complete, and various actual life forms only hold a fraction of the characteristics.
The process of evolution requires something to act upon. That something is generally life, for which it is crucial. But what do we mean by life? Definitions abound. It is often more convenient to simply consider characteristics.
Stuart Bartlett and Michael Wong have introduced a fascinating model of the conditions necessary for life (imagined, simulated and actual) to exist — using the term lyfe to indicate they are considering possibilities beyond earth. It is based on what they term ‘four pillars’ — four characteristics that must be present in a system in order for it to qualify as living. These are: dissipation (the existence of thermodynamic gradients, driving forces, and the dissipation thereof); homeostasis (the ability to regulate physical variables to within viable ranges); autocatalysis (the ability to grow exponentially when resources are abundant), and; learning (the ability to encode and process information).
Where does the constant flow of energy required for evolution and all that has emerged from it come from? Jeremy England (dissipative-driven adaptation) thinks the second law of thermodynamics is the source.
The order of the universe at its beginning was enormous - like a gigantic box with all its contents crushed into one corner. The unwinding of this order - the redistribution of the contents toward something approximating equilibrium - is a source of energy. Some matter found itself in a position to act as a conduit for the energy released in the transfer from order to disorder. Using a portion of the energy passing through, it found ever more complex ways to increase the flow. That is what powers the growth in complexity, life, evolution and cooperation - without purpose or goal beyond that.
Michael Hinczewski of Case Western Reserve University offers an enlightening explanation on thermodynamics (entropy) and the origin of life.
Brian Greene has noted two exceptions to the universe’s constant drive toward increasing entropy: star cores and life (including machines made by humans). The universe is undergoing an epic death march from order to bland uniformity, and relatively tiny stellar nuclear furnaces and remote islands of life are stealing a bit of the available energy to resist their own decay.
Microbes (like bacteria and other single cell forms) are often relatively simple creatures. Even so, they are social, and therefore interactions like cooperation are a strong possibility.
We should perhaps begin by defining what cooperation is, and is not. Taking a basic approach, W. D. Hamilton classified social behaviors into four elementary groupings according to their fitness consequences for actor and recipient: mutual benefit, selfishness, altruism, and spite. Many more forms of cooperation have since been suggested.
Cooperation is, by its very nature, a social activity. Cooperation (and its opposite, competition) occurs at many levels: genetic, cellular, organismal, between and within species. Cooperative systems can have an efficiency advantage over purely competing ones, but it is the interplay between such systems that becomes interesting. Humans (and perhaps others) have carried this a step further: the sharing of ideas, or memetic cooperation.
Cooperation flourishes only if the survival rate is greater than when not cooperating and if a policing function exists to prevent or exclude cheating.
Symbiosis (often called mutualism) is the most important form. Commensalism and parasitism are sometimes included but don't really qualify as cooperation, as the benefits are not mutual. We can exclude reciprocity as well, as it is a benefit of cooperation, not a cause. Collaboration is generally used as a synonym for cooperation. That leaves altruism as the second significant type of cooperation.
Examples of mutualistic microbial cooperation abound.
And there are examples of altruistic cooperation as well, but it is often difficult to differentiate these from kin selection. As W. D. Hamilton noted, a gene will be successful provided copies reproduce, even if not from the individual.
But while microbes exhibit a wide array of cooperation, their behavioral repertoire is ultimately limited by a lack of complexity. Although, to be fair, the complexity of single-celled creatures at the dawn of cell specialization may be greater than previously thought.
Around two billion years ago, two single-celled creatures, a bacterium and an archaeon, were proceeding along normally. Then one attempted to ‘eat’ the other. At any rate, somehow the bacterium ended up inside the archaeon. And stayed there (its descendants became mitochondria). Thus was born complex life (eukaryotes). Complex cells are characterized by a nucleus which contains the cell's DNA, and organelles of several types, including mitochondria, the providers of enormous amounts of energy - energy which allows such complexity. Complex life may be a required precursor to a major breakthrough in cooperation: multicellularity.
Lynn Margulis was a pioneering and compelling advocate of cooperation as a major force in the evolution of life. Her idea of endosymbiosis - the radical notion that one cell could be absorbed into another without significant loss of functionality in either - was an extreme form of cooperation. We only have evidence for it happening twice in the history of life on earth - mitochondria (respiration centers) and chloroplasts (photosynthesis).
While mitochondria provide energy to all complex (eukaryotic) cells, and thus are extremely important, they are not the only organelle within cells. The observation that organelles exist within bacteria (prokaryotes) has led to speculation about possible alternative evolutionary pathways for eukaryotes. A network of cells, rather than a simple tree, might account for this. This eliminates the extreme claim this critical event occurred only once in the evolution of life.
Thijs Ettema has proposed a refined theory of eukaryote origins. In 2015, he found DNA evidence in the seabed for a new family of archaea he called Asgard. And their DNA is closer to eukaryotic DNA than either bacteria or regular archaea. Two years later Hiroyuki Imachi found a living sample and a way to grow it in the lab. Prometheoarchaeum syntrophicum can only live with at least one other creature in a cooperative relationship. Unusually, it has long protrusions in which it nestles its partners - protrusions that hint at a way to eventually engulf.
Eukaryotes generally are larger and much more complex than prokaryotes. They could become that way because of energy (in the form of ATP) made available by mitochondria. The first endosymbiotic event resulted in the advent of very energetic cells: eukaryotes.
Mitochondria have since evolved into miniature power plants. Like most of life, they are wonderfully complex. But the essence of their structure is the convoluted internal membrane, across which they maintain an enormous electrical potential.
Nick Lane believes more attention needs to be paid to energetics in evolutionary theory. Life has used a variety of energy sources, including: alkaline hydrothermal vents; solar (algae, plant photosynthesis); chemical; mitochondrial partnership; gut microbiome partnership; cooking; animal domestication (labor), and fossil fuels. It seems generating charge across a membrane is as universal as the genetic code.
In general, thermodynamically closed systems (like the universe) will always decay (increase entropy), but open (non-equilibrium) systems allow the growth of complexity. With this new-found energy, these cells grew much more complex, allowing a speed boost and further additions like plastids & Golgi. The two events of endosymbiosis are stunning (and exceedingly rare) acts of cooperation. They led to everything else. And they eventually became the precursor to the next great step in evolution: multicellularity.
Thermodynamically closed systems will always decay (increase entropy), but open (non-equilibrium) biological systems allow the growth of complexity. In 2000, Adami, Ofria, and Collier proposed that biological complexity tends to increase as evolution proceeds.
There are (at least) four things to remember about the second law of thermodynamics: it is a probabilistic concept - the chance of a scrambled egg being reassembled into the former whole is staggeringly low; any highly ordered system will move toward equilibrium, increasing disorder (entropy); the entropy of a system plus its environment never decreases, and; the directionality of time (as well as cause and effect) is directly related to the continuous growth of entropy.
There are degrees of multicellularity, from simple clustering of identical cells, to communities of different types of cells, to differentiated multicellular organisms.
Differentiated multicellular organisms are the most complex and interesting and most discussion herein refers to them.
The leap to multicellularity may not be that difficult as there is evidence of it occurring multiple times in evolution (The genes held in common by all multicellular lineages have to do with housekeeping, not multicellularity — evidence of independent origins of multicellularity). Every time a multicellular animal evolved, systematic cell differentiation did as well. Cassandra Extavour thinks the division of reproductive labor in a multicellular entity requires that cells work together to maximize the reproductive potential of the entity of which they are a part. Somatic cells act as cooperators and germ cells act as defectors. Multicellularity as we know it might not have been possible without having both the defectors and the cooperators — each type of behavior is necessary.
There are three leading theories for evolution of multicellularity:
Multicellularity is often touted as an example of cooperation through group selection wherein individual independent cells sacrifice their drive to reproduce (ceding it to specially selected cells). A new type of individual is thus created, upon which the forces of evolution act. And so complexity, partly through cooperation, ever grows. But in fact, since all the cells contain the same set of genes, this is not true group selection.
The path to multicellularity probably followed something along this line: simple clumping reveals advantages of size; endosymbiosis creates energetic cells; greater energy allows more complex genomes (genes that control other genes), and; cell differentiation (division of labor).
In general, multicellular creatures live much longer than mono-cellular organisms — it simply takes more time for them to mature. A single-celled organism may go through thousands of generations in a lifetime of a multicellular organism, making it very likely it will evolve into a successful predator. A multicellular creature cannot exist without providing a defense that evolves as quickly as a single-celled one – an immune system. This may be why it took so long for multicellular life to appear in the evolutionary record.
Some multicellular groups separate the sterile somatic cell line and a germ cell line. Weismannists are rare (vertebrates, arthropods, Volvox, …), as most species have the capacity for somatic embryogenesis (land plants, most algae, many invertebrates, …).
Science Magazine published an interesting article suggesting the transition to multicellularity may be far simpler than first thought. Choanoflagellates are single-celled creatures that have many of the genes thought to be required for multicellular life, and additionally bear a remarkable resemblance to multicellular sponges.
The real explosion in the complexity of multicellularity was the development of cells which differ in functionality, or cell specialization. It likely appeared when genes became capable of controlling other genes.
Cooperation occurs at many levels of complexity and in very diverse ways, from simple clustering of single-celled entities, to multi-country economic treaties. For example, the clustering of rudimentary cells, perhaps yielding reduced predation simply due to increased size, also results in inside cells being exposed to a different environment than outside cells. This can in turn offer an opportunity for cell specialization.
Cell specialization can result in a very different sort of organism.
With specialization comes death. In multicellular life, programmed death is used as a tool during development and for discarding experiments (the elderly, unlucky or unsuited) once they have fulfilled their role. Are these cooperative?
Cancer manifests not so much as a disease but as a betrayal - a fundamental violation of an agreement to cooperate.
Among the most important tools for analyzing cooperation is game theory, and in particular, a game called the prisoner's dilemma. Consider the following scenario: two people are arrested for committing a crime, but before being arrested they vowed to one another never to confess. The police keep them separated. The jail term each one is likely to receive (payoff matrix) depends upon both their own action and that of their partner, as shown.
There are 4 possible outcomes: if prisoner 1 chooses to confess (defect), he faces either 4 years in jail (if his partner confesses) or 0 (if his partner does not confess), and; if prisoner 1 doesn't confess (cooperates with his partner), his sentence will be either 6 years (if his partner confesses) or 2 years (if he does not).
Confessing (never cooperating) is the dominant choice - it yields a shorter sentence regardless of what the partner does. So, from an evolutionary perspective, this form of cooperation would never evolve. However, if the game is played iteratively, and the participants develop the ability to recognize and recall their opponents' history, the various strategies that may play out become more interesting.
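The dominance argument can be checked mechanically. A minimal sketch using the jail terms above (years in jail, so smaller is better):

```python
# Years in jail for prisoner 1, indexed by (prisoner 1's move, prisoner 2's move).
# "C" = stay silent (cooperate with the partner), "D" = confess (defect).
years = {("D", "D"): 4, ("D", "C"): 0,
         ("C", "D"): 6, ("C", "C"): 2}

for partner in ("C", "D"):
    best = min(("C", "D"), key=lambda me: years[(me, partner)])
    print(f"if the partner plays {partner}, prisoner 1 does best by playing {best}")
# Defecting wins either way - the dominant strategy in a one-shot game.
```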
If a prisoner's dilemma game is played where the agents may defect, cooperate or merge, the winning strategy is always cooperation. – Michael Levin
The iterated prisoner's dilemma tournament was announced in the late 1970's by Robert Axelrod. He invited contestants to submit a strategy and these were played against one another on a computer. The various strategies did not have access to the number of rounds the game would last (if they did, the optimal strategy would be to always defect). The ultimate winner was the delightfully simple strategy tit for tat (cooperate on the first move, thereafter simply reflect your opponent's previous move). More importantly, the tournament demonstrated how cooperation might evolve.
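A minimal sketch of tit for tat in action (the per-round points below are the usual textbook values, not Axelrod's exact tournament settings): it cooperates with itself indefinitely, and against a pure defector it loses only the first round.

```python
# Points per round (higher is better): mutual cooperation 3, mutual defection 1,
# lone defector 5, lone cooperator 0.
PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
          ("D", "C"): (5, 0), ("C", "D"): (0, 5)}

def tit_for_tat(opponent_history):          # cooperate first, then mirror
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(p1, p2, rounds=200):
    s1 = s2 = 0
    h1, h2 = [], []                         # each player's record of the other's moves
    for _ in range(rounds):
        m1, m2 = p1(h1), p2(h2)
        a, b = PAYOFF[(m1, m2)]
        s1, s2 = s1 + a, s2 + b
        h1.append(m2)
        h2.append(m1)
    return s1, s2

print(play(tit_for_tat, tit_for_tat))       # (600, 600): steady mutual cooperation
print(play(tit_for_tat, always_defect))     # (199, 204): exploited only in round one
```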
The Tragedy of the Commons can occur whenever some of us cooperate for mutual benefit but others see that they could do better for themselves by breaking the cooperation. It may be modeled similarly to the prisoner's dilemma. Elinor Ostrom won the Nobel Prize for her analysis of how people actually cooperate locally to find solutions to this problem.
Simon DeDeo details a more general simulation in which agents have slightly greater variability, intelligence and memory. The unexpected result was a system that oscillated between states of no cooperation and complete dominance by a single agent species.
Hilbe, Simsa, Chatterjee and Nowak have recently proposed a repeated game where the reward varies at each iteration in proportion to the cooperation level in the previous round. Such a game allows the participants to get clear feedback from their actions. As a result, cooperation is greatly increased.
Carrie Arnold reports the child’s game Rock, Paper, Scissors may have intriguing applications to genetics. The game consists of very straightforward rules. Upon an agreed starting signal, participants simultaneously reveal one of three choices: a fist (rock); an open hand (paper), or; two fingers (scissors). Rock beats (smashes) scissors, paper beats (covers) rock, and scissors beats (cuts) paper. Ties are replayed. As one might predict, when thousands of games are played (or simulated), there is no winning strategy, just ebbs and flows.
There are several biological systems employing similar arrangements, including some species of bacteria and lizards. It might even help explain the enormous biodiversity around us, which is difficult to do with only competitive relationships.
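A toy simulation of that cyclic dominance (my own sketch: three strains meet at random and the loser of each encounter converts to the winner's type) shows the characteristic ebb and flow. In a well-mixed toy like this one a strain can eventually disappear by chance; spatial structure, as in the bacterial experiments, does a better job of preserving all three.

```python
import random

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
pop = ["rock"] * 100 + ["paper"] * 100 + ["scissors"] * 100   # equal start

for step in range(20_001):
    if step % 5_000 == 0:
        print(step, {k: pop.count(k) for k in BEATS})
    i, j = random.sample(range(len(pop)), 2)
    a, b = pop[i], pop[j]
    if BEATS[a] == b:      # a beats b: b converts to a's type
        pop[j] = a
    elif BEATS[b] == a:    # b beats a: a converts to b's type
        pop[i] = b
```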
Few tools are as useful as simulation for exploring and visualizing various game-theoretic models - especially complex biological models.
Consider a wolf-sheep model based upon the Lotka-Volterra mathematical formulation. The population of sheep in the next generation will change based upon their current number and the number of wolves. Likewise, the number of wolves depends upon the current population and the number of sheep. And either population may become low enough to disappear. An excellent, interactive NetLogo model may be found here.
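For readers who prefer equations to agents, the same predator-prey cycle can be sketched directly from the Lotka-Volterra formulation (the coefficients below are illustrative, not taken from the NetLogo model):

```python
# Discrete-time Lotka-Volterra sketch: sheep grow and are eaten by wolves;
# wolves starve on their own and grow by eating sheep.
sheep, wolves = 100.0, 20.0
a, b, c, d = 0.05, 0.002, 0.04, 0.0005     # illustrative rates

for t in range(501):
    if t % 100 == 0:
        print(f"t={t:3d}  sheep={sheep:7.1f}  wolves={wolves:6.1f}")
    ds = a * sheep - b * sheep * wolves
    dw = d * sheep * wolves - c * wolves
    sheep = max(sheep + ds, 0.0)
    wolves = max(wolves + dw, 0.0)
# The populations chase each other in cycles around the equilibrium (80 sheep, 25 wolves).
```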
In 1986, Craig Reynolds released Boids, a simulation that demonstrated how complex group behavior could result from simple rules. In this case, the rules are: move in the same mean direction as neighbors (alignment); remain close to neighbors (long range attraction), and; avoid collisions with neighbors (short range repulsion).
The result is a simulated flock that behaves in the way of actual birds (murmuration). Such striking behaviors, while appearing bafflingly complex, are based upon participants following simple rules. A wonderful example of self-organization or emergent behavior - no leader or intent required.
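Those three rules fit comfortably in a page of code. A minimal sketch (my own simplified NumPy version, not Reynolds' original implementation):

```python
import numpy as np

N, STEPS, RADIUS = 50, 200, 10.0
rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (N, 2))          # positions in a 100 x 100 wrapping world
vel = rng.uniform(-1, 1, (N, 2))

for _ in range(STEPS):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d < RADIUS) & (d > 0)
        if not near.any():
            continue
        alignment  = vel[near].mean(axis=0) - vel[i]        # match neighbors' heading
        cohesion   = pos[near].mean(axis=0) - pos[i]        # drift toward neighbors
        crowded    = (d < RADIUS / 3) & (d > 0)
        separation = (pos[i] - pos[crowded]).sum(axis=0)    # back away from the closest
        new_vel[i] = vel[i] + 0.05 * alignment + 0.01 * cohesion + 0.05 * separation
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    vel = new_vel / np.maximum(speed, 1e-9) * np.minimum(speed, 2.0)   # cap the speed
    pos = (pos + vel) % 100

print("mean heading after settling:", vel.mean(axis=0))
```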
A large selection of mathematical and biological models have been constructed to investigate group coordination.
We humans seem to possess a fondness for classification, and so it is with sociality. Sociality reveals itself as behaviors, such as cooperative brood care, altruism, and a division of labor (perhaps first invented by evolving multicellular creatures), wisdom of the crowds, trial by jury, markets, and even the collective computation of reality.
Martin Nowak has described five mechanisms of cooperation: direct reciprocity (tit-for-tat); indirect reciprocity (reputation); spatial selection (networks); multilevel (group) selection, and; kin selection (relatedness).
The major arguments supporting the evolution of sociality are kin or group selection, but the argument is far from settled, with the majority opinion falling on the side of kin selection. In 2007, Robert Trivers was awarded the Crafoord Prize in Biosciences for pioneering work done in the early 70's. Specifically, he proposed several critical ideas on cooperation, one of which is very salient here - reciprocal altruism, or cooperation between individuals that are not related, which can only develop if the animals cooperate over a long period of time and if they are able to recognize one another.
Natural selection acts upon biological individuals through their genes. Some have argued that natural selection may also act upon groups of individuals. Jerry Coyne and others have argued vigorously and, I believe, persuasively this is unlikely. Sam Bowles suggests group selection has never cleared the hurdle of widespread adoption for one primary reason - it is extremely difficult to find a case in nature where genetic inheritance is less powerful than altruism, except for kin selection. He also noted that cooperation may confer benefits to all participating individuals (mutualism, which can be motivated by self-interest), or not (altruism). Altruism may also flourish where a high likelihood of reciprocation exists.
But there is another evolutionary system that contributes to the growth of cooperation: culture. Culture changes much faster than genetically driven (biological) systems and has co-evolved with them. It also is completely dependent on their existence. It is within cultures that group selection occurs. And this may account for the rise of the type of altruistic behavior in which individuals sacrifice for unrelated others.
Group selection is relevant (essential) for social behavior including economics - Yaneer Bar-Yam (Twitter, 20 January 2019)
There isn't a meaningful conflict here. Group selection doesn't occur in biological evolution, but must in the evolution of social groups.
W. D. Hamilton also proposed the idea of inclusive fitness (very similar to kin selection), where genes which indirectly promote the survival of other individuals who carry the same genes (such as relatives), tend to be more successful.
Hamilton's rule is a central theorem of inclusive fitness (kin selection) theory. It also may not be accurate. - Andrew Bourke
E. O. Wilson now believes altruism evolved by progressive provisioning — the mother (or parents) provide food & care for an increasingly longer time, then finally lose the gene that determines dispersal time. He has also endorsed a form of group selection by Nowak, Tarnita & Wilson that proposes selfishness beats altruism within groups, but altruistic groups beat selfish groups.
Patricia Churchland believes biology alone is sufficient to account for the evolution of morality. And she cites an intriguing possibility as the cause: warm bloodedness (endothermy), which allowed bigger brains to evolve. Endotherms have the ability to perform well over a wider range of temperatures than their cold blooded competition, but with a requirement to consume an order of magnitude more calories.
Being smart helps in gathering all those calories. So an ever increasing brain size began to evolve. However, increasing brain size (cortex in mammals, nidopallium in birds) requires parental care beyond birth, which is the beginnings of a social structure …and morality.
Michael Tomasello believes true cooperation is unique to humans, based upon shared intentionality. Mutualism (collaboration), not altruism, may be the primary process in the evolution of cooperation (mutual cooperation is two entities interacting to the benefit of both). He further suggests that morality is composed of social mores which encourage cooperation.
To get to a human level of cooperation (compared to other apes) requires three additional behaviors: (1) cognitive skills sufficient to have a shared intentionality. The more cognitively sophisticated the organisms involved, the easier to assign intent to their behavior, (2) trust and tolerance, and (3) social norms.
Biographer Oren Harman weaves together the state of altruism theory with Price's contribution and troubled life. Harman identifies two types of altruism: biological (mindless) and psychological (requiring intention, and therefore a brain …and perhaps even the illusion of free will).
He also notes three theories that attempt to explain altruism: nepotism (kin selection), reciprocity (similar to tit-for-tat) and group selection (selection acting upon groups instead of individuals). Kin selection takes the gene's point of view - it doesn't matter in which individual a gene resides, as relatives probably carry the same one.
Albert Kao, et al note several resource-related benefits of sociality, including collective territoriality, detection/capture, niche expansion, consumption and dispersal.
As Robert Trivers first theorized, reciprocity works with individuals, where one will temporarily lower its fitness to help another in the hope the act later will be returned. As the game tit-for-tat revealed, this does require intention. The third is group selection, where individual sacrifice may benefit the group.
George Price developed one of the landmark equations in evolutionary theory (and behavioral economics). It had to do with the root cause of altruism. But after its discovery, it occurred to him that if such an equation existed - if altruistic behaviors always helped the gene, or individual, or group - then pure unselfish altruism could not exist. He found that result emotionally devastating. Perhaps best summarized by this quote:
From Richard Dawkins: “The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored. In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won't find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.”
That is one person’s interpretation of Darwin’s theory. Imagine Price’s angst in becoming the first to offer mathematical proof.
The coordination of simpler entities led to more complex ones - in a kind of evolutionary, repeatable pattern.
The seeming progression toward complexity, from genes to genomes, from cells into complex cells, from complex cells into multicellular bodies, is driven by the available energy. It's as if the genes are forever assembling layers around themselves.
The next step in this ancient, successful pattern appears to be the extreme cooperation of bodies exhibited by eusociality - cooperation so thorough that individuals lose the ability to independently reproduce, so colonies then behave as one. Eusocial animals include bees, ants, termites, wasps and naked mole rats. Eusocial animals exhibit three characteristics: a division of labor into reproductive and non-reproductive groups, cooperative brood care, and overlapping generations within a colony of adults. With eusocial species, leadership - in any meaningful sense - doesn't exist, and the behavior of individuals is dictated by their genes and environment.
There is a curious break in the gene-centric pattern of evolution. One animal retained the ability for individuals to reproduce, yet was able to build complex entities requiring the participation of thousands - humans. It will be fascinating to see whether this pattern deviation survives another millennium.
Eusocial behavior has some natural limitations in complexity, however. As Yaneer Bar-Yam explains in Complexity Rising, this is predominantly due to limited information flow through the hierarchical control structure.
There is a wide variety of ways in which cooperating groups might arrange themselves, from essentially leaderless to strongly hierarchical (most eusocial species). Culture can be so complex that individuals are part of several networks simultaneously (however slight their participation). And they may reduce or increase their participation over time. The various groups have authorities ranging from none to absolute, and the authority (forced cooperation) may be distributed from one to many. Entry into these groups may be by birth (family, citizenship), mutual choice (friendships), sole choice (political organizations), … clearly, this is a hugely complicated system.
Wherever there is cooperation, there will be cheating.
In snowflake yeast, the single-cell bottleneck means that cheater cells are stuck with a community of cheaters. The group won’t be able to survive on its own. “The simplest and most general explanation for why multicellular organisms pass through a single-cell stage is to ensure that all the cells composing the organism are as close to perfectly related to each other as they could be,” said Rick Grosberg, an evolutionary biologist at the University of California, Davis. “Everyone shares the same genetic interests. The bottleneck forces an alliance.”
The first study we did, we found that different strains do mix, and they do cheat. Two different strains will not contribute equally to spore and stalks. — Joan Strassmann
Researchers at the Max Planck Institute for Chemical Ecology found that bacterial colonies on a 2D surface, which cooperate by sharing amino acids and vitamins, will spatially exclude cheaters. Another Max Planck Institute group demonstrated that lone bacteria that partner with others (in a division of labor arrangement) are more efficient. The probable next step in the evolution of multicellularity was the development of cells that differ in functionality (also known as differentiation, specialization, or division of labor). This has only been accomplished by eukaryotes, possibly because of the energy required.
In moments of starvation, these soil-dwelling amoebas crowd together and build a tower rising above the ground from which they disperse their spores to other, more hospitable places. Some 20 percent of the group will sacrifice themselves to build the tower with their bodies, while the rest take advantage of it to spread their genes.
One of the more effective strategies against cheating cells occurs during reproduction. Most multicellular organisms follow the mono-cellular bottleneck strategy, that is, starting every new organism with a single cell.
In Nature Ecology & Evolution, Ågren, Davies & Foster assert the evolution of enforcement is critical for cooperation.
The potential degree of cooperation depends heavily upon the sophistication of the agents involved. Even simple agents may cheat or collaborate without any apparent awareness or intent. Once the agents develop the cognitive skills necessary to form groups, recognize others (and recall their past behaviors), create a theory of mind, express intentionality, empathy and trust, and other related functions - the potential for very sophisticated forms of cooperation (or the opposite) exist.
The ultimatum game is a clever way to demonstrate irrationality and a sense of fairness. The game starts when player 1 receives an amount of money which must be shared with player 2. Player 2 may reject the offer, which means both players must return all the money, or accept the offer, which means both players keep their share. The "rational" choice is for player 2 to accept any amount at all, while the "emotional" choice is for player 2 to refuse if she feels the amount is unfairly low. Repeated observations show that if the amount shared falls to less than 30%, most people reject it.
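A minimal sketch of the game (the 30% rejection threshold echoes the observation above; the pot size and everything else is illustrative):

```python
POT = 10.0

def responder_accepts(offer, threshold=0.30):
    # "Emotional" responder: reject anything below 30% of the pot.
    return offer >= threshold * POT

def play(offer):
    if responder_accepts(offer):
        return POT - offer, offer    # proposer keeps the remainder
    return 0.0, 0.0                  # rejection: both walk away empty-handed

for offer in (1.0, 2.0, 3.0, 5.0):
    print(f"offer {offer:.1f} -> proposer, responder get {play(offer)}")
# A purely "rational" responder would accept even the 1.0 offer.
```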
Rational or not, a sense of fairness seems pervasive among highly socially cooperative animals. As Frans de Waal notes, even capuchin monkeys have a surprisingly strong sense of fairness. And he reasonably argues cooperation plays a large role in morality.
This concept is somewhat elastic. Children have an innate sense of fairness, but the interpretation grows more refined with maturity. For an infant, the universe is not fair if you don't have something, or if some calamity happens to you. With maturity, you sense unfairness more acutely should intent be detected.
One area where the concepts of fairness and game theory merge is fair division. An often-cited example is the cake-cutting problem, solved by Brams and Taylor in 1995. For two people, the answer is trivial - I cut, you choose. For more than two, the solution is more difficult, but starts with a player cutting the cake into pieces. It gets more challenging from there - as the number of pieces grows exponentially with n.
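For the two-person case the envy-free logic is short enough to sketch (a toy illustration of "I cut, you choose", with made-up valuation functions over a cake on [0, 1]):

```python
# Player 1 cuts at the point that splits the cake into halves she values equally;
# player 2 then takes whichever piece he values more. Neither envies the other.

def cut_point(value, steps=50):
    lo, hi = 0.0, 1.0
    for _ in range(steps):                  # bisect for value(0, x) == value(x, 1)
        mid = (lo + hi) / 2
        if value(0.0, mid) < value(mid, 1.0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

v1 = lambda a, b: b - a                # player 1 values the cake uniformly
v2 = lambda a, b: b * b - a * a        # player 2 prefers the right-hand end

x = cut_point(v1)
left, right = (0.0, x), (x, 1.0)
p2_piece = left if v2(*left) >= v2(*right) else right
p1_piece = right if p2_piece == left else left
print(f"cut at {x:.3f}; player 2 takes {p2_piece}, player 1 keeps {p1_piece}")
```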
There exists a striking number of ideas and terms associated with consciousness – each of which may be defined in multiple ways. These include panpsychism, attention, self, cognition, subjective experience, self-awareness, mind, sentience, perception, volition, prediction, agency, intent, …and on and on. Most would agree, however, that consciousness requires the cooperation of millions of cells and, perhaps, a great number of bodies.
Carolyn Jennings has a most intriguing take on attention in which she considers consciousness to be the interface between self and the world – and attention is required for meaning. It is attention which ranks interests and, in doing so, gives the self causal power. Check out the delightful interview.
Donald Hoffman has proposed a substantive and profound extension to evolutionary theory – the brain and sensory organs evolved to meet the same criterion as the rest of the body – to maximize the survivability of the organism, not to relay the actual nature of reality. Seems obvious, but like many such things, it took a Herculean effort to get there. He believes evolution by natural selection entails a counterintuitive theorem: the probability is zero that we see reality as it is. He has since pursued this line of thought to the point where I find it difficult to accept – that consciousness is a fundamental property of the universe (panpsychism).
Giulio Tononi’s Phi introduces the idea that consciousness is integrated information but I cannot fathom how it will ever be successfully measured.
Anil Seth points to the strangeness of consciousness through the effects of anesthesia. While under anesthesia, we lose any sense of time passing, self, memory, exterior or interior events, … we lose consciousness (and perhaps the unconscious as well). He has proposed the beast machine theory. On the subject of self, Seth thinks the self is an illusion, a bundle of several selves (including emotional, body ownership, first person perspective, identity, agency, …) that resists decomposition. But illusion or not, the self is a part of consciousness. I find Seth’s approach most satisfying.
Michael Graziano suggests consciousness arose from a simple merger of at least two characteristics known to be present in the brain: a somewhat nebulous sense of self, and; a theory-of-mind. His Attention Schema Theory (AST) is the simplest (yet elegant) view I have found …and quite similar to Seth's view. The hypothesis is that consciousness emerges from the efforts to process an ever-increasing flow of information. There are doubts over the explanatory power of AST. Notably, AST attempts to align the evolution of brain structure with increasing cognitive power (quite plausible) but becomes vague about how this leads to consciousness.
Joscha Bach talks of agency being a controller of future states – unlike a thermostat which doesn’t have a goal but a target value. He suggests only living things can be true agents.
In a way somewhat similar to that of Carolyn Jennings, Daniel Dennett describes consciousness as an illusion – a sort of user interface of the mind – in that a good interface does not indicate the complexity of what lies beneath.
Nick Chater proposes an equally provocative theory: that we are much simpler creatures than we imagine. He describes the mind as being created moment by moment, an improvisation of recalled analogous precedents and sensory inputs with no real depth, no permanent self, no beliefs… . He writes: “The idea that we are, at bottom, story-spinning improvisers, interpreting and reinterpreting the world in the moment, is immediately appealing.” I find this a profoundly insightful, and convincing, hypothesis.
Our cognitive limitations may have constrained our options for cooperation, but at the same time, our cognitive abilities have given us some choice in the degree of cooperation we exhibit.
Sebastian Watzl argues that consciousness is something we do: focus our attention. Watzl's idea of consciousness is also appealing in its apparent simplicity. Consciousness is the stance we take on the world by focusing our attention on some things rather than on others.
Gregg Henriques divides the mind (consciousness) into three levels. The first is an insect level complex sensory motor looping system, and the ability to know self from other. The second (mammal, corvids and mollusks?) level has the mental experience of being, and a theory of mind. The third (human) level adds the ability to use symbols and language.
It is difficult to accept such ruthlessly dry descriptions of the brain/mind when consciousness adds so much richness and meaning to our existence. Our delight in seeing an old friend, or unexpectedly coming upon a beautiful flower during a morning hike are representative of the fruits borne of consciousness.
In Stella Maris, the author Cormac McCarthy imagines a provocative conversation about language and the brain, leading to speculation on why our brain is split into the conscious and the unconscious. The brain had evolved along with the body over millennia, but the ability to employ language has evolved relatively recently (likely simply displacing other talents). Suddenly, language became the communication technology of choice for conscious thought, leaving the rest of the brain without the means to communicate with the conscious portion of the brain. Is that remainder the subconscious? And are dreaming, vague feelings and mysterious bursts of insight the only remaining communication channels available?
In Nautilus Magazine, McCarthy expands upon this idea.
Language greatly increases the bandwidth for communication with others, which, in turn, opens the door for collective intelligence (and further cooperation).
In The Language Game, Morten Christiansen and Nick Chater argue the most appropriate metaphor for language is the game of charades. We simply try to convey meaning to someone else as rapidly as possible within the rules, so innovation and a history of previous interaction are important.
Trade likely began with simple barter. But, as Adam Smith noted, it proved far too inefficient. Imagine the odds of encountering someone who needs something you have an excess of while simultaneously being in the exact opposite situation yourself - unlikely. This quickly led to the invention of a medium of exchange.
Nowadays, this is often government-issued certificates (also known as money), but Smith argued that a medium of exchange existed well before states. The important thing is all parties must trust whatever medium is chosen.
David Graeber convincingly argues the chronological order of economic invention started with credit systems …basically IOUs, then money, and finally barter (the exact opposite of the way it's generally taught). In essence, the basis of an economy is debt. As Richard Dawkins put it, "Money is a formal token of delayed reciprocal altruism" or more simply, a promise.
Money serves three primary functions: the familiar use as a medium of exchange, a store of value (savings), and an accounting unit. On the latter, Mitchell-Innes' credit theory of money asserts it is a form of accounting, not a commodity. It measures debt.
One of the most important concepts in economics is discounting, or the relationship of value to time. Animals (including us) value items in the present far more than those in the future.
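The standard formulation is exponential discounting: a reward of amount A received t years from now is worth A / (1 + r)^t today. A quick illustration (the 5% rate is arbitrary):

```python
def present_value(amount, rate, years):
    # exponential discounting: value today of a reward received later
    return amount / (1 + rate) ** years

for years in (0, 1, 5, 20):
    print(years, round(present_value(100.0, 0.05, years), 2))
# 100.0, 95.24, 78.35, 37.69 - the same reward is worth far less the longer the wait
```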
Cash has taken on various forms, from beads to minted coins and paper, to lines of credit. Most of these, directly or indirectly, are now controlled by the state through a central bank. That may be changing. Welcome to the Internet, mobile devices, and the blockchain.
Mobile applications enable the creation of new forms of cash. Some leak private information more than others, but some are more secure than giving a card to a restaurant server who promptly disappears for five minutes. Many of the new digital currency applications (digital wallets) will exist on smartphones, with the additional security implemented in hardware, such as biometrics. Already, there are near field communication systems to pay at checkout and apps written by financial institutions. With everyone scrambling to join the new market, there is likely to be a severe winnowing. It should be fascinating to watch.
But the invention with the greatest potential may be the blockchain. It has at least two interesting characteristics - it can serve as the basis for a form of digital currency independent of governmental or other authority (decentralized), and the promise of security through transparency (open source). Blockchains are a type of public ledger based on peer-to-peer computer networks. The ledgers are stored as encrypted files where anyone can tell if there has been a change made. They are the underpinnings of bitcoin and several other currencies under development.
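The tamper-evidence idea at the core of a blockchain fits in a few lines: each record carries the hash of the previous one, so altering any earlier entry breaks the chain. This is only a toy sketch - real blockchains add proof-of-work (or another consensus rule), digital signatures, and a peer-to-peer network.

```python
import hashlib, json

def make_block(data, prev_hash):
    block = {"data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev": block["prev"]}
        if block["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False                       # the block's own hash no longer matches
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                       # the link to the previous block is broken
    return True

chain = [make_block("genesis", "0")]
for payment in ("Alice pays Bob 5", "Bob pays Carol 2"):
    chain.append(make_block(payment, chain[-1]["hash"]))

print(chain_is_valid(chain))                   # True
chain[1]["data"] = "Alice pays Bob 500"        # try to rewrite history...
print(chain_is_valid(chain))                   # False
```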
According to David Brin, Smith believed competition to be the greatest creative force in the universe. But Brin notes competition only works well when rules, referees, regulation, negotiation are imposed. Thus competition becomes ritualized - and far less destructive. He cites several examples of ritualized combat, including science, sports, markets, courts and democracy. So while fair markets might be very competitive, the competition is constrained by social forces.
So while markets are competitive, and competition is constrained by social forces such as strong antitrust regulation, those markets will tend to remain fair. So why do markets drift toward unfairness (and oligarchy)? It simply is easier to profitably manage a monopoly. As David Sloan Wilson rightly points out, regulation is required or economic cooperation will disappear.
Restoring balance to capitalism requires addressing these factors. One of the most important controls that society possesses is anti-trust regulation. But our regulating agencies have become captured by the very corporations they're supposed to regulate, or neutered by politicians. For example, many large companies simply purchase smaller start-ups, saving them the expense of innovation while simultaneously removing the competition (like Meta's acquisition of Instagram). Additionally, politicians have absurdly lengthened patent expiration dates, keeping products expensive.
Another is more complex and subtle. A phenomenon called the network effect leads to a natural convergence to a single “winner.” For example, you may want to join a social network that has members you know or who work in a similar area, but the network may invade your privacy or force you to watch advertising. But if a carefully written law were passed that forced social networks to allow free, anonymous access to their content (perhaps with a short time delay), the dilemma could be avoided.
Monopolistic practices have at least one critically important consequence: increasing inequality.
Robert Reich does such a superb job of explaining inequality that I can only offer two minor pieces of supporting evidence from simulation modeling.
The simulation models:
Some observations about inequality:
Meritocracy appears to be an illusion. How might we start to repair this? Here are a few suggestions.
Consider a lesson on cooperation from economics using David Ricardo's classic 1817 example of comparative advantage. Portugal and Britain both produce wine and cloth. Portugal takes 170 hours to produce one unit of wine (80 hours) and one unit of cloth (90 hours). Britain requires 220 hours to make the same amounts (120 for wine and 100 for cloth). Note Britain is less efficient at producing both products.
Now introduce trade (cooperation). If Portugal concentrates all 170 available hours of labor on just making wine, it will produce 2.125 units. Likewise, Britain can make 2.2 units of cloth by forgoing wine production. Should Britain offer Portugal 1.1 units of cloth in exchange for 1 unit of wine, then Portugal would have 1.125 units of wine and 1.1 units of cloth, and Britain 1 unit of wine and 1.1 units of cloth.
Both countries gain with no increase in labor costs. Trade (and life) is more than a zero-sum game.
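A few lines of Python make it easy to check the arithmetic (the numbers are Ricardo's, the code is mine):

# Hours per unit of wine and cloth in each country, full specialization,
# then a trade of 1 unit of wine for 1.1 units of cloth.
hours = {"Portugal": {"wine": 80, "cloth": 90, "total": 170},
         "Britain":  {"wine": 120, "cloth": 100, "total": 220}}

portugal_wine = hours["Portugal"]["total"] / hours["Portugal"]["wine"]   # 2.125 units
britain_cloth = hours["Britain"]["total"] / hours["Britain"]["cloth"]    # 2.2 units

portugal = {"wine": portugal_wine - 1, "cloth": 1.1}     # 1.125 wine, 1.1 cloth
britain  = {"wine": 1, "cloth": britain_cloth - 1.1}     # 1 wine, 1.1 cloth
print(portugal, britain)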
There are some shortcomings with this (economically optimal) solution: transportation costs aren't included in the formulation, consumer choice is severely reduced, and it requires the ability to make significant changes in production.
However, Ricardo's comparative advantage could also be interpreted as a way to subjugate the poor and perpetuate economic inequality.
David Brin suggests evidence of the world improving may be found in whether or not the world is dominated by zero-sum or positive-sum games.
To fully understand fairness, one must consider the intent of the agents involved in a transaction.
Nearly three centuries ago, a confluence of events in England occurred that would radically change the world. Automated production methods, improved foundry processes, the invention of steam power, and other technologies led to the onset of the industrial revolution. As a result, productivity (and wealth), nearly flat until then, began improving exponentially.
Thus the evolution of one complex system within another had begun. The economic system had discovered a means of meeting human needs such as social status, food, health, sexual attractiveness, stability and security.
It's not without problems - inequality, pollution, and the unsustainable drain on resources come to mind. But the human race is generally better off – for the present. An economy based upon acquisition must eventually crash - simply because resources are finite.
Stuart Kauffman also offers a model (the adjacent possible) in which sudden and explosive growth occurs. At a point in time there exist n entities from which to construct objects. The possible number of objects is the sum of the objects made from one entity, plus the number made from two entities, and so on up to the number made from all n entities. So as time progresses, the total number of objects grows slowly, then suddenly - exponentially.
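If "objects made from k entities" is read as combinations of k distinct entities (my reading of the model, not Kauffman's own formulation), the total is C(n,1) + C(n,2) + ... + C(n,n) = 2^n - 1, which is easy to confirm in a few lines of Python:

from math import comb

# Sum the number of ways to combine 1, 2, ..., n of the available entities,
# and compare it with the closed form 2**n - 1.
def possible_objects(n):
    return sum(comb(n, k) for k in range(1, n + 1))

for n in (3, 10, 20, 50):
    print(n, possible_objects(n), 2**n - 1)   # the two columns agree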
Cities offer a special case of sudden growth – especially in effects upon their inhabitants. It has often been observed that cities are wealth (and inequality) generators but Geoffrey West notes some additional characteristics. City dwellers physically walk, talk and innovate (a cooperative process) at a faster pace. Tyson Yunkaporta finds cities unstable, temporary constructs.
As the tools of genetic engineering rapidly decline in cost while growing exponentially in capability, the roles we (and every other living thing) play will change – perhaps radically.
We have been able to read DNA with advanced genome sequencing for some time, but it has been slow and expensive, although improving on both counts recently. Editing DNA with any precision has been nearly impossible.
CRISPR renders editing faster, easier, cheaper and more precise than anything before. It has three parts: a protein that seeks out a segment of DNA, a molecular scissors to cut the genome at precisely that spot, and a bit of donor DNA the cell will stitch in when repairing the break. As I'm writing this, replacing the DNA in a multicellular organism is very difficult, but that may soon change.
But there's more. Gene drives allow alleles (bits of DNA) to increase their prevalence in a population beyond what ordinary inheritance would predict. Such spread generally happens via natural selection, but here it becomes a means of artificial selection.
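A toy deterministic calculation (mine, with made-up conversion rates) shows why the spread is so fast: if heterozygotes are converted into drive homozygotes with some probability, the allele's frequency climbs toward fixation in a handful of generations.

# Track the drive-allele frequency under random mating, where a fraction of
# heterozygotes are converted into drive homozygotes each generation.
def drive_frequency(start=0.01, conversion=0.9, generations=20):
    p = start
    history = [round(p, 3)]
    for _ in range(generations):
        hom_drive = p * p + 2 * p * (1 - p) * conversion   # converted heterozygotes
        het = 2 * p * (1 - p) * (1 - conversion)
        p = hom_drive + het / 2                             # new drive-allele frequency
        history.append(round(p, 3))
    return history

print(drive_frequency())                 # rapid sweep toward fixation
print(drive_frequency(conversion=0.0))   # with no drive, the frequency stays put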
And when we learn how to change gametes, the keys to the kingdom of genetic evolution belong to us.
Genetic engineering (CRISPR, etc.) may allow us to extend somatic lifetimes indefinitely, forgoing reproduction and leading to the end of genetic evolution (with "intelligent design" replacing mutation in the evolutionary algorithms). But the overwhelmingly likely use of these tools is somatic preservation and a resulting slowing of evolution.
We can accept the proposition that we exist solely as a tool for genes to propagate, or rally to try to assert control of genetics (worryingly uninformed about potential consequences). Or we can merely accept (or ignore) our fate with the pursuit of natural philosophy, simple hedonism (as espoused by Socrates) or, lacking sufficient intellectual curiosity, religion.
A likely goal is the attempted extension of somatic lifetimes. However, this might be more difficult than it appears. Recent research by Bar-Yam and others seems to indicate that genes coding for finite somatic lifetimes may be naturally selected. Before this, biologists had assumed death was due either to factors in the environment or to the body simply wearing out.
The brain and nervous system most likely evolved in animals to control movement, providing a coordinated stimulus-response mechanism, and eventually adding memory and learning capability — not to accurately perceive reality in all its rich detail.
For example, the Hick–Hyman law states that the time it takes a person to make a decision increases logarithmically with the number of choices. But coping with actual reality might require dealing with a much larger set of contingencies, inflating the time required beyond what a predator's pounce allows.
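In code, the relationship is a one-liner (the coefficients below are purely illustrative, not measured values):

from math import log2

# Hick–Hyman: decision time grows only logarithmically with the number of
# equally likely choices. a and b are example coefficients, in seconds.
def decision_time(n_choices, a=0.2, b=0.15):
    return a + b * log2(n_choices + 1)

for n in (1, 2, 4, 8, 16, 256):
    print(n, round(decision_time(n), 3))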
Kurt Andersen has authored a sobering text. He asserts that humans, and especially Americans, are prone to fantasy, and he makes a nearly irrefutable case. He begins with the irrationality of our seemingly infinite variety of religion, the widespread appeal of fantasy, and the invasion of religion into politics. The increasing cultural divide between the rational and the irrational camps is revealed in elaborate, well-researched detail. Economics, politics, education and everything else now become religion.
In 1999, Dunning and Kruger offered the first scientific peek at this phenomenon. They found humans have a cognitive bias in which people of limited skill assess their abilities as much higher than they actually are. The findings have been repeatedly confirmed and expanded since.
Anil Seth proposes subjective feelings (emotions) are the result of the brain's inferences about the causes of signals from inside the body (interoceptive) in the same way that predictions are a response to signals coming from the outside world. So mechanisms which evolved to regulate and control the body are now being used to perceive and interact with the outside world. Which may offer a partial explanation for our often delusional behavior - we use the same apparatus to analyze our external environment as we evolved to control our internal systems.
And then there are human behaviors that are destructive to cooperation. Chief among these is René Girard's mimetic desire, which proposes we all want what others want — our desire is provoked by the desire of another. Mimetic desire is a form of imitation — very useful for learning in children, especially before language emerges — but unhealthy when carried forward to adulthood. Girard's theory replaces Maslow's hierarchy of needs with just a couple of basic biological needs, the remainder being a hodgepodge of imitative desires. Burgis explains that imitation can lead to cycles of frustration — and eventual conflict, disappointment and despair.
The competitive exclusion principle states that species competing for a limited resource cannot coexist at constant population values. There can be only one winner. The process of species adaptation can occur in many ways. An organism may move to an adjacent resource to avoid direct competition, or find a way to cheat, or migrate, or stumble upon a new resource, or enter into a symbiotic relationship with another.
The interplay between competition and cooperation is a wonderfully rich and convoluted system. Cooperative groups arise only to fall into ruin. Competitions secure resources for a competitor, only to see the losers adapt to a slightly different resource or environmental niche. Ephemeral cycles within cycles - the process never-ending as long as the second law survives.
Capitalists have forever attempted to gain monopolistic advantage or regulatory capture. Significantly, Peter Turchin has claimed unregulated capitalism (unfair competition) may destroy cooperation. Conversely, suffocating, naïve regulation (an unfortunate form of cooperation) may render capitalism ineffective at generating wealth. Or, put another way, competition between groups (up to whole societies) fosters within-group cooperation, while competition within groups (between their members) destroys it.
The randomness (unpredictability) of evolutionary processes (mutation, emergence) is critical here. It ensures there will always be something novel to contend with - to compete against - to cooperate with.
Try an experiment. Randomly partition humans into groups using some arbitrary contrivance (such as differently colored t-shirts). Then ask the groups to compete in a game. Within a remarkably short time you will find the members beginning to identify with their group - and thus increasing their tendency toward cooperative behavior.
There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge’. - Isaac Asimov
While this in-group bias (also known as xenophobia, tribalism or parochialism) may have evolved to encourage cooperation, it has a menacingly negative side - forming an in-group seems to generate animus toward the out-group. This may provide an even stronger incentive (fear of the “other”) for in-group cooperation, but it brings with it the danger of conflict and violence. And, combined with increasingly powerful and cheaper technology, it raises the possibility of the destruction of civilization.
Nicholas Christakis notes that mathematical analyses of models of human evolution suggest that, in the past, conditions were ripe for the emergence of both altruism and ethnocentrism, but — and here is the catch — only when both were present.
Sapolsky concurs this us/other divide is frightfully powerful. And humans form them effortlessly at an emotional level. He has even proposed suggestions for mitigating some of the animosity, but these require knowledge and rational thought about our human condition …characteristics not found in great abundance. Further thoughts are offered by Mark Moffett.
Tim Harford offers a bit of optimism to this conundrum – curiosity. It seems a curious person may be less prone to tribalism. Based on the work of Dan Kahan, curious personalities may be more open-minded toward an evidence-based argument and less influenced by their current beliefs. It remains to be seen whether we can encourage more people to become curious.
Iain McGilchrist investigates left/right brain phenomena. He believes the normal balance between the brain hemispheres is becoming undone.
Animals must pay attention to two things to survive. The first is access to food, requiring rapid identification and acquisition. The second is a constant awareness of possible predators. The solution nature seems to have arrived upon is to devote separate parts of the brain to each task, operating in parallel. In the case of mammals, especially humans, these are the brain's hemispheres. (Note the parallel between left/right hemispherical thinking and Nobel prize-winning economist Daniel Kahneman's systems 1 and 2.)
There are striking differences in the way the hemispheres view the world, albeit through the same array of sensory inputs. Here's but a sampling: the left brain contains the keys to language; the left tries to manipulate objects while the right tries to understand them and relate to them as a whole; the left's way of thinking is reductionist and mechanistic while the right's is holistic and tolerant; the left breaks the world into unrelated parts to categorize them - it cannot understand humor, relationships, movement, metaphor, insight, irony or the whole; and the left hemisphere retains anger and is subject to identity and trigger words.
“I sometimes think of the right hemisphere as what enables Schrödinger's cat to remain on reprieve, and the left hemisphere as what makes it either alive or dead when you open the box. It collapses the infinite web of interconnected possibilities into a point-like certainty for the purposes of our interaction with the world.” - Iain McGilchrist
Left hemisphere thinking is taking control of the world. The social ramifications are enormous - making us act as if we all had right hemisphere damage.
Climate change, population growth, increasingly inadequate resources, inequality and tribalism will only accelerate. It appears we humans will soon arrive at a crossroads. Will we continue on a socially cooperative path or erect yet more destructive us/them barriers?
When describing the mechanisms of life, most group the informational sequence and the interpretation functions together. This makes sense because both must be present and functional to work. In practice, sequences and their interpreters must have co-evolved - a sequence is just random noise without the ability to recognize and interpret it meaningfully.
In Behavior and Culture in One Dimension: Sequences, Affordances, and the Evolution of Complexity, Dennis P. Waters ignores this convention to focus on sequence only. The results prove interesting. Among other things, he notes that sequences are one-dimensional entities that organize the three-dimensional world and they are used extensively in biological systems.
Recommended.
Additional sources: Dennis Waters on Behavior & Culture in One Dimension, The Jim Rutt Show.
The digitization of the world's exponentially increasing written library has allowed access to vastly more information than ever before. Our accumulated information far exceeds our feeble capability to comprehend it all. How can one possibly organize it, or search it for facts or patterns? That is the hope of knowledge management methods.
I have encountered a great many ideas over many decades, most of which are long forgotten — some probably for the best. I would like to gather some of those ideas and forge them into coherent groups to write about.
Finding the tools to accomplish this is expensive, time consuming …and maddening. My desire is for a small, Apple-centric collection of apps that transparently fill my needs. A place to start is to consider what I'm using now.
Pages works well for word processing duties, and I managed to publish a pair of short books with it. But I plan never to do that again. Ulysses is my writing app of choice and I use it heavily. For notes I use Apple Notes and Ulysses, but I plan on continuing to explore offerings like Craft, Bear and Drafts.
Read-it-later apps like Pocket allow one to store web pages for later examination, with text highlighting and tags. Unfortunately, Pocket seems to be lock-in software, as exported links don't include highlights or notes. Instapaper and GoodLinks are two other, perhaps less polished, candidates. A possible replacement is DEVONthink, a versatile database capable of serving as a read-it-later app. It can import nearly any type of file. It is Apple's Finder on steroids.
Digital outliners such as OmniOutliner have become very powerful tools, allowing violations of strict hierarchies with links that make connections across boundaries and even completely outside the outline. No longer confined to compact phrases, nodes can encompass paragraphs or even pages. Still, outlines strike me as more of a writing tool than a knowledge management tool. Mind maps (like MindNode) are often just graphical representations of outlines, although additional links may be added, allowing the arrangement of ideas to become as complicated as necessary.
Flashcard apps are designed to help one learn or memorize material. Readwise is an example. It collects highlights and notes from a variety of sources such as books, articles and read-it-later apps. Though interesting and occasionally useful, it falls short of my needs. I am undecided about retaining it.
Suppose, deep within a file system, there lies a heavily annotated key reference. Furthermore, this reference may prove useful to other projects as well. So you copy it to another project folder. In addition to the positive effect of locating information where it may be needed, the action has two serious negative effects. The first is that you have just doubled the storage required (both primary and backup). The second is that, as the file is further modified, the versions will diverge — and their usefulness decline. Further, The Verge has reported that the file/folder system is poorly understood by a growing number of users.
Hence the invention of tags (sometimes referred to as keywords). These allow notes with similar ideas or themes to be linked together (as if they were in several folders at once). It seems an ideal solution, eliminating both negative effects. But tagging suffers from its own problems. The first is that apps often use their own tagging systems, even though the device's operating system provides one available to every app. The second is a strong tendency for the number of tags to grow with time, rendering them less useful. So, tags have a couple of issues. Nonetheless, I find them quite useful.
In 1963, Ted Nelson coined the term hypertext: a word or words, generally highlighted in some way, that points to (or links with) another (hopefully related) document. Hypertext links came to be a critical feature of the internet. These are generally one-way links, meaning if you wish to return to the originating page, you must use the back function in your browser. But the concept of linking can be expanded to include bi-directional links, like those found in Obsidian or Roam, which can in turn be used to tie notes to other notes and construct knowledge graphs.
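A minimal sketch of the bidirectional idea (my own toy, not how Obsidian or Roam actually store things): every forward link automatically creates a backlink, so the graph can be walked in either direction. The note names are invented.

from collections import defaultdict

links = defaultdict(set)      # note -> notes it links to
backlinks = defaultdict(set)  # note -> notes that link to it

def link(src, dst):
    links[src].add(dst)
    backlinks[dst].add(src)

link("Parkinson's Law", "Committees")
link("Inequality", "Committees")
link("Committees", "Consensus")

print(links["Committees"])      # forward links from "Committees"
print(backlinks["Committees"])  # notes that link to "Committees"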
Do tags or links result in greater insight? For me, for now, I'm sticking with tagging while remaining very intrigued by linking.
And finally, the interface matters greatly (at least to me) — both functionally and aesthetically. Functionally, because I need to know how to accomplish something transparently and efficiently. Aesthetically, because I prefer to work in an uncluttered and pleasing environment.
Sources:
Recall that as the size of an organism increases, the ratio of surface area to volume decreases, making it difficult for a large single-celled organism to absorb sufficient nutrients and distribute them throughout the cell.
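For a roughly spherical cell the arithmetic is simple (my illustration): surface area grows as the square of the radius while volume grows as its cube, so the ratio falls off as 3/r.

from math import pi

# Surface-to-volume ratio of a sphere of radius r, which works out to 3/r.
def surface_to_volume(r):
    surface = 4 * pi * r**2
    volume = (4 / 3) * pi * r**3
    return surface / volume

for radius in (1, 2, 5, 10, 50):
    print(radius, round(surface_to_volume(radius), 3))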
Multicellular organisms have less of a problem here. Ratcliff and Travisano found that simply selecting the fastest-clumping yeast cells artificially evolves cells that clump very rapidly. Matt Herron has since found this selection can also be driven by predation. Groups of cells may simply be too large to eat.
If you are prey, increased size reduces the number of potential predators. If you are a predator, size or cooperation (like pack hunting) increases available prey.
D’Arcy Thompson theorized that life's various forms are predominantly determined by the physical forces acting upon them. Although many biologists do not consider this nearly as important as selection, I believe it actually enriches that occasionally nebulous term.
Alan Turing's self-organizing patterns can be mesmerizing. He proposed that patterns reminiscent of those found in nature (as in animal coats) are produced by the differing diffusion rates of two chemicals, one encouraging growth, the other discouraging it.
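Here is a bare-bones one-dimensional reaction-diffusion sketch (a Gray-Scott variant with common demo parameters; my stand-in for Turing's idea, not code from any of his papers): two chemicals diffuse at different rates and react, and a small disturbance grows into a stable pattern.

import numpy as np

n, steps = 200, 10000
Du, Dv, F, k = 0.16, 0.08, 0.04, 0.06      # typical demo values
u, v = np.ones(n), np.zeros(n)
u[95:105], v[95:105] = 0.5, 0.25           # small initial disturbance

def laplacian(a):
    # discrete diffusion with periodic boundaries
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

print(np.round(v, 2))   # nonzero bands mark where the pattern has formed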
This module will generate 10 random deviates from the uniform, normal, or exponential distributions. Select the distribution, enter the parameters (if different from the default), and press go. If more than 10 are required, select and copy the current values to a text editor and repeat.
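The same thing can be done in a few lines of plain Python (my sketch; the parameter names are illustrative, not the module's):

import random

# Ten deviates from the chosen distribution, with simple default parameters.
def deviates(distribution="uniform", n=10, **params):
    if distribution == "uniform":
        return [random.uniform(params.get("low", 0), params.get("high", 1)) for _ in range(n)]
    if distribution == "normal":
        return [random.gauss(params.get("mean", 0), params.get("sd", 1)) for _ in range(n)]
    if distribution == "exponential":
        return [random.expovariate(1 / params.get("mean", 1)) for _ in range(n)]
    raise ValueError("unknown distribution")

print(deviates("normal", mean=100, sd=15))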
In 2009, Melanie Mitchell published Complexity: A Guided Tour in which she notes Seth Lloyd had compiled a list of 40 measures of complexity. I’m confident the list has only grown since. Yet these measures are not really predictive of properties we want to know about, such as surprise or emergence. They are not theories.
Recently Sabine Hossenfelder presented a YouTube video on complexity. She spends much of the time decrying the lack of a theoretical basis for complex systems, but concluded by mentioning assembly theory as a viable candidate.
Notable take-aways:
References
Complexity must be strikingly easy to define, as everyone seems to have their own definition. Unfortunately, none of them precisely agree. Many consist of an enumeration of one or more characteristics. To follow in that tradition:
References:
The mess started innocently enough, by routinely selecting a podcast to accompany me on a walk — Paul Middlebrooks’ podcast interview with Johannes Jäger to be precise. Within minutes I found myself leaning against a tree furiously taking notes and feeling vaguely upset about the presentation. I’ve been struggling with it since.
The subsequent investigation seems in retrospect like an exercise in reading Alice in Wonderland while under the influence of hallucinogens — covering causality, reductionism, emergence, agency, teleology, complexity, panpsychism, homeostasis …even the nature of reality. My conclusion is both limited and straightforward. And probably wrong — or at least incomplete.
It seems absurdly reductionist to consider genes separately from their organisms.
Why? Might I suggest two reasons. The first is that it reduces the number of behaviors the organism can perform — its agency. The second is that it diminishes the opportunity for analysis of emergent behaviors.
Antonio Damasio thinks the primary goal of an organism is simply to keep living, or homeostasis. And that is robust enough to serve as agency in a complex network. The homeostasis argument seems vaguely circular — is it truly causal? Is it rich enough to constitute agency?
A rock rolls down a hill because it wants to get to the bottom — or because the Gods willed it so, or the Law of Gravity insists, or someone/thing pushed it, or rain loosened the soil, or a mysterious random event occurred, or …
Many of us prefer to believe there must be a cause for everything. Or, an ultimate cause in the case of multiple causes.
Since the evolutionary synthesis of a century ago, the organism has been viewed by many as merely the interface between the environment (selection) and the gene (reproduction with variation), providing simple chemical or mechanical transformations. But a few others think the organism can provide more, and that this can extend to agency (the organism possessing its own objectives or goals).
Think of a bacterium swimming toward a food source.
The problem with allowing agency is that it can open corridors to teleological or panpsychic explanation, which in turn can undermine causal and mechanistic explanations.
The controversy disappears if, instead of considering evolution a three-part system of gene and environment with the organism relegated to a mere interface role, we combine the gene and organism into a single role interacting with the environment. This also makes the system resemble the one we find in nature.
Some of us frequently assign teleological motives to events or objects because it seems amusing to view the world anthropomorphically. Most do so because they believe in a world of magic or spirits or religion.
I’ve become persuaded that granting agency to organisms makes sense.
Organisms are not simply reactors to inputs from the environment or their own genetic instructions but are clearly endowed with a goal or goals, however simple these may prove to be.
This view is more akin to that held by Darwin and well prior to the gene centric modern synthesis of a century ago.
Antonio Damasio thinks the primary goal of an organism is simply to keep living, or homeostasis. The homeostasis argument seems vaguely circular — is it truly causal? Is it enough to drive agency?
It seems absurdly reductionist to wish to consider genes as separate from their organisms. And just how far may one take reductionism? Until emergence disappears?
Biological agency is the capacity of living systems to participate in their own making and function. Agency perspectives may help us better understand why and how living systems work, persist, and innovate the way they do.
About a decade or two ago, “distraction-free” writing environments became a thing. And they help, especially when combined with other measures, such as shutting off the phone and blocking notifications, locking your family out of the house, or moving to a remote island — living off roots and berries — occasionally silently nodding to the monk on the neighboring mountain.
These won’t ever fully work because a primary cause of distraction is always with you — your brain. Ideas arise from our subconscious mind at an astonishing rate. Most seem absurd or goofy or completely unrelated to what we are trying to focus on. But they are also the source of our creativity. It takes considerable practice and willpower to refine the flow long enough to complete a coherent piece of writing.
Also, you might consider pausing that interminable search for the perfect app before putting word to paper or screen. It’s yet another form of procrastination.
René Girard’s mimetic desire is a theory of human behavior. Girard thought humans look to others and imitate their desires. It may indicate one's wish to belong. In this way, they are relieved of the burden of constructing an identity themselves. It is a powerful idea, and at the very core of consumerism.
So one meme after another, one builds an identity, like a brick wall. An identity is only useful to highly social animals. People build their identities as an image with which they face society.
In The Strange & Curious Tale of the Last True Hermit, Michael Finkel reported a thoughtful and profound insight offered by the protagonist:
“Solitude did increase my perception. But here’s the tricky thing — when I applied my increased perception to myself, I lost my identity. With no audience, no one to perform for, I was just there. There was no need to define myself; I became irrelevant. The moon was the minute hand, the seasons the hour hand. I didn’t even have a name. I never felt lonely. To put it romantically: I was completely free.” — Chris Knight
Without an identity, the remaining self is diminished, or, reversing perspective, identity may be considered an embellishment to self that occurs in complex societies.
As Thomas Wolfe tells it in You Can’t Go Home Again, a writer completes a book about his thinly-disguised home town, only to return there and find anger & resentment over his allegedly unfair depiction.
But there is another, far deeper, rationale. Your hometown, like all living things, has changed, physically and socially, as have you. It only lives now in your very faulty memory - only ever has - not past or current reality.
"You can't go home again because home has ceased to exist except in the mothballs of memory." - John Steinbeck
References:
In A hidden law of nature, Robert Hazen proposes a new law of nature describing an increase in order or information - a law based upon evolution as selection for function. In his example, he considers how atoms may come together to form a mineral. Of the vast number of possibilities, only a few will withstand the trial of survival over time.
In Groundbreaking chemist defines all of life in 2 words, Lee Cronin summarizes assembly theory as determining the minimum number of operations required to build a given molecule. This value is called the assembly index. It may also be thought of as the minimum amount of information required to make the molecule.
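The idea is easy to play with using strings in place of molecules (my toy, and only practical for very short strings): count the fewest join operations needed to build the target, reusing any fragment already built.

# Brute-force search for the minimum number of joins, where the pool starts
# with the target's single characters and any built fragment can be reused.
def assembly_index(target):
    best = [len(target) - 1]           # worst case: add one character at a time

    def search(pool, ops):
        if ops >= best[0]:
            return
        if target in pool:
            best[0] = ops
            return
        for a in pool:
            for b in pool:
                new = a + b
                if new not in pool and new in target:
                    search(pool | {new}, ops + 1)

    search(frozenset(target), 0)       # single characters are free
    return best[0]

print(assembly_index("BANANA"))   # 4: NA, NANA, BA, then BANANA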
Assembly theory is clearly the more developed, but both theories address the same question: how do we get such wonderful complexity from random events? Both arguments rely heavily on evolution - not only Darwinian (biological) evolution, but evolution more broadly. And it is the nature of evolutionary processes to manifest complexity.
Cronin notes that randomly built assemblies that persevere will still succumb to randomness (entropy) eventually unless another element is added - memory. This is enough to push progress toward ever more complexity.
It is difficult to not find these similar ideas exciting.
A diary is a form of journaling in which a person you no longer know has written about their personal feelings in a past that no longer exists. The point completely escapes me.
A journal of say, medical events might be of some use to those treating you in the near future. I admit indulging in this exceedingly boring and vaguely vain enterprise occasionally.
Scribbling a note about some new scientific advance or personal observation is often satisfying, but quickly becomes tedious when dealing with link rot or organizing an ever-growing pile. And there is always the fear of your efforts going for naught when the digital application simply disappears — second brains tend to have a short life span.
But often simply writing without a target form or audience does seem to soothe a restlessness you weren't aware you had.