Science is about mapmaking. It’s about taking a complicated world and reducing it to some sparse set of markings on a map that provides new guidance across an otherwise incomprehensible, and potentially hostile, landscape. (Location 188)
so on. Maps—and science—are often more about what we leave out than what we put in. (Location 192)
Divorcing a map from its purpose inevitably leads to frustration. Too little of the right kind of detail, or too much of the wrong kind, encumbers our ability to understand the world. (Location 198)
Reductionism fails because even if you know everything possible about the individual pieces that compose a system, you know very little about how those pieces interact with one another when they form the system as a whole. Detailed knowledge of a piece of glass does not help you see, and appreciate, the image that emerges from a stained-glass window. (Location 203)
It is a science where simple things produce complexity and complex things produce simplicity. (Location 211)
Reduction gives us little insight into construction. And it is in construction that complexity abounds. (Location 227)
Sometimes this complexity arises shaped by natural forces such as evolution, as in the consciousness that emerges from our brains. (Location 229)
Interacting systems develop feedback loops among the agents, and these loops drive the system’s behavior. Such feedback is moderated or exacerbated depending on the degree of heterogeneity among the agents. Interacting systems also tend to be inherently noisy, and such randomness can have surprising global consequences. (Location 251)
Core principles such as feedback, heterogeneity, noise, and networks can be used to understand new layers of complexity. (Location 254)
Yet other systems, such as the members of a social movement, self-organize into critical states that begin to exhibit a common characteristic behavior. (Location 258)
One critical aspect of interactions is feedback. Sometimes feedback stabilizes the system, as happens when we install a not-too-touchy thermostat to control the furnace. (Location 263)
Recognizing heterogeneity not only changes our predictions about how a system will behave but also alters our policy prescriptions. Homogeneous systems tend to undergo rapid changes and oscillations, while heterogeneous ones tend to react more slowly. Thus, your ability to start, or quash, a social movement is tied to the degree of heterogeneity among the people involved. Similarly, markets may require some heterogeneity among the traders to remain stable. (Location 287)
seek quality by removing all sources of randomness from any process. (Location 292)
Randomness is fundamental to Darwin’s theory of evolution, which relies on the notion that errors (variations) during reproduction will provide grist for the mill of selection and result in “endless forms most beautiful and most wonderful.” (Location 293)
On simple landscapes, even simple searches can find good outcomes. On rugged landscapes, such searches founder. (Location 297)
Just as evolution relies on variation to uncover most wonderful forms, introducing errors into a search can be a powerful strategy for discovery. (Location 304)
More generally, it may be the case that carefully controlled, centralized systems are more of a modern artifact, driven by reductionist thinking, than a universal norm. (Location 306)
Effective decentralized decision making may be one of the best new old ideas to emerge from complex systems. (Location 309)
Over the last few decades entire academic fields have been devoted to understanding how humans make decisions. (Location 310)
One particularly important decision for a colony—the difference between its perpetuation and demise—is finding a new location when the old one becomes too crowded. (Location 321)
This decentralized process allows the sites to be sorted out and suitably investigated, and ultimately it results in the swarm tending to choose the best site relatively quickly without any central direction. (Location 324)
It also shows how decentralized mechanisms can be used to solve hard problems. (Location 327)
Take some agents with simple behavior, connect them together in a particular way, and some global behavior will result. (Location 331)
Given this, knowing how patterns of interactions—that is, networks—influence behavior is fundamental to understanding complex systems. (Location 332)
If neighbors can connect to one another, they can influence one another. Thus, the networks that define neighborhoods drive system-wide behavior. This behavior is often surprising. For example, a well-mixed world where neighbors are tolerant of others easily segregates into neighborhoods of homogeneous types. (Location 336)
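To make that claim concrete, here is a minimal sketch of a Schelling-style segregation model in Python. The grid size, the 30 percent tolerance threshold, and the vacancy rate are illustrative assumptions of mine, not parameters from the book.

```python
import random

SIZE, TOLERANCE = 20, 0.3          # content if at least 30% of occupied neighbors match

def make_grid():
    cells = [1] * 180 + [2] * 180 + [0] * 40      # two types of residents plus vacant cells
    random.shuffle(cells)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(grid, r, c):
    me, same, other = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            n = grid[(r + dr) % SIZE][(c + dc) % SIZE]
            if n:
                same += n == me
                other += n != me
    return (same + other) > 0 and same / (same + other) < TOLERANCE

grid = make_grid()
vacant = [(i, j) for i in range(SIZE) for j in range(SIZE) if grid[i][j] == 0]
for _ in range(200_000):                          # unhappy residents move to a random vacancy
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if grid[r][c] and unhappy(grid, r, c):
        k = random.randrange(len(vacant))
        vr, vc = vacant[k]
        grid[vr][vc], grid[r][c] = grid[r][c], 0
        vacant[k] = (r, c)

for row in grid:
    print("".join(".XO"[v] for v in row))         # large single-type clusters typically emerge
```

Even though every agent tolerates being a local minority of up to 70 percent, repeatedly relocating the unhappy typically produces a strongly segregated grid.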
that abounds is the existence of scaling laws. (Location 339)
Over time, this interplay of stability and instability self-organizes the sand pile into a critical state. (Location 368)
Various social systems may evolve toward similar critical states. (Location 371)
rare occasions, lead to a massive readjustment. Civilizations may be governed by political systems that tend to push people toward critical states, where small events occasionally result in the collapse of an ancient civilization or, as we saw in the Arab Spring, modern governments. (Location 373)
We will conclude our exploration of complexity by following an arc that begins with our desire to understand atomic interactions at the start of the atomic and information ages and ends with a new fundamental theorem about complex adaptive systems. (Location 375)
At the heart of complex adaptive systems are agents searching for better outcomes. (Location 381)
Thus, agents in such systems are, unknowingly, performing a dance governed by a cosmic algorithm. (Location 383)
This new theorem implies that as agents adapt in these complex systems, their adaptations are governed by probabilities tied to their underlying fitness. (Location 384)
there is always a (lower) chance that they will find themselves in suboptimal circumstances. (Location 386)
as it suggests that while agents will often find the best outcomes, they will inevitably fail on occasion. (Location 387)
At the core of modern science is a belief in the power of reductionism: the idea that to understand the world we only need to understand its pieces. (Location 404)
Similarly, in social systems, if we can understand a neuron, we will understand the brain, and thus know individual decision making, which allows us to understand group decision making, which gives us deep knowledge of governments and firms, and ultimately a full understanding of economics, politics, and society at large. (Location 407)
The key insight from the “more is different” hypothesis is that reductionism does not imply constructionism. (Location 409)
The notion that local interactions can result in interesting global patterns has some important implications for evolution. (Location 466)
As you can see, it results in a more regular and symmetric pattern, which is probably bad if you want to camouflage yourself. (Location 481)
cellular automata are able to capture the essence of an important phenomenon in a very exact and sparse way. (Location 486)
how simple local rules can have complex global implications. (Location 487)
systems where complex local behavior results in a simple global outcome. (Location 488)
that under certain conditions one can find a set of prices under which economic agents—each out for her own gain—will want to buy or sell just enough of each commodity to equilibrate prices and maximize society’s gains from trade. (Location 493)
the so-called demand side of the market—gives us only part of the picture. (Location 502)
Knowing the shape of the supply and demand curves alone is like knowing where the high- and low-pressure areas are on a weather map—somewhat interesting, but useful only if you have some theory for what happens when the two fronts interact. (Location 531)
Of course, there is nothing inherent in our world that suggests systems equilibrate, but such an assumption does have a few advantages. (Location 533)
Indeed, one can show that trading patterns other than the one resulting from competitive equilibrium will only reduce the amount of total profit earned by all of the traders. (Location 556)
Can we build an alternative complex-systems theory of markets from the bottom up? That is, can we make some simple assumptions about trading and, from these, show how global patterns of trades and prices emerge? (Location 582)
When they meet, traders blurt out a random offer, with the only proviso being that if that offer is accepted, the trader will not lose money. (Location 585)
Thus, under the bazaar model—with the potential for easy wordplay fully acknowledged—there is roughly a one-third chance that we will end up with the same trades that we see under competitive equilibrium, (Location 603)
In both worlds, traders acting only in their own interests produce an outcome that was no part of anyone’s intention, namely, a set of prices and a pattern of trades that result in some aggregate profit across society. (Location 605)
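A rough sense of how such unintended aggregate profit arises is given by the sketch below: randomly paired traders blurt out a random price and trade only if neither side loses money. The valuations, costs, and price range are my own illustrative assumptions, so the numbers will not reproduce the book's one-third figure.

```python
import random

buyers  = [random.uniform(0, 100) for _ in range(50)]   # each buyer's willingness to pay
sellers = [random.uniform(0, 100) for _ in range(50)]   # each seller's cost
random.shuffle(buyers); random.shuffle(sellers)

surplus = 0.0
for value, cost in zip(buyers, sellers):    # one random meeting per buyer-seller pair
    price = random.uniform(0, 100)          # the blurted-out offer
    if cost <= price <= value:              # accepted only if neither side loses money
        surplus += value - cost             # profit created by this trade

# Benchmark: maximum total profit, matching high-value buyers with low-cost sellers.
ideal = sum(v - c for v, c in zip(sorted(buyers, reverse=True), sorted(sellers)) if v > c)
print(f"bazaar surplus: {surplus:.0f}   maximum possible surplus: {ideal:.0f}")
```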
that interactions among individuals can result in the emergence of global outcomes—in this case, patterns of prices and trades—that were no part of anyone’s intention. (Location 632)
The idea of competitive equilibrium is an extreme version of such order, where the cacophony of shouts results in a single price sufficient to balance everyone’s desire to trade, with just enough goods being offered by the sellers to meet the buyers’ demands, and the resulting trades maximizing the total profit available in the market. (Location 640)
In this world, we dismiss the notion that a unique, global, competitive equilibrium price emerges, and we embrace the chaotic machinations of the traders. (Location 644)
The power of local interactions to form unexpected global patterns is remarkable. (Location 649)
These changes likely pushed the markets toward a critical state (an idea that we will explore in Chapter 11), where even a small event had the potential to cascade into a much larger chain reaction. (Location 684)
Such trades are not without hazards. The problem with selling such a large number of shares all at once is that you can easily cause the price to plummet if you are not careful. (Location 694)
Initially, if the market is liquid, there are some buyers around to purchase your shares at roughly the going price. (Location 696)
Such an algorithm should be programmed to track key data from the market, such as the current trading volume, price, and time of day. (Location 707)
The firm’s algorithm had one simple rule: feed in orders to the market so that these orders constitute less than 9 percent of the overall trading volume during the previous minute. (Location 710)
In essence, the algorithm hitches a free ride on the volume information coming from the market and uses this as a proxy for reasonable prices. By doing so, it avoids having to make any difficult predictions about when to sell. (Location 714)
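A bare-bones rendering of that participation rule, with a toy market of my own invention standing in for the real order flow, makes the hidden feedback easy to see:

```python
def next_order(remaining, prev_minute_volume, cap=0.09):
    """Shares to sell this minute under the 'under 9 percent of last minute's volume' rule."""
    return min(remaining, int(cap * prev_minute_volume))

remaining, volume, orders = 75_000, 100_000, []
while remaining > 0:
    order = next_order(remaining, volume)
    orders.append(order)
    remaining -= order
    volume += 3 * order        # toy assumption: hot-potato reselling inflates measured volume
print(orders)                  # orders grow minute by minute as volume feeds back on itself
```

Because the rule reads volume as a sign of liquidity, anything that inflates volume, including the algorithm's own shares being churned among fast traders, licenses it to sell faster still.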
The combination of highly connected markets and quickly conducted trades has formed a new kind of complex system unforeseen even a decade ago. (Location 728)
If the (unintended) feedback loops emerging from the various connections are negative, the reverberations in the system slowly die out, and the markets are better for the experience, as prices realign with one another. (Location 730)
creating something akin to the horrible screeching sound we hear when a microphone is held too close to a loudspeaker. (Location 732)
their net holdings of any given security are small. (Location 740)
One of our interests was what would happen in a hybrid market composed of both machine and human traders. (Location 744)
HFTs’ distinct speed advantages may be causing flurries of machine trading, punctuated by quieter periods where the machines remain in the background waiting to take advantage of human errors. (Location 749)
A game of high-stakes hot potato ensued, in which the HFTs began to buy and sell to one another, with only occasional leaks of the contracts out to other market participants. (Location 756)
The algorithm, blind to everything but the volume, saw this increased activity as a sign that the market was liquid and that prices were stable, and it began to dump even more shares into an already volatile mix. (Location 759)
This mechanism had been put in place by the exchange, and it was designed to recognize market conditions in which the execution of further trades would result in unnaturally large price swings. (Location 766)
By linking the number of trades to only volume, a positive feedback loop was unintentionally embedded in the algorithm: (Location 773)
However, with the rapidly trading HFTs and their desire to maintain relatively neutral share positions, a new market dynamic formed that embedded a positive feedback loop into the system. (Location 776)
Thus, the transactions that did occur were happening at prices that became more and more extreme over time. (Location 795)
At the most extreme, securities were traded at their stub prices, with some shares going for a penny and others for $100,000. (Location 797)
For example, the feedback loops induced by the HFTs could be dampened by imposing transaction taxes or redesigning markets to lessen the importance of nanosecond-scale speed. (Location 808)
At each stage of its creation, we have accrued additional complexity in the name of added benefits: connecting markets with one another will ensure that price discrepancies will be eliminated quickly, having high-frequency traders will guarantee a ready trading partner for any transaction, using derivatives will provide a means for farmers to hedge the risks of bad weather and for pension funds to insure their portfolios, and so (Location 811)
The human systems connected with financial institutions are also vulnerable. Indeed, there are examples where the actions of a single trader brought down an entire institution, as with the fall of the 233-year-old Barings Bank in 1995. (Location 826)
The impact of such an attack is hard to predict, but at the very least it would seriously erode confidence, and it could be far more consequential, leading to a partial collapse of the very markets that ensure our economic survival. (Location 831)
Thus, in a very real sense, economists and policy makers were fully equipped to understand each part of the system. (Location 843)
thinking that understanding the parts of a system implies that you understand the whole system is a sin that is committed all too often. (Location 845)
Higher house prices encouraged more buyers, lowered lending standards, and resulted in less risky derivatives and easier government policies, and each of these fed back on the others, reinforcing the chain of effects. (Location 849)
So it was with mortgages at the start of the financial crisis. No entity wanted to forgo any possible trade and lose some immediate profit. (Location 862)
However, if one firm fails to meet its obligation to pay in the case of a default (think American International Group, aka AIG), the entire system unravels. (Location 867)
but globally irrational, arrangements. Without well-placed firebreaks, these systems are subject to small events having catastrophic consequences. (Location 869)
Thus, in a flash crash, once liquidity dries up, the expectations of market makers may change to the point where they believe they will no longer be able to find reasonable trading partners, which causes them to withdraw their orders and realize their expectations, further exacerbating the liquidity crisis. (Location 875)
Complex systems, whether intentional or not, are playing an increasingly important role in our world. (Location 882)
Obviously, such an assumption greatly simplifies the resulting model, as the representative agent can stand in for a vast horde of individually quirky consumers who might be difficult to track one by one. (Location 894)
Never has a science, or supposed science, been so generously indulged. (Location 905)
And never have experiments left in their wake more wreckage, unpleasant surprises, blasted hopes and confusion, to the point that the question seriously arises whether the wreckage is (Location 906-907)
For this sequence to be successful, it requires a narrow range of temperatures to be maintained inside the hive (close to 94 degrees Fahrenheit). (Location 910)
It turns out that worker bees have two temperature-related behaviors. When a worker gets too cold, it seeks out other bees and rapidly buzzes its wings to generate heat. When it gets too warm, it moves away from others and fans its wings to form air currents that will cool things down (see Figure 4.1). The temperature in the hive depends on the actions of its workers. (Location 913)
When the temperature creeps below the set point, large numbers of bees instantly huddle together and buzz their wings, causing a large increase in the temperature. As the temperature rises, it quickly goes past the ideal point, and all of the bees switch to their cooling behavior and scatter and fan, inducing a rapid drop in temperature. (Location 926)
So, at least in terms of a honeybee hive, the representative agent model would be very misleading, implying hive temperatures that oscillate wildly when in fact they are actually quite stable. (Location 941)
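A minimal simulation makes the contrast vivid. The set points, response strengths, and noise below are illustrative assumptions; the point is only that identical set points produce large swings while spread-out set points hold the hive temperature steady.

```python
import random

def simulate(set_points, steps=200, temp=90.0):
    history = []
    for _ in range(steps):
        heating = sum(temp < s for s in set_points)   # bees huddling and buzzing
        cooling = sum(temp > s for s in set_points)   # bees fanning
        temp += 0.02 * (heating - cooling) + random.uniform(-0.1, 0.1)
        history.append(temp)
    return history

homogeneous   = [94.0] * 100                                         # identical bees
heterogeneous = [94.0 + random.uniform(-3, 3) for _ in range(100)]   # varied set points

for name, bees in (("homogeneous", homogeneous), ("heterogeneous", heterogeneous)):
    h = simulate(bees)
    print(f"{name:>13}: late-run temperature swing = {max(h[100:]) - min(h[100:]):.2f} degrees F")
```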
As prices go up, the information eventually changes to a point where, in perfect synchronicity, all of the traders want to sell, inducing a price crash. (Location 946)
As in the case of the hive, a market with homogeneous traders leads to wild price oscillations. (Location 947)
Thus, in a homogeneous world, it takes at least as many rabble-rousers as the fixed sensitivity level to catalyze a movement. (Location 957)
Both of the social worlds above are characterized by a critical tipping point, whereby below this point no one joins the movement and above it everyone does. (Location 965)
Note that in both worlds, the average threshold for the population is about fifty, so the different tipping points are due to the variations in the thresholds of the two worlds. (Location 968)
Thus, in the social movement model, we find a case where heterogeneity leads to instability rather than stability. (Location 970)
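The mechanism is easy to see in a Granovetter-style threshold sketch. Population size, thresholds, and seed counts are illustrative; both worlds have an average threshold near fifty, as in the book's example.

```python
def movement_size(thresholds, rabble_rousers):
    """People join once the number already protesting meets their personal threshold."""
    joined = 0
    while True:
        now = sum(t <= rabble_rousers + joined for t in thresholds)
        if now == joined:
            return rabble_rousers + joined
        joined = now

homogeneous   = [50] * 100               # everyone waits for fifty others
heterogeneous = list(range(1, 101))      # thresholds 1 through 100, same average

for seed in (1, 49, 50):
    print(f"{seed:>2} rabble-rousers -> homogeneous: {movement_size(homogeneous, seed):>3}, "
          f"heterogeneous: {movement_size(heterogeneous, seed):>3}")
```

With identical thresholds nothing happens until the seed itself reaches fifty, at which point everyone joins at once; with spread-out thresholds a single instigator can start a chain in which each joiner pushes the next person over their threshold.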
In the case of the social movement, there is positive feedback, and a graduated response is like a rolling snowball, where the accumulation of snow makes it bigger and heavier and more likely to pick up additional snow. (Location 974)
Policy can often influence the level of heterogeneity in the system and thus determine the system’s overall behavior. (Location 979)
However, if you want to quash a social rebellion, having a homogeneous population with a high threshold will prevent small events from growing into revolutions. (Location 981)
Manufacturing is in many respects the opposite of an emergent system. It is a system that thrives on homogeneity. (Location 997)
While error avoidance is useful in the manufacture of a well-defined good, it is a dangerous bias if we want to discover new things. (Location 998)
An obvious search strategy here is to simply look around our current fog-bound location and take a step uphill. Once we take that step, we can look around anew, as a bit of new territory will be revealed, and we can again step uphill. (Location 1008)
Here we note our coordinates and declare to the world that we have found a high point. This type of search strategy is known, not surprisingly, as hill climbing. (Location 1011)
we will be at a local high point. (Location 1013)
If, instead, the landscape resembles that of the Himalayas, Andes, or Rockies, then it will be quite likely that our hill climbers will end up on local rather than global peaks. (Location 1023)
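Here is a minimal hill-climbing sketch on an arbitrary rugged one-dimensional landscape of my own choosing; restarting it from different points shows climbers stranded on different local peaks.

```python
import math, random

def elevation(x):                                       # an arbitrary rugged landscape
    return math.sin(3 * x) + 0.6 * math.sin(7 * x) - 0.05 * (x - 3) ** 2

def hill_climb(x, step=0.05, tries=10_000):
    for _ in range(tries):
        candidate = x + random.uniform(-step, step)     # look around the current spot
        if elevation(candidate) > elevation(x):         # only ever step uphill
            x = candidate
    return x

peaks = sorted({round(hill_climb(random.uniform(0, 6)), 2) for _ in range(20)})
print(peaks)    # several distinct local peaks, not one global answer
```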
Systems that have a lot of interaction among their various dimensions are known as nonlinear systems. (Location 1051)
systems, so science treats as some sideshow curiosity an aspect of the world that is actually the norm. (Location 1053)
Thus, optimizing on each dimension alone and ignoring the others is likely to lead to an overall ensemble that is a fashion faux pas. (Location 1061)
Similarly, consider the problem of finding the best cocktail, in either the mixed-drink or drug sense. (Location 1069)
However, what’s good for refining a manufacturing process is likely bad for discovery. (Location 1076)
While this is seemingly at odds with her overall goal of finding the highest point possible, this short-term loss offers the long-term potential of moving her to a new slope that might result in her discovering a much higher peak. (Location 1080)
While the algorithm still wants to march uphill, the noise makes it willing to accept occasional errors and march downhill as long as the temperature is high or the loss of elevation is small. (Location 1090)
lessening the algorithm’s tendency to take large downhill steps, until the temperature is so low that the algorithm reverts to its pure hill-climbing behavior. (Location 1092)
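A simulated-annealing variant of the same climber, sketched below with an illustrative cooling schedule, accepts occasional downhill steps while the temperature is high and reverts to pure hill climbing as it cools.

```python
import math, random

def elevation(x):                                       # same toy landscape as above
    return math.sin(3 * x) + 0.6 * math.sin(7 * x) - 0.05 * (x - 3) ** 2

def anneal(x, temp=2.0, cooling=0.9995, steps=20_000, step=0.05):
    for _ in range(steps):
        candidate = x + random.uniform(-step, step)
        drop = elevation(x) - elevation(candidate)      # positive means a downhill move
        if drop <= 0 or random.random() < math.exp(-drop / temp):
            x = candidate                               # uphill moves always accepted,
        temp *= cooling                                 #   downhill ones less often as it cools
    return x

print(round(anneal(random.uniform(0, 6)), 2))   # usually settles on a much higher peak than plain hill climbing
```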
However, most diseases exist in the realm of complex biological systems, and such systems tend to have built-in redundancies that make the system as a whole robust in the face of a single line of attack. (Location 1111)
Perhaps the best-known drug cocktail is the set of antiretroviral drugs used in the treatment of HIV/AIDS. (Location 1115)
the costly efforts of thousands of researchers. Once this understanding arose, the strategy of simultaneously using multiple drugs, each of which defeated a different class of viral mutations, became an obvious approach. (Location 1123)
As we enter an era of cheap genotyping, it is likely that future cancer diagnoses will be tied to the genotype of an individual’s cancer. (Location 1188)
When the bacterium is moving toward the drop, it tends to spend more time going straight than tumbling. (Location 1227)
This type of behavior will, in general, cause the bacterium to move up the chemical gradient (Location 1228)
and he found that the relative strength of the two gradients is what matters—if the attractant gradient is stronger than that of the repellent, the bacterium moves forward, and if not, it moves away. (Location 1271)
Moreover, it demonstrates how those preferences can be influenced by the slime mold’s internal state: when it is starved, it will trade off additional risk (in the form of being exposed to more light) for extra food. (Location 1286)
Marketers are well aware of this attraction effect and will often introduce an inferior product to boost sales of an existing one. (Location 1295)
tend to have more of them than most other species? What neurons are really good at is transmitting signals across large distances. (Location 1320)
Perhaps larger-scale social systems, such as honeybees in a hive, people in an organization, or a collection of interconnected markets, may be performing thinking-like computations. We’ll explore this topic in the next chapter. (Location 1331)
Once a tendril happens upon a rich source of nutrients, the organism sends out additional tendrils that go directly to this location to help transport back the find. (Location 1344)
Tendrils emerge from this mass, too, but rather than seeking out nutrients they seem to be searching for a new outer membrane—much the way a hermit crab seeks a new shell. (Location 1350)
At this point, it is as if this tendril pulls the entire mass of particles into its new skin. (Location 1352)
The organism described above is not some strange new form of extraterrestrial life but rather a colony of honeybees viewed from enough distance that we have difficulty making out the individual bees. (Location 1355)
Alas, this comforting story of a finely tuned aristocracy could not be further from the truth, though the reality here is far more rich, fascinating, and useful. (Location 1359)
plentiful, honey has been stored, and the colony is rapidly growing. When the colony swarms, the old queen takes off with roughly half of the hive. (Location 1373-1374)
The swarm, with the queen huddled in the middle, is vulnerable at this time, as it is exposed to the vagaries of weather and predation. (Location 1375)
The key to this step of the process is that the time spent dancing is tied to the scout’s perception of the quality of the site, with higher-quality sites receiving longer dances. (Location 1381)
This results in a positive feedback loop for the higher-quality sites. (Location 1385)
This mechanism allows the swarm to conduct many parallel searches and to find a new home relatively quickly. (Location 1395)
For example, at a quorum of five, the system picks the best choice about 50 percent of the time. At a quorum of twenty, there is a 70 percent chance of picking the best option. (Location 1431)
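The following stripped-down sketch captures just the quorum idea; the recruitment rates, site qualities, and resulting percentages are my own assumptions and will not match the book's figures, but the pattern is the same: larger quorums trade speed for accuracy.

```python
import random

def trial(qualities, quorum):
    support = [0] * len(qualities)              # scouts currently committed to each site
    while max(support) < quorum:
        if sum(support) and random.random() < 0.7:
            # recruited by a dance; better sites are danced for longer
            weights = [s * q for s, q in zip(support, qualities)]
            site = random.choices(range(len(qualities)), weights=weights)[0]
        else:
            site = random.randrange(len(qualities))   # independent discovery
        support[site] += 1
    return support.index(max(support)) == 0           # did the best site win?

qualities = [1.0, 0.8, 0.6]                            # site 0 is the best option
for quorum in (3, 10, 30):
    wins = sum(trial(qualities, quorum) for _ in range(2000))
    print(f"quorum {quorum:>2}: best site chosen {100 * wins / 2000:.0f}% of the time")
```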
Risk aversion may be an important strategy in evolutionary systems, since a single failure of a species to reproduce will prune its branch off the evolutionary tree. (Location 1464)
An evolutionary strategy that relies on gambles versus sure things is, literally, quite risky. (Location 1467)
Do we see hivelike minds forming in human systems? Perhaps. (Location 1480)
Consumer goods, such as MP3 players and smartphones, may follow similar processes. At the start of these markets, early-adopting consumers go out and buy on a whim. (Location 1484)
Candidates get their buzz from contributions and the loyalty that they engender from potential voters who show up at rallies and promote the candidate to their friends and neighbors. (Location 1491)
When we considered the honeybee hive, we ignored the fact that the behavior of each scout bee was directed by a brain with around 1 million neurons (by comparison, ants have from 100,000 to 250,000 neurons, and we have around 11 billion). (Location 1498)
just considered each bee as being a particle following some simple rules. (Location 1500)
say, the neurons in a honeybee’s brain, can be treated abstractly at another level, say, by introducing the assumption that individual bees follow simple rules, regardless of how such rules are generated. (Location 1502)
For example, they present a monkey with a screen of randomly placed and moving dots, with some set proportion of these dots all moving either to the left or to the right. (Location 1518)
It may be that a lot of systems that seem intelligent are in fact the result of simple, interacting particles. (Location 1540)
Deborah Gordon and others have found that when an ant decides what to do, it is influenced by the actions of other ants. (Location 1544)
That will encourage other ants to seek food as well. (Location 1546)
Unfortunately, such a strategy can sometimes fail when a line of army ants inadvertently begins to follow its own trail, forming a circular mill (see Figure 7.5) that, with time, ends badly for all involved. (Location 1551)
Neurons sense the outside world and formulate useful decisions about what action to take. (Location 1563)
While the laws of physics may be fixed, the internal chemical environments in which these play out are not. (Location 1564)
As a result, evolutionary forces can form organisms relying on complex molecular interactions that allow metabolisms, information processing, reproduction, and even thoughtful decision making by individuals and superorganisms. (Location 1566)
Such a mechanism reemerged in the 1990s in discussions of “feebate” systems. In these auctions, fees imposed on less-efficient technologies, such as a gas-guzzling truck, are used to fund rebates that subsidize the purchase of more energy-efficient vehicles. (Location 1574)
Other rules, such as the English auction, persist over time, as both buyers and sellers find value in participating in such auctions. Moreover, societies that embrace these rules tend to thrive. (Location 1589)
Different religions try to invoke such beliefs in different ways, and even within a given religious branch there are often refinements (comedian George Carlin was able to reduce the Ten Commandments down to just two: (Location 1601)
Similar problems exist in social, government, military, and business domains, and perhaps these problems could be solved using related mechanisms. (Location 1611)
create decentralized decision systems that gather and highlight key information ranging from web-based searches to intelligence gathering for business or national security. (Location 1613)
Governed by simple rules, interacting systems can result in spontaneous, system-wide behavior that is both a part of those underlying rules yet wholly disconnected from them. (Location 1617)
At the heart of any complex system is a set of interacting agents. If we track who interacts with whom, we can uncover a network of connections among the agents. (Location 1625)
Alas, as in most complex systems, such sensible intuitions are wrong. (Location 1651)
Thus, part of the dynamics of this system is a set of isolated islands of common action being established as pairs of neighbors happen upon the same action. (Location 1661)
Once established, these islands are likely to grow in size as they accrete like-actioned neighbors. (Location 1663)
Thus, Lakeland breaks down into a set of very stable groups pursuing very different actions, even though all of the residents follow the same behavioral rule. (Location 1679)
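The flavor of this dynamic can be captured with a one-line behavioral rule on a ring of residents; the rule below (copy the most common action in your immediate neighborhood, breaking ties at random) is an illustrative stand-in, not necessarily the book's exact specification.

```python
import random
from collections import Counter

N, ACTIONS = 60, "ABC"
actions = [random.choice(ACTIONS) for _ in range(N)]          # everyone starts at random

for _ in range(5000):                                          # asynchronous updating
    i = random.randrange(N)
    neighborhood = [actions[(i - 1) % N], actions[i], actions[(i + 1) % N]]
    counts = Counter(neighborhood)
    top = max(counts.values())
    actions[i] = random.choice([a for a, c in counts.items() if c == top])

print("".join(actions))   # long frozen runs of A's, B's, and C's: islands of common action
```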
such as strategically targeting particular residents for behavioral changes that will result in large positive impacts on the overall state of the system. (Location 1686)
Similarly, views on political issues and choice of political party can be influenced by social networks. (Location 1693)
It has been found that changes to the structure of a network often have a big influence on system-wide behavior. (Location 1698)
While it is clear that small world networks should speed up message passing (after all, it can’t take any more steps than before, since you can always revert to the outer-ring, nearest-neighbor approach if need be), it is surprising how much less time it takes. (Location 1721)
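The networkx library makes the comparison a two-liner; the network size and rewiring probability below are illustrative choices.

```python
import networkx as nx

ring        = nx.watts_strogatz_graph(n=1000, k=4, p=0.0, seed=1)             # pure nearest-neighbor ring
small_world = nx.connected_watts_strogatz_graph(n=1000, k=4, p=0.05, seed=1)  # a few random shortcuts

print(nx.average_shortest_path_length(ring))          # around 125 steps on the plain ring
print(nx.average_shortest_path_length(small_world))   # far fewer once a handful of shortcuts exist
```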
We are only beginning to understand the impact of this new, hypernetworked world in terms of complex social dynamics. (Location 1777)
Mammals live, on average, for roughly 1 billion heartbeats. (Location 1788)
Moreover, heartbeat is tied to other physiological features, such as body mass and metabolic rate, so these too can be predicted. (Location 1793)
In the 1930s, Max Kleiber noted that the metabolic rate of an animal scales to its mass raised to the ¾ power (that is, it scales sublinearly). (Location 1816)
A power of ¾ implies that we only need two times the energy to sustain two and a half times the mass. (Location 1817)
this relationship implies that as animals get larger they are more efficient in the amount of energy needed per unit of mass. Since metabolism is tied to all kinds of other factors, such as oxygen intake and heart rate, (Location 1818)
Under this type of scaling, if you are sixteen times as big, you will live twice as long. (Location 1821)
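Those two statements amount to metabolic rate scaling as mass to the 3/4 power and lifespan scaling as mass to the 1/4 power, which a quick calculation confirms:

```python
print(2.5 ** 0.75)   # ≈ 1.99: two and a half times the mass needs only about twice the energy
print(16 ** 0.25)    # = 2.0 : sixteen times the mass implies about twice the lifespan
```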
When such laws appear, it suggests some underlying mechanism driving the entire system. (Location 1826)
The idea behind this mechanism is that even complex structures such as bodies face some constraints. (Location 1828)
It provides a nifty summary of the world around us, along with an identifiable reason for why such a law exists: physical constraints. (Location 1846)
For example, primates and parrots live about twice as long as one would predict using the scaling law. (Location 1848)
Thus—and oddly, given the scaling law—small dogs tend to live longer than large ones (though mice, guinea pigs, and rabbits tend to line up as expected, pet-buying parents beware). (Location 1852)
As can be seen, the higher the number of deaths in a given war, the fewer such wars we observe (thankfully). (Location 1864)
Zipf-like laws occur in other contexts as well. For example, the distribution of the size of cities or corporations also follows Zipf’s law. The largest city in a country has about twice the population of the second-largest, three times that of the third-largest, and so on. (Location 1886)
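Numerically, Zipf's law says the nth-largest city holds roughly the largest city's population divided by n; the anchor size below is an arbitrary illustration.

```python
largest = 8_000_000
for rank in range(1, 6):
    print(rank, round(largest / rank))   # 8,000,000; 4,000,000; ~2,666,667; 2,000,000; 1,600,000
```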
They have metabolisms tied to the flow of energy and people along various transportation and communication networks that produce knowledge and economic outputs along with various streams of waste that get processed and released into the surrounding air, water, and land. (Location 1894)
There are other metrics, such as economic output, inventive activity (measured by, say, patents or R&D employment), crime, and disease, that scale superlinearly. (Location 1904)
This superlinearity tends to be tied to the more social elements of cities. (Location 1906)
If the estimates are to be believed, as the world’s population grows, concentrating more and more people in urban areas, megacities will relieve some of the demands for resources such as roads and fuel. Unfortunately, increasing urbanization will not mitigate the demands for individual needs, such as housing and electricity, which rise linearly. (Location 1911)
The old woes of crime and disease—prominently featured in the dystopian views of most science fiction movies—will likely increase per capita as cities become bigger. (Location 1914)
This latter piece of ecosystem dynamics results in the second major externality found in the Balinese system—a farmer harvesting her rice crop may impose an uncompensated cost on neighboring farmers as the pests from her crop migrate to the neighboring crops. (Location 1956)
that tends to provide a definitive edge. “Tho’ Nature,” as Tennyson says, may be “red in tooth and claw,” the ability to cooperate rather than compete often allows a group to thrive far beyond its apparent means. (Location 2003)
To understand the embedded complexities of that system, it took a variety of contributions from across the sciences. (Location 2009)
If the prisoner’s dilemma were only about prisoners, it would be of passing interest. (Location 2030)
We use the repeated prisoner’s dilemma game as the foundational physics of the system. (Location 2070)
states, the surviving machines use very few of these, favoring simple structures that always defect. (Location 2099)
Thus, the evolving machines hijack their initial actions and repurpose them to serve as communication signals. (Location 2131)
If at least two cautiously cooperative strategies emerge simultaneously, they can receive higher-than-average payoffs, and evolution will favor their perpetuation. (Location 2133)
One way for this to happen is that the needed mutation simultaneously occurs in two machines. (Location 2153)
Another possibility is that a lone cautiously cooperative machine, since it is facing opponents that have only ever known defection, will inadvertently trigger cooperative behavior in some of the always-defecting machines. (Location 2160)
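The payoff logic behind these escape routes shows up even in a minimal repeated prisoner's dilemma, sketched here with standard illustrative payoffs and with tit-for-tat standing in for a "cautiously cooperative" machine (the book's evolved automata are more elaborate).

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = a, b
    return score_a, score_b

def always_defect(opponent_last):
    return "D"

def tit_for_tat(opponent_last):                    # cooperate first, then mirror the opponent
    return "C" if opponent_last in (None, "C") else "D"

print(play(always_defect, always_defect))   # (100, 100): mutual defection grinds along
print(play(tit_for_tat, always_defect))     # (99, 104): a lone cooperator is exploited only once
print(play(tit_for_tat, tit_for_tat))       # (300, 300): a pair of cooperators far outscores the rest
```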
We often observe that complex systems, after having developed a beautiful and seemingly robust structure, can collapse in an instant. (Location 2191)
As the sand continues to pile up, we eventually reach a point where a grain can no longer balance where it falls, and a little avalanche ensues as the grain tumbles onto its neighbor. (Location 2198)
At other times the grain begins an avalanche that triggers a chain reaction of additional grains tumbling across the pile. (Location 2203)
At any given time during our sand-pouring experiment, we could pause and take stock of the conditions at any location on the table. (Location 2208)
Every grain of sand we add, every tumble that occurs, is continually pushing the system toward a critical state. (Location 2211)
After that avalanche has devastated the pile, the system has relaxed enough so that new additions of sand either stay where they land or result in only small, localized avalanches that are quickly absorbed by subcritical neighbors. (Location 2213)
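A toy Bak-Tang-Wiesenfeld-style pile shows the signature pattern; the grid size and toppling threshold are the usual illustrative choices rather than anything specific to the book.

```python
import random
from collections import Counter

SIZE, THRESHOLD = 25, 4
pile = [[0] * SIZE for _ in range(SIZE)]
avalanche_sizes = []

for _ in range(50_000):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    pile[r][c] += 1                                   # drop one grain at a random spot
    unstable, size = [(r, c)], 0
    while unstable:
        i, j = unstable.pop()
        while pile[i][j] >= THRESHOLD:                # topple until this cell is stable
            pile[i][j] -= THRESHOLD
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < SIZE and 0 <= nj < SIZE: # grains pushed off the edge are lost
                    pile[ni][nj] += 1
                    unstable.append((ni, nj))
    avalanche_sizes.append(size)

print(Counter(avalanche_sizes).most_common(5))   # overwhelmingly tiny avalanches...
print(max(avalanche_sizes))                      # ...plus the occasional huge one
```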
Under these new conditions, the system is still driven toward critical states. (Location 2218)
For example, criticality in social systems might depend on features such as laws and regulations or financial risk. (Location 2221)
Or consider the banking and investment system, where various institutions try to maximize their returns by leveraging their assets and taking on risk. (Location 2225)
these systems can enter critical regimes where even small changes, perhaps the inability of one bank to repay a single loan, can result in a large avalanche as failure begets failure. (Location 2226)
As the taxes are raised, the farmers become increasingly disgruntled with the trade-off they must make. At some point, things may become so bad that a farmer might rebel or move elsewhere. (Location 2233)
As the government raises tax rates, it starts to push the system closer to criticality. (Location 2236)
Unlike physical systems, social systems are likely to embody additional endogenous forces that could accelerate their criticality. (Location 2245)
The rapid abandonment of Mayan cities in the Classic period could have been presaged by years of social policy that forced the system into a critical state. (Location 2251)
Any social system is continually perturbed by seemingly insignificant events such as bouts of bad weather, missteps by the ruler, and so on. (Location 2253)
Yet such actions and reactions slowly flow through the system and inexorably drive it to a critical state. (Location 2255)
Once the system entered such a state, even an inconsequential act could trigger large-scale change, the consequences of which we are only starting to grasp. (Location 2267)
captured and contributed to these events. Self-organized criticality is an interesting form of complexity where small pieces of the system interact locally with one another, mediated by a very simple rule governing change. (Location 2270)
But the lesson from self-organized criticality is that there are forces underlying systems such that even small events, normally inconsequential, can have huge impacts. (Location 2274)
In one type of reaction, energetic neutrons potentially collide with nearby nuclei, perhaps resulting in a fission event that releases some energy and even more energetic neutrons into the mix. (Location 2289-2290)
Chance plays an important role in such systems. (Location 2291)
Depending on the speed of that transformation, we can get either warming leading to the carbon-free generation of civilian power or the destructive force of a nuclear blast. (Location 2293)
Such interactions embody the first branch of a complex trinity that leads us to a fundamental theorem about complex adaptive systems. (Location 2296)
John von Neumann was a consultant to Aberdeen. Upon learning about ENIAC, he realized that it might be repurposed to help solve the “thermonuclear problem”—that is, a bomb predicated on nuclear fusion rather than fission—being (Location 2304)
In the annals of complex systems, this represents an important milestone in the use of electronic computation to understand complex interactions with serious, real-world implications. (Location 2311)
The approach used computationally generated randomness to solve complex problems, and it marks the formal beginning of what has come to be known as the Monte Carlo method. (Location 2314)
Each particle interacts with the others, and we can calculate the overall energy for any given configuration. The challenge posed in the paper was to find the most likely configurations for this system. (Location 2323)
Amazingly, if we track the various configurations that this algorithm visits over time, we find that these configurations converge on the exact distribution underlying the measure of interest. That is, the system will spend relatively more time in those configurations that have the highest measure of interest. (Location 2340)
The first key piece is that the algorithm always uses the existing status quo as an anchor. (Location 2348)
As you might suspect, the latter approach gives us a very different set of probabilities and, in the case of weather, a much more accurate prediction, since the weather today is a good predictor of the weather tomorrow. (Location 2352)
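A two-state weather chain, with transition probabilities invented purely for illustration, makes the distinction concrete: the conditional forecast given today's weather differs from the long-run share of rainy days.

```python
import random

P_SUNNY_TOMORROW = {"sunny": 0.9, "rainy": 0.5}    # illustrative transition probabilities

def tomorrow(today):
    return "sunny" if random.random() < P_SUNNY_TOMORROW[today] else "rainy"

state, rainy_days = "sunny", 0
for _ in range(100_000):
    state = tomorrow(state)
    rainy_days += state == "rainy"

print(rainy_days / 100_000)   # ≈ 1/6 of days are rainy in the long run,
                              # even though rain follows rain half the time
```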
The 1953 paper of Metropolis and colleagues used some of Markov’s ideas to create a new class of algorithms called Markov chain Monte Carlo (MCMC) methods. (Location 2356)
This requires that the probability of proposing x given y is identical to the probability of proposing y given x. (Location 2362)
Such systems collapse into a very well-behaved regime where the probability of finding it in any particular state is fixed and independent of where the system started. (Location 2365)
it takes the chain a certain amount of time to forget where it began and to find its more fundamental behavior. (Location 2368)
aspect of the MCMC’s remarkable behavior involves the acceptance criteria. (Location 2370)
as it guarantees that the algorithm converges to the underlying probability distribution implied by our measure of interest. (Location 2371)
as it requires information about all possible configurations of the system, and the number of such configurations is usually so large that performing this calculation is impossible. (Location 2373)
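A minimal Metropolis-style sketch shows both points at once: the acceptance rule needs only the ratio of the measure at the candidate and at the status quo (never a sum over all configurations), and the time spent in each state converges to the normalized measure. The five-state "fitness" values are arbitrary illustrations.

```python
import random
from collections import Counter

fitness = {0: 1.0, 1: 2.0, 2: 5.0, 3: 2.0, 4: 0.5}    # the measure of interest

def propose(x):                                        # symmetric proposal: step left or right
    return (x + random.choice([-1, 1])) % len(fitness)

state, visits = 0, Counter()
for _ in range(200_000):
    candidate = propose(state)
    if random.random() < min(1.0, fitness[candidate] / fitness[state]):   # only a ratio is needed
        state = candidate                              # otherwise keep the status quo
    visits[state] += 1

total = sum(fitness.values())                          # used here only to check the result
for x in sorted(fitness):
    print(x, round(visits[x] / 200_000, 3), round(fitness[x] / total, 3))
```

The third column, the normalized fitness, is never given to the algorithm, yet the visit frequencies line up with it once the chain has burned in, which is the behavior the fundamental theorem below formalizes.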
Detailed balance requires that when the system converges, the resulting transitions are reversible in the sense that the equilibrium probability of moving from one (Location 2375)
At the heart of Bayesian methods is the need to calculate some key probability distributions. Such distributions are often computationally impossible to calculate directly, but we can invoke the magic of MCMC. (Location 2384)
In short, a complex adaptive system behaves as if it were implementing an MCMC algorithm. (Location 2391)
The lily pads represent various states of the system, and the location of the frog marks the status quo configuration. (Location 2398)
The algorithm needs a measure of fitness for any given state of the system. (Location 2430)
In economics, we often think of agents pursuing profit (in the case of firms) or happiness (in the case of consumers). Thus, as long as agents in an adaptive system are pursuing a goal, we can use the measure of this goal as a means to drive our model. (Location 2433)
For example, if the proposal distribution is symmetric, an acceptance criterion that adopts the variant with a probability given by the variant’s fitness divided by the fitness of both the variant and the status quo also results in detailed balance. (Location 2440)
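That alternative acceptance rule (adopt the variant with probability equal to its fitness divided by the sum of the variant's and the status quo's fitness) can be checked against detailed balance directly; with a symmetric proposal q and target probabilities proportional to fitness f, the flows in the two directions match:

```latex
\[
A(x \to y) = \frac{f(y)}{f(x) + f(y)}, \qquad \pi(x) = \frac{f(x)}{\sum_z f(z)}
\]
\[
\pi(x)\, q(x,y)\, A(x \to y)
  = \frac{f(x)\, f(y)}{\bigl(f(x)+f(y)\bigr)\sum_z f(z)}\, q(x,y)
  = \pi(y)\, q(y,x)\, A(y \to x)
\]
```

The middle expression is symmetric in x and y once q(x,y) = q(y,x), which is exactly the detailed-balance condition.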
Suppose we have an adaptive system where an agent represents a possible state of the system. (Location 2445)
This leads to a fundamental theorem of complex adaptive systems: (Location 2447)
In the above adaptive system, after sufficient burn-in time, the distribution of the agents (states) in the system is given by the normalized fitness distribution. (Location 2448)
This theorem implies that, in general, such adaptive systems converge to a distribution of states governed by the normalized fitness. (Location 2451)
Rather, they tend to concentrate on the better solutions (given sufficient time), though on rare occasions they will find themselves in suboptimal circumstances. (Location 2453)
One implication of the theorem is that while adaptive agents tend to do well, they are not perfect. (Location 2456)
adaptive systems tend to concentrate on the fitter outcomes in the world, on rarer occasions they will end up in the bad outcomes. (Location 2457)
Thus, the need to accept configurations of lower fitness is a necessary evil that prevents the system from getting stuck on local maxima. (Location 2462)
annealing schedule, the system will tend to lock into areas of higher fitness. (Location 2466)
But the design of such annealing schedules is tricky, and once the system is cooled, it would be unable to adapt to a change in the underlying fitness landscape. (Location 2467)
these initial influences dissolve away over time and the subsequent links in the chain are driven by the more fundamental forces characterizing the Markov process. (Location 2470)
As we increase the size of the underlying space, the burn-in time increases, since it takes longer to explore the larger space. (Location 2472)
Finally, the shape of the space itself can influence burn-in time. (Location 2476)
While our theorem guarantees that the adaptive system will eventually fall into the normalized fitness distribution, the speed at which this happens depends on how quickly it can traverse the burn-in period. (Location 2479)
as the fitness distribution is unchanging in the sense that the identical structure gets the same measure of fitness in perpetuity. (Location 2486)
MCMC algorithms tend to be fairly robust if the choice of the proposal distribution is reasonable, though, as noted, this choice may influence burn-in time. (Location 2492)
Reduction does not tell us about construction. This is the fundamental insight of the study of complex systems. (Location 2537)
To truly understand hives, markets, and brains, we need to understand how the interactions of honeybees, traders, and neurons result in system-wide, aggregate behavior. (Location 2539)
These abstract creations, linked with the utility of a computer to help visualize their implications, show how simple, local, decentralized processes can result in global patterns. (Location 2561)
When systems are simple, we can easily trace their behavior from step to step, and in so doing make accurate predictions about the overall system’s behavior. (Location 2566)
as each new trace alters previous ones, making it extremely tricky and at times impossible to predict what will happen—as (Location 2568)
we build in feedback loops. Some types of feedback result in stabilizing forces, calming the system as a whole. (Location 2571)
and the interconnections among the parts resulted in a series of feedback loops able to take down a worldwide economy. (Location 2575)
However, in a world of increasing complexity, we are simply perfecting our ability to create such storms. (Location 2576)
Homogeneous systems, with all of the agents taking the same actions based on the same cues, can have much more dramatic responses to new events than heterogeneous ones. (Location 2578)
When complexity abounds, networks of connections matter. These networks, determining the interaction possibilities of agents, result in the emergence of global patterns. (Location 2616)
For example, even if we begin with people who have only a slight preference to live among similar types of people, we can easily end up with a highly segregated society. (Location 2618)
Notwithstanding a world red in tooth and claw predicted by individual incentives, there are enough examples of cooperation emerging in complex systems to provide a ray of hope. (Location 2633)
Ultimately, complexity abounds in the challenges that we face as a society. Take any of the major issues confronting humanity—climate (Location 2653)