Dawn of Everything Conclusion

Preface. Clearly, for their conclusion to make sense you’ll need to read the book and see the evidence for yourself. Since they challenge just about all of the ideas currently in fashion, you can find some pretty damning reviews of their book, but several I’ve read entirely misstate what was actually written: the old straw-man fallacy of inventing something the authors didn’t say and then shooting it down. And their attitude is not at all “we’re right, you’re wrong”; quite the opposite. They’re hoping to stir up fruitful avenues of inquiry and different, more meaningful ways of looking at the past, and my hope is that when the energy crisis brings civilization down, new societies can use this book as an inspiration for how to avoid authoritarian kings, brutal agricultural societies, and more.

Here is part of their summary, with longer excerpts below (though they summarize their arguments constantly throughout the book, which is another reason you need to actually read it).

“In trying to synthesize what we’ve learned over the last 30 years, we asked questions such as “what happens if we accord significance to the 5,000 years in which cereal domestication did not lead to the emergence of pampered aristocracies, standing armies or debt peonage, rather than just the 5,000 years in which it did? What happens if we treat the rejection of urban life, or of slavery, in certain times and places as something just as significant as the emergence of those same phenomena in others?

We’d never have guessed, for instance, that slavery was most likely abolished multiple times in history in multiple places; and that very possibly the same is true of war. Obviously, such abolitions are rarely definitive. Still, the periods in which free or relatively free societies existed are hardly insignificant.

Much of this book has been devoted to recalibrating how we view past societies, to remind us that people did actually live in other ways, often for many centuries, even millennia. In some ways, such a perspective might seem even more tragic than our standard narrative of civilization as the inevitable fall from grace. It means we could have been living under radically different conceptions of what human society is actually about. It means that mass enslavement, genocide, prison camps, even patriarchy or regimes of wage labor never had to happen.

But on the other hand it also suggests that, even now, the possibilities for human intervention are far greater than we’re inclined to think.”


***

Graeber D, Wengrow D (2021) The Dawn of Everything: A New History of Humanity.

The case of North America not only throws conventional evolutionary schemes into chaos; it also clearly demonstrates that it’s simply not true to say that if one falls into the trap of ‘state formation’ there’s no getting out. Whatever happened in Cahokia, the backlash against it was so severe that it set forth repercussions we are still feeling today.

What we are suggesting is that indigenous doctrines of individual liberty, mutual aid and political equality, which made such an impression on French Enlightenment thinkers, were neither (as many of them supposed) the way all humans can be expected to behave in a State of Nature, nor (as many anthropologists now assume) simply the way the cultural cookie happened to crumble in that particular part of the world.

Still, the societies that European settlers encountered, and the ideals expressed by thinkers like Kandiaronk, only really make sense as the product of a specific political history: a history in which questions of hereditary power, revealed religion, personal freedom and the independence of women were still very much matters of self-conscious debate, and in which the overall direction, for the last three centuries at least, had been explicitly anti-authoritarian.

A careful review of oral traditions, historical accounts and the ethnographic record shows that those who framed what we call the ‘indigenous critique’ of European civilization were not only keenly aware of alternative political possibilities, but for the most part saw their own social orders as self-conscious creations, designed as a barrier against all that Cahokia might have represented – or indeed, all those qualities they were later to find so objectionable in the French.

We have numerous versions of the foundation of the League of Five Nations (the Seneca, Oneida, Onondaga, Cayuga and Mohawk), an epic known as the Gayanashagowa, which represents political institutions as self-conscious human creations.  The heroes in this mythology set about winning over the people of each nation to agree on creating a formal structure for heading off disputes and creating peace. Hence the system of titles, nested councils, consensus-finding, condolence rituals and the prominent role of female elders in formulating policy.

Since Haudenosaunee names are passed on like titles, there has continued to be an Adodarhoh, just as there is also still a Jigonsaseh and Hiawatha, to this day. Forty-nine sachems, delegated to convey the decisions of their nation’s councils, continue to meet regularly. These meetings always begin with a rite of ‘condolence’, in which they wipe away the grief and rage caused by the memory of anyone who died in the interim, to clear their minds to go about the business of establishing peace. This federal system was the peak of a complex apparatus of subordinate councils, male and female, all with carefully designated powers – but none with actual powers of compulsion.

It is an anthropological commonplace that if you want to get a sense of a society’s ultimate values it is best to look at what they consider to be the worst sort of behavior; and that the best way to get a sense of what they consider to be the worst possible behavior is by examining ideas about witches. For the Haudenosaunee, the giving of orders is represented as being almost as serious an outrage as the eating of human flesh.

Representing Adodarhoh as a king might seem surprising, since there seems no reason to think that, before the arrival of Europeans, either the Five Nations or any of their immediate neighbors had any immediate experience of arbitrary command. This raises precisely the question often directed against arguments that indigenous institutions of chiefship were in fact designed to prevent any danger of states emerging: how could so many societies be organizing their entire political system around heading off something (i.e. ‘the state’) that they had never experienced? The straightforward response is that most of the narratives were gathered in the 19th century, by which time any indigenous American was likely to have had long and bitter experience of the United States government: men in uniforms carrying legal briefs, issuing arbitrary commands and much more besides. So perhaps this element was added to these narratives later? Anything is possible, of course, but this strikes us as unlikely.

The exact time and circumstance of the League of Five Nations’ creation is unclear; dates have been proposed ranging from AD 1142 to sometime around 1650. No doubt the creation of such confederacies was an ongoing process; and surely, like almost all historical epics, the Gayanashagowa patches together elements, many historically accurate, others less so, drawn from different periods of time. What we know from the archaeological record is that Iroquoian society as it existed in the 17th century began to take form around the same time as the heyday of Cahokia.

By around AD 1100 maize was being cultivated in Ontario, in what later became Attiwandaronk (Neutral) territory. Over the next several centuries, the ‘three sisters’ (corn, beans and squash) became ever more important in local diets – though Iroquoians were careful to balance the new crops with older traditions of hunting, fishing, and foraging. The key period seems to be what’s called the Late Owasco phase, from AD 1230 to 1375, when people began to move away from their previous settlements (and from their earlier patterns of seasonal mobility) along waterways, settling in palisaded towns occupied all year round, in which longhouses, presumably based in matrilineal clans, became the predominant form of dwelling. Many of these towns were quite substantial, containing as many as 2,000 inhabitants (that is, something approaching a quarter of the population of central Cahokia).

References to cannibalism in the Gayanashagowa epic are not pure fantasy: endemic warfare and the torture and ceremonial sacrifice of war prisoners are sporadically documented from AD 1050. Some contemporary Haudenosaunee scholars think the myth refers to an actual conflict between political ideologies within Iroquoian societies at the time; turning especially on the importance of women, and agriculture, against defenders of an older male-dominated order where prestige was entirely based in war and hunting. Some kind of compromise between these two positions appears to have been reached around the 11th century AD, one result of which was a stabilizing of population at a modest level. Population numbers increased fairly quickly for two or three centuries after the widespread adoption of maize, squash and beans, but by the 15th century they had leveled off. The Jesuits later reported how Iroquoian women were careful to space their births, setting optimal population to the fish and game capacities of the region, not its potential agricultural productivity. In this way the cultural emphasis on male hunting actually reinforced the power and autonomy of Iroquoian women, who maintained their own councils and officials and whose power in local affairs at least was clearly greater than that of men.

It was precisely this combination of such conflicting ideological possibilities – and, of course, the Iroquoian penchant for prolonged political argument – that lay behind what we have called the indigenous critique of European society. It would be impossible to understand the origins of its particular emphasis on individual liberty, for instance, outside that context. Those ideas about liberty had a profound impact on the world. In other words, not only did indigenous North Americans manage almost entirely to sidestep the evolutionary trap that we assume must always lead, eventually, from agriculture to the rise of some all-powerful state or empire; but in doing so they developed political sensibilities that were ultimately to have a deep influence on Enlightenment thinkers and, through them, are still with us today.

In these accounts, the best we humans can hope for is some modest tinkering with our inherently squalid condition – and hopefully, dramatic action to prevent any looming, absolute disaster. The only other theory on offer to date has been to assume that there were no origins of inequality, because humans are naturally somewhat thuggish creatures and our beginnings were a miserable, violent affair; in which case ‘progress’ or ‘civilization’ – driven forward, largely, by our own selfish and competitive nature – was itself redemptive. This view is extremely popular among billionaires but holds little appeal to anyone else, including scientists, who are keenly aware that it isn’t in accord with the facts.

But over the course of the argument all parties have come to agree on one key point: that there was indeed something called ‘the Enlightenment’, that it marked a fundamental break in human history, and that the American and French Revolutions were in some sense the result of this rupture. The Enlightenment is seen as introducing a possibility that had simply not existed before: that of self-conscious projects for reshaping society in accord with some rational ideal. That is, of genuine revolutionary politics. Obviously, insurrections and visionary movements had existed before the 18th century. No one could deny that. But such pre-Enlightenment social movements could now largely be dismissed as so many examples of people insisting on a return to certain ‘ancient ways’ (that they had often just made up), or else claiming to act on a vision from God (or the local equivalent).

Pre-Enlightenment societies, or so this argument goes, were ‘traditional’ societies, founded on community, status, authority and the sacred. They were societies in which human beings did not ultimately act for themselves, individually or collectively. Rather, they were slaves of custom; or, at best, agents of inexorable social forces which they projected on to the cosmos in the form of gods, ancestors or other supernatural powers. Supposedly, only modern, post-Enlightenment people had the capacity to self-consciously intervene in history and change its course; on this everyone suddenly seemed to agree, no matter how virulently they might disagree about whether it was a good idea to do so.

All this might seem a bit of a caricature, and only a minority of authors were willing to state matters quite so bluntly. Yet most modern thinkers have clearly found it bizarre to attribute self-conscious social projects or historical designs to people of earlier epochs.

The British Empire maintained a system of indirect rule in various parts of Africa, India and the Middle East where local institutions like royal courts, earth shrines, associations of clan elders, men’s houses and the like were maintained in place, indeed fixed by legislation. Major political change – forming a political party, say, or leading a prophetic movement – was in turn entirely illegal, and anyone who tried to do such things was likely to be put in prison. This obviously made it easier to describe the people anthropologists studied as having a way of life that was timeless and unchanging.

In a Senegalese or Burmese village this might mean describing the daily round, seasonal cycles, rites of passage, patterns of dynastic succession, or the growing and splitting of villages, always emphasizing how the same structure ultimately endured. Anthropologists wrote this way because they considered themselves scientists (‘structural-functionalists’, in the jargon of the day). In doing so they made it much easier for those reading their descriptions to imagine that the people being studied were quite the opposite of scientists: that they were trapped in a mythological universe where nothing changed and very little really happened.

In traditional societies everything important has already happened. All the great founding gestures go back to mythic times, the dawn of everything, when animals could talk or turn into humans, sky and earth were not yet separated, and it was possible to create genuinely new things (marriage, or cooking, or war). If anyone in such a ‘traditional’ society does do something remarkable – establishes or destroys a city, creates a unique piece of music – the deed will eventually end up being attributed to some mythic figure anyway. The alternative notion, that history is actually going somewhere (the Last Days, Judgment, Redemption), is ‘linear time’, in which historical events take on significance in relation to the future, not just the past. What is startling is that anyone ever took this sort of argument seriously.

Why does it seem so odd, even counter-intuitive, to imagine people of the remote past as making their own history (even if not under conditions of their own choosing)? Part of the answer no doubt lies in how we have come to define science itself, and social science in particular.

Social science has been largely a study of the ways in which human beings are not free: the way that our actions and understandings might be said to be determined by forces outside our control. Any account which appears to show human beings collectively shaping their own destiny, or even expressing freedom for its own sake, will likely be written off as illusory. This is one reason why most ‘big histories’ place such a strong focus on technology. Dividing up the human past according to the primary material from which tools and weapons were made (Stone Age, Bronze Age, Iron Age) or else describing it as a series of revolutionary breakthroughs (Agricultural Revolution, Urban Revolution, Industrial Revolution), they then assume that the technologies themselves largely determine the shape that human societies will take for centuries to come – or at least until the next abrupt and unexpected breakthrough comes along to change everything again.

We are hardly about to deny that technologies play an important role in shaping society. Obviously, technologies are important: each new invention opens up social possibilities that had not existed before. At the same time, it’s very easy to overstate the importance of new technologies in setting the overall direction of social change. To take an obvious example, the fact that Teotihuacanos or Tlaxcalteca employed stone tools to build and maintain their cities, while the inhabitants of Mohenjo-daro or Knossos used metal, seems to have made surprisingly little difference to those cities’ internal organization or even size. Nor does our evidence support the notion that major innovations always occur in sudden, revolutionary bursts, transforming everything in their wake.

Nobody, of course, claims that the beginnings of agriculture were anything quite like, say, the invention of the steam-powered loom or the electric light bulb. We can be fairly certain there was no Neolithic equivalent of Edmund Cartwright or Thomas Edison, who came up with the conceptual breakthrough that set everything in motion.

Instead of some male genius realizing his solitary vision, innovation in Neolithic societies was based on a collective body of knowledge accumulated over centuries, largely by women, in an endless series of apparently humble but in fact enormously significant discoveries. Many of those Neolithic discoveries had the cumulative effect of reshaping everyday life every bit as profoundly as the automatic loom or lightbulb.

Every time we sit down to breakfast, we are likely to be benefiting from a dozen such prehistoric inventions. Who was the first person to figure out that you could make bread rise by the addition of those microorganisms we call yeasts? We have no idea, but we can be almost certain she was a woman. What we also know is that such discoveries were, again, based on centuries of accumulated knowledge and experimentation – recall how the basic principles of agriculture were known long before anyone applied them systematically – and that the results of such experiments were often preserved and transmitted through ritual, games, and forms of play.

Nor was this pattern of discovery limited to crops. Ceramics were first invented, long before the Neolithic, to make figurines, miniature models of animals and other subjects, and only later cooking and storage vessels. Mining is first attested as a way of obtaining minerals to be used as pigments, with the extraction of metals for industrial use coming only much later.

Choosing to describe history the other way round, as a series of abrupt technological revolutions, each followed by long periods when we were prisoners of our own creations, has consequences. Ultimately it is a way of representing our species as decidedly less thoughtful, less creative, less free than we actually turn out to have been.

One of the most striking patterns we discovered while researching this book – indeed, one of the patterns that felt most like a genuine breakthrough to us – was how, time and again in human history, that zone of ritual play has also acted as a site of social experimentation – even, in some ways, as an encyclopedia of social possibilities.

We have made the case that private property first appears as a concept in sacred contexts, as do police functions and powers of command, along with (in later times) a whole panoply of formal democratic procedures, like election and sortition, which were eventually deployed to limit such powers.

Here is where things get complicated. To say that, for most of human history, the ritual year served as a kind of compendium of social possibilities (as it did in the European Middle Ages, for instance, when hierarchical pageants alternated with rambunctious carnivals), doesn’t really do the matter justice. This is because festivals are already seen as extraordinary, somewhat unreal, or at the very least as departures from the everyday order. Whereas, in fact, the evidence we have from Paleolithic times onwards suggests that many – perhaps even most – people did not merely imagine or enact different social orders at different times of year, but actually lived in them for extended periods of time. The contrast with our present situation could not be starker. Nowadays, most of us find it increasingly difficult even to picture what an alternative economic or social order would be like. Our distant ancestors seem, by contrast, to have moved regularly back and forth between them.

If something did go terribly wrong in human history – and given the current state of the world, it’s hard to deny something did – then perhaps it began to go wrong precisely when people started losing that freedom to imagine and enact other forms of social existence, to such a degree that some now feel this particular type of freedom hardly even existed, or was barely exercised, for the greater part of human history.

The example of Eastern Woodlands societies in North America suggests a more useful way to frame the problem. We might ask why it proved possible for their ancestors to turn their backs on the legacy of Cahokia, with its overweening lords and priests, and to reorganize themselves into free republics; yet when their French interlocutors effectively tried to follow suit and rid themselves of their own ancient hierarchies, the result seemed so disastrous. No doubt there are quite a number of reasons. But for us, the key point to remember is that we are not talking here about ‘freedom’ as an abstract ideal or formal principle (as in ‘Liberty, Equality and Fraternity!’). Over the course of these chapters we have instead talked about basic forms of social liberty which one might actually put into practice: (1) the freedom to move away or relocate from one’s surroundings; (2) the freedom to ignore or disobey commands issued by others; and (3) the freedom to shape entirely new social realities, or shift back and forth between them.

Let us clarify some of the ways in which this ‘propping-up’ of the third freedom actually worked. As long as the first two freedoms were taken for granted, as they were in many North American societies when Europeans first encountered them, the only kings that could exist were always, in the last resort, play kings. If they overstepped the line, their erstwhile subjects could always ignore them or move someplace else. Similarly, a police force that operated for only three months of the year, and whose membership rotated annually, was in a certain sense a play police force – which makes it slightly less bizarre that their members were sometimes recruited directly from the ranks of ritual clowns.

The three basic freedoms have gradually receded, to the point where a majority of people living today can barely comprehend what it might be like to live in a social order based on them.

‘There is no way out of the imagined order,’ writes Yuval Noah Harari in his book Sapiens. ‘When we break down our prison walls and run towards freedom’, he goes on, ‘we are in fact running into the more spacious exercise yard of a bigger prison.’ As we saw in our first chapter, he is not alone in reaching this conclusion. Most people who write history on a grand scale seem to have decided that, as a species, we are stuck and there’s no escape from the institutional cages we’ve made for ourselves.

One important factor would seem to be the gradual division of human societies into what are sometimes referred to as ‘culture areas’; that is, the process by which neighboring groups began defining themselves against each other and, typically, exaggerating their differences. As we saw in the case of Californian foragers and their aristocratic neighbors on the Northwest Coast, such acts of cultural refusal could also be self-conscious acts of political contestation, marking the boundary (in this case) between societies that rejected inter-group warfare, competitive feasting and household bondage and neighbors that embraced them.

The role of warfare warrants further discussion here, because violence is often the route by which forms of play take on more permanent features. For example, the kingdoms of the Natchez or Shilluk might have been largely theatrical affairs, their rulers unable to issue orders that would be obeyed even a mile or two away; but if someone was arbitrarily killed as part of a theatrical display, that person remained definitively dead even after the performance was over. It’s an almost absurdly obvious point to make, but it matters. Play kings cease to be play kings precisely when they start killing people; which perhaps also helps to explain the excesses of ritually sanctioned violence that so often ensued during transitions from one state to the other. The same is true of warfare. As Elaine Scarry points out, two communities might choose to resolve a dispute by partaking in a contest, and often they do; but the ultimate difference between war and most other kinds of contest is that anyone killed or disfigured in a war remains so, even after the contest ends.

While human beings have always been capable of physically attacking one another (and it’s difficult to find examples of societies where no one ever attacks anyone else, under any circumstances), there’s no actual reason to assume that war has always existed.

Technically, war refers not just to organized violence but to a kind of contest between two clearly demarcated sides. There is nothing particularly primordial about such arrangements; certainly, there is no reason to believe they are in any sense hardwired into the human psyche. On the contrary, it’s almost invariably necessary to employ some combination of ritual, drugs and psychological techniques to convince people, even adolescent males, to kill and injure each other in such systematic yet indiscriminate ways.

It would seem that for most of human history, no one saw much reason to do such things; or if they did, it was rare. Systematic studies of the Paleolithic record offer little evidence of warfare in this specific sense. Moreover, since war was always something of a game, it’s not entirely surprising that it has manifested itself in sometimes more theatrical and sometimes more deadly variations. Ethnography provides plenty of examples of what could best be described as play war: either fought with non-deadly weapons or, more often, battles involving thousands on each side where the number of casualties after a day’s ‘fighting’ amounts to perhaps two or three. Even in Homeric-style warfare, most participants were basically there as an audience while individual heroes taunted, jeered and occasionally threw javelins or shot arrows at one another, or engaged in duels. At the other extreme, as we’ve seen, there is an increasing amount of archaeological evidence for outright massacres, such as those that took place among Neolithic village dwellers in central Europe after the end of the last Ice Age.

What strikes us is just how uneven such evidence is. Periods of intense inter-group violence alternate with periods of peace, often lasting centuries, in which there is little or no evidence for destructive conflict of any kind. War did not become a constant of human life after the adoption of farming; indeed, long periods of time exist in which it appears to have been successfully abolished. Yet it had a stubborn tendency to reappear, if only many generations later.

At this point another new question comes into focus. Was there a relationship between external warfare and the internal loss of freedoms that opened the way, first to systems of ranking and then later on to large-scale systems of domination, like those we discussed in the later chapters of this book: the first dynastic kingdoms and empires, such as those of the Maya, Shang or Inca? And if so, how direct was this correlation? One thing we’ve learned is that it’s a mistake to begin answering such questions by assuming that these ancient polities were simply archaic versions of our modern states.

The state, as we know it today, results from a distinct combination of elements – sovereignty, bureaucracy and a competitive political field – which have entirely separate origins.

Early states all deployed spectacular violence at the pinnacle of the system (whether that violence was conceived as a direct extension of royal sovereignty or carried out at the behest of divinities); and all to some degree modelled their centers of power – the court or palace – on the organization of patriarchal households. Is this merely a coincidence? On reflection, the same combination of features can be found in most later kingdoms or empires, such as the Han, Aztec or Roman. In each case there was a close connection between the patriarchal household and military might. But why exactly should this be the case? It’s hard to answer since existing debates almost invariably begin with terms derived from Roman Law, and for a number of reasons this is problematic. The Roman Law conception of natural freedom is based on the power of a male head of household to dispose of his property as he sees fit.

Some have pointed out that Roman Law conceptions of property (and hence of freedom) essentially trace back to slave law. The reason it is possible to imagine property as a relationship of domination between a person and a thing is because, in Roman Law, the power of the master rendered the slave a thing, not a person with social rights or legal obligations to anyone else.

Slaves trimmed their hair, carried their towels, fed their pets, repaired their sandals, played music at their dinner parties and instructed their children in history and math. At the same time, in terms of legal theory these slaves were classified as captive foreigners who, conquered in battle, had forfeited rights of any kind. As a result, a Roman master was free to rape, torture, mutilate or kill any of them at any time and in any way he had a mind to, without the matter being considered anything other than a private affair.

Thinking back to examples like the ‘capturing societies’ of Amazonia or the process by which dynastic power took root in ancient Egypt, we can begin to see how important that particular nexus of violence and care has been. Rome took the entanglement to new extremes, and its legacy still shapes our basic concepts of social structure.

We’ve seen how, in various parts of the world, direct evidence of warfare and massacres – including the carrying-off of captives – can be detected long before the appearance of kingdoms or empires. Much harder to ascertain, for such early periods of history, is what happened to captive enemies: were they killed, incorporated or left suspended somewhere in between? As we learned from various Amerindian cases, things may not always be entirely clear-cut. There were often multiple possibilities.

In certain ways Wendat, and Iroquoian societies in general around that time, were extraordinarily warlike. There appear to have been bloody rivalries fought out in many northern parts of the Eastern Woodlands even before European settlers began supplying indigenous factions with muskets, resulting in the ‘Beaver Wars’. The early Jesuits were often appalled by what they saw, but they also noted that the ostensible reasons for wars were entirely different from those they were used to. All Wendat wars were, in fact, ‘mourning wars’, carried out to assuage the grief felt by close relatives of someone who had been killed. Typically, a war party would strike against traditional enemies, bringing back a few scalps and a small number of prisoners. Captive women and children would be adopted. The fate of men was largely up to the mourners, particularly the women, and appeared to outsiders at least to be entirely arbitrary. If the mourners felt it appropriate a male captive might be given a name, even the name of the original victim. The captive enemy would henceforth become that other person and, after a few years’ trial period, be treated as a full member of society. If for any reason that did not happen, however, he suffered a very different fate. For a male warrior taken prisoner, the only alternative to full adoption into Wendat society was excruciating death by torture.

True, the Jesuits conceded, the Wendat torture of captives was no more cruel than the kind directed against enemies of the state back home in France. What seems to have really appalled them, however, was not so much the whipping, boiling, branding, cutting-up – even in some cases cooking and eating – of the enemy as the fact that almost everyone in a Wendat village or town took part, even women and children. The suffering might go on for days, with the victim periodically resuscitated only to endure further ordeals, and it was very much a communal affair. The violence seems all the more extraordinary once we recall how these same Wendat societies refused to spank children, directly punish thieves or murderers, or take any measure against their own members that smacked of arbitrary authority. In all other areas of social life they solved problems through calm and reasoned debate.

It would be easy to make an argument that repressed aggression must be vented in one way or another, so that orgies of communal torture are simply the flipside of a non-violent community. But it doesn’t really work. In fact, Iroquoia seems to be precisely one of those regions of North America where violence flared up only during certain specific historical periods and then largely disappeared in others. In what archaeologists term the ‘Middle Woodland’ phase, for instance, between 100 BC and AD 500 – corresponding roughly to the heyday of the Hopewell civilization – there seems to have been a general peace. Later on, signs of endemic warfare reappear. Clearly, at some points in their history people living in this region found effective ways to ensure that vendettas didn’t escalate into a spiral of retaliation or actual warfare; at other times the system broke down and the possibility of sadistic cruelty returned.

What, then, was the meaning of these theatres of violence? One way to approach the question is to compare them with what was happening in Europe around the same time. The Wendat who visited France were equally appalled by the tortures exhibited during public punishments and executions, but what struck them as most remarkable is that ‘the French whipped, hanged, and put to death men from among themselves’, rather than external enemies.

Wendat punitive actions against war captives (those not taken in for adoption) required the community to become a single body, unified by its capacity for violence. In France, by contrast, ‘the people’ were unified as potential victims of the king’s violence.  But the contrasts run deeper still.  As a Wendat traveler observed of the French system, anyone – guilty or innocent – might end up being made a public example. Among the Wendat themselves, however, violence was firmly excluded from the realm of family and household. A captive warrior might either be treated with loving care and affection or be the object of the worst treatment imaginable. No middle ground existed. Prisoner sacrifice was not merely about reinforcing the solidarity of the group but also proclaimed the internal sanctity of the family and the domestic realm as spaces of female governance where violence, politics and rule by command did not belong. Wendat households, in other words, were defined in exactly opposite terms to the Roman familia.

Public torture, in 17th century Europe, created searing, unforgettable spectacles of pain and suffering in order to convey the message that a system in which husbands could brutalize wives, and parents beat children, was ultimately a form of love. Wendat torture, in the same period of history, created searing, unforgettable spectacles of pain and suffering in order to make clear that no form of physical chastisement should ever be countenanced inside a community or household. Violence and care, in the Wendat case, were to be entirely separated. Seen in this light, the distinctive features of Wendat prisoner torture come into focus.

Time and again we found ourselves confronted with writing which simply assumes that the larger and more densely populated the social group, the more ‘complex’ the system needed to keep it organized. Complexity, in turn, is still often used as a synonym for hierarchy. Hierarchy, in turn, is used as a euphemism for chains of command (the ‘origins of the state’), which would mean that as soon as large numbers of people decide to live in one place or join a common project, they must necessarily abandon the second freedom – to refuse orders – and replace it with legal mechanisms for, say, beating or locking up those who don’t do as they’re told. But complex systems don’t have to be organized top-down, either in the natural or in the social world. That we tend to assume otherwise probably tells us more about ourselves than the people or phenomena that we’re studying.

In fact, ‘exceptions’ are fast beginning to outnumber the rules. Take cities. It was once assumed that the rise of urban life marked some kind of historical turnstile, whereby everyone who passed through had to permanently surrender their basic freedoms and submit to the rule of faceless administrators, stern priests, paternalistic kings or warrior-politicians – simply to avert chaos (or cognitive overload). To view human history through such a lens today is really not all that different from taking on the mantle of a modern-day King James, since the overall effect is to portray the violence and inequalities of modern society as somehow arising naturally from structures of rational management and paternalistic care: structures designed for human populations who, we are asked to believe, became suddenly incapable of organizing themselves once their numbers expanded above a certain threshold.

This is difficult to reconcile with archaeological evidence of how cities actually began in many parts of the world: as civic experiments on a grand scale, which frequently lacked the expected features of administrative hierarchy and authoritarian rule.

None of this variability is surprising once we recall what preceded cities in each region. That was not, in fact, rudimentary or isolated groups, but far-flung networks of societies, spanning diverse ecologies, with people, plants, animals, drugs, objects of value, songs and ideas moving between them in endlessly intricate ways. While the individual units were demographically small, especially at certain times of year, they were typically organized into loose coalitions or confederacies. At the very least, these were simply the logical outcome of our first freedom: to move away from one’s home, knowing one will be received and cared for, even valued, in some distant place.

Of course, monarchy, warrior aristocracies or other forms of stratification could also take hold in urban contexts, and often did. When this happened the consequences were dramatic. Still, the mere existence of large human settlements in no way caused these phenomena, and certainly didn’t make them inevitable.

For the origins of these structures of domination we must look elsewhere. Hereditary aristocracies were just as likely to exist among demographically small or modest-sized groups, such as the ‘heroic societies’ of the Anatolian highlands, which took form on the margins of the first Mesopotamian cities and traded extensively with them. Insofar as we have evidence for the inception of monarchy as a permanent institution it seems to lie precisely there, and not in cities.

In other parts of the world, some urban populations ventured partway down the road towards monarchy, only to turn back. Such was the case at Teotihuacan in the Valley of Mexico, where the city’s population – having raised the Pyramids of the Sun and Moon – then abandoned such aggrandizing projects and embarked instead on a prodigious program of social housing, providing multi-family apartments for its residents.

Elsewhere, early cities followed the opposite trajectory, starting with neighborhood councils and popular assemblies and ending up being ruled by warlike dynasts, who then had to maintain an uneasy coexistence with older institutions of urban governance. Something along these lines took place in Early Dynastic Mesopotamia, after the Uruk period: here again the convergence between systems of violence and systems of care seems critical.

Sumerian temples had always organized their economic existence around the nurturing and feeding of the gods, embodied in their cult statues, which became surrounded by a whole industry and bureaucracy of welfare. Even more crucially, temples were charitable institutions. Widows, orphans, runaways, those exiled from their kin groups or other support networks would take refuge there: at Uruk, for example, in the Temple of Inanna, protective goddess of the city, overlooking the great courtyard of the city’s assembly.

CHARITY & THE RISE OF CHIEFS

What happens when expectations that make freedom of movement possible – the norms of hospitality and asylum, civility and shelter – erode? Why does this so often appear to be a catalyst for situations where some people can exert arbitrary power over others? Perhaps stateless societies do regularly organize themselves in such a way that chiefs have no coercive power, but then how did top-down forms of organization ever come into being?  Perhaps it all goes back to charity. In Amazonian societies, not only orphans but also widows, the mad, disabled or deformed – if they had no one else to look after them – were allowed to take refuge in the chief’s residence, where they received a share of communal meals. To these were occasionally added war captives, especially children taken in raiding expeditions. Among the Safwa or Lushai, runaways, debtors, criminals or others needing protection held the same status as those who surrendered in battle. All became members of the chief’s retinue, and the younger males often took on the role of police-like enforcers. How much power the chief actually had over his retainers would vary, depending on how easy it was for wards to run away and find refuge elsewhere or maintain ties to relatives, clans, or outsiders willing to stand up for them.

In all such cases, the process of giving refuge did generally lead to the transformation of basic domestic arrangements, especially as captured women were incorporated, further reinforcing the rights of fathers. It is possible to detect something of this logic in almost all historically documented royal courts, which invariably attracted those considered freakish or detached. There seems to have been no region of the world, from China to the Andes, where courtly societies did not host such obviously distinctive individuals; and few monarchs who did not also claim to be the protectors of widows and orphans. One could easily imagine something along these lines was already happening in certain hunter-gatherer communities during much earlier periods of history. The physically anomalous individuals accorded lavish burials in the last Ice Age must also have been the focus of much caring attention while alive.