Enhanced Geothermal Systems (EGS) have a long way to go

Geothermal plants like The Geysers in California, the largest in the world, can be built over rare hot spots that also lie near population centers, so that transmission lines don’t end up costing as much as, or more than, the geothermal plant itself.  These sites are very uncommon; 94% of them are in California, with few spots left to be developed.

Clearly, if we could drill down into the earth anywhere and pour water down the hole to be heated into steam for turbines, an enormous amount of energy could be generated. This is what is meant by “enhanced” geothermal systems (EGS).

The drilling process is very similar to fracking, but far more advanced techniques still need to be developed for EGS.  The deeper the drill bit goes, the more likely it is to be destroyed by high temperatures.

Models that could improve drilling techniques by simulating the movement of fluid through rock at critical and supercritical conditions are not yet available.

Attempts to drill 2 km down or more have been going on since the early 1970s.  But the same problems that occurred back then are still occurring now:

  • Huge water losses (70 to 90% of the water disappears).
  • Permeability issues: minerals such as silica and lithium precipitate in flow channels, reducing fracture permeability. Removing the minerals is an expensive, time-consuming process that limits the usefulness of enhanced geothermal recovery.
  • Short-circuiting between the injection and production wells.
  • Mechanical failures.
  • Insufficient connectivity between wells to meet economic requirements for reservoir productivity over its lifetime.
  • Some of the few demonstration projects are “cheating” in that they sit on the edges of existing hydrothermal systems.
  • “Lessons learned” may not apply to new wells, since the geology of each area is unique.
  • Most of the areas under consideration lie along Basin and Range fault lines, where there is very little water to pour down drilled holes.
  • In most of the country, the required drilling depth, the kinds of rock to drill through, or the lack of huge amounts of water rule out development.
  • Nearby residents object to these attempts over worries about earthquakes and the chemicals being injected.

The required drilling depth is so great that this technology will likely always be too expensive, and may use more energy in drilling than is ever obtained.  But no one wants to look at EROI, since that would stop investor and research money from flowing.

I don’t want to waste time writing about a technology so far from commercial development, now or ever, with a likely negative EROI. Above all, EGS is useless because it generates electricity, and since transportation can’t be electrified, so what?

Here are a few articles on this technology FYI:

Selvans, Z. January 14, 2013. Enhanced geothermal systems promise dispatchable zero-carbon power. cleanenergyaction.org

Ziagos, J., et al. February 11, 2013. A technology roadmap for strategic development of enhanced geothermal systems. Proceedings of the 38th Workshop on Geothermal Reservoir Engineering, Stanford University, CA.

USDOE. 2008. Geothermal Tomorrow. U.S. Department of Energy.


Posted in Geothermal

Kurt Cobb Cheap oil, complexity and counterintuitive conclusions

Kurt Cobb. March 22, 2015.   Cheap oil, complexity and counterintuitive conclusions. Resource Insights.

It is a staple of oil industry apologists to say that the recent swift decline in the price of oil is indicative of long-term abundance. This kind of logic is leading American car buyers to turn once again to less fuel efficient automobiles–trading efficiency for size essentially–as short-term developments are extrapolated far into the future.

The success of such argumentation depends on a disability in the audience reading it. The audience must have amnesia about the dramatic developments in the oil markets in the last 15 years, which saw prices reach all-time highs in 2008 and then, after recovering from post-crash lows, linger at the highest average daily price ever from 2011 through most of 2014. And that audience must have myopia about the future. It is an audience whose attention has narrowed to the present, which becomes the only reference point for decision-making. History is bunk, and what is, always will be.

The alternative narrative is much more subtle and complex. As I’ve written before, the chief intellectual challenge of our age is that we live in complex systems, but we do not understand complexity. How can cheap oil be a harbinger of future supply problems in the oil market? Here’s where complexity, history and subtle thinking all have to combine at just the right intellectual temperature to reveal the answer.

Cheerleaders for cheap oil only seem to consider the salutary effects of low-priced oil on the broader economy and skip mentioning the deleterious effects of high-priced oil. They seem to ignore the possibility that the previously high price of oil actually caused the economy to slow and thereby dampened demand–which then led to a huge price decline.

If this is the primary driver behind cheaper oil, then cheaper oil in this case is not a sign of abundance, but of lack of affordability for many of the world’s people. It suggests that there is an oil price speed limit now in effect for the world economy above which it cannot grow for long.

If the ultimate significance of high-oil-prices-turned-to-low-oil-prices is a worldwide recession, then we will have a better idea whether such a price speed limit applies. The past does not offer much hope that it’s different this time. Economist James Hamilton has documented that 10 of the last 11 recessions were preceded by a significant rise in oil prices.

This time around we haven’t had a spike in prices, but rather persistently high prices above $100 a barrel for more than three and a half years prior to the oil plunge. This produced a different kind of pressure on the economy, but pressure nevertheless.

The Chinese economy is slowing down. The European economy is stagnant. Russia is or shortly will be in outright recession. Canada is teetering on the edge of recession and it seems Australia might go there, too. Japan continues its stagnant ways despite record monetary stimulus.

Cheap oil in its own way may be presaging, not a period of abundance, but one of austerity. That austerity has already hit the oil industry itself as it undergoes deep cuts in personnel and exploration and development spending.

The big question now is: Can oil be both abundant and cheap in the long run? Or are we living through the first period in history in which oil can only be “abundant” at high prices?

Of course, it’s only abundant if you can afford it. So, demand for oil would likely remain subdued under a high-price scenario suggesting that we’ve burned through the cheap stuff and must find alternative low-cost energy sources or possibly suffer ever worsening recessions until we do. We can only hope that the 2008 crash is not a prelude to even deeper recessions ahead.

This would also suggest that we are perilously close to a ceiling on oil production mediated by a combination of affordability, geology and the limits of technology. The risk is plain, and yet it is faith that sustains the optimists in a rock-solid belief that the future will be like the past–until, of course, it isn’t.

But faith isn’t a good basis for energy policy, even if it seems to have worked in the past. An intellectually honest consideration of all the complexities of our energy situation reveals risks to adequate oil supplies worldwide from here on out that we can only ignore at our peril.

Kurt Cobb is an author, speaker, and columnist focusing on energy and the environment. He is a regular contributor to the Energy Voices section of The Christian Science Monitor and author of the peak-oil-themed novel Prelude. In addition, he has written columns for the Paris-based science news site Scitizen, and his work has been featured on Energy Bulletin (now Resilience.org), The Oil Drum, OilPrice.com, Econ Matters, Peak Oil Review, 321energy, Common Dreams, Le Monde Diplomatique and many other sites. He maintains a blog called Resource Insights and can be contacted at kurtcobb2001@yahoo.com.

Posted in Inflation or Deflation, Kurt Cobb

All About Coal

[Below are my notes, statistics, and so on about coal from many publications]

United States Coal Production



Coal Production Map by region (2011 million short tons, % change from 2010). Source: Quarterly Coal Report, October- December 2011 (April 2012), U.S. Energy Information Administration (EIA).

Coal is a solid with a high carbon content and a low hydrogen content, typically only 5%.

The cost of coal mining is going up; production is declining in the Appalachian region and shifting to the Powder River Basin, where coal is cheaper to mine and lower in sulfur. Geology and the rising costs of complying with a variety of new regulations, transportation, explosives and wages are all making coal mining more expensive. Appalachian coal has been in structural decline as its seams have become thinner, more difficult to mine and less productive.

The Western region mines over half of the coal produced in the United States and holds some of the largest coal deposits in the world in the Powder River Basin in northeast Wyoming and southeast Montana.

Coal mining in the United States is a major industry; production peaked in 2008 at 1.2 billion short tons. Coal production is highly localized and depends on access to a full network of services, transport and power plants.

Coal accounts for approximately 45% of railroad carloads and 25% of the annual revenues of Class I freight railroads (Association of American Railroads, 2011). Trucks are often the quickest and easiest way to move coal and can easily be scaled up or down; they are used for shorter hauls, smaller quantities, and access to loading points at nearby electric and industrial plants. Barges move coal only from mines with access to the U.S. river system; they are slower but more cost-effective and fuel-efficient. About 20% of the coal used in U.S. electricity generation travels by inland waterways. Many coal companies use a multimodal delivery system that includes rail (short and long haul), trucks and barges. Coal transportation costs, especially in the West, can exceed mining costs (EIA 2012).

Hard Coal: Most Energy, Least Water

  1. 1% Anthracite. Domestic/Industrial including smokeless fuel
  2. 52% Bituminous. Metallurgical (coking coal) used to make iron and steel, Thermal (steam coal) used for power generation, cement making, and other industrial uses

Low-Rank Coal: Least Energy, Most Water

  1. 30% Sub-bituminous. Power generation, cement making, industrial uses
  2. 17% Lignite. Power generation mainly

Northern Appalachia coal, rated at 13,000 Btu per pound, is the highest quality, while Powder River Basin coal (from Wyoming and other Rocky Mountain states) is the lowest quality, with a rating of just 8,800 Btu per pound.
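A quick check of what these heat ratings imply for tonnage, a minimal sketch using only the Btu figures quoted above:

```python
# Energy-content comparison between Northern Appalachia (NAPP) and
# Powder River Basin (PRB) coal, using the Btu/lb ratings from the text.
NAPP_BTU_PER_LB = 13_000
PRB_BTU_PER_LB = 8_800

# Tons of PRB coal needed to deliver the same heat as one ton of NAPP coal
ratio = NAPP_BTU_PER_LB / PRB_BTU_PER_LB
print(f"{ratio:.2f} tons of PRB coal per ton of NAPP coal")  # ~1.48
```

In other words, replacing Appalachian coal with Powder River Basin coal means shipping roughly half again as much tonnage for the same energy, which matters given how large a share of rail traffic coal already is.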

Coal-fired power plant retirement

  • Most coal-fired capacity was built in the 1970s and 1980s.
  • About 19% of existing U.S. coal-fired capacity (63 GW) is at least 50 years old.
  • 62% of the capacity (212 GW) is between 30 and 50 years old.
  • Different scenarios estimate 35–65 GW of coal retirements by 2020, representing 10%–20% of the total U.S. coal fleet.
  • The top five firms have announced 11 GW of coal retirements by 2016, representing 15% of their coal portfolios.

What’s it good for besides electricity? The steel industry is the second largest user of coal and coal by-products to make steel for automobiles, bridges and buildings (Spiegel, 2006). Nearly 70% of global steel production depends on coal (Ernst & Young, 2011). Other coal users include concrete, cement, aluminum, paper, chemical, wood and roofing companies. Coal gas by-products such as methanol and ethylene are used to make products such as plastics, medicines, fertilizers and tar.

Full-scale carbon capture, utilization, and storage (CCS) technology has yet to be demonstrated in practice or proven commercially viable for coal-power-generating units, due to significant technological, financial and regulatory challenges.

Coal gasification and liquefaction technologies have been known for some time. Their products can range from transportation fuels and gases to valuable chemicals that can be used in the industrial gas, fertilizer, plastics, rubber and various other industries.

Europe: Coal to replace oil (IEA)

Europe faces a dilemma. Indigenous oil and gas reserves are limited; supplies are increasingly dependent on imports; prices are uncertain, but likely to fluctuate wildly, and [Russia] shows a willingness to use its oil and gas for political purposes. Thus, there is a strategic need for the EU to establish stable fuel supplies.

From this strategic perspective, the only primary fuel (apart from nuclear) which has the capacity and the infrastructure to meet this stability requirement is coal. Unlike oil and gas, coal is geographically widely distributed, with many countries trading it, limiting odds of a monopoly supply situation. Also, international trade represents less than 15% of total world production. Because of its large reserves, the price is likely to be more predictable than that of either oil or gas.

CTL plants will be expensive to build and expensive to run. They will therefore only be deemed worthwhile if concerns about the security of oil and gas supplies are such that substitute oil products via CTL can provide a level of reassurance at a price deemed worth paying. As with all ‘insurance policies’, this will always seem unnecessary until it is actually needed; under-investment or failure to pay the premiums means benefits will not be paid out when they are needed. CTL is capital-intensive and benefits substantially from economies of scale. Most studies of process economics have assumed that a full-scale commercial plant would produce 50,000-100,000 barrels/day of liquid products (DTI 1999). Such a plant would process 15,000-35,000 metric tons/day of bituminous coal, or up to double that amount of sub-bituminous coal or lignite. To be worthwhile, 400 million metric tons need to be consumed over the project lifetime. An 80,000 barrel/day plant would likely cost US$5-6 billion, with annual operating costs of $250 million (Kelly).
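The plant-scale figures above can be sanity-checked with back-of-envelope arithmetic. The feed rate, plant life and availability below are assumptions picked from the ranges in the text, not numbers from the studies themselves:

```python
# Rough check of the CTL plant-scale figures quoted above.
# Assumed: 25,000 t/day bituminous coal feed (mid-range), 40-year life,
# 90% availability.
coal_per_day_t = 25_000
years = 40
availability = 0.90

lifetime_coal_mt = coal_per_day_t * 365 * years * availability / 1e6
print(f"Lifetime coal consumed: {lifetime_coal_mt:.0f} million metric tons")

# Capital intensity of an 80,000 bbl/day plant at US$5-6 billion
capex_per_daily_bbl_low = 5e9 / 80_000
capex_per_daily_bbl_high = 6e9 / 80_000
print(f"Capex: ${capex_per_daily_bbl_low:,.0f}-"
      f"{capex_per_daily_bbl_high:,.0f} per daily barrel of capacity")
```

The lifetime figure lands around 330 million metric tons, the same order as the 400 million metric tons quoted, and the capital intensity works out to roughly $60,000-75,000 per daily barrel.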

A major pinch point for future fuel supplies is transport, for which demand shows no real sign of abating, with a continuing and expanding need for liquid fuels. The uncertainties in oil supply will continue to impact heavily on the transport sector, where no clearly viable alternative has yet been identified (not least since both biofuels and hydrogen involve significant use of fossil fuels). Consequently, the production of liquid fuels from coal offers a potentially attractive route to meeting this requirement, as one aspect of a balanced energy portfolio.

This position is reflected in the rapid growth of interest in CTL worldwide, with major engineering projects now underway in China and detailed feasibility studies being undertaken in the USA, while South Africa continues to upgrade its CTL production capacity. In Europe, there is a corresponding upturn in interest, particularly in the former Soviet Union satellite countries that are now members of the EU, such as Poland, Estonia and the Czech Republic.

The main quality parameter for coal is the carbon/energy content and so the logistics chains differ for hard coals and low rank coals. Global transportation of hard coals and some sub-bituminous coals is commercially worthwhile, while lower-grade coals (e.g. lignites and those with high impurity contents) must be used close to the coalfield as it is not economically viable to transport such coals any significant distance.

Transport fuels (gasoline/petrol, diesel and jet fuel) are currently derived overwhelmingly from crude oil, which has about twice the hydrogen content of coal. For coal to replace oil, it must be converted to liquids with hydrogen contents and properties similar to oil’s. This can be achieved by removing carbon or by adding hydrogen, either directly or indirectly, while reducing the molecular size. During the process, elements such as sulfur, nitrogen and oxygen must also be largely eliminated. Thus, the technical challenge is to increase the hydrogen/carbon (H/C) ratio in the product, and to produce molecules with the appropriate range of boiling points.
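As a rough illustration of the H/C challenge, here is a simple mass balance. The 5 wt% hydrogen figure for coal is from the text; the 12 wt% product target and the assumption that all the coal mass ends up in the liquid are simplifications for illustration only (real processes reject carbon and spend extra hydrogen removing S, N and O):

```python
# Illustrative hydrogen mass balance for coal liquefaction.
# Assumed: feed coal at 5 wt% H (from the text), oil-like product at
# 12 wt% H; ignores carbon rejection and heteroatom removal.
h_coal = 0.05     # hydrogen mass fraction of the feed coal
h_product = 0.12  # assumed hydrogen mass fraction of the liquid product

# Add x kg of H2 to 1 kg of coal: (h_coal + x) / (1 + x) = h_product
x = (h_product - h_coal) / (1 - h_product)
print(f"~{x * 1000:.0f} g of H2 must be added per kg of coal")  # ~80 g
```

Even under these generous assumptions, every kilogram of coal needs on the order of 80 grams of hydrogen, which is why hydrogen supply (and the water and energy to make it) dominates liquefaction plant design.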

Hydrogen is also needed to remove the oxygen, sulfur and nitrogen present, which leave as H2O, H2S and NH3. A range of partially refined gasoline- and diesel-like products (as well as propane and butane) can be recovered from the synthetic crude by distillation. This provides a series of temperature-range ‘cuts’, each a mixture of hydrocarbons appropriate to its boiling point range. These products tend to be highly aromatic, which can make them difficult to use as high-quality transport fuels, although they can be rich in high-octane aromatics, making a good gasoline substitute.

[Figure: ICL coal-to-liquids diesel process]

Water usage. CTL plants require substantial amounts of water, probably in the range of 5 to 10 barrels for each barrel of liquid products. There are several major requirements for water in a liquefaction plant:

  • Process water for the steam feed to gasifiers (to make up the hydrogen requirements), water for use in the liquefaction processes, and wash water for syngas cleaning.
  • Steam for the water-gas shift reaction.
  • Boiler feed water to produce steam and, in many cases, for on-site power generation.
  • Cooling water to remove heat at different stages, particularly from the FT reactors, where the highly exothermic reactions need careful temperature control.
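Scaling the quoted 5-10 barrels of water per barrel of product to the 80,000 bbl/day plant size discussed elsewhere in this section gives a sense of the demand:

```python
# Daily water demand for an 80,000 bbl/day CTL plant at the 5:1 and 10:1
# water-to-product ratios quoted in the text.
BBL_TO_GALLONS = 42
product_bbl_per_day = 80_000

for water_ratio in (5, 10):
    water_bbl = product_bbl_per_day * water_ratio
    print(f"{water_ratio}:1 -> {water_bbl:,} bbl/day "
          f"({water_bbl * BBL_TO_GALLONS / 1e6:.0f} million gallons/day)")
```

That is roughly 17-34 million gallons of water per day for a single plant, which helps explain why siting CTL in arid coal regions is problematic.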

It is important to consider the influence of coal properties on plant operation. The most important properties are particle size, water content, the amount and composition of mineral matter, and the content of sulfur, nitrogen and chlorine species in the organic coal matter. The required particle size of the feed coal depends mainly on the process characteristics. On one hand, the particle size has to be small enough to ensure stability of the coal-oil slurry, because coarse particles favor sedimentation and the valves of the high-pressure pumps are very sensitive to oversize particles. On the other hand, too high a proportion of very fine material is undesirable, as it significantly increases the viscosity of the coal-oil slurry, causing high pressure drops and reduced heat transfer in the heat exchangers. A typical particle size for hard coal feed is <0.2 mm. The feed coal is usually dried in a combined drying and pulverizing step. High residual moisture that cannot be further reduced is disadvantageous, as the resulting steam lowers the hydrogen partial pressure in the reactor. For hard coals, moisture contents of 0.5-2 wt% are attainable, whereas for lignite 5-10 wt% has to be accepted. The mineral matter of the feed coal is an inert burden in the process and should be as low as possible, since it occupies expensive high-pressure reactor volume without contributing to the oil yield, and it causes erosive damage to the valves.

Sasol is seeking to build on its experience and expertise by looking for opportunities where substantial deposits of low cost (possibly low grade) coal would support a CTL plant producing around 80,000 bbl/d, thus taking advantage of the potential economies of scale. This approach, however, means that any individual project size requires the investment of very large amounts of capital (of the order of US$5-6 billion) so that government guarantees relating to the value of production or other long-term support would be needed.

Underground Coal Gasification (IEA)

UCG has the potential to unlock vast amounts of previously inaccessible energy in unmineable coal resources. There are significant obstacles to be overcome before this is possible, many of which are associated with the fact that the process takes place deep underground in a context where it is difficult to monitor and control the conditions. Consequently, UCG requires a multi-disciplinary integration of knowledge from exploration, geology, hydrogeology, drilling, and of the chemistry and thermodynamics of gasification reactions in a cavity in a coal seam (Couch 2009). UCG has reached the stage of “proof of concept”, but different parts of the technology have been demonstrated/proved separately and each in unique circumstances. That said, the single most important decision that will determine the technical and economic performance of UCG is site selection. The field trials undertaken so far are grouped into two main categories, namely those conducted at shallow depths, some in thicker seams, and those at greater depth in thin seams. To date, all that has been established is that given the right conditions, coals of different rank can be gasified underground. Ultimately, it will require a series of successful demonstrations, building on what has already been established, in different geological settings to establish where UCG can be safely carried out cost effectively at a commercial scale and without environmental damage.


Ahmed, G, et al. February 2013. US Coal and the Technology Innovation Frontier: What role does coal play in our energy future?  Duke University (much of above came from this)

Couch G. 2009. Underground coal gasification, IEA Clean Coal Centre

EIA. 2012. Coal Mining and Transportation. Coal Explained. http://www.eia.gov/energyexplained/index.cfm?page=coal_mining

Ernst & Young. 2011. Global Steel – 2011 Trends, 2012 Outlook: Ernst & Young. http://www.ey.com/Publication/vwLUAssets/Global_steel_2011_trends_2012_outlook/$FILE/Global_Steel_Jan_2012.pdf

IEA. 2009. Review of worldwide coal to liquids R, D&D activities and the need for further initiatives within Europe. International Energy Agency.

Spiegel, C. 2006. Opportunities for Coal-Based Products: Clean Coal and Coal Processing Technologies. BCC Research: BCC Research.

Posted in Coal

Richard Heinberg Only Less (Population) Will Do

Only Less Will Do, by Richard Heinberg. Post Carbon Institute.

[portions of this article were cut and rearranged]

Almost nobody likes to hear about the role of scale in our global environmental crisis.

That’s because if growth is our problem, then the only real solution is to shrink the economy and reduce population.

Back in the 1970s, many environmentalists recommended exactly that remedy, but then came the Reagan backlash—a political juggernaut promising endless economic expansion if only we allowed markets to work freely. Many environmentalists recalibrated their message, and the “bright green” movement was born, claiming that efficiency improvements would enable humans to eat their cake (grow the economy) and have it too (protect the planet for the sake of future generations).

Population has grown from 4.4 billion in 1980 to 7.1 billion in 2013. Per capita consumption of energy has grown from less than 70 gigajoules to nearly 80 GJ per year. Total energy use has expanded from 300 exajoules to 550 EJ annually.

We’ve used all that energy to extract raw materials (timber, fish, minerals), to expand food production (converting forests to farmland or rangeland, using immense amounts of freshwater for irrigation, applying fertilizers and pesticides). And we see the results: the world’s oceans are dying; species are going extinct at a thousand times the natural rate; and the global climate is careening toward chaos as multiple self-reinforcing feedback processes (including polar melting and methane release) kick into gear.


The environmental movement has responded to that last development by adopting a laser-like focus on reducing carbon emissions. Which is certainly understandable, since global warming constitutes the most pervasive and potentially deadly ecological threat in all of human history. But the proponents of “green growth,” who tend to dominate environmental discussions (sometimes explicitly but more often implicitly), tell us the solution is simply to switch energy sources and trade carbon credits; if we do those simple and easy things, we can continue to expand population and per-capita consumption with no worries.
In the quest to make human society sustainable, the problem of scale crops up absolutely everywhere. We can make a particular activity more energy-efficient and benign (for example, we can increase the fuel economy of our cars), but the improvement tends to be overwhelmed by changes in scale (economic expansion and population growth lead to an increase in the number of cars on the road, and to the size of the average vehicle, and hence to higher total fuel consumption).
Yet here we are, decades after the eclipse of old-style, conservation-centered environmentalism, and despite all sorts of recycling programs, environmental regulations, and energy efficiency improvements, the global ecosystem is approaching collapse at ever-greater speed.
In reality, entirely switching our energy sources will not be easy, as I have explained in a lengthy recent essay. And while climate change is the mega-crisis of our time, carbon is not our only nemesis. If global warming threatens to undermine civilization, so do topsoil, freshwater, and mineral depletion.
The math of compound growth leads to absurdities (one human for every square meter of land surface by the year 2750 at our current rate of population increase) and to tragedy.
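The compound-growth arithmetic behind that claim can be sketched as follows. The growth rate and land area below are assumptions (the article does not say which rate it used), chosen to be roughly consistent with the quoted year:

```python
import math

# When does population reach one person per square meter of land surface?
# Assumed: 7.1 billion people in 2013 (from the article), land area
# ~1.49e14 m^2, and a constant growth rate of 1.35%/year.
pop_2013 = 7.1e9
land_m2 = 1.49e14
rate = 0.0135

years = math.log(land_m2 / pop_2013) / math.log(1 + rate)
print(f"One person per square meter around the year {2013 + years:.0f}")
```

Under those assumptions the crossover lands in the mid-2700s, matching the absurdity the article points to; the exact year is sensitive to the assumed rate, but the order of magnitude is not.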
If confronted by this simple math, bright greens will say, “Well yes, ultimately there are limits to population and consumption growth. But we just have to grow some more now, in order to deal with the problem of economic inequality and to make sure we don’t trample on people’s reproductive rights; later, once everyone in the world has enough, we’ll talk about leveling off. For now, substitution and efficiency will take care of all our environmental problems.”
Maybe the bright greens (or should I say, pseudo-greens?) are right in saying that “less” is a message that just doesn’t sell. But offering comforting non-solutions to our collective predicament accomplishes nothing. Maybe the de-growth prescription is destined to fail at altering civilization’s overall trajectory and it is too late to avoid a serious collision with natural limits. Why, then, continue talking about those limits and advocating human self-restraint? I can think of two good reasons. The first is, limits are real. When we decline to talk about what is real simply because it’s uncomfortable to do so, we seal our own fate. I, for one, refuse to drink that particular batch of Kool-Aid. The second and more important reason: If we can’t entirely avoid the collision, let us at least learn from it—and let’s do so as quickly as possible.
All traditional indigenous human societies eventually learned self-restraint, if they stayed in one place long enough. They discovered through trial and error that exceeding their land’s carrying capacity resulted in dire consequences. That’s why traditional peoples appear to us moderns as intuitive ecologists: having been hammered repeatedly by resource depletion, habitat destruction, overpopulation, and resulting famines, they eventually realized that the only way to avoid getting hammered yet again was to respect nature’s limits by restraining reproduction and protecting other forms of life. We’ve forgotten that lesson, because our civilization was built by people who successfully conquered, colonized, then moved elsewhere to do the same thing yet again; and because we are enjoying a one-time gift of fossil fuels that empower us to do things no previous society ever dreamed of. We’ve come to believe in our own omnipotence, exceptionalism, and invincibility. But we’ve now run out of new places to conquer, and the best of the fossil fuels are used up.
As we collide with Earth’s limits, many people’s first reflex response will be to try to find someone to blame. The result could be wars and witch-hunts. But social and international conflict will only deepen our misery. One thing that could help would be the widely disseminated knowledge that our predicament is mostly the result of increasing human numbers and increasing appetites confronting disappearing resources, and that only cooperative self-limitation will avert a fight to the bitter end. We can learn; history shows that. But in this instance we need to learn fast.
Posted in Birth Control, By People, Climate Change, Overpopulation, Richard Heinberg

Hydropower has a very low energy density

To store the energy contained in 1 gallon of gasoline requires pumping over 55,000 gallons of water up 726 feet (CCST 2012).

As a thought experiment, look at what it would take to generate all of America’s 4,058 TWh of electricity, where Power (kW) = dam height (feet) × water flow (cubic feet/second, cfs) × turbine efficiency (~60 to 90%) / 11.8 (the constant converts the feet-and-seconds units into kilowatts).

Given that the 550-foot-high Grand Coulee dam produces an average of 18 TWh a year with 50,000 cfs at 90% efficiency, we’d need 225 more of them, with a grand total of 58.4 billion cubic yards of water flowing through each dam per year (it would take 110 years of that flow to equal the volume of Lake Michigan, the world’s 6th largest fresh water lake).

You’d also have a hard time finding enough cement. The Grand Coulee used 11,975,500 cubic yards of concrete, so 225 more dams would need 2.7 billion cubic yards, or about 4 billion tons (at 1.5 tons per cubic yard). Cement is 10 to 20% of concrete by weight, so you’d need 3 to 6 times more cement than America produces every year (USGS).
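The formula and the Grand Coulee figures above can be checked directly; the small differences from the text’s round numbers come only from rounding:

```python
# Reproducing the section's arithmetic:
#   Power (kW) = height (ft) * flow (cfs) * efficiency / 11.8
height_ft = 550       # Grand Coulee dam height
flow_cfs = 50_000
efficiency = 0.90

power_kw = height_ft * flow_cfs * efficiency / 11.8
annual_twh = power_kw * 8760 / 1e9   # kW * hours/year -> TWh
print(f"Grand Coulee: {annual_twh:.1f} TWh/year")  # ~18.4

dams_needed = 4058 / annual_twh
print(f"Dams to supply 4,058 TWh: {dams_needed:.0f}")  # ~221

# Concrete and cement requirements
concrete_yd3 = 11_975_500 * dams_needed
concrete_tons = concrete_yd3 * 1.5
cement_low, cement_high = concrete_tons * 0.10, concrete_tons * 0.20
us_cement_tons = 142_464_000  # annual US production (USGS, short tons)
print(f"Cement: {cement_low / us_cement_tons:.1f}-"
      f"{cement_high / us_cement_tons:.1f} x annual US production")
```

Depending on whether you round the 18 TWh figure, the answer is roughly 220-225 dams and about 3 to 6 years of total U.S. cement output, consistent with the text.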


CCST. April 2012. California’s Energy Future: Electricity from Renewable Energy and Fossil Fuels with Carbon Capture and Sequestration. California Council on Science and Technology. (height of Hoover Dam)

USGS. 2011. Cement production. United States Geological Survey. 127,200,000 long tons converted to 142,464,000 short tons (2,000 lbs).

Posted in Energy Storage, Hydropower

Gail Tverberg The oil glut and low prices reflect an affordability problem

Tverberg, G. March 9, 2015. The oil glut and low prices reflect an affordability problem. ourfiniteworld.com

For a long time, there has been a belief that the decline in oil supply will come by way of high oil prices. Demand will exceed supply. It seems to me that this view is backward–the decline in supply will come through low oil prices.

The oil glut we are experiencing now reflects a worldwide affordability crisis. Because of a lack of affordability, demand is depressed. This lack of demand keeps prices low–below the cost of production for many producers. If the affordability issue cannot be fixed, it threatens to bring down the system by discouraging investment in oil production.

This lack of affordability is affecting far more than oil products. A recent article in The Economist talks about LNG prices being depressed. LNG capacity ramped up quickly in response to high prices a few years ago. Now there is a glut of LNG capacity, and prices are far below the cost of extraction and shipping for many LNG suppliers. At least temporary contraction seems likely in this sector.

If we look at World Bank Commodity Price data, we find that between 2011 and 2014, the inflation-adjusted price of Australian coal decreased by 41%. In the same period, the inflation-adjusted price of rubber is down 58%, and of iron ore is down 59%. With those types of price drops, we can expect huge cutbacks on production of many types of goods.

How Does this Lack of Affordability Come About?

The issue we are up against is diminishing returns. Diminishing returns mean that as we reach limits, it takes increased resources (usually both physical resources and human labor) to produce some type of product. Oil is a product subject to diminishing returns. Metals of many kinds also are becoming increasingly expensive to extract. In many parts of the world, a shortage of water makes it necessary to use unusual techniques (desalination or long distance pipelines) to obtain adequate supply. The higher cost of pollution control can have a similar effect to diminishing returns on products with pollution issues.

When we graph the cost of production of resources subject to diminishing returns, the result is similar to that shown in Figure 1.

Figure 1. The way we would expect the cost of the extraction of energy supplies to rise, as finite supplies deplete.

What happens with diminishing returns is that cost increases tend to be quite small for a very long time, but then suddenly “turn a corner.” With oil, the shift to higher costs comes as we move from “conventional” oil to “unconventional” oil. With metals, the shift comes as high quality ores become depleted, and we need to move to mines that require moving a great deal more dirt to extract the same quantity of a given metal. With water, such a steep rise in diminishing returns comes when wells no longer provide a sufficient quantity of water, and we must go to extraordinary measures, such as desalination, to obtain water.
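The “corner” described above can be sketched with a toy model. The functional form and numbers below are illustrative assumptions, not data from the article: if each additional unit must come from a lower-quality resource, unit cost scales with the inverse of the remaining high-quality fraction.

```python
# Toy illustration of diminishing returns: unit cost stays nearly flat,
# then rises steeply as the high-quality share of the resource depletes.
def unit_cost(fraction_depleted, base_cost=20.0):
    """Illustrative cost curve: cost ~ base / (1 - fraction depleted)."""
    return base_cost / (1.0 - fraction_depleted)

for depleted in (0.1, 0.5, 0.8, 0.9, 0.95):
    print(f"{depleted:.0%} depleted -> unit cost {unit_cost(depleted):.0f}")
```

Cost merely doubles over the first half of depletion, then quadruples again between 80% and 95% depleted – the flat-then-steep shape of Figure 1.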

During the time when cost increases from diminishing returns were quite minor, it generally was possible to compensate for the small cost increases with technological improvements and efficiency gains elsewhere in the system. Thus, even though there was a small amount of diminishing returns going on, they could be hidden within the overall system.

Once the effect of diminishing returns becomes greater (as it has since about 2000), it becomes much harder to hide cost increases. The cost of finished products of many kinds (for example, food, gasoline, houses, and automobiles) starts rising, relative to the income of workers. Workers find that they must cut back on discretionary expenditures in order to have enough money to cover all of their expenses.

How Diminishing Returns Affect the Economy 

There are at least three ways that diminishing returns adversely affect the economy:

  1. Lower wages
  2. Less ability to borrow
  3. Squeezing out other sectors of the economy

The reason for lower wages relates to the fact that, as the cost of producing a commodity rises, the worker is, in some sense, becoming less and less productive. For example, if we calculate wages per worker in units of oil, as oil becomes more expensive to extract, we get something like this:

Figure 2. Wages per worker in units of oil produced, corresponding to amounts shown in Figure 1.

A similar chart would hold for other resources that are becoming more difficult to extract, or whose cost of production is becoming higher because of greater pollution controls. For example, we would expect the wages of coal workers to be falling as well.
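Figure 2 is essentially the reciprocal of Figure 1: if the nominal wage stays flat while the cost of extraction rises, the wage measured in barrels of oil falls in proportion. A minimal illustration, with all numbers hypothetical:

```python
# Hypothetical flat nominal wage expressed in barrels of oil as oil gets costlier.
annual_wage = 50_000.0  # assumed flat annual wage, dollars

for cost_per_barrel in (20, 40, 80, 160):
    barrels = annual_wage / cost_per_barrel
    print(f"at ${cost_per_barrel}/bbl the wage is worth {barrels:g} barrels")
```

Each doubling of the extraction cost halves the wage in oil terms, which is the declining curve sketched in Figure 2.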

Also, as we shift to higher cost types of energy, we become increasingly inefficient in energy production. Based on a 2013 analysis, in the United States there are more solar energy workers than coal miners, even though we use far more coal than solar energy. The large number of workers required to produce solar energy is one of the reasons that solar energy tends to be expensive to produce.

When we look at wages of workers, we indeed see a pattern of falling wages, especially for workers below the median wage. Figure 3 from the Economic Policy Institute shows that even the most educated workers are experiencing declining inflation-adjusted wages.

Figure 3. Source:  Elise Gould, Even the Most Educated Workers Have Declining Wages.

A second major issue affecting affordability is debt saturation. Affordability is favorably affected by rising debt–for example, it is a lot easier to buy a new car or house, if the would-be purchaser can obtain a new loan. If debt levels stay the same or fall, this becomes a problem–fewer goods can be purchased.

Governments in particular are reaching the limits of their borrowing capacity. They cannot keep adding new debt and still remain within historic debt-to-GDP ratios.

Another way debt saturation occurs relates to young people with student loans. They find it too expensive to borrow more money for a new car or for a home. Furthermore, the fact that wages are not keeping up with price increases for many workers reduces the borrowing ability of the workers with lagging wages. This is true, even if no student loans are involved.

As mentioned above, a third issue is the fact that inefficient sectors tend to squeeze out other portions of the economy by gobbling up a disproportionate share of workers and resources. The use of all of these resources doesn’t produce a lot of goods in the traditional sense–a desalination plant is expensive, but the amount of water produced per dollar of investment is not large. To the extent that the high costs of inefficient sectors are passed on to consumers, consumers find that they must cut back on discretionary spending. This cut-back squeezes the discretionary sectors, leading to cutbacks there and to reduced employment overall.

Figure 4. Author's view of the effect of diminishing returns on economy.

Wishful Thinking by Economists

Back before diminishing returns started becoming a major problem, economists created models regarding how the economy would react to higher cost of energy production and other symptoms of diminishing returns. In their view, if the cost of oil extraction rises, oil prices will rise to match these higher costs. Alternatively, substitution will take place, or technological changes will allow greater efficiency, or customers will cut back on their use of the high cost product. Somehow, these changes will take place without a particularly adverse impact on the economy.

Unfortunately, the models don’t correspond very well to what happens in practice–at least not for very long. It takes inexpensive energy to produce goods that workers can afford. Higher priced energy does not work well in this regard. Feedbacks that are not reflected in economic models reduce both wages and debt, making it harder to buy goods requiring the use of more-expensive energy products.

Furthermore, if the price of one commodity, for example oil, rises, then countries with very much oil in their energy mix find themselves handicapped in trade with other countries that use less oil in their energy mix. For example, a country that depends on tourism (which depends on oil use) for very much of its revenue, such as Greece, finds it difficult to find customers when oil prices are high. Lack of revenue can lead to financial problems for the country.

Because of the networked way the economy really works, prices for commodities can’t rise for the long-term. They may rise for a while, as consumers and governments borrow more, in an attempt to continue business as usual. Ultimately, though, the situation can’t “work.”  Customers can’t afford to buy more homes and cars, unless their own wages are rising in inflation adjusted terms, and governments can’t collect enough tax revenue.

The issue we are dealing with here is lack of affordability. This is what will bring the system down–not the high priced scenario imagined by many. Decline will come through low prices, and a glut in oil supply, even if we are not looking for it from that direction.

Can commodity prices rise again?

It is not at all clear that they can rise again. It would be a lot easier for commodity prices to rise if the problem were simply inadequate prices of one commodity, leading to a lack of that commodity. If the problem is inadequate demand for crude oil, coal, LNG, and iron ore, the problem is much greater–especially if wages are still lagging.


Jared Diamond: Why Societies Collapse

Professor Jared Diamond, Professor of Physiology at UCLA, speaking at Princeton University about what we can learn from the collapse of ancient societies. Professor Diamond won the Pulitzer Prize for his 1997 book, ‘Guns, Germs and Steel’.

October 27, 2002. Why Societies Collapse: Jared Diamond at Princeton University


Jared Diamond: Why did the ancient civilisations of Angkor Wat, the Maya, Easter Island, Great Zimbabwe, and the Indus Valley abandon their cities after building them with such great effort? Why these ancient collapses? This question isn’t just a romantic mystery. It’s also a challenging intellectual problem. Why is it that some societies collapsed while others did not collapse?

But even more, this question is relevant to the environmental problems that we face today; problems such as deforestation, the impending end of the tropical rainforests, over-fishing, soil erosion, soil salinization, global climate change, full utilization of the world’s fresh water supplies, bumping up against the photosynthetic ceiling, exhaustion of energy reserves, accumulation of toxics in water, food and soil, increase of the world’s population, and increase of our per capita impact. These are the main problems that threaten our existence over the coming decades. What, if anything, can the past teach us about why some societies are more unstable than others, and about how some societies have managed to overcome their environmental problems? Can we extract from the past any useful guidance that will help us in the coming decades?

“Some of these romantic mystery collapses have been self-inflicted ecological suicides, resulting from inadvertent human impacts on the environment.”

There’s overwhelming recent evidence from archaeology and other disciplines that some of these romantic mystery collapses have been self-inflicted ecological suicides, resulting from inadvertent human impacts on the environment – impacts similar to those causing the problems that we face today, even though these past societies, like the Easter Islanders and Anasazi, had far fewer people and far less potent destructive technologies than we do today.

It turns out that these ancient collapses pose a very complicated problem. It’s not just that all these societies collapsed, but one can also think of places in the world where societies have gone on for thousands of years without any signs of collapse, such as Japan, Java, Tonga and Tikopia. What is it then that made some societies weaken while other societies remained robust? It’s also a complicated problem because the collapses usually prove to be multi-factorial. This is not an area where we can expect simple answers.

What I’m talking about is the collapses of societies and their applications to the risks we face today. This may sound initially depressing, but you’ll see that my main conclusions are going to be upbeat.

Not many years ago, Montana was one of the wealthiest states in America, its wealth based on copper mining, forestry and agriculture. Now it’s very poor. Mining has gone, leaving terrible environmental damage; 70% of the children in Montana are on food aid; logging and farming are in decline. What happened was that the mining, forestry and agriculture which earned so much wealth became destructive. Montana now has terrible forest fires, salinization, erosion, weeds and animal diseases, and population decline.

If Montana were an isolated country, Montana would be in a state of collapse. Montana is not going to collapse, because it’s supported by the rest of the United States, and yet other societies have collapsed in the past, and are collapsing now or will collapse in the future, from problems similar to those facing Montana. The same problems that we’ve seen throughout human history, problems of water, forests, topsoil, irrigation, salinisation, climate change, erosion, introduced pests and disease and population; problems similar to those faced by Montanans today are the ones posing problems in Afghanistan, Pakistan, China, Australia, Nepal, Ethiopia and so on. But those countries, Afghanistan, Pakistan etcetera have the misfortune not to be embedded within a rich country that supports them, like the United States.

Visiting Montana again just brought home to me that these problems of ancient civilizations are not remote problems of romantic mysterious people, they’re problems of the modern world, including of the United States. I mentioned that there’s a long list of past societies that did collapse, but there were also past societies that did not collapse. What is it then that makes some societies more vulnerable than others? Environmental factors clearly play a role; archaeological evidence accumulated over the last several decades has revealed environmental factors behind many of these ancient collapses. Again, to appreciate the modern relevance of all this, if one asked an academic ecologist to name the countries in the modern world that suffer from the most severe problems of environmental damage and of over-population, and if this ecologist never read the newspapers and didn’t know anything about modern political problems, the ecologist would say “Well, that’s a no-brainer; the countries today with the worst ecological and population problems are Haiti, Somalia, Rwanda, Burundi, Iraq, Afghanistan, Pakistan, Nepal, the Philippines, Indonesia, Solomon Islands.” Then ask a politician or strategic planner who knows or cares nothing about ecological problems to name the political tinderboxes of the modern world, the danger spots, and the politician or strategic planner would say “It’s a no-brainer: Haiti, Somalia, Rwanda, Burundi, Iraq, Afghanistan, Pakistan, Nepal, the Philippines, Indonesia, Solomon Islands” – the same list. And that simply makes the point that countries that get into environmental trouble are likely to get into political trouble, both for themselves and for the world around them.

In trying to understand the collapses of ancient societies, I quickly realized that it’s not enough to look at the inadvertent impact of humans on their environment. It’s usually more complicated. Instead I’ve arrived at a checklist of five things that I look at to understand the collapses of societies, and in some cases all five of these things are operating. Usually several of them are.

FIVE MAIN REASONS FOR COLLAPSE

1) The first of these factors is environmental damage, inadvertent damage to the environment through means such as deforestation, soil erosion, salinisation, over-hunting etc.

2) The second item on the checklist is climate change, such as cooling or increased aridity. People can hammer away at their environment and get away with it as long as the climate is benign, warm, wet, and the people are likely to get in trouble when the climate turns against them, getting colder or drier. So climate change and human environmental impact interact, not surprisingly.

3) Still a third consideration is that one has to look at a society’s relations with hostile neighbors. Most societies have chronic hostile relations with some of their neighbors, and societies may succeed in fending off those hostile neighbors for a long time. They’re most likely to fail to hold off the hostile neighbors when the society itself gets weakened for environmental or any other reasons, and that’s given rise, for example, to the long-standing debate about the fall of the Western Roman Empire. Was the conquest by barbarians really a fundamental cause, or was it just that barbarians had been at the frontiers of the Roman Empire for many centuries? Rome succeeded in holding them off as long as Rome was strong, and then when Rome got weakened by other things, Rome failed and fell to the barbarians. And similarly, we know that there were military factors in the fall of Angkor Wat in Cambodia. So relations with hostile neighbors interact with environmental damage and climate change.

“If one of those friendly societies itself runs into environmental problems and collapses for environmental reasons, that collapse may then drag down their trade partners.”

4) Similarly, relations with friendly neighbors interact. Almost all societies depend in part upon trade with neighboring friendly societies, and if one of those friendly societies itself runs into environmental problems and collapses for environmental reasons, that collapse may then drag down its trade partners. It’s something that interests us today, given that we depend for oil upon imports from countries with fragile environments and uncertain political stability.

5) And finally in addition to those four factors on the checklist, one always has to ask about people’s cultural response. Why is it that people failed to perceive the problems developing around them, or if they perceived them, why did they fail to solve the problems that would eventually do them in? Why did some peoples perceive and recognise their problems and others not?

I’ll give you four examples of these past societies that collapsed. One is Easter Island, I’ll discuss it first because Easter is the simplest case we’ve got, the closest approximation to a collapse resulting purely from human environmental damage.

The second case is the collapse of Henderson and Pitcairn Islands in the Pacific, which was due to the combination of self-inflicted environmental damage plus the loss of external trade caused by the collapse of a friendly trade partner.

Third I’ll discuss, closer to home the Anasazi in the US south-west whose collapse was a combination of environmental damage and climate change.

And then finally I’ll mention the Greenland Norse who ended up all dead because of a combination of all five of these factors.

So let’s take the first of these examples, the collapse of Easter Island society. Have any of you here in this room had the good fortune to visit Easter Island? Good for you, you lucky person; I’m going there next month, I’ve wanted for decades to go there. Easter is the most remote habitable scrap of land in the world; it’s an island in the Pacific, 2,000 miles west of the coast of Chile and some 1,300 miles from the nearest Polynesian island. It was settled by Polynesians coming from the west, sometime around AD 800, and it was so remote that after Polynesians arrived at Easter Island, nobody else arrived there. Nobody left Easter as far as we know, and so the Easter story is uncomplicated by relations with external hostiles or friendlies. There weren’t any. Easter Islanders rose and fell by themselves.

Easter is a relatively fragile environment, dry, with 40 inches of rain per year. It’s most famous for its giant stone statues – big statues weighing up to 80 tons – carved in a volcanic quarry, dragged up over the lip of the quarry and then 13 miles down to the coast, and then raised vertically onto platforms, all this accomplished by people without any draught animals, without pulleys, without machines. These 80-ton statues were dragged and erected by human muscle power alone. And yet when Europeans arrived at Easter in 1722, the islanders were in the process of throwing down the statues that they themselves had erected at such great effort; Easter Island society was in a state of collapse. How, why and by whom were the statues erected, and why were they thrown down?

Well, the how, why and who have been settled in the last several decades by archaeological discoveries. Easter Islanders were typical Polynesians, and the cause of the collapse became clear from archaeological work in the last 15 years, particularly from paleo-botanical work and the identification of animal bones in archaeological sites. Today Easter Island is barren. It’s a grassland; there are no native trees whatsoever on Easter Island, not a likely setting for the development of a great civilization. And yet these paleo-botanical studies, identifying pollen grains in lake cores, show that when the Polynesians arrived at Easter Island, it was covered by a tropical forest that included the world’s largest palm tree and dandelions of tree height. And there were land birds, at least six species of land birds, and 37 species of breeding sea-birds – the largest collection of breeding sea-birds anywhere in the Pacific.

Polynesians settled Easter, and they began to clear the forest for their gardens, for firewood, for rollers and levers to raise the giant statues, and to build canoes with which to go out into the ocean and catch porpoises and tuna. In the oldest archaeological layers one sees the bones of the porpoises and tuna that the people were eating. They ate the land birds, they ate the sea-birds, they ate the fruits of the palm trees. The population of Easter grew to an estimated 10,000 people, until by the year 1600 all of the trees, all of the land birds and all but one of the sea-bird species on Easter Island itself were extinct. Some of the sea-birds were confined to breeding on offshore stacks.

“The largest animal left to eat with the disappearance of porpoises and tuna were humans…”

The deforestation and the elimination of the birds had consequences for people. First, without trees, they could no longer transport and erect the statues, so they stopped carving statues. Secondly, without trees they had no firewood except their own agricultural wastes. Thirdly, without trees to cover the ground, they suffered from soil erosion, and hence agricultural yields decreased. And then, without trees, they couldn’t build canoes, so they couldn’t go out to the ocean to catch porpoises; there were only a few sea-birds left, and because they didn’t have pigs, the largest animal left to eat with the disappearance of porpoises and tuna was humans. Polynesian society then collapsed in an epidemic of cannibalism. The spear points from that final phase still litter the ground of Easter Island today. The population crashed from about 10,000 to an estimated 2,000, with no possibility of rebuilding the original society, because the trees, most of the birds and some of the soil were gone.

I think one of the reasons that the collapse of Easter Island so grabs people is that it looks like a metaphor for us today. Easter Island, isolated in the middle of the Pacific Ocean, nobody to turn to for help, nowhere to flee once Easter Island itself collapsed. In the same way today, one can look at Planet Earth in the middle of the galaxy and if we too get into trouble, there’s no way that we can flee, and no people to whom we can turn for help out there in the galaxy.

I can’t help wondering what the Islander who chopped down the last palm tree said as he or she did it. Was he saying, ‘What about our jobs? Do we care more for trees than for the jobs of us loggers?’ Or maybe he was saying, ‘What about my private property rights? Get the big government of the chiefs off my back.’ Or maybe he was saying, ‘You’re predicting environmental disaster, but your environmental models are untested, we need more research before we can take action.’ Or perhaps he was saying, ‘Don’t worry, technology will solve all our problems.’

My next example involves the Anasazi in our south-west, in the Four Corners area of Arizona, New Mexico, Colorado and Utah. How many of you here have been to either Mesa Verde or Chaco Canyon? OK, looks like nearly half of you. It’s very striking to visit, say, Chaco Canyon, where there are still the ruins of the biggest buildings erected in the United States before the skyscrapers of Chicago’s Loop in the 1870s and 1880s. But the buildings of Chaco Canyon were erected by Native Americans, the Anasazi: up to 6-storey buildings, with up to 600 rooms. The Anasazi build-up began around AD 600 with the arrival of the Mexican crops of corn, squash and beans, in that relatively dry area. Again, it’s very striking to drive through an area where today either nobody is living at all, or nobody is living by agriculture. At Chaco Canyon itself there are a couple of houses of National Park Rangers importing their food, and then nobody else living within 20 or 30 miles. And yet, seeing the remains on the ground, you realise this used to be a densely populated agricultural environment.

The Anasazi were ingenious at managing to survive in that environment, with low, fluctuating, unpredictable rainfall and nutrient-poor soils. The population built up. They fed themselves with agriculture, in some cases irrigation agriculture, channeled very carefully to flood out over the fields. They cut down trees for construction and firewood. In each area they would develop environmental problems by cutting down trees and exhausting soil nutrients, but they dealt with those problems by abandoning their sites after a few decades and moving on to a new site. It’s possible to reconstruct Anasazi history in great detail for two reasons. The first is tree rings, because the south-west is a dry climate: from the rings on the roof beams you can identify in what year – 1116, not 1115 AD – the tree in that roof was cut down. The second is those cute little rodents of the south-west, pack rats, which run around gathering bits of vegetation into their nests and then abandon their nests after 50 years; a pack rat midden is basically a time capsule of the vegetation growing within 50 yards of the nest over a period of 50 years. My friend Julio Betancourt was near an Anasazi ruin and happened to see a pack rat midden whose dating he knew nothing about. He was astonished to see, in what’s now a treeless environment, that this pack rat midden contained the needles of pinyon pine and juniper. So Julio wondered whether that was an old midden. He took it back, radiocarbon-dated it, and lo and behold it was something like AD 800. So the pack-rat middens are time capsules of local vegetation, allowing us to reconstruct what happened.

What happened is that the Anasazi deforested the area around their settlements until they were having to go further and further away for their fuel and their construction timber. At the end they were getting their logs – neatly cut, uniform logs, averaging 600 pounds and 16 feet long – from the tops of mountains up to 75 miles away and about 4,000 feet above the Anasazi settlements, and then dragging them back, with no transport or pack animals, to the settlements themselves. So deforestation spread. That was one environmental problem.

The other environmental problem was the cutting of arroyos. In the south-west, when water flow gets channeled, for example in irrigation ditches, the fast run-off from desert rains digs a trench in the channel, deeper and deeper, so those of you who’ve been to Chaco Canyon will have seen arroyos up to 30 feet deep. Today, if the water level drops down in the arroyos, that’s not a problem for farmers, because we’ve got pumps, but the Anasazi did not have pumps, and so when the irrigation ditches became incised by arroyo cutting and the water level in the ditches dropped below the field levels, they could no longer do irrigation agriculture. For a while they got away with these inadvertent environmental impacts. There were droughts around 1040 and droughts around 1090, but at both times the Anasazi hadn’t yet filled up the landscape, so they could move to other parts of the landscape not yet exploited. And the population continued to grow.

And then in Chaco Canyon a drought arrived in 1117, and at that point there was no more unexploited landscape, no more empty land to which to shift. In addition, at that point Chaco Canyon was a complex society. Lots of stuff was getting imported into Chaco – stone tools, pottery, turquoise, probably food. Archaeologists can’t detect any material that went out of the Chaco Valley, and whenever you see a city into which material stuff is moving and out of which no material stuff is leaving, you think of modern parallels – New York City, or Washington, or Rome – that is to say, you suspect that what is coming out of that city is political or religious control, in return for which the peasants on the periphery are supplying the imported goods.

“When you see a rich place without a wall, you can safely infer that the rich place was on good terms with its poor neighbors, and when you see a wall going up around the rich place, you can infer that there was now trouble with the neighbors. ”

When the drought came in 1117 it was a couple of decades before the end. Again, any of you who have been to Pueblo Bonito will have seen that Pueblo Bonito was the six-storey skyscraper. Pueblo Bonito was a big, unwalled plaza, until about 20 years before the end, when a high wall went up around the plaza. And when you see a rich place without a wall, you can safely infer that the rich place was on good terms with its poor neighbors, and when you see a wall going up around the rich place, you can infer that there was now trouble with the neighbors. So probably what was happening was that towards the end, in the drought, as the landscape filled up, the people out on the periphery were no longer satisfied, because the people in the religious and political centre were no longer delivering the goods. The prayers to the gods were not bringing rain, there was no longer all the stuff to redistribute, and the people began making trouble. And then with the drought of 1117, with no empty land to shift to, construction in Chaco Canyon ceased, and Chaco was eventually abandoned. Long House Valley was abandoned later. The Anasazi had committed themselves irreversibly to a complex society, and once that society collapsed, they couldn’t rebuild it, because again they had deforested their environment.

In this case then, the Anasazi case, we have the interaction of well understood environmental impact and very well understood climate change: from the width of the tree rings we know how much rain fell in each year, and hence we know the severity of the drought.

My next to last example involves Norse Greenland. As the Vikings began to expand over and terrorize Europe in their raids, they also settled six islands in the North Atlantic. So we have to compare not 80 islands as in the Pacific, but 6 islands. Viking settlements survived on Orkney, Shetland, the Faeroes and Iceland, albeit with severe problems due to environmental damage on Iceland. The Vikings arrived in Greenland and settled it in AD 984, where they established a Norwegian pastoral economy, based particularly on sheep, goats and cattle for producing dairy products, and they also hunted caribou and seal. Trade was important. The Vikings in Greenland hunted walruses to trade walrus ivory to Norway, because walrus ivory was in demand in Europe for carving, since at that time, with the Arab conquest, elephant ivory was no longer available in Europe. The Vikings of Greenland vanished in the 1400s. There were two settlements; one of them disappeared around 1360 and the other sometime probably a little after 1440. Everybody ended up dead.

The vanishing of Viking Greenland is instructive because it involves all five of the factors that I mentioned, and also because there’s a detailed, written record from Norway, a bit from Iceland and just a few fragments from Greenland: a written record describing what people were doing and describing what they were thinking. So we know something about their motivation, which we don’t know for the Anasazi and the Easter Islanders.

Of the five factors, first of all there was ecological damage: deforestation in this cold climate with a short growing season, cutting turf, soil erosion. The deforestation was especially expensive to the Norse Greenlanders because they required charcoal in order to smelt the iron they extracted from bogs. Without iron, except for what they could import in small quantities from Norway, there were problems in getting iron tools like sickles. That became a big problem when the Inuit, who had initially been absent from Greenland, colonized Greenland and came into conflict with the Norse. The Norse then had no military advantage over the Inuit. It was not guns, germs and steel: the Norse of Greenland had no guns, very little steel, and they didn’t have the nasty germs. They were fighting with the Inuit on terms of equality, one people with stone and wooden weapons against another.

So problem No. 1, ecological damage; problem No. 2, climate change. The climate in Greenland got colder in the late 1300s and early 1400s as part of what’s called the Little Ice Age, a cooling of the North Atlantic. Hay production was a problem. Greenland was already marginal because of its high latitude and short growing season, and as it got colder, the growing season got even shorter, hay production got less, and hay was the basis of Norse sustenance. Thirdly, the Norse had military problems with their neighbors the Inuit. For example, the only detailed account we have of an Inuit attack on the Norse is that the Icelandic annals for the year 1379 say: ‘In this year the skraelings (an old Norse word meaning wretches; the Norse did not have a good attitude towards the Inuit) attacked the Greenlanders and killed 18 men and captured a couple of young men and women as slaves.’ Eighteen men doesn’t seem like a big deal in this century of body counts of tens of millions of people, but when you consider the population of Norse Greenland at the time, probably about 4,000 people, 18 adult men stands in the same proportion to the Norse population then as if some outsiders were to come into the United States today and in one raid kill 1,700,000 adult male Americans. So that single raid by the Inuit was a big deal to the Norse, and that’s just the only raid that we know about.

Fourthly, there was the cut-off of trade with Europe because of increasing sea ice, with a cold climate in the North Atlantic. The ships from Norway gradually stopped coming. Also, as the Mediterranean reopened, Europeans got access again to elephant ivory, and they became less interested in walrus ivory, so fewer ships came to Greenland. And then finally, cultural factors: the Norse were derived from a Norwegian society that was identified with pastoralism and particularly valued cattle. In Greenland it’s easier to feed and take care of sheep and goats than cattle, but cattle were prized in Greenland, so the Norse chiefs and bishops were heavily invested in the status symbol of cattle. The Norse, because of their bad attitude towards the Inuit, did not adopt useful Inuit technology, so the Norse never adopted harpoons, hence they couldn’t eat whales like the Inuit. They didn’t fish, incredibly, while the Inuit were fishing. They didn’t have dog sleds, they didn’t have skin boats, they didn’t learn from the Inuit how to kill seals at breathing holes in the winter. So the Norse were conservative and had a bad attitude towards the Inuit. They built churches and cathedrals; the remains of the Greenland cathedral are still standing today at Gardar. It’s as big as the cathedral of Iceland, and some of the stone churches in Greenland are still standing. So this was a society that invested heavily in their churches, importing stained-glass windows and bronze bells for the churches, when they could have been importing more iron to trade to the Inuit, to get seals and whale meat in exchange for the iron.

“Greenland then is particularly instructive in showing us that collapse due to environmental reasons isn’t inevitable. It depends upon what you do.”

So there were cultural factors also, whereby the Norse refused to learn from the Inuit and refused to modify their own economy in a way that would have permitted them to survive. And the result then was that after 1440 the Norse were all dead, and the Inuit survived. Greenland then is particularly instructive in showing us that collapse due to environmental reasons isn’t inevitable. It depends upon what you do. Here are two peoples, and one did things that let them survive, and the other did things that did not permit them to survive.

There are a series of factors that make people more or less likely to perceive environmental problems growing up around them. One is misreading previous experience. The Greenlanders came from Norway, where there’s a relatively long growing season, so the Greenlanders didn’t realise, based on their previous experience, how fragile Greenland woodlands were going to be. The Greenlanders also had the difficulty of extracting a trend from noisy fluctuations: yes, we now know that there was a long-term cooling trend, but climate fluctuates wildly up and down in Greenland from year to year; cold, cold, warm, cold. So it was difficult for a long time to perceive that there was any long-term trend. That’s similar to the problems we have today with recognising global warming. It’s only within the last few years that even scientists have been able to convince themselves that there is a global long-term warming trend. And while scientists are convinced, the evidence is not yet enough to convince many of our politicians.

Problem No. 3, short time scale of experience. In the Anasazi area, droughts come back every 50 years; in Greenland it gets cold every 500 years or so. Those rare events are impossible to perceive for humans with a life span of 40, 50, 70 years. They’re perceptible today, but we may not internalize them. For example, consider my friends in the Tucson area. There was a big drought in Tucson about 40 years ago. The city of Tucson almost overdrew its water aquifers, and Tucson went briefly into a period of water conservation, but now Tucson is back to building big developments and golf courses, and so Tucson will have trouble with the next drought.

Fourthly the Norse were disadvantaged by inappropriate cultural values. They valued cows too highly just as modern Australians value cows and sheep to a degree appropriate to Scotland but inappropriate to modern Australia. And Australians now are seriously considering whether to abandon sheep farming completely as inappropriate to the Australian environment.

Finally, why would people perceive problems but still not solve their own problems?

A theme that emerges from Norse Greenland as well as from other places, is insulation of the decision making elite from the consequences of their actions. That is to say, in societies where the elites do not suffer from the consequences of their decisions, but can insulate themselves, the elite are more likely to pursue their short-term interests, even though that may be bad for the long-term interests of the society, including the children of the elite themselves.

In the case of Norse Greenland, the chiefs and bishops were eating beef and venison while the lower classes were left eating seals, and the elite were heavily invested in the walrus ivory trade because it let them get their communion gear and their Rhineland pottery and the other stuff that they wanted. What was good for the chiefs in the short run was bad for society in the long run. We can see those differing insulations of the elite in the modern world today. Of all modern countries, the one with by far the highest level of environmental awareness is Holland. In Holland, a higher percentage of people belong to environmental organisations than anywhere else in the world. And the Dutch are also a very democratic people. There are something like 42 political parties, but none of them ever comes remotely close to a majority. This, which would be a recipe for chaos elsewhere, works in modern Holland: the Dutch are very good at reaching decisions. And on my last visit to Holland I asked my Dutch friends, why is there this high level of environmental awareness in Holland? And they said, ‘Look around. Most of us are living in polders, in these lands that have been drained, reclaimed from the sea. They’re below sea level, and they’re guarded by the dykes.’ In Holland everybody lives in the polders, whether you’re rich or poor. It’s not the case that the rich people are living high up on the dykes and the poor people are living down in the polders. So when a dyke is breached or there’s a flood, rich and poor people die alike. In particular, in the North Sea floods in Holland in the late ’40s and ’50s, when the North Sea was swept by winds and tides 50 to 100 miles inland, all Dutch in the path of the floods died, whether they were rich or poor. So my Dutch friends explained it to me that in Holland, rich people cannot insulate themselves from the consequences of their actions. They’re living in the polders, and therefore there is not the clash between their short-term interests and the long-term interests of everybody else. The Dutch have had to learn to reach communal decisions.

Whereas in much of the rest of the world, rich people live in gated communities and drink bottled water. That’s increasingly the case in Los Angeles where I come from. So that wealthy people in much of the world are insulated from the consequences of their actions.

Well, finally then. I’ve talked mostly about the past. What about the situation today? There are obvious differences between the environmental problems that we face today and the environmental problems in the past. Some of those differences are things that make the situation for us today scarier than it was in the past. Today there are far more people alive, packing far more potent per-capita destructive technology. Today there are 6 billion people chopping down the forests with chainsaws and bulldozers, whereas on Easter Island there were 10,000 people with stone axes. Consider a country like the Solomon Islands: a wet, relatively robust environment, where people lived for 32,000 years without being able to deforest the islands, and yet within the past 15 years the Solomon Islands have been almost totally deforested, leading to a civil war and collapse of government within the last year or two.

Another big difference between today and the past is globalisation. In the past, you could get solitary collapses. When Easter Island society collapsed, nobody anywhere else in the world knew about it; nobody was affected by it. The Easter Islanders themselves, as they were collapsing, had no way of knowing that the Anasazi had collapsed for similar reasons a few centuries before, that the Mycenaean Greeks had collapsed a couple of thousand years before, and that the dry areas of Hawaii were going downhill at the same time. But today we turn on the television set and we see the ecological damage in Somalia and Afghanistan, or Haiti, and we pick up a book and we read about the ecological damage caused in the past. So we have knowledge both in space and time that ancient peoples did not. Today we are not immune from anybody’s problems. Again, if 20 years ago you had asked someone in strategic assessments to mention a couple of countries in the world completely irrelevant to American interests (in fact I was in on such a conversation), the two countries mentioned as most irrelevant would have been two countries that are remote, poor, landlocked, with no potential for causing the United States trouble: Somalia and Afghanistan. Which illustrates that today anybody can cause trouble for anybody else in the world. A collapse of a society anywhere is a global issue, and conversely, anybody anywhere in the world now has ways of reaching us. We used to think of globalisation as a way that we send to them out there our good things, like the Internet and Coca-Cola, but particularly in the time since September 11th we’ve realised that globalisation also means that they can send us their bad things, like terrorists, cholera and uncontrollable immigration.
So those are things that are against us. But a thing that is for us is that globalisation also means the exchange of information, including information about the past. So we are the only society in world history that has the ability to learn from all the experiments being carried out elsewhere in the world today, and all the experiments that have succeeded and failed in the past. And so at least we have the choice of what we want to do about it. Thank you.

Man: The impression I get is that you are talking about them primarily in relation to environmental factors; you’re talking about an elite that becomes isolated, insular, and operates without being affected by the consequences of environmental degradation. What about other cultural forces, such as the development of political instability, civil wars, people who are low down in the hierarchy challenging the order? Could it be that societies simply over time devolve towards political instability? And what about other factors, such as disease, for example; could they play a role as well?

“The single factor that is the best predictor of the collapse of societies in the last couple of decades is infant and child mortality.”

Jared Diamond: Absolutely. In two minutes I did not do justice to cultural factors. There’s a large literature on causes of instability and civil wars and collapse of States and civil unrest, and it turns out that you will go home and say Jared Diamond has a list of eight explanations for everything. There are eight variables that people have been able to identify: With risk of civil war, for example there’s a data base of all cases of State failures and civil wars and violent government transitions in the last 30 years. People have mined this data base. Would anybody like to guess what is the single factor that is the best predictor of the collapse of societies in the last couple of decades? This is an unfair question because it’s so surprising. The strongest predictor is infant and child mortality. Countries that have had high infant or child mortality are more likely to undergo State collapse, and there are many links, including difficulties in the workforce, high ratio of children to adults. But in brief, yes, there is a large literature of other cultural factors that contribute to the collapse of societies.

Jared Diamond: Interesting question. For those of you who didn’t hear it: do I think that today there’s more reliance that technology will come and somehow save us, even though we can’t specify how? Yes, there certainly is, and many of my friends, particularly in the technology sector, don’t take environmental problems so seriously. I’ll give you a specific example. After ‘Guns, Germs and Steel’ was published, it was reviewed by Bill Gates, who liked it and gave it a favourable review, and the result was that I had a two-hour discussion with Bill Gates, who is a very thoughtful person, and he’s interested in lots of things. He probes deeply and he has seriously considered positions of his own. The subject turned to environmental issues, and I mentioned that that’s the thing that most concerns me for the future of my children. Bill Gates has young children. He paused in his thoughtful way and he said, not in a dismissive way, ‘I have the feeling that technology will solve our environmental problems, but what really concerns me is biological terrorism.’ Look, that’s a thoughtful response, but many people in the technology sector assume that technology will solve our problems. I disagree with that for two reasons.

One is that technology has created the explosion of modern problems while also providing the potential for solving them. But the first thing that happens is technology creates the problem and then maybe later it solves it, so at best there’s a lag.

The second thing is that the lesson we’ve learned again and again in the environmental area is that it’s cheaper, much cheaper, and more efficacious to prevent a problem at the beginning than to solve it by high technology later on. So it’s costing billions of dollars to clean up the Hudson River, and it’s costing billions of dollars to clean up Montana, when it would have cost a trivial amount to do it right in the beginning. Therefore, I do not look to technology as our saviour.

Kirsten Garrett: Professor Jared Diamond of UCLA, speaking at Princeton University earlier this month about what we can learn from the collapse of ancient societies. Professor Diamond won the Pulitzer Prize for his book, ‘Guns, Germs and Steel’ in 1997. © ABC 2002


January 1, 2005
The Ends of the World as We Know Them
By JARED DIAMOND

Los Angeles — NEW Year’s weekend traditionally is a time for us to reflect, and to make resolutions based on our reflections. In this fresh year, with the United States seemingly at the height of its power and at the start of a new presidential term, Americans are increasingly concerned and divided about where we are going. How long can America remain ascendant? Where will we stand 10 years from now, or even next year?

Such questions seem especially appropriate this year. History warns us that when once-powerful societies collapse, they tend to do so quickly and unexpectedly. That shouldn’t come as much of a surprise: peak power usually means peak population, peak needs, and hence peak vulnerability. What can be learned from history that could help us avoid joining the ranks of those who declined swiftly? We must expect the answers to be complex, because historical reality is complex: while some societies did indeed collapse spectacularly, others have managed to thrive for thousands of years without major reversal.

When it comes to historical collapses, five groups of interacting factors have been especially important: the damage that people have inflicted on their environment; climate change; enemies; changes in friendly trading partners; and the society’s political, economic and social responses to these shifts. That’s not to say that all five causes play a role in every case. Instead, think of this as a useful checklist of factors that should be examined, but whose relative importance varies from case to case.

For instance, in the collapse of the Polynesian society on Easter Island three centuries ago, environmental problems were dominant, and climate change, enemies and trade were insignificant; however, the latter three factors played big roles in the disappearance of the medieval Norse colonies on Greenland. Let’s consider two examples of declines stemming from different mixes of causes: the falls of classic Maya civilization and of Polynesian settlements on the Pitcairn Islands.

Maya Native Americans of the Yucatan Peninsula and adjacent parts of Central America developed the New World’s most advanced civilization before Columbus. They were innovators in writing, astronomy, architecture and art. From local origins around 2,500 years ago, Maya societies rose especially after the year A.D. 250, reaching peaks of population and sophistication in the late 8th century.

Thereafter, societies in the most densely populated areas of the southern Yucatan underwent a steep political and cultural collapse: between 760 and 910, kings were overthrown, large areas were abandoned, and at least 90 percent of the population disappeared, leaving cities to become overgrown by jungle. The last known date recorded on a Maya monument by their so-called Long Count calendar corresponds to the year 909. What happened?

A major factor was environmental degradation by people: deforestation, soil erosion and water management problems, all of which resulted in less food. Those problems were exacerbated by droughts, which may have been partly caused by humans themselves through deforestation. Chronic warfare made matters worse, as more and more people fought over less and less land and resources.

Why weren’t these problems obvious to the Maya kings, who could surely see their forests vanishing and their hills becoming eroded? Part of the reason was that the kings were able to insulate themselves from problems afflicting the rest of society. By extracting wealth from commoners, they could remain well fed while everyone else was slowly starving.

What’s more, the kings were preoccupied with their own power struggles. They had to concentrate on fighting one another and keeping up their images through ostentatious displays of wealth. By insulating themselves in the short run from the problems of society, the elite merely bought themselves the privilege of being among the last to starve.

Whereas Maya societies were undone by problems of their own making, Polynesian societies on Pitcairn and Henderson Islands in the tropical Pacific Ocean were undone largely by other people’s mistakes. Pitcairn, the uninhabited island settled in 1790 by the H.M.S. Bounty mutineers, had actually been populated by Polynesians 800 years earlier. That society, which left behind temple platforms, stone and shell tools and huge garbage piles of fish and bird and turtle bones as evidence of its existence, survived for several centuries and then vanished. Why?

In many respects, Pitcairn and Henderson are tropical paradises, rich in some food sources and essential raw materials. Pitcairn is home to Southeast Polynesia’s largest quarry of stone suited for making adzes, while Henderson has the region’s largest breeding seabird colony and its only nesting beach for sea turtles. Yet the islanders depended on imports from Mangareva Island, hundreds of miles away, for canoes, crops, livestock and oyster shells for making tools.

Unfortunately for the inhabitants of Pitcairn and Henderson, their Mangarevan trading partner collapsed for reasons similar to those underlying the Maya decline: deforestation, erosion and warfare. Deprived of essential imports in a Polynesian equivalent of the 1973 oil crisis, the Pitcairn and Henderson societies declined until everybody had died or fled.

The Maya and the Henderson and Pitcairn Islanders are not alone, of course. Over the centuries, many other societies have declined, collapsed or died out. Famous victims include the Anasazi in the American Southwest, who abandoned their cities in the 12th century because of environmental problems and climate change, and the Greenland Norse, who disappeared in the 15th century because of all five interacting factors on the checklist. There were also the ancient Fertile Crescent societies, the Khmer at Angkor Wat, the Moche society of Peru – the list goes on.

But before we let ourselves get depressed, we should also remember that there is another long list of cultures that have managed to prosper for lengthy periods of time. Societies in Japan, Tonga, Tikopia, the New Guinea Highlands and Central and Northwest Europe, for example, have all found ways to sustain themselves. What separates the lost cultures from those that survived? Why did the Maya fail and the shogun succeed?

Half of the answer involves environmental differences: geography deals worse cards to some societies than to others. Many of the societies that collapsed had the misfortune to occupy dry, cold or otherwise fragile environments, while many of the long-term survivors enjoyed more robust and fertile surroundings. But it’s not the case that a congenial environment guarantees success: some societies (like the Maya) managed to ruin lush environments, while other societies – like the Incas, the Inuit, Icelanders and desert Australian Aborigines – have managed to carry on in some of the earth’s most daunting environments.

The other half of the answer involves differences in a society’s responses to problems. Ninth-century New Guinea Highland villagers, 16th-century German landowners, and the Tokugawa shoguns of 17th-century Japan all recognized the deforestation spreading around them and solved the problem, either by developing scientific reforestation (Japan and Germany) or by transplanting tree seedlings (New Guinea). Conversely, the Maya, Mangarevans and Easter Islanders failed to address their forestry problems and so collapsed.

Consider Japan. In the 1600’s, the country faced its own crisis of deforestation, paradoxically brought on by the peace and prosperity following the Tokugawa shoguns’ military triumph that ended 150 years of civil war. The subsequent explosion of Japan’s population and economy set off rampant logging for construction of palaces and cities, and for fuel and fertilizer.

The shoguns responded with both negative and positive measures. They reduced wood consumption by turning to light-timbered construction, to fuel-efficient stoves and heaters, and to coal as a source of energy. At the same time, they increased wood production by developing and carefully managing plantation forests. Both the shoguns and the Japanese peasants took a long-term view: the former expected to pass on their power to their children, and the latter expected to pass on their land. In addition, Japan’s isolation at the time made it obvious that the country would have to depend on its own resources and couldn’t meet its needs by pillaging other countries. Today, despite having the highest human population density of any large developed country, Japan is more than 70 percent forested.

There is a similar story from Iceland. When the island was first settled by the Norse around 870, its light volcanic soils presented colonists with unfamiliar challenges. They proceeded to cut down trees and stock sheep as if they were still in Norway, with its robust soils. Significant erosion ensued, carrying half of Iceland’s topsoil into the ocean within a century or two. Icelanders became the poorest people in Europe. But they gradually learned from their mistakes, over time instituting stocking limits on sheep and other strict controls, and establishing an entire government department charged with landscape management. Today, Iceland boasts the sixth-highest per-capita income in the world.

What lessons can we draw from history? The most straightforward: take environmental problems seriously. They destroyed societies in the past, and they are even more likely to do so now. If 6,000 Polynesians with stone tools were able to destroy Mangareva Island, consider what six billion people with metal tools and bulldozers are doing today. Moreover, while the Maya collapse affected just a few neighboring societies in Central America, globalization now means that any society’s problems have the potential to affect anyone else. Just think how crises in Somalia, Afghanistan and Iraq have shaped the United States today.

Other lessons involve failures of group decision-making. There are many reasons why past societies made bad decisions, and thereby failed to solve or even to perceive the problems that would eventually destroy them. One reason involves conflicts of interest, whereby one group within a society (for instance, the pig farmers who caused the worst erosion in medieval Greenland and Iceland) can profit by engaging in practices that damage the rest of society. Another is the pursuit of short-term gains at the expense of long-term survival, as when fishermen overfish the stocks on which their livelihoods ultimately depend.

History also teaches us two deeper lessons about what separates successful societies from those heading toward failure. A society contains a built-in blueprint for failure if the elite insulates itself from the consequences of its actions. That’s why Maya kings, Norse Greenlanders and Easter Island chiefs made choices that eventually undermined their societies. They themselves did not begin to feel deprived until they had irreversibly destroyed their landscape.

Could this happen in the United States? It’s a thought that often occurs to me here in Los Angeles, when I drive by gated communities, guarded by private security patrols, and filled with people who drink bottled water, depend on private pensions, and send their children to private schools. By doing these things, they lose the motivation to support the police force, the municipal water supply, Social Security and public schools. If conditions deteriorate too much for poorer people, gates will not keep the rioters out. Rioters eventually burned the palaces of Maya kings and tore down the statues of Easter Island chiefs; they have also already threatened wealthy districts in Los Angeles twice in recent decades.

In contrast, the elite in 17th-century Japan, as in modern Scandinavia and the Netherlands, could not ignore or insulate themselves from broad societal problems. For instance, the Dutch upper class for hundreds of years has been unable to insulate itself from the Netherlands’ water management problems for a simple reason: the rich live in the same drained lands below sea level as the poor. If the dikes and pumps keeping out the sea fail, the well-off Dutch know that they will drown along with everybody else, which is precisely what happened during the floods of 1953.

The other deep lesson involves a willingness to re-examine long-held core values, when conditions change and those values no longer make sense. The medieval Greenland Norse lacked such a willingness: they continued to view themselves as transplanted Norwegian pastoralists, and to despise the Inuit as pagan hunters, even after Norway stopped sending trading ships and the climate had grown too cold for a pastoral existence. They died off as a result, leaving Greenland to the Inuit. On the other hand, the British in the 1950’s faced up to the need for a painful reappraisal of their former status as rulers of a world empire set apart from Europe. They are now finding a different avenue to wealth and power, as part of a united Europe.

In this New Year, we Americans have our own painful reappraisals to face. Historically, we viewed the United States as a land of unlimited plenty, and so we practiced unrestrained consumerism, but that’s no longer viable in a world of finite resources. We can’t continue to deplete our own resources as well as those of much of the rest of the world.

Historically, oceans protected us from external threats; we stepped back from our isolationism only temporarily during the crises of two world wars. Now, technology and global interconnectedness have robbed us of our protection. In recent years, we have responded to foreign threats largely by seeking short-term military solutions at the last minute.

But how long can we keep this up? Though we are the richest nation on earth, there’s simply no way we can afford (or muster the troops) to intervene in the dozens of countries where emerging threats lurk – particularly when each intervention these days can cost more than $100 billion and require more than 100,000 troops.

A genuine reappraisal would require us to recognize that it will be far less expensive and far more effective to address the underlying problems of public health, population and environment that ultimately cause threats to us to emerge in poor countries. In the past, we have regarded foreign aid as either charity or as buying support; now, it’s an act of self-interest to preserve our own economy and protect American lives.

Do we have cause for hope? Many of my friends are pessimistic when they contemplate the world’s growing population and human demands colliding with shrinking resources. But I draw hope from the knowledge that humanity’s biggest problems today are ones entirely of our own making. Asteroids hurtling at us beyond our control don’t figure high on our list of imminent dangers. To save ourselves, we don’t need new technology: we just need the political will to face up to our problems of population and the environment.

I also draw hope from a unique advantage that we enjoy. Unlike any previous society in history, our global society today is the first with the opportunity to learn from the mistakes of societies remote from us in space and in time. When the Maya and Mangarevans were cutting down their trees, there were no historians or archaeologists, no newspapers or television, to warn them of the consequences of their actions. We, on the other hand, have a detailed chronicle of human successes and failures at our disposal. Will we choose to use it?

Jared Diamond, who won the 1998 Pulitzer Prize in general nonfiction for “Guns, Germs and Steel: The Fates of Human Societies,” is the author of the forthcoming “Collapse: How Societies Choose or Fail to Succeed.”


THE VANISHING by MALCOLM GLADWELL. In “Collapse,” Jared Diamond shows how societies destroy themselves.

<http://www.newyorker.com/critics/books/>  2005-01-03

A thousand years ago, a group of Vikings led by Eric the Red set sail from Norway for the vast Arctic landmass west of Scandinavia which came to be known as Greenland. It was largely uninhabitable, a forbidding expanse of snow and ice. But along the southwestern coast there were two deep fjords protected from the harsh winds and saltwater spray of the North Atlantic Ocean, and as the Norse sailed upriver they saw grassy slopes flowering with buttercups, dandelions, and bluebells, and thick forests of willow and birch and alder. Two colonies were formed, three hundred miles apart, known as the Eastern and Western Settlements. The Norse raised sheep, goats, and cattle. They turned the grassy slopes into pastureland. They hunted seal and caribou. They built a string of parish churches and a magnificent cathedral, the remains of which are still standing. They traded actively with mainland Europe, and tithed regularly to the Roman Catholic Church. The Norse colonies in Greenland were law-abiding, economically viable, fully integrated communities, numbering at their peak five thousand people. They lasted for four hundred and fifty years, and then they vanished.

The story of the Eastern and Western Settlements of Greenland is told in Jared Diamond’s “Collapse: How Societies Choose to Fail or Succeed” (Viking; $29.95). Diamond teaches geography at U.C.L.A. and is well known for his best-seller “Guns, Germs, and Steel,” which won a Pulitzer Prize. In “Guns, Germs, and Steel,” Diamond looked at environmental and structural factors to explain why Western societies came to dominate the world. In “Collapse,” he continues that approach, only this time he looks at history’s losers-like the Easter Islanders, the Anasazi of the American Southwest, the Mayans, and the modern-day Rwandans. We live in an era preoccupied with the way that ideology and culture and politics and economics help shape the course of history. But Diamond isn’t particularly interested in any of those things-or, at least, he’s interested in them only insofar as they bear on what to him is the far more important question, which is a society’s relationship to its climate and geography and resources and neighbors. “Collapse” is a book about the most prosaic elements of the earth’s ecosystem-soil, trees, and water-because societies fail, in Diamond’s view, when they mismanage those environmental factors.

There was nothing wrong with the social organization of the Greenland settlements. The Norse built a functioning reproduction of the predominant northern-European civic model of the time-devout, structured, and reasonably orderly. In 1408, right before the end, records from the Eastern Settlement dutifully report that Thorstein Olafsson married Sigrid Bjornsdotter in Hvalsey Church on September 14th of that year, with Brand Halldorstson, Thord Jorundarson, Thorbjorn Bardarson, and Jon Jonsson as witnesses, following the proclamation of the wedding banns on three consecutive Sundays.

The problem with the settlements, Diamond argues, was that the Norse thought that Greenland really was green; they treated it as if it were the verdant farmland of southern Norway. They cleared the land to create meadows for their cows, and to grow hay to feed their livestock through the long winter. They chopped down the forests for fuel, and for the construction of wooden objects. To make houses warm enough for the winter, they built their homes out of six-foot-thick slabs of turf, which meant that a typical home consumed about ten acres of grassland.

But Greenland’s ecosystem was too fragile to withstand that kind of pressure. The short, cool growing season meant that plants developed slowly, which in turn meant that topsoil layers were shallow and lacking in soil constituents, like organic humus and clay, that hold moisture and keep soil resilient in the face of strong winds. “The sequence of soil erosion in Greenland begins with cutting or burning the cover of trees and shrubs, which are more effective at holding soil than is grass,” he writes. “With the trees and shrubs gone, livestock, especially sheep and goats, graze down the grass, which regenerates only slowly in Greenland’s climate. Once the grass cover is broken and the soil is exposed, soil is carried away especially by the strong winds, and also by pounding from occasionally heavy rains, to the point where the topsoil can be removed for a distance of miles from an entire valley.” Without adequate pastureland, the summer hay yields shrank; without adequate supplies of hay, keeping livestock through the long winter got harder. And, without adequate supplies of wood, getting fuel for the winter became increasingly difficult.

The Norse needed to reduce their reliance on livestock-particularly cows, which consumed an enormous amount of agricultural resources. But cows were a sign of high status; to northern Europeans, beef was a prized food. They needed to copy the Inuit practice of burning seal blubber for heat and light in the winter, and to learn from the Inuit the difficult art of hunting ringed seals, which were the most reliably plentiful source of food available in the winter. But the Norse had contempt for the Inuit-they called them skraelings, “wretches”-and preferred to practice their own brand of European agriculture. In the summer, when the Norse should have been sending ships on lumber-gathering missions to Labrador, in order to relieve the pressure on their own forestlands, they instead sent boats and men to the coast to hunt for walrus. Walrus tusks, after all, had great trade value. In return for those tusks, the Norse were able to acquire, among other things, church bells, stained-glass windows, bronze candlesticks, Communion wine, linen, silk, silver, churchmen’s robes, and jewelry to adorn their massive cathedral at Gardar, with its three-ton sandstone building blocks and eighty-foot bell tower. In the end, the Norse starved to death.

Diamond’s argument stands in sharp contrast to the conventional explanations for a society’s collapse. Usually we look for some kind of cataclysmic event. The aboriginal civilization of the Americas was decimated by the sudden arrival of smallpox. European Jewry was destroyed by Nazism. Similarly, the disappearance of the Norse settlements is usually blamed on the Little Ice Age, which descended on Greenland in the early fourteen-hundreds, ending several centuries of relative warmth. (One archeologist refers to this as the “It got too cold, and they died” argument.) What all these explanations have in common is the idea that civilizations are destroyed by forces outside their control, by acts of God.

But look, Diamond says, at Easter Island. Once, it was home to a thriving culture that produced the enormous stone statues that continue to inspire awe. It was home to dozens of species of trees, which created and protected an ecosystem fertile enough to support as many as thirty thousand people. Today, it’s a barren and largely empty outcropping of volcanic rock. What happened? Did a rare plant virus wipe out the island’s forest cover? Not at all. The Easter Islanders chopped their trees down, one by one, until they were all gone. “I have often asked myself, ‘What did the Easter Islander who cut down the last palm tree say while he was doing it?'” Diamond writes, and that, of course, is what is so troubling about the conclusions of “Collapse.” Those trees were felled by rational actors-who must have suspected that the destruction of this resource would result in the destruction of their civilization. The lesson of “Collapse” is that societies, as often as not, aren’t murdered. They commit suicide: they slit their wrists and then, in the course of many decades, stand by passively and watch themselves bleed to death.

This doesn’t mean that acts of God don’t play a role. It did get colder in Greenland in the early fourteen-hundreds. But it didn’t get so cold that the island became uninhabitable. The Inuit survived long after the Norse died out, and the Norse had all kinds of advantages, including a more diverse food supply, iron tools, and ready access to Europe. The problem was that the Norse simply couldn’t adapt to the country’s changing environmental conditions. Diamond writes, for instance, of the fact that nobody can find fish remains in Norse archeological sites. One scientist sifted through tons of debris from the Vatnahverfi farm and found only three fish bones; another researcher analyzed thirty-five thousand bones from the garbage of another Norse farm and found two fish bones. How can this be? Greenland is a fisherman’s dream: Diamond describes running into a Danish tourist in Greenland who had just caught two Arctic char in a shallow pool with her bare hands. “Every archaeologist who comes to excavate in Greenland . . . starts out with his or her own idea about where all those missing fish bones might be hiding,” he writes. “Could the Norse have strictly confined their munching on fish to within a few feet of the shoreline, at sites now underwater because of land subsidence? Could they have faithfully saved all their fish bones for fertilizer, fuel, or feeding to cows?” It seems unlikely. There are no fish bones in Norse archeological remains, Diamond concludes, for the simple reason that the Norse didn’t eat fish. For one reason or another, they had a cultural taboo against it.

Given the difficulty that the Norse had in putting food on the table, this was insane. Eating fish would have substantially reduced the ecological demands of the Norse settlements. The Norse would have needed fewer livestock and less pastureland. Fishing is not nearly as labor-intensive as raising cattle or hunting caribou, so eating fish would have freed time and energy for other activities. It would have diversified their diet.

Why did the Norse choose not to eat fish? Because they weren’t thinking about their biological survival. They were thinking about their cultural survival. Food taboos are one of the idiosyncrasies that define a community. Not eating fish served the same function as building lavish churches, and doggedly replicating the untenable agricultural practices of their land of origin. It was part of what it meant to be Norse, and if you are going to establish a community in a harsh and forbidding environment all those little idiosyncrasies which define and cement a culture are of paramount importance. “The Norse were undone by the same social glue that had enabled them to master Greenland’s difficulties,” Diamond writes. “The values to which people cling most stubbornly under inappropriate conditions are those values that were previously the source of their greatest triumphs over adversity.” He goes on:

“To us in our secular modern society, the predicament in which the Greenlanders found themselves is difficult to fathom. To them, however, concerned with their social survival as much as their biological survival, it was out of the question to invest less in churches, to imitate or intermarry with the Inuit, and thereby to face an eternity in Hell just in order to survive another winter on Earth.”

Diamond’s distinction between social and biological survival is a critical one, because too often we blur the two, or assume that biological survival is contingent on the strength of our civilizational values. That was the lesson taken from the two world wars and the nuclear age that followed: we would survive as a species only if we learned to get along and resolve our disputes peacefully. The fact is, though, that we can be law-abiding and peace-loving and tolerant and inventive and committed to freedom and true to our own values and still behave in ways that are biologically suicidal. The two kinds of survival are separate.

Diamond points out that the Easter Islanders did not practice, so far as we know, a uniquely pathological version of South Pacific culture. Other societies, on other islands in the Hawaiian archipelago, chopped down trees and farmed and raised livestock just as the Easter Islanders did. What doomed the Easter Islanders was the interaction between what they did and where they were. Diamond and a colleague, Barry Rollet, identified nine physical factors that contributed to the likelihood of deforestation-including latitude, average rainfall, aerial-ash fallout, proximity to Central Asia’s dust plume, size, and so on-and Easter Island ranked at the high-risk end of nearly every variable. “The reason for Easter’s unusually severe degree of deforestation isn’t that those seemingly nice people really were unusually bad or improvident,” he concludes. “Instead, they had the misfortune to be living in one of the most fragile environments, at the highest risk for deforestation, of any Pacific people.” The problem wasn’t the Easter Islanders. It was Easter Island.

In the second half of “Collapse,” Diamond turns his attention to modern examples, and one of his case studies is the recent genocide in Rwanda. What happened in Rwanda is commonly described as an ethnic struggle between the majority Hutu and the historically dominant, wealthier Tutsi, and it is understood in those terms because that is how we have come to explain much of modern conflict: Serb and Croat, Jew and Arab, Muslim and Christian. The world is a cauldron of cultural antagonism. It’s an explanation that clearly exasperates Diamond. The Hutu didn’t just kill the Tutsi, he points out. The Hutu also killed other Hutu. Why? Look at the land: steep hills farmed right up to the crests, without any protective terracing; rivers thick with mud from erosion; extreme deforestation leading to irregular rainfall and famine; staggeringly high population densities; the exhaustion of the topsoil; falling per-capita food production. This was a society on the brink of ecological disaster, and if there is anything that is clear from the study of such societies it is that they inevitably descend into genocidal chaos. In “Collapse,” Diamond quite convincingly defends himself against the charge of environmental determinism. His discussions are always nuanced, and he gives political and ideological factors their due. The real issue is how, in coming to terms with the uncertainties and hostilities of the world, the rest of us have turned ourselves into cultural determinists.

For the past thirty years Oregon has had one of the strictest sets of land-use regulations in the nation, requiring new development to be clustered in and around existing urban development. The laws meant that Oregon has done perhaps the best job in the nation in limiting suburban sprawl, and protecting coastal lands and estuaries. But this November Oregon’s voters passed a ballot referendum, known as Measure 37, that rolled back many of those protections. Specifically, Measure 37 said that anyone who could show that the value of his land was affected by regulations implemented since its purchase was entitled to compensation from the state. If the state declined to pay, the property owner would be exempted from the regulations.

To call Measure 37-and similar referendums that have been passed recently in other states-intellectually incoherent is to put it mildly. It might be that the reason your hundred-acre farm on a pristine hillside is worth millions to a developer is that it’s on a pristine hillside: if everyone on that hillside could subdivide, and sell out to Target and Wal-Mart, then nobody’s plot would be worth millions anymore. Will the voters of Oregon then pass Measure 38, allowing them to sue the state for compensation over damage to property values caused by Measure 37?

It is hard to read “Collapse,” though, and not have an additional reaction to Measure 37. Supporters of the law spoke entirely in the language of political ideology. To them, the measure was a defense of property rights, preventing the state from unconstitutional “takings.” If you replaced the term “property rights” with “First Amendment rights,” this would have been indistinguishable from an argument over, say, whether charitable groups ought to be able to canvass in malls, or whether cities can control the advertising they sell on the sides of public buses. As a society, we do a very good job with these kinds of debates: we give everyone a hearing, and pass laws, and make compromises, and square our conclusions with our constitutional heritage-and in the Oregon debate the quality of the theoretical argument was impressively high.

The thing that got lost in the debate, however, was the land. In a rapidly growing state like Oregon, what, precisely, are the state’s ecological strengths and vulnerabilities? What impact will changed land-use priorities have on water and soil and cropland and forest? One can imagine Diamond writing about the Measure 37 debate, and he wouldn’t be very impressed by how seriously Oregonians wrestled with the problem of squaring their land-use rules with their values, because to him a society’s environmental birthright is not best discussed in those terms. Rivers and streams and forests and soil are a biological resource. They are a tangible, finite thing, and societies collapse when they get so consumed with addressing the fine points of their history and culture and deeply held beliefs-with making sure that Thorstein Olafsson and Sigrid Bjornsdotter are married before the right number of witnesses following the announcement of wedding banns on the right number of Sundays-that they forget that the pastureland is shrinking and the forest cover is gone.

When archeologists looked through the ruins of the Western Settlement, they found plenty of the big wooden objects that were so valuable in Greenland-crucifixes, bowls, furniture, doors, roof timbers-which meant that the end came too quickly for anyone to do any scavenging. And, when the archeologists looked at the animal bones left in the debris, they found the bones of newborn calves, meaning that the Norse, in that final winter, had given up on the future. They found toe bones from cows, equal to the number of cow spaces in the barn, meaning that the Norse ate their cattle down to the hoofs, and they found the bones of dogs covered with knife marks, meaning that, in the end, they had to eat their pets. But not fish bones, of course. Right up until they starved to death, the Norse never lost sight of what they stood for.


January 11, 2005

http://www.nytimes.com/2005/01/11/books/11kaku.html By MICHIKO KAKUTANI

COLLAPSE – How Societies Choose to Fail or Succeed  By Jared Diamond

Jared Diamond’s fascinating but not always convincing new book, “Collapse: How Societies Choose to Fail or Succeed,” tries hard to live up to its apocalyptic title.

It begins with the stories of several historical collapses, including the demise of the Easter Islanders, remembered now for the iconic stone heads they left behind on their Pacific island home; the fall of the ancient Mayan cities that were once the hub of the New World’s most advanced Native American civilization; and the disappearance of the Norse colony on Greenland after surviving for 450 years as Europe’s most remote outpost. In all these cases, Mr. Diamond diagnoses a similar pattern of catastrophe: environmental damage (usually deforestation leading to soil erosion, food shortages and eventually social and political crises), worsened by other factors like climate change, shifting trade patterns and shortsighted or venal leadership.

From these dire historical tales, Mr. Diamond – the author of the Pulitzer Prize-winning best seller “Guns, Germs and Steel” – quickly fast-forwards, suggesting that these ancient examples may hold a lesson for our environmentally challenged world today. He argues that current environmental problems include the same ones that undermined past societies, plus four new ones: “human-caused climate change, buildup of toxic chemicals in the environment, energy shortages and full human utilization of the earth’s photosynthetic capacity.” Many of these problems, he adds, are expected to “become globally critical within the next few decades.”

“Much more likely than a doomsday scenario involving human extinction or an apocalyptic collapse of industrial civilization,” he writes, “would be ‘just’ a future of significantly lower living standards, chronically higher risks and the undermining of what we now consider some of our key values. Such a collapse could assume various forms, such as the worldwide spread of diseases or else of wars, triggered ultimately by scarcity of environmental resources.”

Already, he argues, societal collapse has become a palpable specter in some troubled third-world countries. He contends that environmental problems and resulting land and food shortages played a key role in fueling the ethnic slaughter that plagued Rwanda in the 1990’s, and that similar environmental destruction has contributed to Haiti’s current plight.

With this volume, Mr. Diamond wants very much to write a kind of bookend to “Guns, Germs and Steel.” That earlier book attempted to explain why Western civilizations developed the technologies and social and political strategies that enabled them to dominate the world; this volume attempts to explain in a far more haphazard manner why some societies failed to flourish and eventually vanished from the face of the earth.

Mr. Diamond – who has academic training in physiology, geography and evolutionary biology – is a lucid writer with an ability to make arcane scientific concepts readily accessible to the lay reader, and his case studies of failed cultures are never less than compelling. He presents some intriguing digressions about methods used by scientists and historians to diagnose the trajectory of long dead societies, and provides some provocative analyses of current environmental problems in Australia, the United States and China.

Noting that China is rapidly progressing toward its goal of achieving a first-world economy, Mr. Diamond writes that if that gigantic nation’s per-capita consumption rates do in fact rise to first-world levels, it will result in the approximate doubling of the “entire world’s human resource use and environmental impact.” Something has to give way, he concludes: “That is the strongest reason why China’s problems automatically become the world’s problems.”
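Diamond’s “approximate doubling” claim follows from simple weighted arithmetic. A minimal sketch of that calculation, using illustrative round numbers (the population figures and the per-capita impact ratio below are assumptions for the sake of the arithmetic, not figures from the review):

```python
# World human resource use = sum over regions of (population x per-capita impact).
# Units are relative: first-world per-capita impact is defined as 1.0.
first_world_pop = 1.0e9   # assumed ~1 billion people at first-world consumption
rest_pop = 5.0e9          # assumed rest of the world
china_pop = 1.3e9         # China, currently counted within the rest
rest_percap = 0.1         # assumed ~10x lower per-capita impact outside the first world

current = first_world_pop * 1.0 + rest_pop * rest_percap

# If China alone moves from rest-of-world to first-world per-capita rates:
after = current + china_pop * (1.0 - rest_percap)

print(round(after / current, 2))  # ~1.78, i.e. close to a doubling of total impact
```

The exact multiplier depends entirely on the assumed ratio between first-world and developing-world consumption, but any large ratio yields the same qualitative conclusion: one very populous country converging on first-world rates dominates the world total.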

Toward the end of “Collapse,” Mr. Diamond poses the question of why some societies undermine themselves and even commit suicide by making disastrous decisions. He comes up with four fuzzily defined categories: 1) failure to anticipate a problem (i.e. the British introduction of foxes and rabbits to Australia – two alien mammals that have cost billions of dollars in damage and control expenditures); 2) failure to perceive a problem that has actually arrived (often because a slow trend like global warming is concealed by wide up and down fluctuations); 3) failure to attempt to solve a problem once it has been identified (usually because leaders put self-interest before the public good or focus on short-term benefits over long-term needs); and 4) failure to find a viable solution to the problem (frequently because of prohibitive costs or because too little has been done too late).

Such discussions are useful in getting the reader to think about the big picture – about matters like the sustainability of current consumption patterns in a world of shrinking resources, and the role that cultural values can play in a society’s welfare. Did the reluctance of the Norse settlers in Greenland to learn survival skills from their Inuit rivals help seal their fate? Did the obsession of Easter Island chiefs with trying to outdo their rivals by building bigger and bigger statues (which consumed precious natural resources and labor) effectively doom their civilization?

Interesting as such questions might be, this book remains, in the end, a messy hodgepodge of case studies, glued together with speculation and questionable analogies. For one thing, Mr. Diamond’s selection of failed civilizations from the past seems arbitrary in the extreme: Why Easter Island and not ancient Rome? Why the Anasazi of the American Southwest and not the Minoans of ancient Crete?

In addition, the reader is left wondering if the examples he has selected truly offer useful analogies to the world’s current situation. After all, as Mr. Diamond himself points out, there are huge differences between the historical examples he cites and the plight of the world today: most notably, the role that technology plays in accelerating change (speeding up both environmental damage and possible solutions to that damage) and the role of globalization in linking the fates of wildly disparate and distant societies. Although Mr. Diamond talks about these differences, he does so in a highly cursory manner – more as a pre-emptive strike against possible critics than as part of a carefully considered analysis central to this book.


69605 Dick Lawrence Jan 31, 2005. Easterbrook wrote a pathetically stupid and envious review in the NYT of Diamond’s book; there is massive criticism of it on the web on a variety of forums. Here’s one from a non-energy group. Here’s another critique of Easterbrook’s review of “Collapse”. When a not-very-successful wannabe author reviews a much more successful author (who got a Pulitzer as well), watch out.


There are so many fundamental problems with Easterbrook’s review, I don’t know where to start. I’ll just give two:

1. Easterbrook says that since most of Diamond’s examples involve islands, they’re not applicable to the Earth as a whole since it’s not primarily made up of islands. But Diamond chose islands as his examples because they have limited resources, and are consequently more sensitive to impacts from pre-industrial and early industrial societies with lower resource requirements than our modern societies. 6 billion people, and the resources required to maintain a modern industrial society, will have the same kind of resource impacts on the entire planet, and the same effects on its ecosystems, that less-demanding societies had on the limited resources of islands. And if we could launch Gregg Easterbrook into space, which many would argue would be a good thing, he would quickly see that the Earth as a whole *is* an island.

2. Easterbrook argues that since we’re so much more advanced now than earlier societies, we’re bound to come up with a solution that will fix any of those problems, including harvesting resources from the rest of the universe. This isn’t a fact, it’s a belief; it’s faith-based resource management: “crunch all you want, we’ll make more”. Other societies have also felt that things could go on the way they always had, and that they’d always be able to find solutions when they needed to; it hasn’t always worked out that way. Someone needs to tell Gregg Easterbrook that “deus ex machina” isn’t a viable strategy for solving our problems. Leszek Pawlowicz


Jan Steinman  Feb 7, 2005  Jared Diamond’s “Collapse” and Peak Oil. I attended a talk of Diamond’s in Portland (Oregon), sponsored by Powell’s Books.

I stood up and asked, “The Easter Islanders died for want of a single 40-watt bulb’s worth of exosomatic energy. North Americans consume the equivalent of seven hair dryers, running 24/7/365. Population levels have risen as a direct result of cheap energy, from about 1 billion before oil, to over 6 billion today. Each calorie of food we eat currently requires ten calories of fossil fuel to produce. Petroleum production has reached — or will soon reach — a peak, after which it will decline, while demand is still increasing. Natural gas will follow soon after. Recognizing that human population may have to retreat to pre-oil numbers, or that 5 of every 6 people in this room may have to ‘go away,’ how can you be ‘cautiously optimistic’ that our own civilization won’t appear in someone else’s ‘Collapse’ book, many years from now?”

He did a lot of arm-waving and mentioned hybrid cars, but basically hurried on to the next question.

Posted in Other Experts, Roman Empire | Leave a comment

Bryan Appleyard Waiting for the lights to go out

Bryan Appleyard. October 16, 2005. Waiting for the lights to go out. We’ve taken the past 200 years of prosperity for granted. Humanity’s progress is stalling, we are facing a new era of decay, and nobody is clever enough to fix it. Is the future really that black?


The greatest getting-and-spending spree in the history of the world is about to end. The 200-year boom that gave citizens of the industrial world levels of wealth, health and longevity beyond anything previously known to humanity is threatened on every side. Oil is running out; the climate is changing at a potentially catastrophic rate; wars over scarce resources are brewing; finally, most shocking of all, we don’t seem to be having enough ideas about how to fix any of these things.

It’s been said before, of course: people are always saying the world will end and it never does. Maybe it won’t this time, either. But, frankly, it’s not looking good. Almost daily, new evidence is emerging that progress can no longer be taken for granted, that a new Dark Age is lying in wait for ourselves and our children.

To understand how this could happen, it is necessary to grasp just how extraordinary, how utterly unprecedented are the privileges we in the developed world enjoy now. Born today, you could expect to live 25 to 30 years longer than your Victorian forebears, up to 45 years longer than your medieval ancestors and at least 55 years longer than your Stone Age precursors. It is highly unlikely that your birth will kill you or your mother or that, in later life, you will suffer typhoid, plague, smallpox, dysentery, polio, or dentistry without anaesthetic. You will enjoy a standard of living that would have glazed the eyes of the Emperor Nero, thanks to the 2% annual economic growth rate sustained by the developed world since the industrial revolution. You will have access to greater knowledge than Aristotle could begin to imagine, and to technical resources that would stupefy Leonardo da Vinci. You will know a world whose scale and variety would induce agoraphobia in Alexander the Great. You should experience relative peace thanks to the absolute technological superiority of the industrialised world over its enemies and, with luck and within reason, you should be able to write and say anything you like, a luxury denied to almost all other human beings, dead or alive. Finally, as this artificially extended sojourn in paradise comes to a close, you will attain oblivion in the certain knowledge that, for your children, things can only get better.
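The power of that 2% figure is easy to underestimate; a quick compound-growth sketch makes it concrete (the 200-year span is the article’s own rough figure for the period since the industrial revolution):

```python
# Compound growth: 2% per year sustained over roughly 200 years.
years = 200
rate = 0.02
multiple = (1 + rate) ** years

print(f"{multiple:.0f}x")  # roughly a 52-fold increase in economic output
```

A seemingly modest annual rate, compounded over two centuries, is what separates Nero’s world from ours.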

Such staggering developments have convinced us that progress is a new law of nature, something that happens to everything all the time. Microsoft is always working on a better version of Windows. Today’s Nokia renders yesterday’s obsolete, as does today’s Apple, Nike or Gillette. Life expectancy continues to rise. Cars go faster, planes fly further, and one day, we are assured, cancer must yield. Whatever goes wrong in our lives or the world, the march of progress continues regardless. Doesn’t it?

Almost certainly not. The first big problem is our insane addiction to oil. It powers everything we do and determines how we live. But, on the most optimistic projections, there are only 30 to 40 years of oil left. One pessimistic projection, from Sweden’s Uppsala University, is that world reserves are massively overstated and the oil will start to run out in 10 years. That makes it virtually inconceivable that there will be kerosene-powered planes or petroleum-powered cars for much longer. Long before the oil actually runs out, it will have become far too expensive to use for such frivolous pursuits as flying and driving. People generally assume that we will find our way round this using hydrogen, nuclear, wave or wind power. In reality, none of these technologies are being developed anything like quickly enough to take over from oil. The great nations just aren’t throwing enough money at the problem. Instead, they are preparing to fight for the last drops of oil. China has recently started making diplomatic overtures to Saudi Arabia, wanting to break America’s grip on that nation’s 262 billion barrel reserve.

Even if we did throw money at the problem, it’s not certain we could fix it. One of the strangest portents of the end of progress is the recent discovery that humans are losing their ability to come up with new ideas.

Jonathan Huebner is an amiable, very polite and very correct physicist who works at the Pentagon’s Naval Air Warfare Center in China Lake, California. He took the job in 1985, when he was 26. An older scientist told him how lucky he was. In the course of his career, he could expect to see huge scientific and technological advances. But by 1990, Huebner had begun to suspect the old man was wrong. “The number of advances wasn’t increasing exponentially, I hadn’t seen as many as I had expected — not in any particular area, just generally.”

Puzzled, he undertook some research of his own. He began to study the rate of significant innovations as catalogued in a standard work entitled The History of Science and Technology. After some elaborate mathematics, he came to a conclusion that raised serious questions about our continued ability to sustain progress. What he found was that the rate of innovation peaked in 1873 and has been declining ever since. In fact, our current rate of innovation — which Huebner puts at seven important technological developments per billion people per year — is about the same as it was in 1600. By 2024 it will have slumped to the same level as it was in the Dark Ages, the period between the end of the Roman empire and the start of the Middle Ages.

The calculations are based on innovations per person, so if we could keep growing the human population we could, in theory, keep up the absolute rate of innovation. But in practice, to do that, we’d have to swamp the world with billions more people almost at once. That being neither possible nor desirable, it seems we’ll just have to accept that progress, at least on the scientific and technological front, is slowing very rapidly indeed.

Huebner offers two possible explanations: economics and the size of the human brain. Either it’s just not worth pursuing certain innovations since they won’t pay off — one reason why space exploration has all but ground to a halt — or we already know most of what we can know, and so discovering new things is becoming increasingly difficult. We have, for example, known for over 20 years how cancer works and what needs to be done to prevent or cure it. But in most cases, we still have no idea how to do it, and there is no likelihood that we will in the foreseeable future.

Huebner’s insight has caused some outrage. The influential scientist Ray Kurzweil has criticised his sample of innovations as “arbitrary”; K Eric Drexler, prophet of nanotechnology, has argued that we should be measuring capabilities, not innovations. Thus we may travel faster or access more information at greater speeds without significant innovations as such.

Huebner has so far successfully responded to all these criticisms. Moreover, he is supported by the work of Ben Jones, a management professor at Northwestern University in Illinois. Jones has found that we are currently in a quandary comparable to that of the Red Queen in Through the Looking Glass: we have to run faster and faster just to stay in the same place. Basically, two centuries of economic growth in the industrialised world have been driven by scientific and technological innovation. We don’t get richer unaided or simply by working harder: we get richer because smart people invent steam engines, antibiotics and the internet. What Jones has discovered is that we have to work harder and harder to sustain growth through innovation. More and more money has to be poured into research and development and we have to deploy more people in these areas just to keep up. “The result is,” says Jones, “that the average individual innovator is having a smaller and smaller impact.”

Like Huebner, he has two theories about why this is happening. The first is the “low-hanging fruit” theory: early innovators plucked the easiest-to-reach ideas, so later ones have to struggle to crack the harder problems. Or it may be that the massive accumulation of knowledge means that innovators have to stay in education longer to learn enough to invent something new and, as a result, less of their active life is spent innovating. “I’ve noticed that Nobel-prize winners are getting older,” he says. “That’s a sure sign it’s taking longer to innovate.” The other alternative is to specialise — but that would mean innovators would simply be tweaking the latest edition of Windows rather than inventing the light bulb. The effect of their innovations would be marginal, a process of making what we already have work slightly better. This may make us think we’re progressing, but it will be an illusion.

If Huebner and Jones are right, our problem goes way beyond Windows. For if innovation is the engine of economic progress — and almost everybody agrees it is — growth may be coming to an end. Since our entire financial order — interest rates, pension funds, insurance, stock markets — is predicated on growth, the social and economic consequences may be cataclysmic.

Is it really happening? Will progress grind to a halt? The long view of history gives conflicting evidence. Paul Ormerod, a London-based economist and author of the book Why Most Things Fail, is unsure. “I am in two minds about this. Biologists have abandoned the idea of progress — we just are where we are. But humanity is so far in advance of anything that has gone before that it seems to be a qualitative leap.”

For Ormerod, there may be very rare but similar qualitative leaps in the organisation of society. The creation of cities, he believes, is one. Cities emerged perhaps 10,000 years ago, not long after humanity ceased being hunter-gatherers and became farmers. Other apparently progressive developments cannot compete. The Roman empire, for example, once seemed eternal, bringing progress to the world. But then, one day, it collapsed and died. The question thus becomes: is our liberal-democratic-capitalist way of doing things, like cities, an irreversible improvement in the human condition, or is it like the Roman empire, a shooting star of wealth and success, soon to be extinguished?

Ormerod suspects that capitalism is indeed, like cities, a lasting change in the human condition. “Immense strides forward have been taken,” he says. It may be that, after millennia of striving, we have found the right course. Capitalism may be the Darwinian survivor of a process of natural selection that has seen all other systems fail.

Ormerod does acknowledge, however, that the rate of innovation may well be slowing — “All the boxes may be ticked,” as he puts it — and that progress remains dependent on contingencies far beyond our control. An asteroid strike or super-volcanic eruption could crush all our vanities in an instant. But in principle, Ormerod suspects that our 200-year spree is no fluke.

This is heartily endorsed by the Dutch-American Joel Mokyr, one of the most influential economic historians in the world today. Mokyr is the author of The Lever of Riches and The Gifts of Athena, two books that support the progressive view that we are indeed doing something right, something that makes our liberal-democratic civilisation uniquely able to generate continuous progress. The argument is that, since the 18th-century Enlightenment, a new term has entered the human equation. This is the accumulation of and a free market in knowledge. As Mokyr puts it, we no longer behead people for saying the wrong thing — we listen to them. This “social knowledge” is progressive because it allows ideas to be tested and the most effective to survive. This knowledge is embodied in institutions, which, unlike individuals, can rise above our animal natures. Because of the success of these institutions, we can reasonably hope to be able, collectively, to think our way around any future problems. When the oil runs out, for example, we should have harnessed hydrogen or fusion power. If the environment is being destroyed, then we should find ways of healing it. “If global warming is happening,” says Mokyr, “and I increasingly am persuaded that it is, then we will have the technology to deal with it.”

But there are, as he readily admits, flies in the ointment of his optimism. First, he makes the crucial concession that, though a society may progress, individuals don’t. Human nature does not progress at all. Our aggressive, tribal nature is hard-wired, unreformed and unreformable. Individually we are animals and, as animals, incapable of progress. The trick is to cage these animal natures in effective institutions: education, the law, government. But these can go wrong. “The thing that scares me,” he says, “is that these institutions can misfire.”

Big institutions, deeply entrenched within ancient cultures, misfired in Russia in 1917 and Germany in 1933, producing years of slaughter on a scale previously unseen in human history. For Mokyr, those misfirings produced not an institutionalism of our knowledge but of our aggressive, animal natures. The very fact that such things can happen at all is a warning that progress can never be taken for granted.

Some suggest that this institutional breakdown is now happening in the developed world, in the form of a “democratic deficit”. This is happening at a number of levels. There is the supranational. In this, either large corporations or large institutions — the EU, the World Bank — gradually remove large areas of decision-making from the electorate, hollowing out local democracies. Or there is the national level. Here, massively increased political sophistication results in the manipulation, almost hypnotising, of electorates. This has been particularly true in Britain, where politics has been virtualised by new Labour into a series of presentational issues. Such developments show that merely calling a system “democratic” does not necessarily mean it will retain the progressive virtues that have seemed to arise from democracy. Democracy can destroy itself. In addition, with the rise of an unquantifiable global terrorist threat producing defensive transformations of legal systems designed to limit freedom and privacy, the possibility arises of institutional breakdown leading to a new, destructive social order. We are not immune from the totalitarian faults of the past.

The further point is that capitalism is one thing, globalisation another. The current globalisation wave was identified in the 1970s.

It was thought to represent the beginning of a process whereby the superior performance of free-market economics would lead a worldwide liberalisation process. Everybody, in effect, would be drawn into the developed world’s 200-year boom. Increasingly, however, it is becoming clear that it hasn’t happened as planned. The prominent Canadian thinker John Ralston Saul argues in his book The Collapse of Globalism that globalisation is, in fact, over and is being replaced by a series of competing local and national interests. Meanwhile, in his book Why They Don’t Hate Us, the Californian academic Mark LeVine shows that the evidence put forward by globalisation’s fans, such as the World Trade Organization, conceals deep divisions and instabilities in countries like China and regions like the Middle East. Globalisation, he argues, is often just making the rich richer and the poor poorer. It is also destroying local culture and inspiring aggressive resistance movements, from student demonstrators in the West to radical Islamicists in the Middle East. Progress is built on very fragile foundations.

Or perhaps it never happens at all. John Gray, professor of European thought at the London School of Economics, is the most lucid advocate of the view that progress is an illusion. People, he says, are “overimpressed by present reality” and assume, on the basis of only a couple of centuries of history, that progress is eternal. In his book Al Qaeda and What It Means to Be Modern, he argues that human nature is flawed and incorrigible, and its flaws will be embodied in whatever humans make. Joel Mokyr’s institutions, therefore, do not rise above human nature: they embody it. Science, for Gray, does indeed accumulate knowledge. But that has the effect of empowering human beings to do at least as much damage as good. His book argues that, far from being a medieval institution as many have suggested, Al-Qaeda is a supremely modern organisation, using current technology and management theory to spread destruction. Modernity does not make us better, it just makes us more effective. We may have anaesthetic dentistry, but we also have nuclear weapons. We may or may not continue to innovate. It doesn’t matter, because innovation will only enable us to do more of what humans do. In this view, all progress will be matched by regress. In our present condition, this can happen in two ways. Either human conflict will produce a new ethical decline, as it did in Germany and Russia, or our very commitment to growth will turn against us.

On the ethical front, Gray’s most potent contemporary example is torture. For years we thought the developed world had banished torture for ever or that, if it occasionally happened here, it was an error or oversight, a crime to be punished at once. Not being torturers was a primary indicator of our civilised, progressive condition. But now suicide terrorism has posed a terrible question. If we have a prisoner who knows where a suitcase nuclear weapon is planted and refuses to talk, do we not have the right to torture him into revealing the information? Many now reluctantly admit that we would. Even the means of his torture has been discussed: a sterilised needle inserted beneath the fingernail. Having suffered this pain for a few seconds when having an anaesthetic injection prior to the removal of a nail, I can personally attest that it would work.

The Harvard law professor Alan Dershowitz is now arguing for giving proper legal status to torture. “Torture is a matter that has always been unacceptable, beyond discussion. Let’s not pretend, those days are passed. We now have ticking-bomb terrorists and it’s an empirical fact that every civilised democracy would use torture in those circumstances.” Dershowitz doesn’t like the “surreptitious hypocrisy” that allows torture but pretends it doesn’t. Look, he says, at the case of Khalid Sheikh Mohammed, the Al-Qaeda planner captured in 2003 in Pakistan. American interrogators subjected him to “water-boarding”, effectively threatening him with drowning. This wasn’t classified as torture because he wasn’t hurt, but of course it was.

Dershowitz thinks a legal basis for torture would prevent abuses like the horrors perpetrated in Abu Ghraib prison in Iraq. If, for example, Tony Blair or George Bush had to sign a torture warrant, the whole business would be kept visible and legal. For Gray, torture represents obvious regress. Dershowitz partly agrees but argues that progressives must be ready to do deals. “Terrorism is a major step backwards in civilisation. Hitler was a major step backwards. Sometimes we have to step backwards too to combat such things. But progress happens in other areas. A generation now growing up may have to accept more security measures and less privacy, but in other areas like sexual conduct we are making progress. I don’t think overall we are making a step back.”

Progress, therefore, is faltering but, on aggregate, it moves in the right direction. Hitler was defeated and judicial torture may, in time, defeat terrorism. We just have to accept that three steps forward also involves two steps back. The point is to keep the faith.

But what if it is just faith? What if the very “fact” of progress is ultimately self-destructive? There are many ways in which this might turn out to be true. First, the human population is continuing to rise exponentially. It is currently approaching 6.5 billion; in 1900 it was 1.65 billion, in 1800 around a billion, and in 1500 some 500m. The figures show that economic and technological progress is loading the planet with billions more people. By keeping humans alive longer and by feeding them better, progress is continually pushing population levels higher. With population comes pollution. The overwhelming scientific consensus is that global warming caused by human activity is happening. According to some estimates, we will pass the point of no return within a decade. Weather systems will change, huge flooding will occur, and human civilisation, if not existence, will be at risk. This can be avoided if the US and China cut their carbon-dioxide emissions by 50% at once. This won’t happen, as they are fighting an economic war with progress as the prize. There are many other progress-created threats. Oil is one diminishing resource; fresh water is another, even more vital one. Wars are virtually certain to be fought to gain control of these precious liquids.
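The population figures quoted above can be turned into implied average annual growth rates. A short sketch (taking “currently” to mean 2005, the era this piece appeared — an assumption) shows the rate itself has been accelerating, which is what makes the rise effectively exponential or worse:

```python
# Implied average annual growth rate between the population figures the
# text quotes. The 2005 endpoint for "currently approaching 6.5 billion"
# is an assumption based on the article's publication era.
figures = [(1500, 0.5e9), (1800, 1.0e9), (1900, 1.65e9), (2005, 6.5e9)]

for (y0, p0), (y1, p1) in zip(figures, figures[1:]):
    rate = (p1 / p0) ** (1 / (y1 - y0)) - 1
    print(f"{y0}-{y1}: {rate:.2%} per year")
```

The jump from roughly 0.2% a year before 1800 to well over 1% a year in the 20th century is precisely the acceleration the paragraph is pointing at.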

In addition, antibiotic drugs are currently failing through overuse. No new generation of medicines is likely to be available to replace them in the near future. People may soon be dying again from sore throats and minor cuts. The massive longevity increase in the 20th century may soon begin to reverse itself.

Joel Mokyr’s response to all this is that our open-knowledge societies will enable these problems to be solved. John Gray replies: “This is faith, not science.” We believe we can fix things, but we can’t be sure. And if we can’t, then the Earth will fix them herself, flicking the human species into oblivion in the process.

Of course, the end of the world has been promised by Jews, Christians, Muslims and assorted crazies with sandwich boards for as long as there has been a human world to end. But those doomsdays were the product of faith; reason always used to say the world will continue. The point about the new apocalypse is that this situation has reversed. Now faith tells us we will be able to solve our problems; reason says we have no answers now and none are likely in the future. Perhaps we can’t cure cancer because the problem is simply beyond our intellects. Perhaps we haven’t flown to the stars because our biology and God’s physics mean we never can. Perhaps we are close to the limit and the time of plenty is over.

The evidence is mounting that our two sunny centuries of growth and wealth may end in a new Dark Age in which ignorance will replace knowledge, war will replace peace, sickness will replace health and famine will replace obesity. You don’t think so? It’s always happened in the past. What makes us so different? Nothing, I’m afraid.

WHY I AM SAVING THE WORLD [here is where Bryan becomes unhinged by the knowledge, as so many of us felt initially]

So, as a new Dark Age approaches, are you just going to carry on living your life as if nothing has changed? John-Paul Flintoff, for one, decided he couldn’t bury his head in the sand. He explains how he went on a one-man crusade to show that humanity can adapt and survive

I had just dropped my daughter at the nursery when I began to save the world. I mention this detail because it’s important to emphasise that Nancy loves her nursery. If she didn’t, I wouldn’t drive four miles from home — into London’s congestion zone, at a cost of £8 a day. I wouldn’t have found myself in Connaught Square that morning, fretting about newspaper stories suggesting the price of petrol was going up. I wouldn’t have seen a woman sitting inside a peculiar car parked beside me. Nor would I have noticed, on returning to my VW Golf from the nursery, that the car had moved some yards away and the woman had disappeared.

Intrigued, I wandered over and scribbled in my notebook. When I got home I began to investigate what I had seen. It may seem grandiose to describe my actions that morning, and in the days that followed, as “saving the world”. It may be factually incorrect, because I may not have averted global catastrophe after all. You decide — but first get your head round the following, rather terrifying background information. A barrel of oil contains the equivalent of almost 25,000 hours of human labour. A gallon of petrol contains the energy equivalent of 500 hours — enough to propel a three-ton 4×4 along 10 miles; to push it yourself would take nearly three weeks. To support economic growth, the world currently requires more than 30 billion barrels of oil a year. That requirement is constantly increasing, owing to population growth, debt-servicing, and the rapid industrialisation of developing countries such as India and China. But we are about to enter an era in which less oil will be available each year. And many believe that industrial society is doomed. Are we really running out?
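The labour-equivalence figures above hold together arithmetically, and the “nearly three weeks” claim is simple division — a quick check:

```python
# Check the article's figures: 25,000 labour-hours per barrel and 500 per
# gallon are roughly consistent (a barrel is 42 US gallons), and 500 hours
# of non-stop pushing is indeed close to three weeks.
hours_per_barrel = 25_000
gallons_per_barrel = 42
print(hours_per_barrel / gallons_per_barrel)   # ~595 hours per gallon

hours_per_gallon = 500
weeks = hours_per_gallon / 24 / 7              # pushing round the clock
print(f"{weeks:.1f} weeks")                    # ~3.0 weeks
```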

Well, half of all supplies come from “giant” oilfields, of which 95% are at least 25 years old; 50% have been producing for 40 years or more. In the North Sea, production peaked in 1999. Late last year, Britain began to import more oil than it exports. Worldwide, discoveries of new oilfields peaked in the 1960s; and despite technological advances, new discoveries are at an all-time low. A recent story in The New York Times suggested that oil companies are failing to recoup exploration costs: significant discoveries are so scarce that looking for them is a monetary loser. Not that I normally read The New York Times’ coverage of the oil business — like most people, I have tended to consider news about the oil industry to be extremely dull. That started to change when it crept out of the business pages and into the general news, and into advertisements. Practically every day, it seemed, a big oil company took a whole page to promote the fact that we are facing a crisis. One, paid for by Chevron, called on readers to help find a solution. I visited Chevron’s website, www.willyoujoinus.com, where a whirring clock monitored worldwide oil consumption: nearly 1,500 barrels a second. The more I read, the scarier it became. Michael Meacher, who was Britain’s environment minister for six years, is plainly terrified. “The implications are mind-blowing… Civilisation faces the sharpest and perhaps most violent dislocation in recent history.”

Matthew Simmons, a Houston-based energy-industry financier and adviser to George Bush and Dick Cheney, was asked in 2003 if there is a solution. He replied: “The solution is to pray.”

These people are not loonies. Optimists believe that the market — the law of supply and demand — will solve the problem. As oil becomes more expensive, we’ll shift to some other energy source. But do high prices really cut demand? Since early 1999, oil prices have risen by about 350%. Meanwhile, demand growth in 2004 was the highest in 25 years. That’s bad news, because the market won’t push energy companies into pursuing alternative sources of energy until oil reaches considerably higher prices. And then it will be too late to make the switch.

The former oil-industry executive Jan Lundberg reckons the crisis will be sudden. “Market-based panic will, within a few days, drive prices skyward,” he says. “And the market will become paralysed at prices too high for the wheels of commerce and daily living.” So forget the price at the pump: when oil becomes truly unaffordable, you will be more worried about the collapse of distribution networks, and the absence of food from local shops.

Ecologists use a technical term, “die-off”, to describe what happens when a population grows too big for the resources that sustain it. Where will die-off occur this time? Everywhere. By some estimates, 5 billion of the world’s 6½ billion population would never have been able to live without the blessed effects of fossil fuels, and oil in particular: oil powered the pumps that drained the land, and from oil came the chemicals that made intensive farming possible.

If oil dries up, we can assume, those 5 billion must starve. And they won’t all be in Africa this time. You too may be fighting off neighbours to protect a shrinking stash of canned food, and, when that runs out, foraging for insects in suburban gardens.

Dr Richard Duncan, of the Institute on Energy and Man, has monitored the issue for years. “I became deeply depressed,” he notes, “when I first concluded that our greatest scientific achievements will soon be forgotten and our most cherished monuments will crumble to dust.” Of course, this isn’t the first time people have predicted imminent apocalypse. During the late 19th century, Londoners feared they would all be killed by the methane in horse manure. But oil is certain to run out eventually, and most experts believe that will happen during the lifetimes of people now living. Pollyannas point out that the size of official oil reserves went up dramatically in the 1980s, and the same will happen again as oil companies discover new oilfields. But geologists say the world has been thoroughly searched already.

Not everyone believes we’re doomed. Cheerier prognostications suggest that our future will more closely resemble 1990s Cuba. The American trade embargo, combined with the collapse of Cuba’s communist allies in eastern Europe, suddenly deprived the island of imports. Without oil, public transport shut down and TV broadcasts finished early in the evening to save power. Industrial farms needed fuel and spare parts, pesticides and fertiliser — none of which were available. Consequently, the average Cuban diet dropped from about 3,000 calories per day in 1989 to 1,900 calories four years later. In effect, Cubans were skipping a meal a day, every day, week after month after year. Of necessity, the country converted to sustainable farming techniques, replacing artificial fertiliser with ecological alternatives, rotating crops to keep soil rich, and using teams of oxen instead of tractors. There are still problems supplying meat and milk, but over time Cubans regained the equivalent of that missing meal. And ecologists hailed their achievement in creating the world’s largest working model of largely sustainable agriculture, largely independent of oil.

Can we steer ourselves towards the Cuban ideal? If so, how?

Well, let me tell you what I did. First I switched exclusively to wind power as the source of my domestic electricity, through a company called Ecotricity, which promises the price will not differ significantly from what I paid before. Then I got a man round to give us a quote for installing double-glazed sash windows. The latest, high-specification glass, I was told, traps domestic heat but allows sunlight to pass through, which means you can turn the thermostat right down in winter. I contacted a company that specialises in solar power. If I acted quickly, I could get government subsidies. I put my name down for a domestic wind turbine — apparently, traffic at the end of my street makes a greater racket, but I would need planning permission. The turbine would cover roughly a third of my electricity needs. The cost: £1,500.

I bought a tray for sprouting seeds (highly nutritious, apparently) and started the long process, as yet unresolved, of persuading my wife that we must dig up our flowerbeds and turn the garden into an allotment. I even got in touch with a local vicar who keeps chickens in his garden, and asked how I might do the same.

Does this really amount to “saving the world”? I’ve saved the best till last. Remember Nancy’s nursery, and the peculiar car I saw in Connaught Square? The car is called a G-Wiz; it runs entirely on electricity, has four seats and storage in the bonnet, and is no bigger than a Smart car. A G-Wiz costs as little as £7,000. It does not incur road tax. It’s in the cheapest insurance bracket, and exempt from the congestion charge. In Westminster you can park for nothing in pay-and-display spaces, or in your local car park, with free electricity to charge the batteries.

The downside? It can’t go faster than 40mph, and the batteries go flat after about 40 miles. That didn’t bother me: we’d use it in London, and for trips further afield we could hire a car. There was one problem. Unless local councils install a socket on the pavement, the only people who can run an electric car are the lucky few with off-street parking.

So I started a campaign. I wrote a letter to drop through my neighbours’ doors, explaining about the coming oil crisis and describing the electric car. I promised to write to the council urging it to install electric sockets if at least a few of my neighbours would do the same. Within hours, two names appeared. Over the next couple of weeks, eight others had joined them. With this support, I wrote to my local councillors. For good measure, I sent through government proposals to subsidise that kind of installation by up to 60%. Placing my order for the G-Wiz, I popped a non-refundable cheque for £1,250 in the post. I would just have to hope Barnet council comes through before the car arrives.

I felt proud to belong to a district that was saving the world. And, to be honest, I felt rather pleased with myself. I sent for some fake parking tickets to leave on the windows of petrol-guzzling 4x4s. And I wrote a letter to the Saudi oil minister, urging him to invest in alternative energy technology before it’s too late.

It has been a long and tiring campaign. I realise it may not work. I don’t honestly believe most people will be motivated to match my shining example. Eventually, the government will impose the kind of restrictions normally used in wartime. When that happens, we’ll move out of London to begin a new life of genuine self-sufficiency.

Oil isn’t only useful as fuel

Most oil we consume is burnt as fuel. But hundreds of everyday objects are made from petrochemicals. We take them for granted now, but to drive your car, or fly away on a holiday that might just as well have taken place near home, is to burn a valuable resource that can be used to make products like these:

Household: Ballpoint pens, battery cases, bin bags, candles, carpets, curtains, detergents, drinking cups, dyes, enamel, lino, paint, brushes and rollers, pillows, refrigerants, refrigerator linings, roofing, safety glass, shower curtains, telephones, toilet seats, water pipes.

Personal: Cold cream, hair colour, lipstick, shampoo, shaving cream, combs, dentures, denture adhesive, deodorant, glasses, sunglasses, contact lenses, hand lotion, insect repellent, shoes, shoe polish, tights, toothbrushes, toothpaste, vitamin capsules.

Medical: Anaesthetics, antihistamines, antiseptics, artificial limbs, aspirin, bandages, cortisone, hearing aids, heart valves.

Leisure: cameras, fishing rods, footballs, golf balls, skis, stereos, tennis rackets, tents.

Agriculture: Fertilisers, insecticides, preservatives.

Other: Antifreeze, boats, lifejackets, glue, solvents, motorcycle helmets, parachutes, tyres.

How to survive when the oil runs out

Living without oil, if we don’t start to prepare for it, will not be like returning to the late 1700s, because we have now lost the infrastructure that made 18th-century life possible. We have also lost our basic survival skills. Dr Richard Duncan, of the Institute on Energy and Man, believes that we will return to living in essentially Stone Age conditions. Here is a taste of how to deal with the essentials.

Water: Animal trails lead to water. Watch the direction in which bees fly. Make containers from animal bladders and gourds.

Food: To remove the bitterness from acorns, soak them in a running stream for a few days. The common dandelion is a versatile and delicious plant. Open pine cones in the heat of a fire to release the nuts inside.

Luxuries: Make soap using lye (from hardwood ash) and animal fat. For candles, sheep fat is best, followed by beef. (Pork fat is very smelly and burns with thick smoke.)

Medicine: Use hypnosis for pain control. Frame suggestions positively. Use the present tense. Be specific and use repetition. Keep it simple.

Develop a survivor personality: Survivors spend almost no time getting upset. They have a good sense of humour and laugh at mistakes.

From: When Technology Fails: A Manual for Self-Reliance and Planetary Survival, by Matthew Stein

The Empire of the United States, Roman Empire, War, etc.

[Several articles and comments from energy groups follow]

Steven Strauss. December 31, 2012. 8 Striking Parallels Between the U.S. and the Roman Empire. Is our republic coming to an unceremonious end? History may not be on America’s side.

Lawrence Lessig’s “Republic, Lost” documents the corrosive effect of money on our political process. Lessig persuasively makes the case that we are witnessing the loss of our republican form of government, as politicians increasingly represent those who fund their campaigns, rather than our citizens. <http://republic.lessig.org/>

Anthony Everitt’s “Rise of Rome” is a fascinating history and a great read. It tells the story of ancient Rome, from its founding (circa 750 BCE) to the fall of the Roman
Republic (circa 45 BCE).

When read together, striking parallels emerge — between our failings
and the failings that destroyed the Roman Republic. As with Rome just
before the Republic’s fall, America has seen:

Staggering Increase in the Cost of Elections, with Dubious Campaign Funding Sources: Our 2012 election reportedly cost $3 billion. All of it was raised from private sources — often creating the appearance, or the reality, that our leaders are beholden to special
interest groups. During the late Roman Republic, elections became staggeringly expensive, with equally deplorable results. Caesar reportedly borrowed so heavily for one political campaign, he feared he would be ruined, if not elected.
<http://penelope.uchicago.edu/Thayer/E/Roman/Texts/Plutarch/Lives/Caesar*.html>

Politics as the Road to Personal Wealth: During the late Roman Republic period, one of the main roads to wealth was holding public office, and exploiting such positions to accumulate personal wealth. As Lessig notes: Congressman, Senators and their staffs leverage their government service to move to private sector positions — that pay three to ten times their government compensation. Given this financial arrangement, “Their focus is therefore not so much on the people who sent them to Washington. Their focus is instead on those who will make them rich.” (/Republic Lost/)


Continuous War: A national security state arises, distracting attention from domestic challenges with foreign wars. Similar to the late Roman Republic, the US — for the past 100 years — has either been fighting a war, recovering from a war, or preparing for a new war: WW I (1917-18), WW II (1941-1945), Cold War (1947-1991), Korean War (1950-1953), Vietnam (1953-1975), Gulf War (1990-1991), Afghanistan (2001-ongoing), and Iraq (2003-2011). And this list is far from complete.

Foreign Powers Lavish Money/Attention on the Republic’s Leaders: Foreign wars lead to growing influence, by foreign powers and interests, on the Republic’s political leaders — true for Rome and true for us. In the past century, foreign embassies, agents and lobbyists have proliferated in our nation’s capital. As one specific example: A foreign businessman donated $100 million to Bill Clinton’s various activities. Clinton “opened doors” for him, and sometimes acted in ways contrary to stated American interests and foreign policy.
<http://www.nytimes.com/2008/01/31/us/politics/31donor.html?pagewanted=all&_r=0>

Profits Made Overseas Shape the Republic’s Internal Policies: As the fortunes of Rome’s aristocracy increasingly derived from foreign lands, Roman policy was shaped to facilitate these fortunes. American billionaires and corporations increasingly influence our elections. In many cases, they are only nominally American — with interests not aligned with those of the American public. For example, Fox News is part of international media group News Corp., with over $30 billion in revenues worldwide. Is Fox News’ jingoism a product of News Corp.’s non-U.S. interests?

Collapse of the Middle Class: In the period just before the Roman Republic’s fall, the Roman middle class was crushed — destroyed by cheap overseas slave labor. In our own day, we’ve witnessed rising income inequality, a stagnating middle class, and the loss of American jobs to overseas workers who are paid less and have fewer rights.
<http://www.theatlantic.com/business/archive/2012/12/a-giant-statistical-round-up-of-the-income-inequality-crisis-in-16-charts/266074/>

Gerrymandering: Rome’s late Republic used various methods to reduce the power of common citizens. The GOP has so effectively gerrymandered Congressional
districts that, even though House Republican candidates received only
about 48 percent of the popular vote in the 2012 election — they ended
up with the majority (53 percent) of the seats.
<http://thinkprogress.org/justice/2012/11/07/1159631/americans-voted-for-a-democratic-house-gerrymandering-the-supreme-court-gave-them-speaker-boehner/?mobile=nc>

Loss of the Spirit of Compromise: The Roman Republic, like ours, relied on a system of checks and balances. Compromise is needed for this type of system to function. In the end, the Roman Republic lost that spirit of compromise, with politics increasingly polarized between Optimates (the rich, entrenched elites) and Populares (the common people):
<http://www.britannica.com/EBchecked/topic/430565/Optimates-and-Populares>
Sound familiar? Compromise is in noticeably short supply in our own time also: <http://www.washingtonpost.com/blogs/wonkblog/wp/2012/11/09/is-this-the-end-for-the-filibuster/>
“There were more filibusters between 2009 and 2010 than there were in
the 1950s, 1960s and 1970s combined.”

As Benjamin Franklin <http://www.bartleby.com/73/1593.html> observed, we have a Republic — but only if we can keep it.


Paul Craig Roberts. March 26, 2012. Empires Then and Now.


Great empires, such as the Roman and British, were extractive. The empires succeeded, because the value of the resources and wealth extracted from conquered lands exceeded the value of conquest and governance. The reason Rome did not extend its empire east into Germany was not the military prowess of Germanic tribes but Rome’s calculation that the cost of conquest exceeded the value of extractable resources.

The Roman empire failed, because Romans exhausted manpower and resources in civil wars fighting amongst themselves for power. The British empire failed, because the British exhausted themselves fighting Germany in two world wars.

In his book, The Rule of Empires (2010), Timothy H. Parsons replaces the myth of the civilizing empire with the truth of the extractive empire. He describes the successes of the Romans, the Umayyad Caliphate, the Spanish in Peru, Napoleon in Italy, and the British in India and Kenya in extracting resources. To lower the cost of governing Kenya, the British instigated tribal consciousness and invented tribal customs that worked to British advantage.

Parsons does not examine the American empire, but in his introduction to the book he wonders whether America’s empire is really an empire as the Americans don’t seem to get any extractive benefits from it. After eight years of war and attempted occupation of Iraq, all Washington has for its efforts is several trillion dollars of additional debt and no Iraqi oil. After ten years of trillion dollar struggle against the Taliban in Afghanistan, Washington has nothing to show for it except possibly some part of the drug trade that can be used to fund covert CIA operations.

America’s wars are very expensive. Bush and Obama have doubled the national debt, and the American people have no benefits from it. No riches, no bread and circuses flow to Americans from Washington’s wars. So what is it all about?

The answer is that Washington’s empire extracts resources from the American people for the benefit of the few powerful interest groups that rule America. The military-security complex, Wall Street, agri-business and the Israel Lobby use the government to extract resources from Americans to serve their profits and power. The US Constitution has been extracted in the interests of the Security State, and Americans’ incomes have been redirected to the pockets of the 1 percent. That is how the American Empire functions.

The New Empire is different. It happens without achieving conquest. The American military did not conquer Iraq and has been forced out politically by the puppet government that Washington established. There is no victory in Afghanistan, and after a decade the American military does not control the country.

In the New Empire success at war no longer matters. The extraction takes place by being at war. Huge sums of American taxpayers’ money have flowed into the American armaments industries and huge amounts of power into Homeland Security. The American empire works by stripping Americans of wealth and liberty.

This is why the wars cannot end, or if one does end another starts. Remember when Obama came into office and was asked what the US mission was in Afghanistan? He replied that he did not know what the mission was and that the mission needed to be defined.

Obama never defined the mission. He renewed the Afghan war without telling us its purpose. Obama cannot tell Americans that the purpose of the war is to build the power and profit of the military/security complex at the expense of American citizens.

This truth doesn’t mean that the objects of American military aggression have escaped without cost. Large numbers of Muslims have been bombed and murdered and their economies and infrastructure ruined, but not in order to extract resources from them.

It is ironic that under the New Empire the citizens of the empire are extracted of their wealth and liberty in order to extract lives from the targeted foreign populations. Just like the bombed and murdered Muslims, the American people are victims of the American empire.


Alice Friedemann. 2012. Will America be seen as the most inept empire in world history?

In the article “Empires Then and Now” above, Roberts argues that the American “empire” (top 1%) extracts resources from its own citizens rather than other nations (we got no oil from Iraq, just trillions in debt).

Bonner, in his book “Empire of Debt”, makes the case that we are the most inept empire in world history, because the United States pays for the world’s largest military to keep world-wide peace, leaving other countries free to redirect money that might have gone to their militaries into health care, education, and commerce.

James Howard Kunstler explains how we wasted most of our vast resources on suburbia, which will be unsustainable and useless as oil declines (though I believe that abandoned homes will make excellent goat houses).

Genghis Khan saw the wealthy as unskilled and useless, so he often killed them when taking over a city (Weatherford’s “Genghis Khan and the Making of the Modern World”).

So I think that in virtually any empire the rich become parasites, though never before even close to the level of parasitism we have now.

In nature, parasites aren’t entirely bad — for instance, a native snail in the Bay area harbors a parasite that fish eat which changes their behavior such that they swim close to the surface in erratic ways, making them 30 times easier for birds to catch, allowing enormous flocks of birds to exist.

Is there any upside to the parasitism of the wealthy? If goods were distributed more equally, would overall resource consumption and extraction be even higher? Or the opposite — would large mining, construction, and warfare enterprises be more difficult to fund if money were less concentrated and more dispersed?

As far as I can tell, the so-called superiority of capitalism over other systems is that it was the most efficient at extracting resources as quickly as possible, leading to a very short “age of oil” at the cost of polluting the planet’s ecosystems at a rate unprecedented in Earth’s history, one that risks driving ourselves and millions of other species extinct.

And even though we know we’re changing the earth’s climate to a state our species has never existed in before and may not be able to survive, the extraction machine cannot be stopped.

Where’s Genghis Khan when you need him?

Jason Godesky. Jan 19, 2006. http://anthropik.com/2006/01/thesis-29-it-will-be-impossible-to-rebuild-civilization

Previous collapses often set the scene for another “rise” to civilization. The fall of Rome shapes the Western imagination’s idea of collapse, with the descent into the barbarism of the Dark Ages, the long gestation of the Middle Ages, and the final rebirth of “civilization” in the Renaissance. However, as Greer points out in “How Civilizations Fall: A Theory of Catabolic Collapse,” [PDF] the Western Roman Empire suffered a maintenance crisis, not a catabolic collapse. So the question remains, is this a collapse, or the collapse? Are we merely facing a momentary downturn in a new sine wave of complexity, or does this collapse represent the end of civilization once and for all?

In Of Men and Galaxies, Sir Fred Hoyle obviously confuses civilization with intelligence, but that error notwithstanding, the following observation speaks to one of the essential problems that will face any civilization that hopes to succeed us:

It has often been said that, if the human species fails to make a go of it here on Earth, some other species will take over the running. In the sense of developing high intelligence this is not correct. We have, or soon will have, exhausted the necessary physical prerequisites so far as this planet is concerned. With coal gone, oil gone, high-grade metallic ores gone, no species however competent can make the long climb from primitive conditions to high-level technology. This is a one-shot affair. If we fail, this planetary system fails so far as intelligence is concerned. The same will be true of other planetary systems. On each of them there will be one chance, and one chance only.

It is important to remember that the various facets of complexity are inextricably linked, one to another. As Joseph Tainter remarked in “Complexity, Problem-Solving and Sustainable Societies”: “Energy has always been the basis of cultural complexity and it always will be.” He further observed in The Collapse of Complex Societies:

A society increasing in complexity does so as a system. That is to say, as some of its interlinked parts are forced in a direction of growth, others must adjust accordingly. For example, if complexity increases to regulate regional subsistence production, investments will be made in hierarchy, in bureaucracy, and in agricultural facilities (such as irrigation networks). The expanding hierarchy requires still further agricultural output for its own needs, as well as increased investment in energy and minerals extraction. An expanded military is needed to protect the assets thus created, requiring in turn its own sphere of agricultural and other resources. As more and more resources are drained from the support population to maintain this system, an increased share must be allocated to legitimization or coercion. This increased complexity requires specialized administrators, who consume further shares of subsistence resources and wealth. To maintain the productive capacity of the base population, further investment is made in agriculture, and so on.

The illustration could be expanded, tracing still further the interdependencies within such a growing system, but the point has been made: a society grows in complexity as a system. To be sure, there are instances where one sector of a society grows at the expense of others, but to be maintained as a cohesive whole, a social system can tolerate only certain limits to such conditions.

Thus, it is possible to speak of sociocultural evolution by the encompassing term ‘complexity,’ meaning by this the interlinked growth of the several subsystems that comprise a society.

So, complexity is a function of energy throughput, and all the facets of complexity are interlinked. The question of whether or not a civilization will be capable of rising again is a question of how much energy will be available to it.

First, we must understand what kind of collapse it is that we face. A prolonged maintenance crisis like the fall of Rome would allow time for adaptation, but it is more likely that we face a sudden, catabolic collapse. The difference, as Greer explains in the paper cited above, is driven by the sort of diminishing returns on complexity that we have already discussed at length. Rome faced a maintenance crisis. It was beyond the point of diminishing returns, but the ecology and resources available in Europe were still sufficient to support a civilization. Rome collapsed under its own weight, more so than from any kind of environmental stress or resource depletion. Thus, its collapse centered primarily on scaling back complexity and breaking down into smaller, more manageable kingdoms. In this scenario, energy throughput is reduced because complexity must fall to a more economic level. It is the price of complexity that is driving the process, so it levels out at a lower–but still civilized–level.

That is not the case with catabolic collapse. Catabolic collapse takes place when reductions in complexity are driven by a shortfall in energy throughput. That can be the result of desertification, sustained drought, loss of agricultural land, massive mortality from war, famine or disease, climate change, or a necessary fuel source’s production peaking. While it is true that our complexity has passed the point of diminishing returns (see thesis #15), and we are dealing with the cost of that, we have not yet shown many signs of a maintenance crisis. Rather, the perils we face–such as global warming, mass extinction (see thesis #17), and peak oil (see thesis #18)–are causes of catabolic collapse. Our shortfalls in complexity will likely be triggered by shortfalls in energy throughput. As Greer describes the process:

A society that uses resources beyond replenishment rate (d(R)/r(R) > 1), when production of new capital falls short of maintenance needs, risks a depletion crisis in which key features of a maintenance crisis are amplified by the impact of depletion on production. As M(p) exceeds C(p) and capital can no longer be maintained, it is converted to waste and unavailable for use. Since depletion requires progressively greater investments of capital in production, the loss of capital affects production more seriously than in an equivalent maintenance crisis. Meanwhile further production, even at a diminished rate, requires further use of depleted resources, exacerbating the impact of depletion and the need for increased capital to maintain production. With demand for capital rising as the supply of capital falls, C(p) tends to decrease faster than M(p) and perpetuate the crisis. The result is a catabolic cycle, a self-reinforcing process in which C(p) stays below M(p) while both decline. Catabolic cycles may occur in maintenance crises if the gap between C(p) and M(p) is large enough, but tend to be self-limiting in such cases. In depletion crises, by contrast, catabolic cycles can proceed to catabolic collapse, in which C(p) approaches zero and most of a society’s capital is converted to waste.

Of course, many of the survivors will want to rebuild civilization. The nature of catabolic collapse, however, will leave them with precious little to start with. As a self-reinforcing cycle, catabolic collapse is as unstoppable as the anabolic growth that currently drives us into ever-greater complexity. Both are self-reinforcing feedback loops, and both must run their course before any other direction can be taken. So we need not consider the case of an “interrupted” collapse, where civilization is rebuilt from the remains of the old. This will not be a return to the Dark Ages; it will be a return to the Stone Age.
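The self-reinforcing loop Greer describes can be illustrated with a toy simulation. This is only a sketch with invented coefficients and starting values, not Greer’s calibrated model: `capital` stands in for C(p), `maintenance` for M(p), and `resource` for a finite, effectively non-replenishing resource stock whose depletion drives maintenance costs upward.

```python
# Toy sketch of a catabolic cycle (assumed coefficients, not Greer's model).
def catabolic_cycle(years=60):
    resource = 100.0   # finite resource stock, replenishment ~ zero
    capital = 50.0     # society's productive capital, C(p)
    history = []
    for _ in range(years):
        # production requires both capital and the remaining resource
        production = 0.1 * capital * (resource / 100.0)
        resource = max(resource - production, 0.0)
        # maintenance cost M(p) rises as depletion forces costlier extraction
        maintenance = 0.08 * capital * (1.0 + (100.0 - resource) / 50.0)
        # capital that cannot be maintained is converted to waste
        capital = max(capital + production - maintenance, 0.0)
        history.append(capital)
    return history

trajectory = catabolic_cycle()
# Early on, production covers maintenance and capital holds roughly steady;
# once depletion pushes M(p) above what C(p) can produce, decline feeds on
# itself and capital heads toward zero rather than leveling off.
```

The design point of the sketch is that nothing external intervenes: the same two equations that allow early growth produce the terminal decline once the resource term shrinks, which is why Greer calls the cycle self-reinforcing rather than self-limiting.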

How can we be so sure of this? The current state of civilization is dependent on resources now so depleted that gathering them requires an industrial infrastructure already in place. When coal was first used as a fuel, it could simply be picked off the ground. Those surface deposits were quickly used up. When those were gone, coal mining began. It was more costly, but as coal became a necessary fuel, the cost was justified. The shallowest mines were exploited first. As they ran out, miners turned to deeper and deeper mines. Today’s mines are often hundreds of feet below ground, with access tunnels that must burrow through miles of earth. Mining so far below the earth is a dangerous job, made possible only by industrial machinery for ventilation, stabilization, and digging. We can fetch this fossil fuel only because we have fossil fuels to put to the task.

Again, the issue of peak oil leaves significant quantities of oil still in the ground. But it is deep in the earth, or under the sea, and often of a poorer quality, requiring more refinement. We can drill and refine this oil only because we have industrial equipment to build rigs and power refineries for the task. Any interruption in our civilization’s supply of fossil fuel would require any effort to rebuild civilization to start from scratch. Catabolic collapse is precisely such an interruption.

Civilization, as we have seen, is only possible through agriculture, because only agriculture allows a society to increase its food supply–and thus its population–and thus its energy throughput–and thus its complexity–so arbitrarily. That level of complexity provides the agricultural society the ability to achieve other levels of complexity, such as crafting metal tools, state-level government, and advanced technology. Civilization only began when agriculture became possible, but does that mean that civilization can only appear based on agriculture? Yes, it does. Every culture must have some means of gathering food, and every means of gathering food can be placed into one of two categories: those where the people produce their own food, i.e., “cultivation,” and those where they do not. The latter is referred to as “foraging.” There is an enormous diversity under that heading–far more than deserves such a bland, umbrella term, but all such forms share a number of things in common. Because the amount of food they consume depends on the amount of food available in their ecosystem, there is a caloric limit of how much they can consume. They cannot raise their food supply, because their food supply is not under their control. Cultivators can be further subdivided between those who operate above, and below, the point of diminishing returns. Below the point of diminishing returns, cultivators are called horticulturalists. Horticulture also places a caloric limit–however many calories can be produced below the point of diminishing returns. To produce more than this would require working above the point of diminishing returns, at which point they cease to be horticulturalists, and instead become agriculturalists. Agriculturalists can increase the number of calories they produce simply by increasing their inputs–thus, only agriculturalists can arbitrarily increase their energy throughput, so only agriculturalists can start a civilization.

Given that, how plausible is agriculture after the collapse? Again, all but impossible. Plants, like any other organism, take in nutrients and excrete wastes. For plants, those are nutrients they take out of the soil, and waste they put into the soil. In nature, what one plant excretes as waste, another takes in as nutrients. They balance each other, and all of them thrive. But monoculture–planting whole fields of just one crop–fills fields with the same plant, all bleeding out the same nutrients, all dumping back in the same wastes. It is precisely the same effect as filling an empty room with people and sealing it completely off. Eventually, the entire room will be full of carbon dioxide, and there will be no more oxygen. Monoculture does to topsoil what locking yourself in a garage with your car engine running does to a human. Koetke’s “Final Empire” highlighted the importance of topsoil to life on earth, and the devastating impact agriculture has had on that topsoil:

In 1988, the annual soil loss due to erosion was twenty-five billion tons and rising rapidly. Erosion means that soil moves off the land. An equally serious injury is that the soil’s fertility is exhausted in place. Soil exhaustion is happening in almost all places where civilization has spread. This is a literal killing of the planet by exhausting its fund of organic fertility that supports other biological life. Fact: since civilization invaded the Great Plains of North America one-half of the topsoil of that area has disappeared.

As that happened, we also invented ever more powerful petrochemical fertilizers to offset the death of the soil, giving the illusion that all was well. The Dust Bowl arose because our innovation was outpaced by the devastation. We quickly got back on top of it, leading us to the current situation. The Great Plains are essentially a desert. We grow most of the world’s corn on a thick layer of oil we have laid over its soil, long ago bled to death by the first wave of farmers in America. In “The Oil We Eat,” Richard Manning dramatically illustrated how much our “breadbasket” now relies on oil when he wrote:

Corn, rice, and wheat are especially adapted to catastrophe. It is their niche. In the natural scheme of things, a catastrophe would create a blank slate, bare soil, that was good for them. Then, under normal circumstances, succession would quickly close that niche. The annuals would colonize. Their roots would stabilize the soil, accumulate organic matter, provide cover. Eventually the catastrophic niche would close. Farming is the process of ripping that niche open again and again. It is an annual artificial catastrophe, and it requires the equivalent of three or four tons of TNT per acre for a modern American farm. Iowa’s fields require the energy of 4,000 Nagasaki bombs every year.

Iowa is almost all fields now. Little prairie remains, and if you can find what Iowans call a “postage stamp” remnant of some, it most likely will abut a cornfield. This allows an observation. Walk from the prairie to the field, and you probably will step down about six feet, as if the land had been stolen from beneath you. Settlers’ accounts of the prairie conquest mention a sound, a series of pops, like pistol shots, the sound of stout grass roots breaking before a moldboard plow. A robbery was in progress.

The Fertile Crescent was not always a cruel joke. It was turned into a desert by agriculture in the very same way. At the moment, 40% of the earth’s surface is covered in farmland; most of that is no longer arable after being farmed for so long. Of the 60% that remains, most of it was never arable to begin with–that is why it was not farmed. The domesticable crops are a small subset of all the plants that exist, and they are disproportionately cereal grains, making them both small in number, and lacking in diversity. They tend to be low in nutritional content, and extremely temperamental, requiring very specific climate and soil conditions. Beyond simply lacking the soil they require, they will not have the climate they require, either.

In thesis #6, we made reference to Ruddiman’s “long Anthropocene” hypothesis, arguing that the Holocene interglacial was artificially extended by the deforestation caused by early agriculture. If Ruddiman is right, then an interruption in agricultural production would result in the resumption of the Pleistocene ice age. However, that case is complicated by the more recent trend of global warming. Mounting evidence suggests that the massive increases in the scale of anthropogenic atmospheric change introduced by the Industrial Revolution may not simply have offset the earth’s natural cooling trend, but may have begun to reverse it. Regardless of which scenario follows the collapse, ice age or global warming, the one thing that will not be possible is a continuation of the status quo. No matter what follows, we will see the end of the Holocene, and with it, the end of any climate capable of supporting agriculture on any significant scale.

We are therefore talking about a complete break: the end of our current civilization. Whole generations will pass before civilization becomes feasible again. What, then, of the distant future, when another interglacial occurs, or when global warming stabilizes? Will we be able to rebuild civilization then?

After the passage of millennia, the soil may well heal itself, and the necessary climate may return. In that scenario, agriculture may be possible in those same areas, and under the same conditions, that it first occurred. Flood plains at a given climate are necessary. It needs to be an annual flood, and it needs to deposit new soil, to compensate for the depletion of the soil on a regular basis–but not so regular that the fields are flooded while the crops are still growing. And, they will need to exist in areas where domesticable plants live. All in all, a very precise set of circumstances already.

If agriculture does begin in such areas (and there can only be a dozen or fewer in the whole world), they will find themselves limited below a ceiling we did not suffer. In the course of our civilization, we used up all of the surface and near-surface deposits of all the economically viable metals on earth. The simple physical property of pounds per square inch will limit the technology of our little kingdoms to the Neolithic. No plow, however ingenious, can ever be made out of rock. In some directions, complexity will be allowed to flourish. In other directions–particularly lever-based machines, tools, and weapons–we will be very tightly circumscribed by the lack of any feasible materials. That limitation on technological complexity will necessarily limit all other forms of complexity, as well–as discussed above, while some levels can gain complexity at the expense of others, that can only happen within certain parameters. This is why the Neolithic never saw state-level governments; only with the beginning of the Bronze Age did we see that development. Likewise, the lack of metals will continue to limit technological development after the collapse–and by limiting technological development, it will limit all other forms of complexity.

The role of human ingenuity is marvelous, but not all-encompassing. Not every problem can be solved simply by the application of wits. Ambition and wits existed in plenty throughout the Paleolithic, yet we never developed the technology or complexity necessary to build a civilization, because complexity advances as a single thing, and always as a function of energy. The lever and the wedge are ultimately necessary–in the form of the plow and the sword–but these are not effective unless made of a material that can withstand sufficient pressure. The only such materials on earth are metals now buried so deep underground that only an industrial infrastructure can fetch them.

Our future Neolithic kingdoms will thus be constrained by problems of scale inherent to such low levels of complexity, lacking the technology to communicate quickly or easily, without effective weapons to suppress rebellion, without complex bureaucracies to administer large territories. They will effectively be limited to small city-states, incapable of expanding beyond that for the same problems of scale that inhibited so many of the civilizations of Mesoamerica, but more so.

There is the minor question of civilization’s waste, however. While mining the earth for metals may not be possible, mining our waste may be far more feasible. Of course, unattended metals rust quickly, and become unusable after a generation. However, our landfills preserve the garbage within remarkably well. Might potential future civilizations mine landfills for new metals? There is, of course, an inherent limitation to such a proposition, in that the rate of that resource’s replenishment is zero. Even fossil fuels have some replenishment rate. Any such resources will quickly be depleted–such a civilization might have a chance for a brief flash of glory, barely entering something akin to a Bronze Age level of complexity before burning itself out.

With the passage of geological ages, though, this will pass. Fossil fuels will be replenished, and metal ores will rise to the surface. After ages of the earth have passed, and another ice age comes, and then an interglacial, there might be another opportunity to rebuild civilization–if there are still humans so far into the future, a matter of at least tens of millions of years, far longer than humans have so far survived. That will be the first chance we have after this collapse.

Jonathan Freedland. September 18, 2002. They came, they saw, they conquered, and now the Americans dominate the world like no nation before. But is the US really the Roman empire of the 21st century? And if so, is it on the rise – or heading for a fall?  The Guardian

The word of the hour is empire. As the United States marches to war, no other label quite seems to capture the scope of American power or the scale of its ambition. “Sole superpower” is accurate enough, but seems oddly modest. “Hyperpower” may appeal to the French; “hegemon” is favoured by academics. But empire is the big one, the gorilla of geopolitical designations – and suddenly America is bearing its name.

Of course, enemies of the US have shaken their fist at its “imperialism” for decades: they are doing it again now, as Washington wages a global “war against terror” and braces itself for a campaign aimed at “regime change” in a foreign, sovereign state. What is more surprising, and much newer, is that the notion of an American empire has suddenly become a live debate inside the US. And not just among Europhile liberals either, but across the range – from left to right.

Today a liberal dissenter such as Gore Vidal, who called his most recent collection of essays on the US The Last Empire, finds an ally in the likes of conservative columnist Charles Krauthammer. Earlier this year Krauthammer told the New York Times, “People are coming out of the closet on the word ‘empire’.” He argued that Americans should admit the truth and face up to their responsibilities as the undisputed masters of the world. And it wasn’t any old empire he had in mind. “The fact is, no country has been as dominant culturally, economically, technologically and militarily in the history of the world since the Roman empire.”

Accelerated by the post-9/11 debate on America’s role in the world, the idea of the United States as a 21st-century Rome is gaining a foothold in the country’s consciousness. The New York Review of Books illustrated a recent piece on US might with a drawing of George Bush togged up as a Roman centurion, complete with shield and spears. Earlier this month Boston’s WBUR radio station titled a special on US imperial power with the Latin tag Pax Americana. Tom Wolfe has written that the America of today is “now the mightiest power on earth, as omnipotent as… Rome under Julius Caesar”.

But is the comparison apt? Are the Americans the new Romans? In making a documentary film on the subject over the past few months, I put that question to a group of people uniquely qualified to know. Not experts on US defense strategy or American foreign policy, but Britain’s leading historians of the ancient world. They know Rome intimately – and, without exception, they are struck by the similarities between the empire of now and the imperium of then.

The most obvious is overwhelming military strength. Rome was the superpower of its day, boasting an army with the best training, biggest budgets and finest equipment the world had ever seen. No one else came close. The United States is just as dominant – its defense budget will soon be bigger than the military spending of the next nine countries put together, allowing the US to deploy its forces almost anywhere on the planet at lightning speed. Throw in the country’s global technological lead, and the US emerges as a power without rival.

There is a big difference, of course. Apart from the odd Puerto Rico or Guam, the US does not have formal colonies, the way the Romans (or British, for that matter) always did. There are no American consuls or viceroys directly ruling faraway lands.

But that difference between ancient Rome and modern Washington may be less significant than it looks. After all, America has done plenty of conquering and colonizing: it’s just that we don’t see it that way. For some historians, the founding of America and its 19th-century push westward were no less an exercise in empire-building than Rome’s drive to take charge of the Mediterranean. While Julius Caesar took on the Gauls – bragging that he had slaughtered a million of them – the American pioneers battled the Cherokee, the Iroquois and the Sioux. “From the time the first settlers arrived in Virginia from England and started moving westward, this was an imperial nation, a conquering nation,” according to Paul Kennedy, author of The Rise and Fall of the Great Powers.

More to the point, the US has military bases, or base rights, in some 40 countries across the world – giving it the same global muscle it would enjoy if it ruled those countries directly. (When the US took on the Taliban last autumn, it was able to move warships from naval bases in Britain, Japan, Germany, southern Spain and Italy: the fleets were already there.) According to Chalmers Johnson, author of Blowback: The Costs and Consequences of American Empire, these US military bases, numbering into the hundreds around the world, are today’s version of the imperial colonies of old. Washington may refer to them as “forward deployment”, says Johnson, but colonies are what they are. On this definition, there is almost no place outside America’s reach. Pentagon figures show that there is a US military presence, large or small, in 132 of the 190 member states of the United Nations.

So America may be more Roman than we realize, with garrisons in every corner of the globe. But there the similarities only begin. For the United States’ entire approach to empire looks quintessentially Roman. It’s as if the Romans bequeathed a blueprint for how imperial business should be done – and today’s Americans are following it religiously.

Lesson one in the Roman handbook for imperial success would be a realization that it is not enough to have great military strength: the rest of the world must know that strength – and fear it too. The Romans used the propaganda technique of their time – gladiatorial games in the Colosseum – to show the world how hard they were. Today 24-hour news coverage of US military operations – including video footage of smart bombs scoring direct hits – or Hollywood shoot-‘em-ups at the multiplex serve the same function. Both tell the world: this empire is too tough to beat.

The US has learned a second lesson from Rome, realizing the centrality of technology. For the Romans, it was those famously straight roads, enabling the empire to move troops or supplies at awesome speeds – rates that would not be surpassed for well over a thousand years. It was a perfect example of how one imperial strength tends to feed another: an innovation in engineering, originally designed for military use, went on to boost Rome commercially. Today those highways find their counterpart in the information superhighway: the internet also began as a military tool, devised by the US defense department, and now stands at the heart of American commerce. In the process, it is making English the Latin of its day – a language spoken across the globe. The US is proving what the Romans already knew: that once an empire is a world leader in one sphere, it soon dominates in every other.

But it is not just specific tips that the US seems to have picked up from its ancient forebears. Rather, it is the fundamental approach to empire that echoes so loudly. Rome understood that, if it is to last, a world power needs to practise both hard imperialism, the business of winning wars and invading lands, and soft imperialism, the cultural and political tricks that work not to win power but to keep it.

So Rome’s greatest conquests came not at the end of a spear, but through its power to seduce conquered peoples. As Tacitus observed in Britain, the natives seemed to like togas, baths and central heating – never realising that these were the symbols of their “enslavement”. Today the US offers the people of the world a similarly coherent cultural package, a cluster of goodies that remain reassuringly uniform wherever you are. It’s not togas or gladiatorial games today, but Starbucks, Coca-Cola, McDonald’s and Disney, all paid for in the contemporary equivalent of Roman coinage, the global hard currency of the 21st century: the dollar.

When the process works, you don’t even have to resort to direct force; it is possible to rule by remote control, using friendly client states. This is a favorite technique for the contemporary US – no need for colonies when you have the Shah in Iran or Pinochet in Chile to do the job for you – but the Romans got there first. They ruled by proxy whenever they could. We, of all people, should know: one of the most loyal of client kings ruled right here, in the southern England of the first century AD.

His name was Togidubnus and you can still visit the grand palace that was his at Fishbourne in Sussex. The mosaic floors, in remarkable condition, are reminders of the cool palatial quarters where guests would have gathered for preprandial drinks or perhaps an audience with the king. Historians now believe that Togidubnus was a high-born Briton educated in Rome, brought back to Fishbourne and installed as a pro-Roman puppet. Just as Washington’s elite private schools are full of the “pro-western” Arab kings, South American presidents or African leaders of the future, so Rome took in the heirs of the conquered nations’ top families, preparing them for lives as rulers in Rome’s interest.

And Togidubnus did not let his masters down. When Boudicca led her uprising against the Roman occupation in AD60, she made great advances in Colchester, St Albans and London – but not Sussex. Historians now believe that was because Togidubnus kept the native Britons under him in line. Just as Hosni Mubarak and Pervez Musharraf have kept the lid on anti-American feeling in Egypt and Pakistan, Togidubnus did the same job for Rome nearly two millennia ago.

Not that it always worked. Rebellions against the empire were a permanent fixture, with barbarians constantly pressing at the borders. Some accounts suggest that the rebels were not always fundamentally anti-Roman; they merely wanted to share in the privileges and affluence of Roman life. If that has a familiar ring, consider this: several of the enemies who rose up against Rome are thought to have been men previously nurtured by the empire to serve as pliant allies. Need one mention former US protege Saddam Hussein or one-time CIA trainee Osama bin Laden?

Rome even had its own 9/11 moment. In the 80s BC, Hellenistic king Mithridates called on his followers to kill all Roman citizens in their midst, naming a specific day for the slaughter. They heeded the call – and killed 80,000 Romans in local communities across Greece. “The Romans were incredibly shocked by this,” says ancient historian Jeremy Paterson of Newcastle University. “It’s a little bit like the statements in so many of the American newspapers since September 11: ‘Why are we hated so much?'”

Internally, too, today’s United States would strike many Romans as familiar terrain. America’s mythologising of its past – its casting of founding fathers Washington and Jefferson as heroic titans, its folk-tale rendering of the Boston Tea Party and the war of independence – is very Roman. That empire, too, felt the need to create a mythic past, starred with heroes. For them it was Aeneas and the founding of Rome, but the urge was the same: to show that the great nation was no accident, but the fruit of manifest destiny.

And America shares Rome’s conviction that it is on a mission sanctioned from on high. Augustus declared himself the son of a god, raising a statue to his adoptive father Julius Caesar on a podium alongside Mars and Venus. The US dollar bill bears the words “In God we trust” and US politicians always like to end their speeches with “God bless America.”

Even that most modern American trait, its ethnic diversity, would make the Romans feel comfortable. Their society was remarkably diverse, taking in people from all over the world – and even promising new immigrants the chance to rise to the very top (so long as they were from the right families). While America is yet to have a non-white president, Rome boasted an emperor from north Africa, Septimius Severus. According to classicist Emma Dench, Rome had its own version of America’s “hyphenated” identities. Like the Italian-Americans or Irish-Americans of today, Rome’s citizens were allowed a “cognomen” – an extra name to convey their Greek-Roman or British-Roman heritage: Tiberius Claudius Togidubnus.

There are some large differences between the two empires, of course – starting with self-image. Romans revelled in their status as masters of the known world, but few Americans would be as ready to brag of their own imperialism. Indeed, most would deny it. But that may come down to the US’s founding myth. For America was established as a rebellion against empire, in the name of freedom and self-government. Raised to see themselves as a rebel nation and plucky underdog, Americans can’t quite accept their current role as master.

One last factor scares Americans from making a parallel between themselves and Rome: that empire declined and fell. The historians say this happens to all empires; they are dynamic entities that follow a common path, from beginning to middle to end.

“What America will need to consider in the next 10 or 15 years,” says Cambridge classicist Christopher Kelly, “is what is the optimum size for a nonterritorial empire, how interventionist will it be outside its borders, what degree of control will it wish to exercise, how directly, how much through local elites? These were all questions which pressed upon the Roman empire.”

Anti-Americans like to believe that an operation in Iraq might be proof that the US is succumbing to the temptation that ate away at Rome: overstretch. But it’s just as possible that the US is merely moving into what was the second phase of Rome’s imperial history, when it grew frustrated with indirect rule through allies and decided to do the job itself. Which is it? Is the US at the end of its imperial journey, or on the brink of its most ambitious voyage? Only the historians of the future can tell us that.


David Fridley. Nov 26, 2006 [in response to the person asking for parallels between America and the Roman Empire]

First off, what was “collapse”? In our accelerated world, we tend to interpret it as an instantaneous event, in a time frame that satisfies our need to know the outcome by the evening news. But collapse is really nothing more than the process of simplification from complexity. In the case of Rome (and other declining civilizations), there was no single event labeled “collapse”…it was a process of simplification that unfolded over hundreds of years, far beyond the memory of any single person living at the time. I would recommend an excellent treatment of the issue at http://www.xs4all.nl/~wtv/powerdown/greer.htm, discussing “catabolic collapse”. As you can see in the table there (dates from Tainter), collapse took anywhere from 100 to 500 years for major civilizations, and over 300 years for the western Roman Empire itself. So in that regard, no, I don’t think there were groups similar to peak oil groups today warning of the empire’s collapse (and what would have been their argument? And what would have been their “solutions”?)

At the same time, socially, we are vastly different. In Roman times, 85-90% of the population were the energy producers–that is, the farmers–whose surplus energy supported the 10-15% of the population (including the emperor, army, musicians, artists, vagabonds, merchants and so forth) who were not directly involved in energy production. In the US today, 3% of the population (and vast amounts of fossil fuels) provides the surplus to support the 97% of the population not directly involved in energy production. In that regard, only the “elite” of the empire would have even noticed a material change with “collapse”. Today, we are all the “elite”. The energy producers just went on producing as they always had, though there were changes in feudal relations, taxation, sometimes confiscatory policies, and other hardships that accompanied both wars (in the good times) and collapse. Although some Roman historians lamented the passing of the Republic (which lasted for about 400 years–longer than ours–til about 40 BC), I’ve never read anything of a self-aware group that looked at the material conditions of the empire and predicted collapse over some centuries in the future. That, I think, would have been pretty unlikely at the time, since in Western civilizations, at least, it wasn’t until the publication of Thomas More’s Utopia in 1516 that we ever viewed the future as a better place than the past, and thus saw decline as something odd. Before then, the “Golden Ages” of man–what civilizations aspired to–were always those of the past, and history was considered a process of degeneration. With this kind of world view, what exactly would “collapse” mean to one of the elite Romans and how exactly would it have mattered to the 90% of the population who lived in stasis?

Compare this as well to the worldview of the Chinese, who developed a sophisticated view of rise and fall that came from thousands of years of dynasties rising then collapsing. To a Chinese, this was a natural phenomenon, and they created a whole phenomenology around it, including the concept of “mandate of heaven” (tianming) that gave the emperor his right to rule, and the withdrawal of the mandate that led to the collapse of the dynasty, usually indicated by natural disasters. It survived to the 20th century even…the massive Tangshan earthquake of July 1976 was commonly seen as the event that withdrew the mandate of heaven from Chairman Mao, and indeed, he died 2 months later and his regime was overthrown. What this highlights is that even the way we think–fret–react to–plan for–educate about–“collapse” following peak oil is highly conditioned by our own modern social conditioning–the fact that we are heavily inculcated with the idea that the future is a better place, that history is linear, and that “collapse” is an unnatural thing.

Kunstler (I believe) had a good insight into this as well. He remarked on the phenomenon of “temporal amnesia”–the fact that we forget how things were after a period of change, such as living in the same place for a long time. This building is replaced. Those trees are planted. Social security benefits are reduced. Copays go up. Food prices creep up. After 10 years, things are materially different, but do you really remember how it used to be? Over several hundred years of collapse, who in Rome or Mesopotamia or any of the other major civilizations that collapsed would have had the historical context to talk about “collapse”?

So, I don’t think folks in the Roman Empire thought in terms of “collapse”–this is an artificial construct that we have applied to the simplification of the Roman system and its evolution into something different. After all, even Charlemagne, living 400 years after the “collapse” of the Roman Empire, crowned himself as “Holy Roman Emperor”, so even by then the concept of “Roman Empire” was still around.

I truly think peak oil (and the other unsustainable aspects of a system based on exponential growth) will lead to collapse of our system, but I don’t know how it will play out, nor does anyone. We’ve never created such a complex society in human history before, and thus have never encountered this situation before, so in that sense, history is not quite repeating itself.

Joel Kotkin. August 7, 2005. Flight to Safety. If the world’s major cities don’t rethink what they’re willing to accept in the name of security, they risk becoming ghost towns as people seek safe havens from a growing terrorist threat


You’ve got to hand it to Londoners. They refused to be cowed by the July 7 terrorist attacks. And when new incidents threatened to paralyze the city again, they carried on with characteristic British stiff upper-lipness. But admirable as their urbanite resilience has been, it shouldn’t blind us to the reality that the bombings in the British capital underscored: The great challenge facing the world’s major cities today is finding a way to make life safe for their citizens.

Although current fashion is to blame causes such as energy, food and water shortages for urban decline through the centuries, the truth is that far more cities have fallen due to a breakdown in security. Whether the menace is internal disorder or external threat, history has shown repeatedly that once a city can no longer protect its inhabitants, they inevitably flee and the city slides into decline and even extinction.

While modern cities are a long way from becoming extinct, it’s only by acknowledging the primacy of security — and addressing it in the most aggressive manner — that they will be able to survive and thrive in this new century, in which they already face the challenge of a telecommunications revolution that is undermining their traditional monopoly on information and culture, and draining their populations.

As businesses and industries escape the urban core to operate in small towns and even the countryside, demographic surveys show that the population is going with them. After a brief surge in inner-city populations in the late 1990s, most older American cities have lost more people than they have gained since 2000. Families, retirees and immigrants, the key sources of population growth, are largely deserting the urban core. This is true for not only perennial losers such as Baltimore, Cleveland, Philadelphia and Detroit, but also places that enjoyed a brief resurgence in the last decade, such as San Francisco, Minneapolis and Chicago.

Nor is this flight a strictly American phenomenon: Population has been dropping in London, Paris, Hamburg, Milan and Frankfurt. In many of these cities, the only rapidly growing group is immigrants, most of them Muslim, including many who are increasingly recruited by and susceptible to Islamist extremists.

We don’t yet know entirely how the terrorist threat — “the fear factor” — exacerbates urban depopulation trends. It is clear that American inner-city residents reacted far more strongly to Sept. 11 than people in suburbs and smaller towns. Polls taken months after the terrorist attacks on New York and Washington showed that twice as many big-city residents as suburbanites, and four times as many city-dwellers as rural residents, felt “great concern” about future attacks.

It’s easier to measure effects of decisions by financial services firms to shift more of their operations to suburbs and smaller towns, in part because they are less vulnerable to a potential terrorist assault. Jobs that used to be done in Manhattan are migrating to New York’s outer suburbs, as well as to places such as Florida. The same has been happening to London. British observers note the steady movement of financial and other high-end service jobs to less vulnerable and less expensive provincial cities, as well as offshore havens in India and other parts of the developing world.

Terrorism clearly poses a greater threat to some cities than to others. The symbolic global importance and high population density of London and New York made them inviting targets to terrorists.

Unlike smaller cities and suburbs and more modern, sprawling places such as Phoenix, Houston or Los Angeles, which depend on multiple job centers and private cars, centralized London and New York rely on the very transit modes – subways, trains and buses – that are targets of terrorism. Over the past three decades, in fact, terrorist attacks on transportation systems have killed more than 11,000 people in cities from Jerusalem, Tel Aviv and Baghdad to Madrid and London.

It’s too early to tell how businesses or individuals might react if terrorist attacks were to become commonplace. But the historical record isn’t promising. Many of the earliest cities of antiquity — in places as dispersed as Mesopotamia, China, India and Mesoamerica — shrank and ultimately disappeared after being overrun by more violent but often far less civilized peoples. As is the case today, the greatest damage was often inflicted not by organized states, but by nomadic peoples or even small bands of brigands who either detested urban civilization or had little use for its arts.

One early example is the urban civilization of the Indus Valley that flourished around 2000 B.C. in what is now Pakistan. After Aryan raiders penetrated the defenses of the ancient cities of Harappa and Moenjodaro, it was many hundreds of years before large metropolitan centers once again rose on the subcontinent.

The first great cities of the Americas — those of the Olmecs and the Maya in Central America and the pre-Inca civilizations in the Andes — declined primarily because of invasion. From the fourth to the sixth century, between 50,000 and 85,000 people lived in Teotihuacan in central Mexico. After an invasion from the north in A.D. 750, residents fled and the site has remained largely deserted.

The best-known example of security-driven collapse is Rome. The Roman Empire was a confederation of cities. By the second century, people, products and ideas were traveling quickly from urban center to urban center over secure sea lanes and 51,000 miles of paved roads stretching from Jerusalem to Boulogne, which connected scores of cities in between. Europe would not again see such secure and well-peopled cities until well into the 19th century.

This archipelago of cities did not fall in one cataclysmic crash, but as a result of repeated assaults by brigands and stateless hordes over hundreds of years. The attacks led to a gradual withdrawal of the Roman presence, first from the outermost empire, such as Britain, and a gradual shift of population from beleaguered cities to the rural hinterlands. By the seventh century, virtually all the empire’s great cities — large provincial centers such as Trier, on the German frontier, Marseilles and Roman Londinium — had either been abandoned or had shrunk to shadows of themselves.

Rome itself, a behemoth of almost a million people in the second century, was reduced to a pitiful ruin populated by less than a tenth of that number.

The critical importance of security to cities is evident today. Crime-infested Mexico City has lost jobs, businesses and educated residents to better-governed, safer places like Monterrey and Guadalajara. Concerns about safety have slowed economic growth in cities such as San Salvador, Rio de Janeiro and Johannesburg.

The U.S. cities that have declined most precipitously and consistently are those plagued by the nation’s highest crime rates. Attempts by mayors in these cities to be hip and cool have not turned them around, in large part because they are still perceived as unsafe. Baltimore Mayor Martin O’Malley has cultivated an image of coolness for himself and encouraged other cool people to add to his city’s “creative class.” Yet as one Baltimore resident suggested to me recently, “What’s the point of being hip and cool if you’re dead?”

Now, cities face a different menace. Sadly, many metropolitan leaders seem unprepared to meet today’s terrorist threat head-on, in part due to the trendy multiculturalism that now characterizes so many Western cities. Consider London’s multiculturalist Mayor Ken Livingstone, who last year welcomed a radical jihadist, Egyptian cleric Sheikh Yusuf Qaradawi, to his city.

Multiculturalism and overly permissive immigration policies have also played a role. Unfettered in their own enclave, Muslim extremists in Brooklyn helped organize the first attack on the World Trade Center in the early 1990s. Lax Canadian policies have allowed radical Islamists to find homes in Montreal and Toronto, where some might have planned attacks on the United States, like the 2000 plot to blow up Los Angeles International Airport.

In Europe, multiculturalism has been elevated to a kind of social dogma, exacerbating the separation between Muslim immigrants and the host society. Not surprisingly, jihadist agitation has flourished in Hamburg, Amsterdam, Madrid, Berlin and Paris as well as London.

If cities are to survive, they will need to face this latest threat to urban survival with something more than liberal platitudes, displays of pluck and determination. They will have to face up to the need for sometimes harsh measures, such as tighter immigration laws, preventive detention and widespread surveillance of suspected terrorists.

They will also need to institute measures that encourage immigrants to assimilate, such as fostering greater economic opportunity for newcomers or enforcing immersion in the national language and political institutions. Militant anti-Western Islamist agitation — actively supportive of al Qaeda, for example — also must be rooted out; it can be no more tolerated in Western cities today than overt support for Nazism should have been during World War II.

Technological measures — from cameras in subway tunnels to radiation-scanning devices at highway approaches to cities — can also help improve security, as can steps like putting more police and bomb-sniffing dogs on mass transit, as New York has decided to do. But imposing the controls we now see at airports — magnetometers and scanners and body searches — at the entrance to every public place would make life in cities far less enjoyable than it is today, and is to be viewed strictly as a last resort.

The kinds of policies needed to secure their safety may pose a serious dilemma for great cities that have been built on the values of openness, freedom of movement, privacy, tolerance and due process. Yet to survive, these same cities may now need to shift their primary focus to protecting their people, their commerce and their future against those who seek to undermine and even, ultimately, destroy them.

Joel Kotkin is an Irvine Senior Fellow with the New America Foundation and the author of “The City: A Global History” (Modern Library). This piece originally appeared in the Washington Post.


James Laxer. September 24, 2002. The Day the Empire Struck Back. Toronto Globe & Mail

Make note of Sept. 20, 2002. Historians will surely mark it as a seminal moment in our new century. On that date, an old debate ended and a new one began.

For the past decade, analysts have been debating the question of whether the United States would follow the course of former powerful states such as Britain and Rome and proclaim itself an empire. In George W. Bush’s National Security Strategy, submitted to the U.S. Congress on Sept. 20, the White House espouses a doctrine that is explicitly imperialist.

The document envisions a world in which the United States will enjoy permanent military dominance over all countries, allies and potential foes alike. Indeed, in its sweeping declaration that the U.S. “has no intention of allowing any foreign power to catch up with the huge lead the United States has opened since the fall of the Soviet Union,” the distinction between friends and foes becomes much less important than it was in the past.

The United States now spends as much on its military as all the other countries in the world combined spend on their militaries. According to the Bush document, the U.S. military will “be strong enough” to dissuade any potential challenger from “pursuing a military buildup in hopes of surpassing, or equaling, the power of the United States.”

The meaning of the doctrine is clear. It dashes the aspirations of those who had hoped that the world was moving toward a system of international law that would allow for the peaceful resolution of conflicts, through covenants and courts. In place of this, a single power that shuns covenants and courts has proclaimed that it intends to dominate the world militarily, intervening pre-emptively where necessary to exorcise threats.

Throughout the Cold War, the United States portrayed itself as first among equals, the leader of the free world. Its doctrine rested on the proposition that, through containment and deterrence, the U.S. and its allies could prevent aggression by hostile states. The new doctrine consigns containment and deterrence to the dustbin, and with them the notion of the United States as first among equals. For the first time in a formal statement of U.S. policy, the United States is portrayed as standing above all other states. Its role, as guardian of a global system in which the U.S. is at the center, is conceptualized as being of a higher order than the roles played by all other states. It is this feature of the doctrine that makes it explicitly imperialist.

Throughout its history, the U.S. has sought to influence others through its values and its culture. Americans have never seen themselves as a militaristic people. Now, though, the U.S. government is resting its claim to global power on military might, and that puts the Americans in the company of the Roman emperors and their legions. To be sure, the Bush document displays a fine Orwellian touch when it proclaims that Washington will not use its power to seek “unilateral advantage.” The 95 per cent of humanity that is non-American is to be lulled into accepting the benefits of “a distinctly American internationalism.” Those who are not pacified (*agree to be controlled for our benefit) will have to contend with American legions that will strike pre-emptively, long before a threat to American interests is allowed to develop.

The coming U.S. assault on Iraq will be the first case in which the new U.S. doctrine will be acted on. Those who have suggested that the Iraq adventure is in a unique category, having to do with the unique evil of Saddam Hussein, need to read the new Bush doctrine. It’s all there in black and white.

It may very well be true that there is not much the rest of the world can do about America’s military might. But former imperial powers that have proclaimed their right to dominate others have ended up creating adversaries that multiply faster than the means to control them. However comfortable the yoke that is offered, people won’t accept it over the long term. Those who want a world in which no power is supreme and which laws and covenants are used to settle conflicts will begin a new debate — about how to contend with imperial America.

Americans may live to rue Sept. 20, 2002, the day they turned in the old republic for a new global empire.  *LONG LIVE THE NEW ROMAN EMPIRE! My face is wet with tears for the country I have always loved.

James Laxer is a professor of political science at York University.


Doug Saunders. Oct 5, 2002. Is the American empire already over? http://www.globeandmail.com

All we seem to do these days is argue about the United States. And the arguments are awfully sparse, aren’t they? Either our neighbour is the most powerful nation on Earth, a menacing imperialist intruder that we must resist, or it’s the most powerful nation on Earth, a beneficial force of democracy and peace that we must join and support.

Let me offer you a new way of thinking about America: Over.

Under this school of thought, the United States stopped being the world’s dominant nation years ago, and has quietly collapsed into being Just Another Country. We haven’t really noticed this, the theory goes, because most other countries still act as if the United States has its old military and financial power, an assumption that could be stripped of its invisible clothes in the event of a protracted Iraq war.

This is not a fringe theory. It comes from within the United States, from respected political scientists on the Ivy League campuses. Why does it get so little play? Both the left and the right have their entire houses built on the notion of a fixed and immutable American hegemony, pro- or anti-. Somewhere between these poles is this small community of thinkers, declaring that the end has already occurred.

“The United States has been fading as a global power since the 1970s, and the U.S. response to the terrorist attacks has merely accelerated this decline.” So says Immanuel Wallerstein, the Yale University political scientist who is by far the most outspoken member of this camp. A gravelly old contrarian with little time for the orthodoxies of the left or the right, he may have gained his remove by teaching at McGill University in the 1970s.

In a forthcoming book, to be titled Decline of American Power, he describes his country as “a lone superpower that lacks true power, a world leader nobody follows and few respect, and a nation drifting dangerously amidst a global chaos it cannot control.”

In his view, America gave up the ghost in 1974, when it admitted defeat in Vietnam and discovered that the conflict had more or less exhausted the gold reserves, crippling its ability to remain a major economic power. It has remained the focus of the world’s attention partly for lack of any serious challenger to the greenback for the world’s savings, and because it has kept attracting foreign investments at a rate of $1.2-billion (U.S.) per day. But if it comes to a crunch, the United States can no longer prevail either economically or — here is the most controversial statement — militarily. In Mr. Wallerstein’s calculus, of the three major wars the United States has fought since the Second World War, one was a defeat and two (Korea and the Gulf War) were draws.

Iraq, he told me recently, would be an end game. “The policy of the U.S. government, which all administrations have been following since the seventies, has been to slow down the decline by pushing on all fronts. The hawks currently in power have to work very, very hard twisting arms very, very tightly to get the minimal legal justification for Iraq that they want now. This kind of thing, they used to get with a snap of the fingers.”

You don’t have to agree with Mr. Wallerstein’s hyperbolic view to be a member of the Over camp — and many do disagree: When he first brought it up in the journal Foreign Policy this summer, half a dozen editorial writers in the United States attacked him. But more moderate thinkers have joined the club, including Charles Kupchan at Georgetown University, whose forthcoming book The End of the American Era makes a similar point in more subtle terms. Joseph Nye at Harvard, a friend of Henry Kissinger’s, argues in his new book The Paradox of American Power that “world politics is changing in a way that means Americans cannot achieve all their international goals acting alone” — a tacit acknowledgment of Mr. Wallerstein’s thesis.

This is how great powers end: Not by suddenly collapsing, but by quietly becoming Just Another Country. This happened to England around 1873, but it wasn’t until 1945 that anyone there noticed.

Outsiders do notice. Spend some time talking to a currency trader or a foreign financier, and you’ll glimpse the end of the almighty dollar: Right now, about 70 per cent of the world’s savings are in greenbacks, while America contributes about 30 per cent of the world’s production — an imbalance that has been maintained for the past 30 years only because Japan collapsed and Europe took too long to get its house in order.

A Japanese CEO told me this in blunt terms the other day: “It was Clinton’s sole great success that he kept the world economy in dollars for 10 years longer than anyone thought he would. But nobody’s staying in dollars any more.”

There are other signs: The middling liberals, who in the 1960s would have sided with the left in opposing U.S. imperialism, are today begging for an empire. Michael Ignatieff, the liberal scholar, argued at length recently that the United States ought to become an imperial force — on humanitarian grounds. Would this argument be necessary if the United States actually dominated the world?

I’m not sure whether to fully believe the refreshing arguments of Mr. Wallerstein and his friends, but they do have history on their side. In their times, Portugal, Holland, Spain, France and England all woke up to discover, far after the fact, that they were no longer the big global powers, but Just Another Country.

Like the bewildered Englishmen in Robert Altman’s Gosford Park, they struggled to maintain their dignity while wondering just what those strange visitors from abroad were talking about. dsaunders@globeandmail.ca


COMMENT: Jack Dingler  Oct 18, 2002

It’s a war-games perspective. If you want to have the strongest military in the end, you neutralize everyone competing with you for resources. You don’t have to kill them, just make it impossible for them to gain resources that will give them any hope of being a threat.

In a political/diplomatic climate where you don’t want to make it obvious that this is what you are doing, you can use subtle economic mechanisms to destroy a nation’s ability to purchase fuel.

The rules are simple for determining who to take out. This is the order:
1. Those producing no oil in combination with a weak economy.
2. Those with some oil and a weak economy.
3. Those that are strong economic players, but produce no oil.
4. Those that are major economic players and produce oil should be taken down after production peaks.

Nations with nuclear technology should not be touched until the above list is exhausted.
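The four-step ordering above, together with the nuclear exception, can be sketched as a toy classifier. The country data and the reduction of "weak economy" and "oil producer" to booleans are invented for illustration; nothing here comes from the comment itself:

```python
# Toy illustration of the priority ordering described above.
# All country data and thresholds are invented assumptions.

def priority(produces_oil, strong_economy, has_nukes, past_peak=False):
    """Return the rank (1 = targeted first) per the four-step list;
    nuclear states are deferred until the rest of the list is exhausted."""
    if has_nukes:
        return 5  # not touched until the list above is exhausted
    if not produces_oil and not strong_economy:
        return 1
    if produces_oil and not strong_economy:
        return 2
    if not produces_oil and strong_economy:
        return 3
    # strong economy and oil producer: only after production peaks
    return 4 if past_peak else 5

# Hypothetical nations: (produces_oil, strong_economy, has_nukes)
nations = {
    "A": (False, False, False),  # no oil, weak economy
    "B": (True, False, False),   # some oil, weak economy
    "C": (False, True, False),   # strong economy, no oil
    "D": (True, True, True),     # nuclear power
}
order = sorted(nations, key=lambda n: priority(*nations[n]))
print(order)  # ['A', 'B', 'C', 'D']
```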

Simply messing with a nation’s credit rating is all that needs to be done to them. Soon enough, their banks will be emptied, and their ability to purchase oil in significant quantities will end. If you want to help them along, ship in small arms and incite civil war. Ensure that their leaders are turned over in quick succession. Keep their politics destabilized.

Now we’ve reached the point, where this has been accomplished. There are no more significant nations on the list. All that’s left now to do is wage war.


HEATHER MALLICK. Oct 21, 2002. First Iraqlahoma, then Canadaho.     http://www.globeandmail.com/servlet/ArticleNews/PEstory/TGAM/20021019/FOCMALL/Columnists/columnists/columnistsNational_temp/5/5/13/

After the United States does a Mussolini on Saddam Hussein and leaves him hanging upside down outside a Baghdad Gas ‘n’ Go station and Dick Cheney’s Halliburton moves in, it has even bigger plans.

It will set up a military administration like the one that ruled Germany and Japan after the Second World War, a postwar regime backed by 75,000 soldiers and run at an annual cost of $16-billion (U.S.), presumably paid for by stolen Iraqi oil. General Douglas MacArthur will be played by Gen. Tommy Franks. He will be titled SCAP (Supreme Commander Allied Powers). Then Ahmad Chalabi will run the U.S.-sponsored puppet government, like the Shah of Iran did, but no one will care when it’s overthrown in 2012 because Iraq will be nothing but an oil-free pile of sand.

(Oh, and there will be a remake of Casablanca with Madonna, George Clooney and Jude Law forming the love triangle. The gin joint will be called not Rick’s, but Dick’s.)

Bushites and their pathetic Canadian lickspittles can call this American scenario anything they want, but I smell a colony in the making and a previous millennium in the revisiting.

A flood of images comes to mind. Pith helmets. The baskets of severed black hands collected by King Leopold of Belgium in the Congo. The French guillotine in Algeria. Amritsar. The Black Hole of Calcutta. E.M. Forster novels. Portraits of the Queen in a Northern Ontario post office in the 1960s, her gelid skin and blue satin gown melding into an approximation of flesh. Meryl Streep blithering on about her farm in Ahfrika at the foot of the Ngong Hills and her soul pilot, Robert Redford.

Colonies started in riches and ended in tears and stupid movies, plus a certain nausea when we contemplate Africa’s suffering now. Didn’t the Americans learn from the British experience? The Brits built their empire on colonies and then retreated from them, becoming a nation of peasants, Londoners and arms dealers. The Americans have an empire now and are going to accompany its decline with colony-building.

I should say “hasten” its decline because the resentful natives these days have more weapons to hand. They have fighters, guns, land mines and missiles, all purchased from the last empire. And if Osama bin Laden was distressed by American airfields in Saudi Arabia, he won’t like American prefabricated picket fences in suburban Basra.

What does America bring to a colony? After a decade of sanctions and all those dead children, some food would be nice.

Norwegian journalist Erik H. Thoreson tosses off a list: HMOs, unaffordable drugs, household guns, fast food, obesity, factory farms, universal Wal-Mart, two weeks of annual holiday, SUVs compulsory, oil and coal vs nitrogen as fuel, and credit-card debt as a social asset.

I would add: white plastic patio furniture, novels with only nice characters, the copyrighting of everything, hostility toward clever, literate people, and the use of spy planes in neighbourhoods even after that Washington sniper is caught.

There are good things too, but even I can’t see how an Iraqi will benefit from the American things I love: Macy Gray CDs, casual friendliness, David Chase teleplays, Woody Guthrie, blues music and a beautiful sexual sprawl.

Canada has already participated in its own transformation into an unofficial U.S. colony by adopting their slogan “We want stuff and we want it cheap.”

But after the new American colony of Iraqlahoma emerges, and it will, Junior Bush will start casting around for more. He will notice Canada. He already has our oil reserves. Under NAFTA, our gas is proportionately their gas. As Eric Reguly of The Globe and Mail has reported, if Canada wants to export 50 per cent less gas to the Yanks, it must cut back its supplies to its own population by half. We’re one big family!

But Canada has the next big thing that Americans need, even more than oil: water, for which there are no industrial substitutes.

Our SCAP won’t need to be military, since we don’t have a military. He will be GE’s Jack Welch, author of Straight from the Gut, who will move to Toronto with his itty-bitty new wife, as Ottawa’s too far from New York. Brian and Mila will clean his pool and walk his dogs. St. John’s will be renamed St. Condoleezza, then Lauraville. French-speaking Quebec will be crushed under Donald Rumsfeld’s personal boot. The cultural attaché will be Mary Higgins Clark. Agricultural Commander will be Pete Domenici of New Mexico, who will divert all rivers out of Saskatchewan or scoop the stuff up in Canadairs, but one way or another, California and the rest of the American dust bowl is going to be green again. As for Death Valley, let’s put it this way: A river runs through it.

The Canadian Protectorate will be a boon for oldie-mouldie Americans. Walter Cronkite will come out of retirement to read The National. Andy Williams will sing The Star-Spangled Banner at hockey games. Elly May from The Beverly Hillbillies, her face a wizened white prune offset by those gingham shirts, will be Ontario’s deputy SCAP. Alberta gets Strom Thurmond. B.C. gets Morgan Fairchild.

Welcome to Canadaho, the 52nd colony. Next stop Cubachusetts.


Ron Patterson. Nov 13, 2002.

Bill, I do not believe that the US will ever completely “control” the world’s oil supply. That would be a gargantuan task, far too large and complicated to ever be pulled off successfully. A nation of infidels trying to control the oil extraction in a nation of Muslims would be like trying to suck honey out of a beehive through a straw.

The very best we can hope for would be to work out an agreement of some kind with Saudi Arabia and perhaps Kuwait and the Emirates. To do that we would have to make serious concessions to these nations. And tops on the list of any of their demands would be for us to stop kowtowing to Israel.

Think about it Bill, there are 1,215,000,000 Muslims on the planet. Now supposing we infidels took over the birthplace of Islam, the birthplace of Muhammad, the nation of their two holiest cities, the nation of their most holy shrine, the Kaaba. Think they might be pissed? If you think we have terrorism now then you ain’t seen nothing yet!

No, we will never “control” these Islamic nations. And you may draw your own conclusions as to what this means.


Nov 24, 2002 How America will get Europe to finance its 2002-03 Oil War with Iraq by Michael Hudson

Last time around, in the 1991 Gulf War, America got its allies to bear most of the costs voluntarily. After all, U.S. diplomats claimed, wasn’t the war fought to protect Kuwait and the next petro-domino, Saudi Arabia, from Iraqi attack – and in the process to protect Europe’s oil and gas supplies from an aggressive grabber? Wasn’t it therefore fair to ask the Saudis and Kuwaitis, along with the Germans, British and other countries to bear the lion’s share of the cost of the oil war fought for their own benefit?

Europe and the Near East agreed to pay, and their central banks turned over some of the excess U.S. Treasury bonds they had accumulated by running year after year of trade and payments surpluses with America. And almost immediately, these central banks’ dollar holdings filled up again with dollars that were unspendable and had little value, except to give back to the United States or let accumulate for no real purpose.

This Treasury-bond standard of international finance has enabled the United States to obtain the largest free lunch ever achieved in history. America has turned the international financial system upside down. Whereas formerly it rested on gold, central bank reserves are now held in the form of U.S. Government IOUs that can be run up without limit. In effect, America has been buying up Europe, Asia and other regions with paper credit – U.S. Treasury IOUs that it has informed the world it has little intention of ever paying off. And there is little Europe or Asia can do about it, except to abandon the dollar and create their own financial system.

Michael Hudson’s Super Imperialism: The Origins and Fundamentals of U.S. World Dominance explains how the dollar’s being forced off gold in 1971 led to a new international financial system in which the world’s central banks are obliged to finance the U.S. balance of payments deficit by using their surplus dollars in the only way that central banks are allowed to use them: to buy U.S. Treasury bonds. In the process, they finance the U.S. Government’s domestic budget deficit as well.

The larger America’s balance-of-payments deficit becomes, the more dollars end up in the hands of European, Asian and Near Eastern central banks, and the more money they must recycle back to the United States by buying U.S. Treasury bonds. Over the past decade American savers have been net sellers of government bonds, putting their own money into the stock market, corporate bonds and real estate. Foreign governments have been obliged to hold U.S. bonds whose interest rates have fallen steadily, while their volume now exceeds America’s ability or willingness to pay.
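The recycling loop described above can be caricatured in a few lines. The deficit figure and the ten-year horizon are invented for illustration, and the toy model ignores interest, goods trade, and every other real-world flow; it only shows how, when Treasury bonds are the sole outlet for surplus dollars, foreign Treasury holdings grow in lockstep with the cumulative U.S. deficit:

```python
# Toy sketch of the Treasury-bill standard: the US runs a payments
# deficit, the dollars pile up at foreign central banks, and their
# only permitted use is to buy US Treasury bonds, financing the
# deficit in a loop. All figures are invented assumptions.

us_deficit_per_year = 100.0   # dollars flowing abroad each year
years = 10

foreign_dollar_holdings = 0.0
foreign_treasury_holdings = 0.0

for _ in range(years):
    foreign_dollar_holdings += us_deficit_per_year        # surplus dollars accumulate
    foreign_treasury_holdings += foreign_dollar_holdings  # recycled into T-bonds
    foreign_dollar_holdings = 0.0                         # no other outlet exists

print(foreign_treasury_holdings)  # equals the cumulative deficit: 1000.0
```

The point of the sketch is the last line: the larger the deficit runs, the larger the bond holdings must become, with no mechanism in the loop that ever pays them down.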

What makes today’s Super Imperialism different from past “private enterprise” imperialism

Past studies of imperialism have focused on how corporations invest in other countries, extracting profits and interest. This phenomenon occurs largely via private-sector investors and exporters. But today’s novel form of international financial imperialism occurs among governments themselves, and specifically between the U.S. Government and the central banks of nations running balance-of-payments surpluses.

The larger their surpluses grow, the more dollars they are obliged to put into U.S. Treasury securities. Hence, the book’s title, Super Imperialism.

How the United States makes other countries pay for its wars

Since Europe’s Middle Ages and Renaissance, going to war has left nations with heavy public debts, which in turn have needed to be financed by raising taxes. Two centuries ago Adam Smith gave a list of how each new war borrowing in Britain led to a new tax being imposed to pay its interest charges. Militarily ambitious nations thus became indebted, high-tax and high-cost economies. When foreign funds could not be borrowed, belligerent countries had to pay out gold to defray the costs of their military spending or see their currencies depreciate against gold. After the Napoleonic Wars ended in 1815 and again after World War I, Britain and other countries imposed deflationary financial policies whose unemployment and trade depression imposed economic austerity until prices fell to a point where the currency achieved its prewar gold price. Domestic economies thus were sacrificed to pay creditors, saving them from having to suffer a loss as measured in gold.

America’s war in Vietnam and Southeast Asia in the 1960s seemed to follow this time-honored scenario. U.S. overseas military spending ended up in the hands of foreign central banks, especially France, whose banks were the dominant financial institutions in Indo-China. Central banks cashed in these for gold nearly on a monthly basis from the 1965 troop buildup onward. Germany did on a quiet scale what General de Gaulle did with great fanfare in cashing in the dollars sent from France’s former colonies.

By 1971 the U.S. dollar’s gold cover – legally 25 percent for Federal Reserve currency – was nearly depleted, and America withdrew from the London Gold Pool. The dollar no longer could be redeemed for gold at $35 an ounce. It seemed at the time that the Vietnam War had cost America its world financial position, just as World War I had stripped Britain and the rest of Europe of their financial leadership as a result of their Inter-Ally arms debts to the United States.

But in going off gold the United States created a new kind of international financial system. It was a double standard, that is, the dollar-debt standard. The consequences can be seen today. This time around the Near East and Moslem world have announced their opposition to a new U.S. oil war, as have France and Germany. Popular opinion throughout Europe has turned against American adventurism, and at first glance it appears that America will have to finance its war alone.

And indeed it would, if today’s global financial system were still what it was before 1971. America could not fight a conventional war and pay for its troop support costs without seeing the dollar plunge. In fact, it seemed that in 1971 no country ever again could go to war without seeing its international reserves depleted and its currency collapse, forcing its interest rates to rise and its economy to fall into depression. Yet in all the argument over the coming U.S.-Islamic war, Europeans have not seen that it is they themselves that will have to bear the U.S. military costs, and to do so without limit.

What has changed is the fact that U.S. Treasury bonds – American IOUs of increasingly dubious real value – have replaced gold as the form of reserves held by the world’s central banks. Almost without anyone noticing it, these central banks have been left with only one asset to hold: U.S. Government bonds.

Central banks do not buy stocks, real estate or other tangible assets. When Saudi Arabia and Iran proposed to use their oil dollars to begin buying out American companies after 1972, U.S. officials let it be known that this would be viewed as an act of war. OPEC was told that it could raise oil prices all it wanted, as long as it used the proceeds to buy U.S. Government bonds. That way, Americans could pay for oil in their own currency, not in gold or other “money of the world.” Oil exports to the United States, as well as German and Japanese autos and goods sold by other countries, were bought with paper dollars that could be created ad infinitum.

America’s free lunch at Europe’s and Asia’s expense

After World War I and during World War II, U.S. diplomats forced Britain and other countries to pay their arms debts and other military expenditures in the form of real output and by selling off their companies. But this is not what American officials are willing to do today. The world economy now operates on a double standard that enables America to spend internationally without limit, following whatever economic and military policies it wishes to, without any gold constraint or other international constraint.

U.S. officials claim that the world’s dollar glut has become the “engine” driving the international economy. Where would Europe and Asia be, they ask, without the U.S. import demand? Do not dollar purchases help other countries employ labor that otherwise would stand idle?

This kind of rhetorical question fails to acknowledge the degree to which America is importing foreign goods and pumping dollars into the world economy without providing any quid pro quo. The important question to be asked is why European and Asian central banks don’t simply create their own domestic credit to expand their markets? Why can’t they increase their consumption and investment levels rather than relying on the U.S. economy to buy their consumer goods and capital goods for surplus dollars that have no better use than to accumulate in the world’s central banking system?

The answer is that Europe and Asia suffer from a set of economic blinders known as the Washington Consensus. It is a cover story to perpetuate America’s free ride at global expense, by pretending that the Treasury bill standard is something other than an exploitative free ride.

Toward debtor countries, American diplomats impose the Washington Consensus via the World Bank and IMF, demanding that debtors raise their interest rates to raise the money to pay foreign investors. These hapless countries dutifully impose austerity programs to keep their wages low, sell off their public domain to pay their foreign debts, deregulate their economy so as to enable foreign investors to privatize local electricity, telephone services and other national infrastructure formerly provided at subsidized rates to help these economies grow.

Toward creditor nations America relates as the world’s most Highly Indebted Developed Country by refusing to raise its own interest rates or permit key U.S. industries to be sold off.

Super Imperialism explains how this dollar-debt standard came about. Hudson’s narrative begins with World War I, showing how unforgiving America was of Europe’s arms debts. Its stance was in sharp contrast to France’s forgiveness of America’s own Revolutionary War debt, and also to America’s insistence today that Europe and Asia agree to finance present and future American wars with unlimited lines of credit. In particular, Super Imperialism focuses on how the United States used Britain as its Trojan Horse within Europe. After reaching highly unfavorable agreements with Britain as to how to finance its debts stemming from World Wars I and II, America and Britain together then confronted the rest of Europe with a fait accompli on harsh U.S. terms. Britain acquiesced in relinquishing its world economic power to the United States instead of trying to go it alone.

It looks as if little has changed today. First published in 1972, this new and revised second edition of Super Imperialism, published by Pluto Press, reviews how the British and Germans, the Japanese and Chinese, and even the central banks of France and Russia are about to finance the war in Iraq indirectly, by absorbing the dollars that will be thrown off by America’s military adventurism. Prof. Hudson began writing this book while serving as the balance-of-payments economist for the Chase Manhattan Bank and Arthur Andersen during 1964-69, and completed it while teaching international finance at The New School in New York. (He is now Distinguished Professor of Economics at the University of Missouri at Kansas City.) His book was quickly translated into Spanish, Japanese, Russian and Arabic, and a new and revised edition was republished in Japan earlier this year before being published in Britain by Pluto.

This book was the first to explain how America has obliged other countries to finance its payments deficit, including its foreign military spending and its corporate buyouts of European and Asian companies. In effect, America has devised a new means to tax Europe and Asia via their central banks’ obligation to accept unlimited sums of dollars. The burden on Europe and Asia is not felt directly as a tax, however, but indirectly through their payments surpluses with the United States.

The Treasury-bill standard has enabled the USA to import goods far beyond its ability to export. The upshot is to provide America with a unique form of affluence, achieved by getting a free ride from Europe, Asia and other regions. When British exporters (or the owners of companies or real estate being sold for dollars) receive more dollars, the recipients of these payments turn them over to the Bank of England for sterling. The Bank of England in turn invests these dollars in U.S. Treasury bonds, receiving a relatively small interest rate. Now that the gold option has been closed there is no alternative for how to spend these dollars. America has found a way to make the rest of the world pay for its imports, and indeed pay for its takeover of foreign companies, and most imminently to pay for its new war in the Middle East.

This is why the new form of America’s inter-governmental super imperialism differs from the familiar old private-enterprise analysis that applied prior to 1971.


Nov 26, 2002 http://atimes.com/atimes/Global_Economy/DH14Dj01.html The economics of a global empire By Henry C K Liu

The productivity boom in the US was as much a mirage as the money that drove the apparent boom. There was no productivity boom in the US in the last two decades of the 20th century; there was an import boom. What’s more, this boom was driven not by the spectacular growth of the American economy; it was driven by debt borrowed from the low-wage countries producing this wealth. Or, to put it a tad less technically, the economic boom that made possible the current US political hegemony was fueled by payments of tribute from vassal states kept perpetually at the level of subsistence poverty by their own addiction to exports. Call it the New Rome theory of US economic performance.

True, exports can be beneficial to an economy if they enable that economy to import needed goods and services in return. Under mercantilism and a gold standard, for example, an economy that incurred recurring trade surpluses was essentially accumulating gold which could reliably be used for paying for imports in the future.

In the current international trade system, however, trade surpluses accumulate dollars, a fiat currency of uncertain value in the future. Furthermore, these dollar-denominated trade surpluses cannot be converted into the exporter’s own currency because they are needed to ward off speculative attacks on the exporter’s currency in global financial markets.

Aside from distorting domestic policy, the export sector of the Chinese economy has been exerting disproportionate influence on Chinese foreign policy for more than a decade. China has been making political concessions on all fronts to the US for fear of losing the US market from whence it earns most of its foreign reserves, which it is compelled to invest in US government debt. This is ironic because according to trade theory, a perpetual trade surplus accompanied with a perpetual capital account deficit is not in the economic interest of the exporting nation. China is not unique in this dilemma. Most of the world’s export economies face similar problems. This is the economic basis of US unilateralism in foreign affairs.

When Chinese exporters invest China’s current account surplus in dollar financial assets, the Chinese economy will see no benefit from exports as more goods leave China than come in to offset the trade imbalance. True wealth is given away by Chinese exporters for paper, at least until a future trade deficit allows China to import an equivalent amount of goods in the future. But China cannot afford a balanced trade, let alone a trade deficit, because trade surpluses are necessary to keep the export sector growing and for maintaining the long-term value of its currency in relation to the dollar. The bulk of China’s trade surpluses, then, must be invested in US securities. This is the economic reality of US-China trade.

The gap between the perceived value of the accumulated fiat currency of the importing economy (the US dollar) and the value of that currency when dollar-denominated investments are finally cashed in at market price represents the ultimate difference in the quantity of goods and services eventually received between the trading economies. Since the drivers of trade imbalances are overvalued currencies of the importer or undervalued currencies of exporters, obviously the one-sided trade can only end when the exporter has wasted away all its expendable wealth, or the importer has run deficits to levels that exceed the willingness of the exporter to accept more of the importer’s debt. Interest rate policies of central banks are usually the culprit in this matter as they drive investment flows in the direction of a high-interest economy, making the perpetual trade imbalance necessary. Other forms of waste of wealth, such as pollution, low wages and worker benefits, neglect of domestic development and rising poverty in both export and non-export sectors, are penalties assumed by the exporter.

China exported 4.07 billion pairs of shoes in 2001, up 2.55 percent from the previous year. But the value of those exports, US$10.1 billion, was an increase of only 2.48 percent over 2000. Actual value growth per unit, then, was negative. Guangdong province is China’s largest shoe-making region, with annual production at around three billion pairs, accounting for almost a third of the world’s total. Assuming the number of Chinese workers making shoes to be constant, Chinese productivity dropped in the shoe industry in 2001. The only way productivity could have remained the same or improved would have been if the Chinese shoe industry had cut workers, thus contributing to China’s growing unemployment problem.
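The per-unit arithmetic above can be checked directly: if export volume grew faster than export value, the dollar value per pair must have fallen. A minimal sketch using only the figures quoted in the article (the 2000 base values are derived from the stated growth rates):

```python
# Shoe-export arithmetic from the article: volume grew faster than
# value, so value per pair (a crude productivity proxy) fell.

pairs_2001 = 4.07e9          # pairs exported in 2001
volume_growth = 0.0255       # +2.55% over 2000
value_2001 = 10.1e9          # US$ export value in 2001
value_growth = 0.0248        # +2.48% over 2000

pairs_2000 = pairs_2001 / (1 + volume_growth)
value_2000 = value_2001 / (1 + value_growth)

per_pair_2000 = value_2000 / pairs_2000
per_pair_2001 = value_2001 / pairs_2001
change = per_pair_2001 / per_pair_2000 - 1   # about -0.07%, i.e. negative

print(f"value per pair, 2000: ${per_pair_2000:.4f}")
print(f"value per pair, 2001: ${per_pair_2001:.4f}")
print(f"per-unit change: {change:+.3%}")
```

The drop is small, a fraction of a percent, but the sign is what carries the article’s point about negative per-unit value growth.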

Imports from China are resold in the US at a greater profit margin for US importers than that enjoyed by Chinese exporters in production for export. In part, this has to do with the inflated distribution costs in the importing country (US) because of overvaluation of its currency, and the higher standard of living in the US made possible partly by Chinese exporter credit. Thus a $2 toy leaving a Chinese factory is a $3 part of a shipment arriving at San Diego. By the time a US consumer buys it for $10, the US economy registers $10 in final sales, less $3 in imports, for a $7 addition to gross domestic product (GDP). The GDP gain to import ratio is greater than two, in this case two-and-a-third. The GDP gain to export ratio is zero if the $2 export price becomes part of the importer’s capital account surplus. If 50 percent of the $2 export price is used for paying return to foreign capital, then the ratio is in fact negative.

The numbers for other product types vary greatly, but the pattern is similar. The $1.25 trillion of imports to the US in 2000 are directly responsible for some $2.5 trillion of US GDP, almost 28 percent of its $9 trillion economy.

The $400 billion of Chinese exports are directly responsible for a loss of $800 billion in Chinese GDP of $1 trillion as compared to a GDP if that export were consumed domestically. In other words, if it did not export at all, China would almost double its GDP by redirecting the equivalent productivity toward domestic development. On a purchasing power parity (PPP) basis, the GDP loss to exports would be four times greater. The higher the trade surplus in China’s favor, meaning more goods and services leaving China than entering, the more serious its adverse impact on China’s GDP.

Viewing the greater margins available in the importing country as a result of a currency valuation imbalance and understanding that retailing and distribution are operationally less efficient relative to manufacturing, it can be observed that imports raise apparent productivity because sales per employee increase as one goes from the factory floor towards the final consumer. Also, the closer in function the factory floor is to the retail space, the higher its apparent productivity. Through marketing and proximity to customer, a seller can gain advantage in the assembly of imported major parts to order.

Thus a US assembler who out-sources its content parts can win final sales away from the offshore integrated manufacturer who makes the same parts and assembles them abroad. In the high technology arena, time to market of design innovation is key. By hiding costs through the use of employee stock options for compensation (an issue of current debate in US corporate governance), a local in the importing country can use the high valuation of his stock, driven by creative accounting and artificially low production costs and interest rates at the exporter country, to raise funds to further subsidize the production costs of the final product, be it software or hardware. The content of the product will increasingly come from low-wage, low-margin exporting nations, and the out-sourcing assembler’s manufacturing involvement may be little beyond snapping out-sourced parts in place, advertised ad nauseam as a US brand. Dell is a classic example, as is Disney’s licensing empire.

To quantify the order of magnitude of the effect of imports on apparent US aggregate productivity, a direct relationship to the trade deficit can be observed. The productivity gain observed is not as strong as presented by aggregate data. The 4 percent productivity rise cited in US government statistics can be primarily attributable to sharp import increases. The gain in net productivity is much smaller, on the order of 1.8 percent, since the technology revolution began affecting the economy a whole decade earlier. Much of the rest of the improvement has to do with normal cyclical behavior of productivity, the result of normal rise in capacity utilization during boom times from a bubble economy.

There is another measure of increases in trade flow volume that stems from the appreciation of the trade-weighted dollar. The trade-weighted dollar measure shows improvement consistently because of the attempts of European, OPEC and Japanese holders of US debt to retain value in the dollar by creating dollar-denominated debt in emerging economies that actually produce something, as opposed to the US which gains foreign income primarily through the use of international protections for intellectual property.

For the purpose of this discussion, one need focus only on the broad trade-weighted dollar index being put in an upward trend, as highly indebted emerging market economies attempt to extricate themselves from dollar-denominated debt through the devaluation of their currencies. The purpose is to subsidize exports, ironically making dollar debts more expensive in local currency terms. The moderating impact on US price inflation also amplifies the upward trend of the trade-weighted dollar index despite persistent US expansion of monetary aggregates, also known as monetary easing or money printing.

Adjusting for this debt-driven increase in the value of dollars, the import volume into the US can be estimated in relationship to these monetary aggregates. The annual growth of the volume of goods shipped to the US has remained around 15 percent for most of the 1990s. The US enjoyed a booming economy when the dollar was gaining ground, and this occurred at a time when interest rates in the US were higher than those in its creditor nations. This led to the odd effect that raising US interest rates actually prolonged the boom in the US rather than threatened it, because it caused massive inflows of liquidity into the US financial system, lowered import price inflation, increased apparent productivity and prompted further spending by US consumers enriched by the wealth effect despite a slowing of wage increases.

This was precisely what Federal Reserve Board chairman Alan Greenspan did in the 1990s in the name of pre-emptive measures against inflation. Dollar hegemony enabled the US to print money to fight inflation, causing a debt bubble. For those who view the US as the New Roman Empire with an unending stream of imports as the spoils of war, this data should come as no surprise. This was what Greenspan meant by US “financial hegemony”.

The transition to offshore production is the source of the productivity boom of the “New Economy” in the US. The productivity increase not attributable to the importing of other nations’ productivity is much less impressive. While published government figures of the productivity index show a rise of nearly 70 percent since 1974, the actual rise is between zero and 10 percent in many sectors if the effect of imports is removed from the equation. The lower values are consistent with the real-life experience of members of the blue collar working class and the white collar middle class.

This era of declining reward for manual effort coincides with the Reagan shift to having workers pay for their social benefits, while promoting heavy subsidies of corporations, particularly in the earlier stages of corporate growth, through pro-business tax policies and regulatory indulgence.

Historical timelines for the actual levels of productivity in the US may be traced back to the introduction of computer-assisted accounting by IBM and later EDS in the late 1960s. This cleared the labor-intensive accounting pools of the large corporations and mammoth government agencies. Automation of scientific work began even earlier and entered mainstream engineering by the mid 1970s. By 1980, the ordering-inventory and inter-corporate billing systems were computerized to a great extent, as had occurred in banking and finance in the 1970s. By the 1990s, computerized trading and market modeling actually transformed market efficiency into systemic risk of unprecedented dimensions.

The current process is one of standardization and inclusion, as well as reintroduction of regulatory restraint. Inventory management in the current “just in time” manner was not attractive until high US real interest rates made the holding of inventory unattractive. Prior to this, during periods of real inflation, inventory was a profit center, not a cost problem, thanks to FIFO (first in, first out) accounting, where inflation would produce an annual statement of higher ending inventory value, a lower cost of goods sold and a higher gross profit. Now that the world has organized away the inventory that cushions supply disruptions and price inflation, we are quite defenseless against them. Never before has Murphy’s Law (if something can go wrong, it will) had a better chance to demonstrate itself with a cruel spate of price inflation.
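The FIFO point can be made concrete with a small sketch (all numbers invented for illustration): under rising purchase costs, FIFO charges the oldest, cheapest units to cost of goods sold, so reported gross profit rises and ending inventory carries the newer, higher costs.

```python
# Minimal FIFO illustration: during inflation, FIFO expenses the
# cheapest (oldest) layers first, flattering gross profit.
from collections import deque

# Purchases during an inflationary year: (units, unit_cost)
purchases = deque([(100, 1.00), (100, 1.10), (100, 1.20)])
units_sold, sale_price = 150, 1.50

# FIFO: consume the oldest cost layers first
remaining, cogs = units_sold, 0.0
while remaining > 0:
    qty, cost = purchases.popleft()
    take = min(qty, remaining)
    cogs += take * cost
    remaining -= take
    if take < qty:                      # return the unused part of the layer
        purchases.appendleft((qty - take, cost))

ending_inventory = sum(q * c for q, c in purchases)
gross_profit = units_sold * sale_price - cogs

print(f"COGS: {cogs:.2f}")                           # 100*1.00 + 50*1.10 = 155.00
print(f"Ending inventory: {ending_inventory:.2f}")   # 50*1.10 + 100*1.20 = 175.00
print(f"Gross profit: {gross_profit:.2f}")           # 225.00 - 155.00 = 70.00
```

Under LIFO the same sale would expense the newest, dearest layers first, reporting lower profit, which is why the choice of inventory method mattered so much in inflationary decades.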

The result of this distortion driven by the monetary system is a decline in real living standards of producers in all of the exporting and indebted world, and in the US. Indeed, reward has been divorced from real effort and reassigned to manipulators. There have been enormous strides in productivity around the globe, but few of them came in the US. It has been the seigniorage of the dollar reserve system granted to the US without economic discipline that allowed the import of productivity from abroad and the superficial appearance of prosperity in the US economy.

World trade has been shrinking. The conventional wisdom of market fundamentalism is that the global economy is slowing to work off excess debt, causing global trade to shrink temporarily. The world is waiting for a rebound in the US economy so that other countries can again export themselves out of recession.

Yet a case can be made that global trade is shrinking because it transfers wealth from the have-nots to the have-too-muches, and after two decades, the unsustainable rate of wealth transfer has slowed, leading to slower economic growth worldwide. Those economies that have been dependent on exports for growth will do well to understand that the recent drop in exports is more than a cyclical phenomenon. It is a downward spiral unless balanced trade is restored so that trade is a supplement to domestic development rather than a deterrent. Regions like Asia and Latin America should restructure their export policies to focus on intra-regional trade that aims at development instead of trade that transfers wealth out of the region. Places like Shanghai, Hong Kong, Singapore and Tokyo should stop looking for predatory competitive advantage and move toward symbiotic trade policies to enhance regional development.

The purpose of the $30 billion IMF loan to Brazil – an unprecedented figure – is not so much to help the Brazilian economy escape its debt trap as it is to bail out US transnational banks holding Brazilian debt. The net result is to force the Brazilian economy to export more wealth to the tune of $30 billion plus interest on top of the mountains of debt it already has and could not service. Brazil would be better off defaulting as Russia did. Economist Paul Krugman lamented in his New York Times column that he mistakenly bought into the Washington consensus and now his confidence that market fundamentalists had been “giving good advice is way down”.

The line between honest mistakes in pushing the regulatory envelope and fraud is now debated regarding corporate finance and governance in the US, and many executives and their financial advisors are being charged with criminal liability. Are economists who knowingly pushed the ideological envelope beyond the limits of reality above the laws of conscience?

Henry C K Liu is chairman of the New York-based Liu Investment Group


http://www.dailytimes.com.pk/default.asp?page=story_15-12-2002_pg4_11 America’s bid for global dominance By John Pilger The threat posed by US terrorism to the security of nations and individuals was outlined in prophetic detail in a document written more than two years ago and disclosed only recently. What was needed for America to dominate much of humanity and the world’s resources, it said, was “some catastrophic and catalyzing event – like a new Pearl Harbor”. The attacks of 11 September 2001 provided the “new Pearl Harbor”, described as “the opportunity of ages”. The extremists who have since exploited 11 September come from the era of Ronald Reagan, when far-right groups and “think-tanks” were established to avenge the American “defeat” in Vietnam. In the 1990s, there was an added agenda: to justify the denial of a “peace dividend” following the cold war. The Project for the New American Century was formed, along with the American Enterprise Institute, the Hudson Institute and others that have since merged the ambitions of the Reagan administration with those of the current Bush regime.

One of George W Bush’s “thinkers” is Richard Perle. I interviewed Perle when he was advising Reagan; and when he spoke about “total war”, I mistakenly dismissed him as mad. He recently used the term again in describing America’s “war on terror”. “No stages,” he said. “This is total war. We are fighting a variety of enemies. There are lots of them out there. All this talk about first we are going to do Afghanistan, then we will do Iraq… this is entirely the wrong way to go about it. If we just let our vision of the world go forth, and we embrace it entirely and we don’t try to piece together clever diplomacy, but just wage a total war… our children will sing great songs about us years from now.”

Perle is one of the founders of the Project for the New American Century, the PNAC. Other founders include Dick Cheney, now vice-president, Donald Rumsfeld, defence secretary, Paul Wolfowitz, deputy defence secretary, I Lewis Libby, Cheney’s chief of staff, William J Bennett, Reagan’s education secretary, and Zalmay Khalilzad, Bush’s ambassador to Afghanistan. These are the modern chartists of American terrorism. The PNAC’s seminal report, Rebuilding America’s Defences: strategy, forces and resources for a new century, was a blueprint of American aims in all but name. Two years ago it recommended an increase in arms-spending by $48bn so that Washington could “fight and win multiple, simultaneous major theatre wars”. This has happened. It said the United States should develop “bunker-buster” nuclear weapons and make “star wars” a national priority. This is happening. It said that, in the event of Bush taking power, Iraq should be a target. And so it is.

As for Iraq’s alleged “weapons of mass destruction”, these were dismissed, in so many words, as a convenient excuse, which it is. “While the unresolved conflict with Iraq provides the immediate justification,” it says, “the need for a substantial American force presence in the Gulf transcends the issue of the regime of Saddam Hussein.” How has this grand strategy been implemented? A series of articles in the Washington Post, co-authored by Bob Woodward of Watergate fame and based on long interviews with senior members of the Bush administration, reveals how 11 September was manipulated.

On the morning of 12 September 2001, without any evidence of who the hijackers were, Rumsfeld demanded that the US attack Iraq. According to Woodward, Rumsfeld told a cabinet meeting that Iraq should be “a principal target of the first round in the war against terrorism”. Iraq was temporarily spared only because Colin Powell, the secretary of state, persuaded Bush that “public opinion has to be prepared before a move against Iraq is possible”. Afghanistan was chosen as the softer option. If Jonathan Steele’s estimate in the Guardian is correct, some 20,000 people in Afghanistan paid the price of this debate with their lives.

Time and again, 11 September is described as an “opportunity”. In last April’s New Yorker, the investigative reporter Nicholas Lemann wrote that Bush’s most senior adviser, Condoleezza Rice, told him she had called together senior members of the National Security Council and asked them “to think about ‘how do you capitalise on these opportunities’”, which she compared with those of “1945 to 1947”: the start of the cold war. Since 11 September, America has established bases at the gateways to all the major sources of fossil fuels, especially central Asia. The Unocal oil company is to build a pipeline across Afghanistan. Bush has scrapped the Kyoto Protocol on greenhouse gas emissions, the war crimes provisions of the International Criminal Court and the anti-ballistic missile treaty. He has said he will use nuclear weapons against non-nuclear states “if necessary”. Under cover of propaganda about Iraq’s alleged weapons of mass destruction, the Bush regime is developing new weapons of mass destruction that undermine international treaties on biological and chemical warfare.

In the Los Angeles Times, the military analyst William Arkin describes a secret army set up by Donald Rumsfeld, similar to those run by Richard Nixon and Henry Kissinger and which Congress outlawed. This “super-intelligence support activity” will bring together the “CIA and military covert action, information warfare, and deception”. According to a classified document prepared for Rumsfeld, the new organisation, known by its Orwellian moniker as the Proactive Pre-emptive Operations Group, or P2OG, will provoke terrorist attacks which would then require “counter-attack” by the United States on countries “harbouring the terrorists”.

In other words, innocent people will be killed by the United States. This is reminiscent of Operation Northwoods, the plan put to President Kennedy by his military chiefs for a phoney terrorist campaign – complete with bombings, hijackings, plane crashes and dead Americans – as justification for an invasion of Cuba. Kennedy rejected it. He was assassinated a few months later. Now Rumsfeld has resurrected Northwoods, but with resources undreamt of in 1963 and with no global rival to invite caution. You have to keep reminding yourself this is not fantasy: that truly dangerous men, such as Perle and Rumsfeld and Cheney, have power. The thread running through their ruminations is the importance of the media: “the prioritised task of bringing on board journalists of repute to accept our position”.

“Our position” is code for lying. Certainly, as a journalist, I have never known official lying to be more pervasive than today. We may laugh at the vacuities in Tony Blair’s “Iraq dossier” and Jack Straw’s inept lie that Iraq has developed a nuclear bomb (which his minions rushed to “explain”). But the more insidious lies, justifying an unprovoked attack on Iraq and linking it to would-be terrorists who are said to lurk in every Tube station, are routinely channelled as news. They are not news; they are black propaganda.

This corruption makes journalists and broadcasters mere ventriloquists’ dummies. An attack on a nation of 22 million suffering people is discussed by liberal commentators as if it were a subject at an academic seminar, at which pieces can be pushed around a map, as the old imperialists used to do.

The issue for these humanitarians is not primarily the brutality of modern imperial domination, but how “bad” Saddam Hussein is. There is no admission that their decision to join the war party further seals the fate of perhaps thousands of innocent Iraqis condemned to wait on America’s international death row. Their doublethink will not work. You cannot support murderous piracy in the name of humanitarianism. Moreover, the extremes of American fundamentalism that we now face have been staring at us for too long for those of good heart and sense not to recognise them. – Courtesy The New Statesman



Editor’s Note: FPIF Advisory Committee member Michael Klare deciphers the Bush administration’s motives in promoting an invasion of Iraq. The full global affairs commentary (excerpted below) is available online at http://www.fpif.org/commentary/2003/0301warreasons.html .

Michael T. Klare, author of Resource Wars: The New Landscape of Global Conflict and a professor of peace and world security studies at Hampshire College in Amherst, Mass., is a military affairs analyst with Foreign Policy In Focus (online at www.fpif.org).


The United States is about to go to war with Iraq. As of this writing, there are 60,000 U.S. troops already deployed in the area around Iraq, and another 75,000 or so are on their way to the combat zone. Weapons inspectors have found a dozen warheads, designed to carry chemical weapons. Even before this discovery, senior U.S. officials were insisting that Saddam was not cooperating with the United Nations and had to be removed by force. Hence, there does not seem to be any way to stop this war, unless Saddam Hussein is overthrown by members of the Iraqi military or is persuaded to abdicate his position and flee the country.

The most fundamental question of all is: WHY are we going to war?

In their public pronouncements, President Bush and his associates have advanced three reasons for going to war with Iraq and ousting Saddam Hussein: (1) to eliminate Saddam’s WMD arsenals; (2) to diminish the threat of international terrorism; and (3) to promote democracy in Iraq and the surrounding areas.

These are, indeed, powerful motives for going to war. But are they genuine? Is this what is really driving the rush to war? To answer this, we need to examine each motive in turn.

(1) Eliminating weapons of mass destruction: The reason most often given by the administration for going to war with Iraq is to reduce the risk of a WMD attack on the United States. To be sure, a significant WMD attack on the United States would be a terrible disaster, and it is appropriate for the President of the United States to take effective and vigorous action to prevent this from happening. If this is, in fact, Bush’s primary concern, then one would imagine that he would pay the greatest attention to the greatest threat of WMD usage against the United States, and deploy available U.S. resources–troops, dollars, and diplomacy–accordingly. But is this what Bush is actually doing? The answer is no. Anyone who takes the trouble to examine the global WMD proliferation threat closely and to gauge the relative likelihood of various WMD scenarios would have to conclude that the greatest threat of WMD usage against the United States at the present time comes from North Korea and Pakistan, not Iraq.

(2) Combating terrorism: The administration has argued at great length that an invasion of Iraq and the ouster of Saddam Hussein would constitute the culmination of and the greatest success in the war against terrorism. But there simply is no evidence that this is the case; if anything, the opposite is true. From what we know of Al Qaeda and other such organizations, the objective of Islamic extremists is to overthrow any government in the Islamic world that does not adhere to a fundamentalist version of Islam and replace it with one that does. The Baathist regime in Iraq does not qualify as such a regime; thus, under Al Qaeda doctrine, it must be swept away, along with the equally deficient governments in Egypt, Jordan, and Saudi Arabia. It follows from this that a U.S. effort to oust Saddam Hussein and replace his regime with another secular government–this one kept in place by American military power–will not diminish the wrath of Islamic extremists but rather fuel it.

(3) The promotion of democracy: The ouster of Saddam Hussein, it is claimed, will clear the space for the Iraqi people (under American guidance, of course) to establish a truly democratic government and serve as a beacon and inspiration for the spread of democracy throughout the Islamic world, which is said to be sadly deficient in this respect. But is there any reason to believe that the administration is motivated by a desire to spread democracy in its rush to war with Iraq? There are several reasons to doubt this. First of all, many of the top leaders of the current administration, particularly Donald Rumsfeld and Dick Cheney, were completely happy to embrace the Saddam Hussein dictatorship in the 1980s when Iraq was the enemy of our enemy (that is, Iran) and thus considered our de facto friend. There is another reason to be skeptical about the Bush administration’s commitment to democracy in this part of the world, and that is the fact that the administration has developed close relationships with a number of other dictatorial or authoritarian regimes in the area.

So, if concern over WMD proliferation, or the reduction of terrorism, or a love of democracy do not explain the administration’s determination to oust Saddam Hussein, what does?

I believe that the answer is a combination of three factors, all related to the pursuit of oil and the preservation of America’s status as the paramount world power. These concerns undergird the three motives for a U.S. invasion of Iraq. The first derives from America’s own dependence on Persian Gulf oil and from the principle, formally enshrined in the Carter Doctrine, that the United States will never permit a hostile state to come into a position where it can threaten America’s access to the Gulf. The second is the pivotal role played by the Persian Gulf in supplying oil to the rest of the world: whoever controls the Gulf automatically maintains a stranglehold on the global economy, and the Bush administration wants that to be the United States and no one else. And the third is anxiety about the future availability of oil: the United States is becoming increasingly dependent on Saudi Arabia to supply its imported petroleum, and Washington is desperate to find an alternative to Saudi Arabia should it ever be the case that access to that country is curtailed–and the only country in the world with large enough reserves to compensate for the loss of Saudi Arabia is Iraq.

It is this set of factors, I believe, that explains the Bush administration’s determination to go to war with Iraq–not concern over WMD, terrorism, or the spread of democracy. But having said this, we need to ask: do these objectives, assuming they’re the correct ones, still justify a war on Iraq? Some Americans may think so. There are, indeed, advantages to being positioned on the inside of a powerful empire with control over the world’s second-largest supply of untapped petroleum. If nothing else, American motorists will be able to afford the gas for their SUVs, vans, and pick-up trucks for another decade, and maybe longer. There will also be lots of jobs in the military and in the military-industrial complex, or as representatives of American multinational corporations (although, with respect to the latter, I would not advise traveling in most of the rest of the world unless accompanied by a small army of bodyguards). But there will also be a price to pay. Empires tend to require the militarization of society, and that will entail putting more people into uniform, one way or another. It will also mean increased spending on war, and reduced spending on education and other domestic needs. It will entail more secrecy and intrusion into our private lives. All of this has to be entered into the equation. And if you ask me, empire is not worth the price.


Jack Dingler Feb 7, 2003 A lot of folks have compared the US to Rome, using the concept of Empire.

Lately, I’ve been thinking that the US might more closely be compared to Sparta.

Like Sparta, the US doesn’t seem to understand its own internal problems; it doesn’t understand where its wealth and energy come from.

Like Sparta, the US relies on military might to solve problems that could better be solved by fixing its own problems first.

The US, like Sparta, is looking to solve internal frictions by increasingly dominating the masses. Like Sparta, the US doesn’t understand world politics.

Like Sparta, the US is easily rallied by motivational speakers spouting gibberish.

Like Sparta, the US has become backward intellectually and educationally, and has no qualms about calling other nations backward and ignorant.

When Sparta went on wars of conquest in order to enact revenge on the bequest of various nobles, it destroyed works of art, great libraries, and architectural treasures, all at high expense to Sparta but at little gain except for those already wealthy.

Sparta eventually fell because it could no longer subjugate the people who supported Sparta in the production of food, clothing and materials. Sparta’s wars of conquest came back to haunt her, in frequent, expensive attacks on her borders.

Finally, Sparta had a great inability to adapt to the world. Sparta constantly tried to reshape the world to fit Sparta’s vision. Sparta had a plan to impose its superior form of government and thinking on the rest of the world and failed miserably. Not only did the rest of the world refuse to become Spartan, they also refused to fight like Spartans. Because Sparta’s enemies knew they were outmatched, they ran away from battles, only to return and wear the Spartans down in nuisance attacks.

The US appears in many ways to fit the Spartan model. Perhaps we should be drawing parallels with them instead of the Romans?

szoraster Feb 7, 2003 The Spartans were famous for living simple, even, dare I say it, Spartan lifestyles. Their wealth came from their farms, communal lifestyles, and the army. No conspicuous consumption there. Spartans didn’t even like money too much. Later on, this attitude towards money made it hard to hire mercenaries. Or buy allies. A big problem for a world power.

One of the big problems was that simple lifestyle, strict full citizenship requirements, and battle deaths made for a shrinking base of full male citizens and an army that could not increase in size.

Sparta didn’t even sack Athens when it had the chance. Thebes and Corinth wanted it to, but Sparta refused. Not sure what other examples of great works of art, great libraries, architectural treasures you might be writing about. Also, not sure when Sparta “went on wars of conquest in order to enact [sic] revenge on the bequest of various nobles”? It did become the default dominant power in Greece after the Peloponnesian War, and got suckered into doing too much in Asia Minor.

Sparta’s weak domination of Greece was broken by Thebes during two standup battles in 371 and 362 BC. Nothing about internal revolt or lack of support by the helots or local allied cities. Just better battle tactics by those great Theban generals. Using focused mass in a new way. And the Spartans were even willing to try again against the Macedonians in 330 BC. Again beaten in a standup, face to face, phalanx to phalanx battle. Afterwards, things really started to go downhill.

One thing about the Spartan hegemony in Greece is that no effort was put into imposing the Spartan model on the rest of Greece. The model didn’t export and they had no desire to export it. Also interesting is how little attention the rest of the Greek cities paid even to the Spartan military model, except when they needed allies in war. Athens, as you remember, reverted to a democracy soon after its great defeat in the Peloponnesian War. And the Spartans didn’t care.


Jack Dingler Feb 7, 2003 “The Spartan empire began to grow, and the Spartans were faced with a completely new way of life – completely different from the simple life they were used to living. They had been brought up knowing only one way of life and had been taught vigorously not to challenge the ideas of the state, but now the state was changing. Sparta sent out commanders to conquered states, and, outside of Sparta, these commanders were surrounded with wealth and luxuries, the likes of which they had never known in Sparta. The temptation was too much for commanders like Lysander, who began to dress in fine clothes, dine on expensive food and wear delicate and expensive jewelry. Away from the protection of Sparta, power went to some of these commanders’ heads. For example, Lysander became rich and arrogant, so much so that people refused to serve under him. Although Lysander was recalled to Sparta, it was found that he had been smuggling riches into the city of Sparta itself, and it is thought that this happened in many other cases. With this corruption to the Spartan way of life going on all around, Sparta was on the way to its end.”

The Spartans had to expend constant effort keeping their slave population under control. The Helots revolted in 463, and the Spartans were forced to ask for aid from Athens. The Thebans encouraged the Helots to engage in the resistance. Helots participated by fighting and by withholding the food and goods that supplied the Spartan Army. Thebes was able to do this by promising the Helots that they could gain freedom from their oppressive government in the new order.

During the period of the 30 tyrants, Sparta attempted to impose their morality and method of government on the Athenian people. During this period, much of the wealth and beauty of Athens was destroyed. Anyone who spoke out against the political changes was promptly executed. Socrates was a victim of the times.

During the period of the 30 tyrants, Sparta propped up one Athenian politician after another in governance of Athens, in an attempt to make Athens more Spartan. As Spartans themselves had no interest in governing Athens, one corrupt Athenian after another talked the Spartans into letting them have a turn in raping the city.


What’s behind the rush to war? Petróleo, petróleo, petróleo  by Robert Jensen

Robert Jensen is a journalism professor at the University of Texas at Austin and author of “Writing Dissent: Taking Radical Ideas from the Margins to the Mainstream.” He can be reached at rjensen@uts.cc.utexas.edu.

In a recent interview, Secretary of Defense Donald Rumsfeld seemed surprised when asked if plans to attack Iraq had anything to do with oil.

“Nonsense,” he replied. “It has nothing to do with oil. Literally nothing to do with it.”

The Bush administration would have us believe a war will be over weapons of mass destruction, terrorist ties, and the “liberation” of Iraq. The problem is, virtually no one in the world (outside the United States and 10 Downing St.) believes it.

At last month’s World Social Forum in Porto Alegre, Brazil, the legendary Uruguayan writer Eduardo Galeano took up the question of U.S. motivations to attack Iraq, and he also offered three reasons, though quite different from Rumsfeld’s. For Galeano, the answer was: “Petróleo, petróleo, petróleo.”

The crowd in the packed arena — 15,000 from around the world — endorsed Galeano’s analysis with thunderous applause.

Whose analysis should we accept? A man who manages the world’s most destructive military machine on behalf of one of the most rapacious administrations in U.S. history, or people from around the world committed to social justice?

That’s a bit of a loaded question. So, let’s step back and consider the issue.

Any U.S. military action in the Middle East is, at some level, about oil. If not for oil, the United States would not concentrate its military forces on the region. But that doesn’t mean that U.S. policymakers want to occupy Iraq and literally steal the oil; it’s hard to imagine even the most arrogant Bush official proposing that.

When President Bush says “We have no territorial ambitions; we don’t seek an empire,” he is telling half a truth. Certainly the United States isn’t looking to make Iraq the 51st state. But that’s not the way of empire today — it’s about control, not about territory.

Rumsfeld, trying to bolster his claim about the innocence of U.S. intentions, said, “Oil is fungible, and people who own it want to sell it and it’ll be available,” implying that the United States need not worry about being shut out from buying on the open market. That’s mostly correct, but irrelevant.

So, if policymakers do not seek to occupy Iraq permanently and take direct possession of its vast oil reserves (at least 112 billion barrels, second to Saudi Arabia), and if U.S. access to oil on the international market is not the issue, then what might be U.S. interests?

Many argue that the close ties between Bush and the oil industry suggest a war will be fought to give U.S. companies the inside track on exploiting oil in a post-Saddam Iraq. U.S. firms no doubt don’t like the privileged position that French and Russian companies have had, but focusing too much on short-term concerns misses a bigger U.S. strategic goal that has been part of policy for more than half a century, through Republican and Democratic administrations.

The key is not who owns the oil but who controls the flow of oil and oil profits. After World War II, when the United States was one of the world’s leading oil producers and had little need for imported oil, the U.S. government trained attention on the Middle East. In 1945 the State Department explained that the oil constitutes “a stupendous source of strategic power, and one of the greatest material prizes in world history.”

In a world that runs on oil, the nation that controls the flow of oil has that strategic power. U.S. policymakers want leverage over the economies of its biggest competitors — Western Europe, Japan and China — which are more dependent on Middle Eastern oil. From this logic flows the U.S. policy of support for reactionary regimes (Saudi Arabia), dictatorships (Iran under the Shah) and regional military surrogates (Israel), always aimed at maintaining control.

This analysis should not be difficult to accept given the Bush administration’s National Security Strategy report released last fall, which explicitly calls for U.S. forces to be strong enough to deter any nation from challenging American dominance. U.S. policymakers state it explicitly: We will run the world.

Such a policy requires not only overwhelming military dominance but economic control as well. Mao said power flows from the barrel of a gun, but U.S. policymakers also understand it flows from control over barrels of oil.


Michael Dewolf   May 15, 2003 “Of the three, the experience of the 16th century Spaniards makes for the best comparison with the Americans of today. For nearly 100 years immense supplies of gold and silver (the likes of which Europe had never seen before), plundered from the natives of Central and South America, flowed into Spanish coffers. Sadly, this 16th century version of excessive money supply growth managed only to fuel the nation’s spending habits, while at the same time disincentivizing their willingness to produce. Instead of turning this windfall into productive wealth, Spain used it to buy “consumer goods” from other nations. As a result, Spain’s debt to foreigners soared and all the gold and silver was exported out of the country (think current account deficit without the ability to “print” more gold). With all this new-found wealth, it didn’t take long for the kings of Spain to think themselves superior and embark on a mission of bending the world to their will. Charles V, not satisfied any longer with being a mere king, lobbied intensely, using bribes and threats and eventually convinced a “coalition of the willing” to make him emperor of the Holy Roman Empire. After losing quite a few of its booty-laden ships on the high seas, Spain, claiming self defense, declared that it would no longer make a distinction between the pirates and the nations that harboured them. To eliminate this “state-sponsored piracy”, they decided to strike at the worst offender – Britain (although I doubt that Philip II ever suggested that he was merely trying to free the British people from oppression). Boasting their technologically superior Spanish Armada (not dissimilar to America’s air supremacy), they waged what proved to be a disastrous war against Britain whose smaller ships proved far too wily. Years of wars ensued with a variety of other countries that did not share Spain’s view of the world. 
Having already traded their gold and silver for consumer goods, the nation had to turn to debt-finance to pay for these wars. As Spain’s tab reached the limit, their lenders, the Fuggers of Augsburg (16th century version of the Japanese) were forced to convert their debt into long-term loans. Eventually, Spain’s creditors cut them off and the nation, now bankrupt, introduced to the world the now time-honored tradition of default by a sovereign state.”

from http://www.gold-eagle.com/editorials_03/giustra051303.html


Francisco González   Dec 5, 2003 This article makes the point that the EROEI of current and future U.S. oil wars can hardly be positive for very long, if ever.

The end? August 6, 2003 By Stephen James Kerr  http://www.zmag.org/weblinks/kerr_endofoil.htm


With this understanding of oil production trends, the seemingly insane actions of the Bush White House reveal a consistent, if desperate internal logic – the US ruling class embarks upon its strategy of global Empire out of desperation at its incredibly weak underlying position. American capitalists feel compelled to rule oil producing nations like Iraq, Saudi Arabia and Iran as direct colonies in order to stave off their own collapse. It’s a strategy adopted by Imperial Rome, the British Empire, and others, but investments in Empire are also subject to the law of diminishing returns.

In order to conquer and hold its Empire, American capitalism must increase its investment in the military, an organization which cannot account for one trillion dollars of its own spending. Bush’s Pentagon budget is the biggest in history – over $400 billion per year, but so is the $455 billion US budget deficit. The Washington Post recently noted that “The defense budget is set to grow over the next few years faster than the forecast growth in the economy.” How long is America going to shut down schools to build bombs? Ultimately investments in the military must be accounted for as investments to control and extract resources, the most important of which is oil energy.

The American-created ‘Cold War’ spent the Soviet Union into the ground. The peak oil crisis – the so-called ‘war on terror’ – will do the same for America and the globalized industrial economy. No campaign of precision bombing can stop this process of American decline.

Over time, the investment in Empire – soldiers, bombs, bureaucratic administration, prisons – required just to maintain the energy flow into the capitalist economy will exceed the net energy return on that investment, in fact such a point may come sooner than later.


Factor in the effective loss of the number six US oil supplier (Iraq) with the staggering stupidity of a US Congress so in bed with big oil and big auto that it steadfastly refused to raise fuel efficiency for US cars this week – insisting instead on the patriotic waste of fuel – and one gets a glimpse into the American oil economy brain trust.



May 27, 2004 The American Empire and Its Prospects by J. R. Nyquist http://www.financialsense.com/stormwatch/geo/pastanalysis/2004/0527.html

Financial historian Niall Ferguson thinks American imperialism is a good thing. If America does not take up the banner of liberal empire, says Ferguson, the developing world will not develop, vast regions of the earth will remain backward and the march of progress may grind to a halt. Ferguson’s latest book, Colossus: The Price of America’s Empire, argues that Americans have wrongly come to believe that “empire” is a dirty word. “The irony is that there were no more self-confident imperialists than the Founding Fathers themselves,” writes Ferguson. George Washington once referred to the United States as a “nascent empire” and an “infant empire.” Thomas Jefferson once told Madison that no constitution was “ever before as well calculated as ours for extensive empire and self government.” Alexander Hamilton also referred to the United States as “the most interesting … empire … in the world.”

Ferguson wants Americans to look at long-term costs and benefits when judging imperialist projects like the democratization of Iraq. He also wants Americans to understand what is necessary for a successful outcome in the worldwide struggle for free markets and humane administration. America is the world’s dominant power. Everything therefore depends on the United States. But there is a problem. Basic American attitudes toward consumption, credit and global responsibility aren’t what they should be. Ferguson describes Americans as, “Consuming on credit, reluctant to go to the front line, inclined to lose interest in protracted undertakings.” The American colossus, he complains, is “a kind of strategic couch potato.” The typical American would rather live comfortably above his means, borrowing from East Asia, inattentive to the world’s problems. According to Ferguson: “the percentage of Americans classified as obese has nearly doubled in the past decade, from 12 percent in 1991 to 21 percent in 2001.” Meanwhile, the underdeveloped countries are struggling under tyranny and mismanagement. Millions are threatened with starvation. But today, quips Ferguson, “‘the white man’s burden’ is around his waist.” American consumption has risen from 62 percent of GDP in the 1960s to nearly 70 percent of GDP in 2002. Today Americans save less than half of what they were saving per capita in 1959. As Ferguson points out: “Household sector credit market debt rose from 44 percent of GDP in the 1960s and 1970s to 78 percent in 2002.” Worse yet, Americans are disastrously preoccupied “with the hazards of old age and ill health that will prove to be the real cause of [America’s] fiscal overstretch….”

The problem with American empire is not the empire. The problem is debt caused by a ballooning welfare state. The United States has explicit liabilities and implicit liabilities, and Ferguson predicts that the implicit liabilities of Social Security, Medicare and other entitlements promise to crash the American financial system. “To put it bluntly,” he writes, “this news is so bad that scarcely anyone believes it.” America has fallen prey to a financial delusion and the final result will be bankruptcy. “If financial markets decide a country is broke and is going to inflate,” warns Ferguson, “they act in ways that make that outcome more likely.” Thus begins the stampede of the bond market mammals. The trigger, says Ferguson, will be “an item of bad financial news.” It will be the proverbial straw that breaks the camel’s back. (In this case, a bale of straw.) If America’s economic deficit is not corrected, says Ferguson, then we may expect the “black hole of implicit liabilities” to deliver a financial crisis followed by political revolution. In all likelihood, he adds, “the decline and fall of America’s undeclared empire may be due not to terrorists at the gates or to the rogue regimes that sponsor them, but to a fiscal crisis of the welfare state at home.”

A further complication for America’s empire is the U.S. manpower deficit in Iraq. According to Ferguson, America’s Ivy League graduates do not dream of serving as proconsuls in the hot and dusty backwaters of the planet. Instead, America’s best and brightest want to make six figure incomes and reside in comfortable suburbs. The American empire (unlike the British empire) suffers from a shortage of willing talent. “There are simply not enough Americans out there to make nation building work,” laments Ferguson. The shortage of military personnel in Iraq has been acknowledged by nearly everyone. But this shortage is not purely military. There is a shortage of skilled administrators and technicians prepared to spend their professional lives abroad. “Until there are more U.S. citizens not just willing but eager to shoulder the ‘nation builder’s burden,’ ventures like the occupation of Iraq will lack a vital ingredient,” says Ferguson.

As for America’s attention deficit disorder, the British historian does not hide his disgust. According to Ferguson, “one poll revealed that nearly a third of Americans thought the [Nicaraguan] contras were fighting in Norway.” How can the United States uphold an empire when the American people are so utterly self-absorbed?

If America is unwilling or unable to shoulder the “nation builder’s burden,” will the worldly Europeans step forward to nudge the half-baked Americans aside? Is the European Union an emerging mega-empire in the making? Not a chance, says Ferguson. Americans may be self-absorbed but Europe is senile. EU integration has not fostered economic growth. In fact, growth has been declining since 1973. Eurozone monetary policy has been mismanaged since the single currency began. “The success of the euro as a substitute for the dollar in some international transactions masks a deep failure,” Ferguson explains. “This failure has consisted in systematically underestimating deflationary pressures on the German economy….” Rather than emerging as a superpower rival of the United States, the European Union is becoming a kind of “super-Switzerland.” This is a formation, says Ferguson, “where economics tends to count for more than politics and where the cantons and provinces are more powerful than the central government.” The world has seen this sort of thing before. A country that remains strictly within the confines of economic power cannot play a leading role, even when it possesses great wealth. “Talk of federal Europe’s emerging as a counterweight to the United States,” argues Ferguson, “is based on a complete misreading of developments. The EU is populous but senescent. Its economy is large but sluggish. Its productivity is not bad but vitiated by excessive leisure.”

As for the Chinese, Ferguson says their economy will not overtake the American economy before 2041. As things presently stand, China is far behind the United States and lacks America’s positional advantage within the global economy. At present the Chinese are forced to buy America’s debt to keep their currency from appreciating against the dollar. This is done to protect China’s export economy. Even so, warns Ferguson, “Today’s open door between America and Asia could close with a surprisingly loud bang.” Trade is not an absolute guarantor of peace, he reminds us. Germany and England developed extensive trade prior to 1914. But war came nonetheless.

Ferguson is genuinely puzzled by the absence of great power conflict today. America, the dominant country of the moment, doesn’t like being a great power. Europe and Japan are “senescent societies and strategic dwarfs.” China is economically tied to America through trade. Russia doesn’t count because of its shriveled economy. “The absence of great power conflict is a concept that is unfamiliar in modern international history,” writes Ferguson, who doesn’t quite trust what he sees.  If only his intuition were a little stronger. If only his blinders were not so firmly fastened. Ferguson has failed to notice that yesterday’s great power conflict has taken a subterranean detour. Russia pretends to be America’s ally in the war against terror. But Eastern Europe, with its new democratic makeup, is still run by the same communist elite as before. And Russia’s strategic rocket forces remain on a hair-trigger, aimed at the United States.

Ferguson believes that Russia’s economic backwardness somehow cancels Russian power from the international equation. He ought to be reminded that poor Sparta defeated wealthy Athens in the Peloponnesian War. “I assert,” wrote Machiavelli in his Discourses on Livy, “that it is not gold, as is claimed by common opinion, that constitutes the sinews of war, but good soldiers; for gold does not find good soldiers, but good soldiers are quite capable of finding gold.” Russia’s lean and hungry condition may be more incentive to empire than America’s fattened condition. “It is obvious,” wrote Machiavelli, “that when an army is short of provisions and must either fight or starve to death, it always chooses to fight, since this is the more honorable course, and one that gives fortune some chance to show you favor.”

Greatness is found in the human heart, not in a pocketbook. “The question Americans must ask themselves is just how transient they wish their predominance to be,” says Ferguson. Let us acknowledge that a bolt of lightning from an approaching storm has already illuminated this question for the attentive few. All that remains is to wait for the thunderclap.




Washington = Rome Redux

By Charles Kupchan | Friday, April 16, 2004

The parallels between today’s world and the world of the late Roman Empire are striking. Washington today, like Rome then, enjoys primacy, but is beginning to tire of the burdens of hegemony as it witnesses the gradual diffusion of power and influence away from the imperial core. And Europe today, like Byzantium then, is emerging as an independent center of power, dividing a unitary realm into two. Charles Kupchan — the author of “End of the American Era” — explains.

The United States and Europe have certainly been close partners for more than five decades. It would be natural to conclude that they are kith and kin for good. But consider the Roman Empire and its rapid demise after the founding of a second capital in Constantinople. (This and all other emphases below are mine / BT)

Extending Rome’s reach

By the first century AD, the borders of the Roman Empire stretched west to Spain and the British Isles, north to Belgium and the Rhineland, south to North Africa and Egypt, and east to the Arabian peninsula.


Rome was to control much of this territory for the next 300 years. A hub-spoke pattern of rule provided the foundation for an imperial realm of such scope and longevity. And Rome managed to extend its reach over the periphery through overlapping sources of control.


The Romans made significant improvements in roadway construction, warfare and shipbuilding. This facilitated the flow of political influence and resources between the imperial center and its distant limbs. They also introduced an advanced system of governance that fostered the “Romanization” of new subjects.

Small groups of Romans were sent to live in imperial territories to help assimilate conquered peoples and encourage them to take on a Roman identity and way of life.

The goal was to cultivate allegiance toward, rather than resentment of, Roman rule. Assimilation was a much cheaper and more effective way to extend control than was coercion.

In fear of Rome

Rome was similarly thrifty in its military strategy. The well-trained legions were kept in reserve and deployed only as needed to put down uprisings or repel invaders. This system provided effective deterrence.

The mere prospect of facing the legions was enough to dissuade many potential challengers from attacking. Like the United States today, Rome enjoyed uncontested primacy and the deference that came with it.

Feeling thin

By the third century, however, Rome was beginning to feel the strain of keeping together such a large imperial zone. The empire’s frontiers could no longer be guaranteed against contenders growing in both number and strength.

Germanic tribes threatened in the west. Persians and nomads from the Black Sea region pressed in the east.

The frequency and intensity of barbarian attacks compelled Rome to change its military strategy. With simultaneous threats emerging on the perimeter, the legions had to be dispatched to the frontier.

Threats from within

Their deployment put an extraordinary strain on troop levels and imperial coffers. Even worse, with the legions no longer held in reserve but instead stretched precariously thin, they could no longer deter adversaries through intimidation.

Attacks on one part of the frontier therefore invited secondary attacks elsewhere. The empire also began to face threats from within. Some of the larger provinces had amassed considerable wealth and were seeking to distance themselves from Rome.

Revamping the empire

Enter Diocletian, who became emperor in 284. He offered a bold and innovative solution to the problem of imperial overstretch. The task of managing the empire, Diocletian reasoned, had grown too onerous for a single ruler.

Better to divide up the realm — and devolve responsibility for its several parts to trusted colleagues. He accordingly elevated one of his generals, Maximian, to the rank of co-emperor.

Dividing the spoils

Diocletian and Maximian each named a junior emperor, known as a Caesar, who would help run the empire and be in line to succeed his Augustus (supreme emperor). The realm was then effectively divided into two halves, and each half again divided between the Augustus and his Caesar.

Diocletian ruled the Eastern Empire with the assistance of his junior counterpart, while Maximian and his Caesar ruled the west. Diocletian also divided the larger, wealthier provinces into smaller units, disarming the threat they posed to the authority of the Augusti.

Separate, but equal?

These reforms proved effective in shoring up the security of the realm and enabling both the western and eastern portions of the empire to turn back the barbarian threat.

Over time, Rome and Constantinople emerged as separate capitals, each seeking to extend the influence and enhance the prestige of its court. The replacement of one political center with two had been formalized.

The pope

The papacy in Rome and the patriarchate in Constantinople soon joined the fray. They entered the battle over doctrinal questions — and differed as to whether religious authorities in Constantinople were of equal status to their counterparts in Rome.

Disputes over language and culture followed. The Western “Roman” Empire was based on Latin culture and language — the Eastern “Byzantine” Empire, on Greek. “New Rome” and “Old Rome” also competed over the style and grandeur of their architecture. The Western and Eastern Empires were becoming distinct political and cultural entities.

Things are getting worse

The order that unipolarity had provided was gone for good. True, the Roman Empire had already been experiencing a rapid decline before Diocletian’s time. Indeed, it was this worsening state of affairs that had inspired his reforms.

But with authority and resources now divided between east and west, the pace of decline quickened.

The end of Rome

The Western Empire maintained its integrity only until the death of Theodosius the Great in 395. Thereafter, much of its territory was overrun by Germanic tribes and other challengers.

Rome itself was sacked by Goths in 410 — and then invaded and plundered by Vandals in 455. Twenty years later, the last Roman emperor, Romulus Augustulus, ended his reign and left Italy in the hands of tribal leaders.

The church — which was to have helped secure imperial unity — did just the opposite. From the outset, church authorities in Rome and Constantinople were adversaries. The Pope in Rome and the Patriarch in Constantinople were in a constant struggle for religious and political influence, if not predominance.

Holy Smoke

Tensions grew so acute that, in 484, the papacy and the patriarchate excommunicated each other. Serious doctrinal differences helped intensify the rivalry. Matters of fierce contention concerned questions such as: Did the Holy Ghost proceed from the Father alone — or also the Son?

Was Christ one being of divine nature — or two inseparable beings, one divine and one human? Should busts and religious images play a central part in worship? Or did worshiping figures, as in Judaism and Islam, constitute idolatry?

The end of modern unipolarity?

When mingled with personal animosities, these doctrinal disputes were to mire both churches in centuries of competition and intrigue — including murders, kidnappings and lesser forms of abuse. The church nonetheless stayed nominally unified until 1054, when it formally broke into its Roman Catholic and Greek Orthodox variants.

All in all, Rome’s fate does not augur well for the current wave of transatlantic infighting. Simply put, the process of Rome’s decline foreshadows a unitary West that is in the midst of separating into distinct North American and European power centers.

Excerpted from THE END OF THE AMERICAN ERA by Charles A. Kupchan. Copyright (c) Charles A. Kupchan. With permission from the publisher, Alfred A. Knopf, a division of Random House, Inc.


Nov 30, 2006: I think the basic model of empire may now be historically outdated. What I think we are heading for might be an international corporate oligarchy, which perhaps might retain national governments to provide some services and local administration. I’m not sure this is where we are going, but the increase in the power of giant corporations is a definite trend.

Interestingly, even the Chinese Communist Party is joining the developing international upper class. Corporations must meet certain requirements to obtain permission to do business in China, and one of the requirements is that they put the children of high Communist Party officials in high executive positions. So we are seeing a merging of even the Chinese Communist Party into the multinational corporations, which may be the embryo of a world oligarchic government. Again, this is just a trend and things might go in a different direction. History is full of surprises.

James Newell in Siskiyou County, CA

Posted in Roman Empire

Ward-Perkins “The Fall of Rome: And the End of Civilization”

Bryan Ward-Perkins. 2006. The Fall of Rome: And the End of Civilization. Oxford University Press.

Notes from this book follow:

A recent Guide to Late Antiquity, published by Harvard University Press, asks us “to treat the period between around 250 and 800 as a distinctive and quite decisive period of history that stands on its own”, rather than as the story of the unravelling of a once glorious and “higher” state of civilization. This is a bold challenge to the conventional view of darkening skies and gathering gloom as the empire dissolved.

Words like ‘decline’ and ‘crisis’, which suggest problems at the end of the empire and which were quite usual into the 1970s, have largely disappeared from historians’ vocabularies, to be replaced by neutral terms, like ‘transition’, ‘change’, and ‘transformation’.

Here too old certainties are being challenged. According to the traditional account, the West was, quite simply, overrun by hostile ‘waves’ of Germanic peoples. The long-term effects of these invasions have, admittedly, been presented in very different ways, depending largely on the individual historian’s nationality and perspective. For some, particularly in the Latin countries of Europe, the invasions were entirely destructive. For others, however, they brought an infusion of new and freedom-loving Germanic blood into a decadent empire.

Unsurprisingly, an image of violent and destructive Germanic invasion was very much alive in continental Europe in the years that immediately followed the Second World War. But in the latter half of the twentieth century, as a new and peaceful Western Europe became established, views of the invaders gradually softened and became more positive.

More recently, however, some historians have gone very much further than this, notably the Canadian historian Walter Goffart, who in 1980 launched a challenge to the very idea of fifth-century ‘invasions’. He argued that the Germanic peoples were the beneficiaries of a change in Roman military policy. Instead of continuing the endless struggle to keep them out, the Romans decided to accommodate them into the empire by an ingenious and effective arrangement. The newcomers were granted a proportion of the tax revenues of the Roman state, and the right to settle within the imperial frontiers; in exchange, they ceased their attacks, and diverted their energies into upholding Roman power, of which they were now stakeholders. In effect, they became the Roman defense force.

Goffart was very well aware that sometimes Romans and Germanic newcomers were straightforwardly at war, but he argued that ‘the fifth century was less momentous for invasions than for the incorporation of barbarian protectors into the fabric of the West’. In a memorable sound bite, he summed up his argument: “what we call the Fall of the Western Roman empire was an imaginative experiment that got a little out of hand.” Rome did fall, but only because it had voluntarily delegated away its own power, not because it had been successfully invaded. Like the new and positive ‘Late Antiquity’, the idea that the Germanic invasions were in fact a peaceful accommodation has had a mixed reception. The world at large has seemingly remained content with a dramatic ‘Fall of the Roman empire’, played out as a violent and brutal struggle between invaders and invaded.

As someone who is convinced that the coming of the Germanic peoples was very unpleasant for the Roman population, and that the long-term effects of the dissolution of the empire were dramatic, I feel obliged to challenge such views.

The Germanic invaders of the western empire seized or extorted through the threat of force the vast majority of the territories in which they settled, without any formal agreement on how to share resources with their new Roman subjects. The impression given by some recent historians that most Roman territory was formally ceded to them as part of treaty arrangements is quite simply wrong. Wherever the evidence is moderately full, as it is from the Mediterranean provinces, conquest or surrender to the threat of force was definitely the norm, not peaceful settlement.

The city of Rome was repeatedly besieged by the Goths, before being captured and sacked over a three-day period in August 410. We are told that during one siege the inhabitants were forced progressively ‘to reduce their rations and to eat only half the previous daily allowance, and later, when the scarcity continued, only a third’. ‘When there was no means of relief, and their food was exhausted, plague not unexpectedly succeeded famine. Corpses lay everywhere …’ The eventual fall of the city, according to another account, occurred because a rich lady ‘felt pity for the Romans who were being killed off by starvation and who were already turning to cannibalism’, and so opened the gates to the enemy.

Unsurprisingly, the defeats and disasters of the first half of the fifth century shocked the Roman world. This reaction can be charted most fully in the perplexed response of Christian writers to some obvious and awkward questions. Why had God, so soon after the suppression of the public pagan cults (in 391), unleashed the scourge of the barbarians on a Christian empire; and why did the horrors of invasion afflict the just as harshly as they did the unjust? The scale of the literary response to these difficult questions, the tragic realities that lay behind it, and the ingenious nature of some of the answers that were produced, are all worth examining in detail. They show very clearly that the fifth century was a time of real crisis, rather than one of accommodation and peaceful adjustment. It was an early drama in the West, the capture of the city of Rome itself in 410, that created the greatest shock waves within the Roman world. In military terms, and in terms of lost resources, this event was of very little consequence, and it certainly did not spell the immediate end of west Roman power.

The pagans now, not unreasonably, attributed Roman failure to the abandonment by the State of the empire’s traditional gods, who for centuries had provided so much security and success. The most sophisticated, radical, and influential answer to this problem was that offered by Augustine, who in 413 (initially in direct response to the sack of Rome) began his monumental City of God. Here he successfully sidestepped the entire problem of the failure of the Christian empire by arguing that all human affairs are flawed, and that a true Christian is really a citizen of Heaven. Abandoning centuries of Roman pride in their divinely ordained state (including Christian pride during the fourth century), Augustine argued that, in the grand perspective of Eternity, a minor event like the sack of Rome paled into insignificance.

Most resorted to what rapidly became Christian platitudes in the face of disaster.

In a similar vein and also in early fifth-century Gaul, Orientius of Auch confronted the difficult reality that good Christian men and women were suffering unmerited and violent deaths. Not unreasonably, he blamed mankind for turning God’s gifts, such as fire and iron, to warlike and destructive ends.

Roman military dominance over the Germanic peoples was considerable, but never absolute and unshakeable. The Romans had always enjoyed a number of important advantages: they had well-built and imposing fortifications; factory-made weapons that were both standardized and of a high quality; an impressive infrastructure of roads and harbors; the logistical organization necessary to supply their army, whether at base or on campaign; and a tradition of training that ensured disciplined and coordinated action in battle, even in the face of adversity. Furthermore, Roman mastery of the sea, at least in the Mediterranean, was unchallenged and a vital aspect of supply. It was these sophistications, rather than weight of numbers, that created and defended the empire.

These advantages were still considerable in the fourth century. In particular, the Germanic peoples remained innocents at sea (with the important exception of the Anglo-Saxons in the north), and notorious for their inability to mount successful siege warfare. Consequently, small bands of Romans were able to hold out behind fortifications, even against vastly superior numbers, and the empire could maintain its presence in an area even after the surrounding countryside had been completely overrun.

‘The Alamans were physically stronger and swifter; our soldiers, through long training, more ready to obey orders. The enemy were fierce and impetuous; our men quiet and cautious. Our men put their trust in their minds; while the barbarians trusted in their huge bodies.’ At Strasbourg, at least according to Ammianus, discipline, tactics, and equipment triumphed over mere brawn.

However, even at the best of times, the edge that the Romans enjoyed over their enemies, through their superior equipment and organization, was never remotely comparable, say, to that of Europeans in the nineteenth century using rifles and the Gatling and Maxim guns against peoples armed mainly with spears. Consequently, although normally the Romans defeated barbarians when they met them in battle, they could and did occasionally suffer disasters. Even at the height of the empire’s success, in AD 9, three whole legions under the command of Quinctilius Varus, along with a host of auxiliaries, were trapped and slaughtered by tribesmen in north Germany. Some 20,000 men died.

The West was lost mainly through failure to engage the invading forces successfully and to drive them back. This caution in the face of the enemy, and the ultimate failure to drive him out, are best explained by the severe problems that there were in putting together armies large enough to feel confident of victory. Avoiding battle led to a slow attrition of the Roman position, but engaging the enemy on a large scale would have risked immediate disaster on a single throw of the dice.

Did the invaders push at the doors of a tottering edifice, or did they burst into a venerable but still solid structure? Because the rise and fall of great powers have always been of interest, this issue has been endlessly debated. Famously, Edward Gibbon, inspired by the secularist thinking of the Enlightenment, blamed Rome’s fall in part on the fourth-century triumph of Christianity and the spread of monasticism: “a large portion of public and private wealth was consecrated to the specious demands of charity and devotion; and the soldiers’ pay was lavished on the useless multitudes of both sexes, who could only plead the merits of abstinence and chastity.”

Gibbon’s ideas about the damaging effects of Christianity were fiercely contested at the time, then fell into abeyance. In the nineteenth and early twentieth centuries, the fall of Rome tended to be explained in terms of the grand theories of racial degeneration or class conflict that were then current. But in 1964 the pernicious influence of the Church was given a new lease of life by the then doyen of late Roman studies, A. H. M. Jones. Under the wonderful heading ‘Idle Mouths’, Jones lambasted the economically unproductive citizens of the late empire (aristocrats, civil servants, and churchmen): “the Christian church imposed a new class of idle mouths on the resources of the empire … a large number lived on the alms of the peasantry, and as time went on more and more monasteries acquired landed endowments which enabled their inmates to devote themselves entirely to their spiritual duties.”

In my opinion, the key internal element in Rome’s success or failure was the economic well-being of its taxpayers. This was because the empire relied for its security on a professional army, which in turn relied on adequate funding. The fourth-century Roman army contained perhaps as many as 600,000 soldiers, all of whom had to be salaried, equipped, and supplied. The number of troops under arms, and the levels of military training and equipment that could be lavished on them, were all determined by the amount of cash that was available. As in a modern state, the contribution in tax of tens of millions of unarmed subjects financed an elite defense corps of full-time fighters. Consequently, again as in a modern state, the strength of the army was closely linked to the well-being of the underlying tax base. Indeed, in Roman times this relationship was a great deal closer than it is today. Military expenditure was by far the largest item in the imperial budget, and there were no other massive departments of state, such as ‘Health’ or ‘Education’, whose spending could be cut when necessary in order to protect ‘Defense’; nor did the credit mechanisms exist in Antiquity that would have allowed the empire to borrow substantial sums of money in an emergency. Military capability relied on immediate access to taxable wealth.

Invasions were not the only problem faced by the western empire; it was also badly affected during parts of the fifth century by civil war and social unrest.

We know that what the empire required during these years was a concerted and united effort against the Goths (then marching through much of Italy and southern Gaul, and sacking Rome itself in 410), and against the Vandals, Sueves, and Alans (who entered Gaul at the very end of 406 and Spain in 409). What it got instead were civil wars, which were often prioritized over the struggle with the barbarians.

As we have seen, the revolts by the Bacaudae in the West can partly be understood as an attempt by desperate provincials to defend themselves, after the central government had failed to protect them. Roman civilians had to relearn the arts of war in this period, and slowly they did so. As early as 407-8 two wealthy landowners in Spain raised a force of slaves from their own estates, in support of their relative the emperor Honorius. But it would, of course, take time to convert a disarmed and demilitarized population into an effective fighting force;

Interestingly, the most successful resistance to Germanic invasion was in fact offered by the least Romanized areas of the empire: the Basque country; Brittany; and western Britain. Brittany and the Basque country were only ever half pacified by the invaders, while north Wales can lay claim to being the very last part of the Roman Empire to fall to the barbarians, when it fell to the English under Edward I in 1282. It seems that it was in these ‘backward’ parts of the empire that people found it easiest to re-establish tribal structures and effective military resistance.

Sophistication and specialization, characteristic of most of the Roman world, were fine, as long as they worked: Romans bought their pots from professional potters, and bought their defense from professional soldiers. From both they got a quality product–much better than if they had had to do their soldiering and potting themselves. However, when disaster struck and there were no more trained soldiers and no more expert potters around, the general population lacked the skills and structures needed to create alternative military and economic systems. In these circumstances, it was in fact better to be a little ‘backward’.

Unlike the Romans, who relied for their military strength on a professional army (and therefore on tax), freeborn Germanic males looked on fighting as a duty, a mark of status, and, perhaps, even a pleasure. As a result, large numbers of them were practiced in warfare, a very much higher proportion of the population than amongst the Romans. Within reach of the Rhine and Danube frontiers lived tens of thousands of men who had been brought up to think of war as a glorious and manly pursuit, and who had the physique and basic training to put these ideals into practice. Fortunately for the Romans, their innate bellicosity was, however, to a large extent counterbalanced by another, closely related, feature of tribal societies: disunity, caused by fierce feuds, both between tribes and within them.

Already, before the later fourth century, there had been a tendency for the small Germanic tribes of early imperial times to coalesce into larger political and military groupings. But events at the end of this century and the beginning of the next unquestionably accelerated and consolidated the trend. In 376 a disparate and very large number of Goths were forced by the Huns to seek refuge across the Danube and inside the empire. By 378 they had been compelled by Roman hostility to unite into the formidable army that defeated Valens at Adrianopolis. At the very end of 406 substantial numbers of Vandals, Alans, and Sueves crossed the Rhine into Gaul. All these groups entered a still functioning empire, and, therefore, a very hostile environment. In this world, survival depended on staying together in large numbers. Furthermore, invading armies were able to pick up and assimilate other adventurers, ready to seek a better life in the service of a successful war band. We have already met the soldiers of the dead Stilicho and the slaves of Rome, who joined the Goths in Italy in 408; but even as early as 376-8 discontents and fortune-seekers were swelling Gothic ranks, soon after they had crossed into the empire: the historian Ammianus Marcellinus tells us that their numbers were increased significantly, not only by fleeing Gothic slaves, but also by miners escaping the harsh conditions of the state’s gold mines and by people oppressed by the burden of imperial taxation.

The different groups of incomers were never united, and fought each other, sometimes bitterly, as often as they fought the ‘Romans’, just as the Roman side often gave civil strife priority over warfare against the invaders. When looked at in detail, the ‘Germanic invasions’ of the fifth century break down into a complex mosaic of different groups, some imperial, some local, and some Germanic, each jockeying for position against or in alliance with the others, with the Germanic groups eventually coming out on top.

The movements of the Goths through the Balkans, Italy, Gaul, and Spain between 376 and 419 were indeed quite unlike the systematic annexations of neighboring territory that we expect of a true invasion. These Goths on entering the empire left their homelands for good. They were, according to circumstance (and often concurrently), refugees, immigrants, allies, and conquerors, moving within the heart of an empire that in the early fifth century was still very powerful. Recent historians have been quite correct to emphasize the desire of these Goths to be settled officially and securely by the Roman authorities. What the Goths sought was not the destruction of the empire, but a share of its wealth and a safe home within it, and many of their violent acts began as efforts to persuade the imperial authorities to improve the terms of agreement between them.

The incoming peoples were not ideologically opposed to Rome; they wanted to enjoy a slice of the empire rather than to destroy the whole thing. Emperors and provincials could, and often did, come to agreements with the invaders. For instance, even the Vandals, the traditional ‘bad boys’ of this period, were very happy to negotiate treaty arrangements, once they were in a strong enough negotiating position. Indeed it is a striking but true fact that emperors found it easier to make treaties with invading Germanic armies who would be content with grants of money or land than with rivals in civil wars, who were normally after their heads.


Because the military position of the imperial government in the fifth century was weak, and because the Germanic invaders could be appeased, the Romans on occasion made treaties with particular groups, formally granting them territory on which to settle in return for their alliance.

Is it really likely that Roman provincials were cheered by the arrival on their doorsteps of large numbers of heavily armed barbarians under the command of their own king? To understand these treaties, we need to appreciate the circumstances of the time, and to distinguish between the needs and desires of the local provincials, who actually had to host the settlers, and those of a distant imperial government that made the arrangements. I doubt very much that the inhabitants of the Garonne valley in 419 were happy to have the Visigothic army settled amongst them; but the government in Italy, which was under considerable military and financial pressure, might well have agreed this settlement, as a temporary solution to a number of pressing problems. It bought an important alliance at a time when the imperial finances were in a parlous condition. At the same time it removed a roving and powerful army from the Mediterranean heartlands of the empire, converting it into a settled ally on the fringes of a reduced imperial core. Siting these allies in Aquitaine meant that they could be called upon to fight other invaders, in both Spain and Gaul. They could also help contain the revolt of the Bacaudae, which had recently erupted to the north, in the region of the Loire. It is even possible that the settlement of these Germanic troops was in part a punishment on the aristocracy of Aquitaine, for recent disloyalty to the emperor.

The interests of the center when settling Germanic peoples, and those of the locals who had to live with the arrangements, certainly did not always coincide. The granting to some Alans of lands in northern Gaul in about 442, on the orders of the Roman general Aetius, was resisted in vain by at least some of the local inhabitants. The Alans, to whom lands in northern Gaul had been assigned by the patrician Aetius to be divided with the inhabitants, subdued by force of arms those who resisted, and, ejecting the owners, forcibly took possession of the land. But, from the point of view of Aetius and the imperial government, the same settlement offered several potential advantages. It settled one dangerous group of invaders away from southern Gaul (where Roman power and resources were concentrated); it provided at least the prospect of an available ally; and it cowed the inhabitants of northern Gaul, many of whom had recently been in open revolt against the empire. All this, as our text makes very clear, cost the locals a very great deal. But the cost to the central government was negligible or non-existent, since it is unlikely that this area of Gaul was any longer providing significant tax revenues or military levies for the emperor. If things went well (which they did not), the settlement of these Alans might even have been a small step along the path of reasserting imperial control in northern Gaul.

The imperial government was entirely capable of selling its provincial subjects downriver, in the interests of short-term political and military gain.

At a number of points along the line, things might have gone differently, and the Roman position might have improved, rather than worsened. Bad luck, or bad judgment, played a very important part in what actually happened. For instance, had the emperor Valens won a stunning victory at Hadrianopolis in 378 (perhaps by waiting for the western reinforcements that were already on their way), the ‘Gothic problem’ might have been solved, and a firm example would have been set to other barbarians beyond the Danube and Rhine. Similarly, had Stilicho in 402 followed up victories in northern Italy over the Goths with their crushing defeat, rather than allowing them to retreat back into the Balkans, it is much less likely that another Germanic group in 405-6, and the Vandals, Alans, and Sueves in 406, would have taken their chances within the western empire.

How did the East Survive? The eastern half of the Roman empire survived the Germanic and Hunnic attacks of this period, to flourish in the fifth and early sixth centuries; indeed it was only a thousand years later, with the Turkish capture of Constantinople in 1453, that it came to an end. No account of the fall of the western empire can be fully satisfactory if it does not discuss how the East managed to resist very similar external pressure. Here, I believe, it was primarily good fortune, rather than innately greater strength, that was decisive.

The Cost of Peace. The new arrivals demanded and obtained a share of the empire’s capital wealth, which at this date meant primarily land. We know for certain that many of the great landowners of post-Roman times were of Germanic descent, even though we have very little information as to how exactly they had obtained their wealth at the expense of its previous owners.


The Germanic settlers rapidly used their power to acquire more wealth.

The Germanic peoples entered the empire with no ideology that they wished to impose, and found it most advantageous and profitable to work closely, within the well-established and sophisticated structures of Roman life. The Romans as a group unquestionably lost both wealth and power in order to meet the needs of a new, and dominant, Germanic aristocracy. But they did not lose everything, and many individual Romans were able to prosper under the new dispensation.

In the case of the Anglo-Saxons and others who bordered Roman territory by land or sea, the number of immigrants was probably substantially larger, since here the initial conquests could readily be followed up by secondary migration. However, except perhaps in regions that were right on the frontiers, it is unlikely that the numbers involved were so large as to dispossess many at the level of the peasantry. Many smallholders in the new kingdoms probably continued to hold their land much as before, except that much of the tax and rent that they paid will now have gone to enrich Germanic masters.


It is currently deeply unfashionable to state that anything like a ‘crisis’ or a ‘decline’ occurred at the end of the Roman empire, let alone that a ‘civilization’ collapsed and a ‘dark age’ ensued. The new orthodoxy is that the Roman world, in both East and West, was slowly, and essentially painlessly, ‘transformed’ into a medieval form. However, there is an insuperable problem with this new view: it does not fit the mass of archaeological evidence now available, which shows a startling decline in western standards of living during the fifth to seventh centuries. This was a change that affected everyone, from peasants to kings, even the bodies of saints resting in their churches. It was no mere transformation: it was decline on a scale that can reasonably be described as ‘the end of a civilization’.


The Fruits of the Roman Economy

The Romans produced goods, including mundane items, to a very high quality, and in huge quantities; and then spread them widely, through all levels of society. Because so little detailed written evidence survives for these humble aspects of daily life, it used to be assumed that few goods moved far from home, and that economic complexity in the Roman period was essentially there to satisfy the needs of the state and the whims of the elite, with little impact on the broad mass of society. However, painstaking work by archaeologists has slowly transformed this picture, through the excavation of hundreds of sites, and the systematic documentation and study of the artefacts found on them. This research has revealed a sophisticated world, in which a north-Italian peasant of the Roman period might eat off tableware from the area near Naples, store liquids in an amphora from North Africa, and sleep under a tiled roof. Almost all archaeologists, and most historians, now believe that the Roman economy was characterized, not only by an impressive luxury market, but also by a very substantial middle and lower market for high-quality functional products.

Evidence comes from the study of the different types of pottery found in such abundance on Roman sites: functional kitchen wares, used in the preparation of food; fine table wares, for its presentation and consumption; and amphorae, the large jars used throughout the Mediterranean for the transport and storage of liquids, such as wine and oil.

Pots, although not normally the heroes of history books, deserve our attention. Three features of Roman pottery are remarkable, and not to be found again for many centuries in the West: its excellent quality and considerable standardization; the massive quantities in which it was produced; and its widespread diffusion, not only geographically (sometimes being transported over many hundreds of miles), but also socially (so that it reached, not just the rich, but also the poor). In the areas of the Roman world that I know best, central and northern Italy, after the end of the Roman world, this level of sophistication is not seen again until perhaps the fourteenth century, some 800 years later.

What strikes the eye and the touch most immediately and most powerfully with Roman pottery is its consistently high quality. This is not just an aesthetic consideration, but also a practical one. These vessels are solid (brittle, but not friable), they are pleasant and easy to handle (being light and smooth), and, with their hard and sometimes glossy surfaces, they hold liquids well and are easy to wash. Furthermore, their regular and standardized shapes will have made them simple to stack and store. When people today are shown a very ordinary Roman pot, and, in particular, are allowed to handle it, they often comment on how ‘modern’ it looks and feels, and need to be convinced of its true age.

On the left bank of the Tiber in Rome, by one of the river ports of the ancient city, is a substantial hill some 50 meters high, Monte Testaccio; ‘Pottery Mountain’ is a reasonable translation into English. It is made up entirely of broken oil amphorae, mainly of the second and third centuries AD and primarily from the province of Baetica in south-western Spain. It has been estimated that Monte Testaccio contains the remains of some 53 million amphorae, in which around 6,000,000,000 liters of oil were imported into the city from overseas. Imports into imperial Rome were supported by the full might of the state and were therefore quite exceptional, but the size of operations at Monte Testaccio, and the productivity and complexity that lay behind them, none the less cannot fail to impress. This was a society with similarities to our own: moving goods on a gigantic scale, manufacturing high-quality containers to do so, and occasionally, as here, even discarding them on delivery. Like us, the Romans enjoy the dubious distinction of creating a mountain of good-quality rubbish.
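Taken at face value, the two estimates quoted above (some 53 million amphorae, around 6 billion liters of oil) imply an average capacity per vessel. A quick back-of-the-envelope check; the two totals come from the text, while the division is mine and not a figure the book states:

```python
# Back-of-the-envelope check of the Monte Testaccio estimates quoted above.
# Both totals are the book's estimates; the per-amphora capacity is simply
# their ratio, included here only as a sanity check on the two figures.
amphorae_total = 53_000_000        # estimated number of discarded amphorae
oil_liters_total = 6_000_000_000   # estimated liters of oil imported

liters_per_amphora = oil_liters_total / amphorae_total
print(f"Implied average capacity: {liters_per_amphora:.0f} liters per amphora")
# Implied average capacity: 113 liters per amphora
```

An average on the order of a hundred liters per vessel is at least the right order of magnitude for the large Baetican oil amphorae the deposit is made of, so the two estimates hang together.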

In all but the remotest regions of the empire, Roman pottery of a high standard is common on the sites of humble villages and isolated farmsteads.

Pottery in most cultures is vital in relation to one of our primary needs, food. Ceramic vessels, of different shapes and sizes, play an essential part in the storage, preparation, cooking, and consumption of foodstuffs. They certainly did so in Roman times, even more than they do today, since their importance for storage and cooking has declined considerably in modern times, with the invention of cardboard and plastics, and with the spread of cheap metal ware and glass.

Amphorae, not barrels, were the normal containers for the transport and domestic storage of liquids. There is every reason to see pottery vessels as central to the daily life of Roman times.

I am also convinced that the broad picture that we can reconstruct from pottery can reasonably be applied to the wider economy. Pots are low-value, high-bulk items, with the additional disadvantage of being brittle. In other words, no one has ever made a large profit from making a single pot (except for quite exceptional art objects), and they are difficult and expensive to pack and transport, being heavy, bulky, and easy to break. If, despite these disadvantages, vessels (both fine table wares and more functional items) were being made to a high standard and in large quantities, and if they were travelling widely and percolating through even the lower levels of society, as they were in the Roman period, then it is much more likely than not that other goods, whose distribution we cannot document with the same confidence, were doing the same. If good-quality pottery was reaching even peasant households, then the same is almost certainly true of other goods, made of materials that rarely survive in the archaeological record, like cloth, wood, basketwork, leather, and metal. There is, for instance, no reason to suppose that the huge markets in clothing, footwear, and tools were less sophisticated than that in pottery.

Further confirmation for this view can be found in an even humbler item, which also survives well in the soil but has received less scholarly attention than pottery: the roof tile.

Even buildings intended only for storage or for animals may well often have been tiled.

Tiles can be made locally in much of the Roman world, but they still require a large kiln, a lot of clay, a great deal of fuel, and expertise. After they have been manufactured, carrying them, even over short distances, without the advantages of mechanized transport, is also no mean feat. On many of the sites where they have been found, they can only have arrived laboriously, a few at a time, loaded onto pack animals. The roofs we have been looking at may not seem very important, but they represented a substantial investment in the infrastructure of rural life. A tiled roof may appeal in part because it is thought to be smart and fashionable, but it also has considerable practical advantages over roofs in perishable materials, such as thatch or wooden shingles. Above all, it will last much longer, and, if made of standardized well-fired tiles, as Roman roofs were, will provide more consistent protection from the rain: with minor upkeep, a tiled roof can function well for centuries; whereas even today a professionally laid thatch roof, of straw grown specifically for its durability, will need to be entirely remade every thirty years or so. A tiled roof is also much less likely to catch fire, and to attract insects, than wooden shingles or thatch. In Roman Italy, indeed in parts of pre-Roman Italy, many peasants, and perhaps even some animals, lived under tiled roofs. After the Roman period, sophisticated conditions such as these did not return until quite recent times.

Even smaller industries will have required considerable skills and some specialization in order to flourish, including, for example: the selection and preparation of clays and decorative slips; the making and maintenance of tools and kilns; the primary shaping of the vessels on the wheel; their refinement when half-dry; their decoration; the collection and preparation of fuel; the stacking and firing of the kilns; and the packing of the finished goods for transport. From unworked clay to finished product, a pot will have passed through many different processes and several different hands, each with its own expert role to play.

To reach the consumer then required a network of merchants and traders, and a transport infrastructure of roads, wagons, and pack animals, or sometimes of boats, ships, river- and sea-ports.

How exactly all this worked we will never know, because we have so few written records from the Roman period to document it; but the archaeological testimony of goods spread widely around their region of production, and sometimes further afield, is testimony enough to the fact that complex mechanisms of distribution did exist to link a potter at his kiln with a farmer needing a new bowl to eat from.

Wrecks filled with amphorae are so common that two scholars have recently wondered whether the volume of Mediterranean trade in the second century AD was again matched before the nineteenth century.

I am keen to emphasize that in Roman times good-quality articles were available even to humble consumers, and that production and distribution were complex and sophisticated. In many ways, this is a world like our own; but it is also important to try and be a little more specific. Although this is inevitably a guess, I think we are looking at a world that is roughly comparable, in terms of the range and quality of goods available, to that of the thirteenth to fifteenth centuries, rather than at a mirror image of our own times. The Roman period was not characterized by the consumer frenzy and globalized production of the modern developed world, where mechanized production and transport, and access to cheap labor overseas, have produced mountains of relatively inexpensive goods, often manufactured thousands of miles away. In Roman times machines still played only a relatively small part in manufacture, restricting the quantity of goods that could be made; and everything was transported by humans and animals, or, at best, by the wind and the currents. Consequently, goods imported from a distance were inevitably more expensive and more prestigious than local products.

Although some goods traveled remarkable distances, the majority of consumption was certainly local and regional: Roman pottery, for instance, is always much commoner near its production site than in more distant areas.

Many people were able to buy at least a few of the more expensive products from afar.

However, even if many would now choose to prioritize the role of the merchant over that of the state, no one would want to deny that the impact of state distribution was also considerable. Monte Testaccio alone testifies to a massive state effort with a wide impact: on Spanish olive-growers; on amphora-manufacturers; on shippers; and, of course, on the consumers of Rome itself, who thereby had their supply of olive oil guaranteed. The needs of the imperial capitals, like Rome and Constantinople, and of an army of around half a million men, stationed mainly on the Rhine and Danube and on the frontier with Persia, were very considerable, and the impressive structures that the Roman state set up to supply them are at least partially known from written records.

The distributive activities of the state and of private commerce have sometimes been seen as in conflict with each other; but in at least some circumstances they almost certainly worked together to mutual advantage. For instance, the state coerced and encouraged shipping between Africa and Italy, and built and maintained the great harbor works at Carthage and Ostia, because it needed to feed the city of Rome with huge quantities of African grain. But these grain ships and facilities were also available for commercial and more general use.

The End of Complexity. In the post-Roman West, almost all this material sophistication disappeared. Specialized production and all but the most local distribution became rare, unless for luxury goods; and the impressive range and quantity of high-quality functional goods, which had characterized the Roman period, vanished, or, at the very least, were drastically reduced. The middle and lower markets, which under the Romans had absorbed huge quantities of basic, but good-quality, items, seem to have almost entirely disappeared. Pottery, again, provides us with the fullest picture. In some regions, like the whole of Britain and parts of coastal Spain, all sophistication in the production and trading of pottery seems to have disappeared altogether: only vessels shaped without the use of the wheel were available, without any functional or aesthetic refinement. In Britain, most pottery was not only very basic, but also lamentably friable and impractical. In other areas, such as the north of Italy, some solid wheel-turned pots continued to be made and some soapstone vessels imported, but decorated table wares entirely, or almost entirely, disappeared; and, even amongst kitchen wares, the range of vessels being manufactured was gradually reduced to only a very few basic shapes. By the seventh century, the standard vessel of northern Italy was the olla (a simple bulbous cooking pot), whereas in Roman times this was only one vessel type in an impressive batterie de cuisine (jugs, plates, bowls, serving dishes, mixing and grinding bowls, casseroles, lids, amphorae, and others).

The great tableware producers of Roman North Africa continued to make (and export) their wares throughout the fifth and sixth centuries, and indeed into the latter half of the seventh. But the number of pots exported and their distribution became gradually more and more restricted, both geographically (to sites on the coast, and eventually, even there, only to a very few privileged centers like Rome), and socially (so that African pottery, once ubiquitous, by the sixth century is found only in elite settlements).

It was not only quality and diversity that declined; the overall quantities of pottery in circulation also fell dramatically.

Rome continued to import amphorae and table wares from Africa even in the late seventh century, and it was here, in the eighth century, that one of the very first medieval glazed wares was developed. These features are impressive, suggesting the survival within the city of something close to a Roman-style ceramic economy. But, even in this exceptional case, a marked decline from earlier times is evident, if we look at overall quantities.

In the Mediterranean region, the decline in building techniques and quality was not quite so drastic: what we witness here, as with the history of pottery production, is a dramatic shrinkage, rather than a complete disappearance. Domestic housing in post-Roman Italy, whether in town or countryside, seems to have been almost exclusively of perishable materials. Houses, which in the Roman period had been primarily of stone and brick, disappeared, to be replaced by settlements constructed almost entirely of wood. Even the dwellings of the landed aristocracy became much more ephemeral, and far less comfortable: archaeologists, despite considerable efforts, have so far failed to find any continuity into the late-sixth and seventh centuries of the impressive rural and urban houses that had been a ubiquitous feature of the Roman period, with their solid walls, marble and mosaic floors, and refinements such as underfloor heating and piped water.

It may have been as much as a thousand years later, perhaps in the fourteenth or fifteenth centuries, that roof tiles again became as readily available and as widely diffused in Italy as they had been in Roman times. In the meantime, the vast majority of the population made do with roofing materials that were impermanent, inflammable, and insect-infested. Furthermore, this change in roofing was not an isolated phenomenon, but symptomatic of a much wider decline in domestic building standards-early medieval flooring, for instance, in all but palaces and churches, seems to have been generally of simple beaten earth.

Coinage is undoubtedly a great facilitator of commercial exchange: copper coins, in particular, for small transactions. In the absence of coinage, raw bullion can serve for major purchases, and barter for minor ones; barter can admittedly be much more sophisticated than we might initially suppose. But barter requires two things that coinage can circumvent: the need for both sides to know, at the moment of agreement, exactly what they want from the other party; and, particularly in the case of an exchange that involves one party being ‘paid back’ in the future, a strong degree of trust between those who are doing the exchanging. If I want to exchange one of my cows for a regular supply of eggs over the next five years, I can do this, but only if I trust the chicken-farmer. Barter suits small face-to-face communities, in which trust either already exists between parties, or can be readily enforced through community pressure. But it does not encourage the development of complex economies, where goods and money need to circulate impersonally. In a monied economy, I can exchange my cow for coins, and only later, and perhaps in a distant place, decide when and how to spend them. I need only trust the coins that I receive.

A Return to Prehistory? The economic change that I have outlined was an extraordinary one. What we observe at the end of the Roman world is not a ‘recession’ with an essentially similar economy continuing to work at a reduced pace. Instead what we see is a remarkable qualitative change, with the disappearance of entire industries and commercial networks. The economy of the post-Roman West is not that of the fourth century reduced in scale, but a very different and far less sophisticated entity. This is at its starkest and most obvious in Britain. A number of basic skills disappeared entirely during the fifth century, to be reintroduced only centuries later; among them was the technique of building in mortared stone or brick.

All over Britain the art of making pottery on a wheel disappeared in the early fifth century, and was not reintroduced for almost 300 years.

Rare elite items, made or imported for the highest levels of society, did survive. At this level, beautiful objects were still being made, and traded or gifted across long distances. What had totally disappeared, however, were the good-quality, low-value items, made in bulk, and available so widely in the Roman period.

The complex system of production and distribution, whose disappearance we have been considering, was an older and more deeply rooted phenomenon than an exclusively ‘Roman’ economy. Rather, it was an ‘ancient’ economy that in the eastern and southern Mediterranean was flourishing long before Rome became at all significant, and that even in the north-western Mediterranean was developing steadily before the centuries of Roman domination. Cities such as Alexandria, Antioch, Naples and Marseille were ancient long before they fell under Roman control.


What was destroyed in the post-Roman centuries, and then only very slowly re-created, was a sophisticated world with very deep roots indeed.

Patterns of Change. There was no single moment, nor even a single century of collapse. The ancient economy disappeared at different times and at varying speeds across the empire.

There is general agreement that Roman Britain’s sophisticated economy disappeared remarkably quickly and remarkably early. There may already have been considerable decline in the later fourth century, but, if so, this was a recession, rather than a complete collapse: new coins were still in widespread use and a number of sophisticated industries still active. In the early fifth century all this disappeared, and, as we have seen in the previous chapter, Britain reverted to a level of economic simplicity similar to that of the Bronze Age, with no coinage, and only hand-shaped pots and wooden buildings. Further south, in the provinces of the western Mediterranean, the change was much slower and more gradual, and is consequently difficult to chart in detail. But it would be reasonable to summarize the change in both Italy and North Africa as a slow decline, starting in the fifth century (possibly earlier in Italy), and continuing on a steady downward path into the seventh. Whereas in Britain the low point had already been reached in the fifth century, in Italy and North Africa it probably did not occur until almost two centuries later, at the very end of the sixth century, or even, in the case of Africa, well into the seventh. Turning to the eastern Mediterranean, we find a very different story. The best that can be said of any western province after the early fifth century is that some regions continued to exhibit a measure of economic complexity, although always within a broad context of decline. By contrast, throughout almost the whole of the eastern empire, from central Greece to Egypt, the fifth and early sixth centuries were a period of remarkable expansion. We know that settlement not only increased in this period, but was also prosperous, because it left behind a mass of newly built rural houses, often in stone, as well as a rash of churches and monasteries across the landscape (Fig. 6.2).
New coins were abundant and widely diffused, and new potteries, supplying distant as well as local markets, developed on the west coast of modern Turkey, in Cyprus, and in Egypt. Furthermore, new types of amphora appeared, in which the wine and oil of the Levant and of the Aegean were transported both within the region, and outside it, even as far as Britain and the upper Danube. If we measure ‘Golden Ages’ in terms of material remains, the fifth and sixth centuries were certainly golden for most of the eastern Mediterranean, in many areas leaving archaeological traces that are more numerous and more impressive than those of the earlier Roman empire. In the Aegean, this prosperity came to a sudden and very dramatic end in the years around AD 600. Great cities such as Corinth, Athens, Ephesus, and Aphrodisias, which had dominated the region since long before the arrival of the Romans, shrank to a fraction of their former size: the recent excavations at Aphrodisias suggest that the greater part of the city became in the early seventh century an abandoned ghost town, peopled only by its marble statues. The tablewares and new coins, which had been such a prominent feature of the fifth and sixth centuries, disappeared with a suddenness similar to the experience of Britain some two centuries earlier.

My focus here, however, will be on what happened after the invasions began. The evidence available very strongly suggests that political and military difficulties destroyed regional economies, irrespective of whether they were flourishing or already in decline. The death of complexity in Britain in the early fifth century must certainly have been closely related to the withdrawal of Roman power from the province, since the two things happened at more or less the same time.

All regions, except Egypt and the Levant, suffered from the disintegration of the Roman empire, but distinctions between the precise histories of different areas show that the impact of change varied quite considerably. In Britain in the early fifth century, and in the Aegean world around AD 600, collapse seems to have happened suddenly and rapidly, as though caused by a series of devastating blows. But in Italy and Africa change was much more gradual, as if brought about by the slow decline and death of complex systems. These different trajectories make considerable sense. The Aegean was hit by repeated invasion and raiding at the very end of the sixth century, and throughout the seventh: first by Slavs and Avars (in Greece), then by Persians (in Asia Minor), and finally by Arabs (on both land and sea).

The effect of the disintegration of the Roman state cannot have been wholly dissimilar to that caused by the dismemberment of the Soviet command economy after 1989. The Soviet structure was, of course, a far larger, more complex, and all-inclusive machine than the Roman. But most of the former Communist bloc has faced the problems of adjustment to a new world in a context of peace, whereas, for the Romans of the West, the end of the state economy coincided with a prolonged period of invasion and civil war. The emperors also maintained, primarily for their own purposes, much of the infrastructure that facilitated trade: above all a single, abundant, and empire-wide currency; and an impressive network of harbours, bridges, and roads. The Roman state minted coins less for the good of its subjects than to facilitate the process of taxing them; and roads and bridges were repaired mainly in order to speed up the movement of troops and government envoys. But coins in fact passed through the hands of merchants, traders, and ordinary citizens far more often than those of the taxman; and carts and pack animals travelled the roads much more frequently than did the legions. With the end of the empire, investment in these facilities fell dramatically: in Roman times, for instance, there had been a continuous process of upgrading and repairing the road network, commemorated by the erection of dated milestones; there is no evidence that this continued in any systematic way beyond the early sixth century.

Security was undoubtedly the greatest boon provided by Rome.

It is a remarkable fact that few cities of the early empire were walled, a state of affairs not repeated in most of Europe and the Mediterranean until the late nineteenth century, and then only because high explosives had rendered walls ineffective as a form of defence. The security of Roman times provided the ideal conditions for economic growth.

There were also other problems that played a subsidiary role. In 541, for instance, bubonic plague reached the Mediterranean.

Economic sophistication has a negative side.

Because the ancient economy was in fact a complicated and interlocked system, its very sophistication rendered it fragile and less adaptable to change. For bulk, high-quality production to flourish in the way that it did in Roman times, a very large number of people had to be involved, in more or less specialized capacities. First, there had to be the skilled manufacturers, able to make goods to a high standard, and in a sufficient quantity to ensure a low unit-cost. Secondly, a sophisticated network of transport and commerce had to exist, in order to distribute these goods efficiently and widely. Finally, a large (and therefore generally scattered) market of consumers was essential, with cash to spend and an inclination to spend it. Furthermore, all this complexity depended on the labour of the hundreds of other people who oiled the wheels of manufacture and commerce by maintaining an infrastructure of coins, roads, boats, wagons, wayside hostelries, and so on. Economic complexity made mass-produced goods available, but it also made people dependent on specialists or semi-specialists, sometimes working hundreds of miles away, for many of their material needs. This worked very well in stable times, but it rendered consumers extremely vulnerable if for any reason the networks of production and distribution were disrupted, or if they themselves could no longer afford to purchase from a specialist. If specialized production failed, it was not possible to fall back immediately on effective self-help. Comparison with the contemporary western world is obvious and important. Admittedly, the ancient economy was nowhere near as intricate as that of the developed world in the twenty-first century. We sit in tiny productive pigeon-holes, making our minute and highly specialized contributions to the global economy, and we are wholly dependent for our needs on thousands, indeed hundreds of thousands, of other people spread around the globe, each doing their own little thing.
We would be quite incapable of meeting our needs locally, even in an emergency. The ancient world had not come as far down the road of specialization and helplessness as we have.

The enormity of the economic disintegration that occurred at the end of the empire was almost certainly a direct result of this specialization. The post-Roman world reverted to levels of economic simplicity, lower even than those of immediately pre-Roman times, with little movement of goods, poor housing, and only the most basic manufactured items.

The sophistication of the Roman period, by spreading high-quality goods widely in society, had destroyed the local skills and local networks that, in pre-Roman times, had provided lower-level economic complexity. It took centuries for people in the former empire to reacquire the skills and the regional networks that would take them back to these pre-Roman levels of sophistication.


Food production may also have slumped, causing a steep drop in the population. Almost without exception, archaeological surveys in the West have found far fewer rural sites of the fifth, sixth, and seventh centuries AD than of the early empire. In many cases, the apparent decline is startling, from a Roman landscape that was densely settled and cultivated, to a post-Roman world that appears only very sparsely inhabited (Fig. 7.1a, b). Almost all the dots that represent Roman-period settlements disappear, leaving only large empty spaces. At roughly the same time, evidence for occupation in towns also decreases dramatically: the fall in the number of rural settlements was certainly not produced by a flight from the countryside into the cities.

Since economic complexity definitely increased the quality and quantity of manufactured goods, it is more likely than not that it also increased production of food, and therefore the number of people the land could feed. Archaeological evidence, from periods of prosperity, does indeed seem to show a correlation between increasing sophistication in production and marketing, and a rising population.

However sophisticated Roman agriculture was, harvests could still fail, and, when they did, transport was not cheap or rapid enough to bring in the large quantities of affordable grain that could have saved the poor from starvation. Edessa in Mesopotamia was one of the richest cities of the Roman East, surrounded by prosperous arable farming. But in AD 500 a swarm of locusts consumed the wheat harvest; a later harvest, of millet, also failed. For the poor, disaster followed. The price of bread shot up, and people were forced to sell their few possessions for a pittance in order to buy food. Many tried, in vain, to assuage their hunger with leaves and roots. Those who could, fled the region; but crowds of starving people flocked into Edessa and other cities, to sleep rough and to beg: ‘They slept in the colonnades and streets, howling night and day from the pangs of hunger.’ Here disease and the cold nights of winter killed large numbers of them; even collecting and burying the dead became a major problem.


If we ask ourselves how the ability to read and write came to be so widespread in the Roman world, the answer probably lies in a number of different developments, which all encouraged the use of writing. In particular, there is no doubt that the complex mechanism of the Roman state required literate officials at all levels of its operations. There was no other way that the state could raise taxes in coin or kind from its provincials, assemble the resulting profits, ship them across long distances, and consume or spend them where they were needed. A great many lists and tallies will have been needed to ensure that a gold solidus raised in one of the peaceful provinces of the empire, like Egypt or Africa, was then spent effectively to support a soldier on the distant frontiers of Mesopotamia, the Danube, or the Rhine.

In Italy, the primacy of ancient civilization is seldom doubted, and a traditional view of the end of the Roman world is very much alive. Most Italians are with me in remaining highly skeptical about a peaceful ‘accommodation’ of the barbarians, and the ‘transformation’ of the Roman world into something new and equally sophisticated. The idea that the Germanic incomers were peaceful immigrants, who did no harm, has not caught on.

The historians who have argued for a new and rosy Late Antiquity are primarily North Americans, or Europeans based in the USA, and they have shifted their focus right out of the western Roman empire. Much of the evidence that sustains the new and upbeat Late Antiquity is rooted firmly in the eastern Mediterranean, where, as we have seen, there is good evidence for prosperity through the fifth and sixth centuries, and indeed into the eighth in the Levant.

Until fairly recently it was institutional, military, and economic history that dominated historians’ views of the fourth to seventh centuries. Quite the reverse is now the case, at least in the USA. Of the thirty-six volumes so far published by the University of California Press in a series entitled ‘The Transformation of the Classical Heritage’, thirty discuss the world of the mind and spirit (primarily different aspects of Christian thought and practice); only five or six cover more secular topics (such as politics and administration); and none focuses on the details of material life.