Pedro Prieto on Population

[I haven’t always agreed with Pedro about population, but I think he’s right now. It’s too late to do anything about overshoot, and we probably could never have done anything about it, because we are animals. Still, it is a shame we couldn’t have had state-level incentives, such as lower taxes or better education for families with 2 children or fewer, and lower immigration levels so that countries with high birth rates didn’t have escape valves, but it’s too late now. Given how the rich benefit from excess population by paying lower wages, the Ponzi scheme of pensions and Social Security requiring ever more workers, and businesses and religions seeking to grow their customers, overshoot and dieoff were inevitable. If there is a dim light, perhaps the length of the dark times will be shorter since the overshoot is so high. Alice]

May 7, 2015 Pedro replies within a thread about population

Evidence #1. Population is in global overshoot today.

Evidence #2. All living beings have, by nature, exponential reproduction capabilities and rates, limited only by the access to resources in their environment.
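Pedro's Evidence #2 is the standard logistic-growth picture: reproduction is exponential until resource limits bind. As an illustrative sketch only (the starting population, growth rate, and carrying capacity below are hypothetical numbers, not estimates for any real species):

```python
# Logistic growth: dN/dt = r*N*(1 - N/K).
# Growth is effectively exponential while N << K, then flattens
# as the resource limit (carrying capacity K) binds.
def logistic_step(n, r, k, dt=1.0):
    """Advance population n by one Euler step of logistic growth."""
    return n + r * n * (1 - n / k) * dt

n, r, k = 100.0, 0.05, 10_000.0  # hypothetical start, growth rate, capacity
for _ in range(400):
    n = logistic_step(n, r, k)
print(round(n))  # approaches, but does not exceed, the carrying capacity k
```

Overshoot, in this framing, is what happens when a population temporarily pushes past K (for humans, by drawing down one-time resources), after which the balance is restored one way or another.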

But I have always found it difficult, and sometimes immoral or even useless, to suggest, propose, mandate, or legislate to my fellow humans how they should (or must) proceed with their natural instincts in order to solve this imbalance. If we force humans by legal coercion to limit their natural reproductive capabilities, we are doing something wrong, in my opinion.

Disclaimer #1. I am fully respectful of couples freely deciding to have offspring below the minimum reproduction rate needed to sustain the species (below 2.2 children per couple, on average, for instance).

Disclaimer #2. I am also very respectful of couples that freely decide not to have babies.

Disclaimer #3. I am, finally, very respectful of couples that voluntarily use whatever contraceptive methods are available to them.

But I believe we are totally mistaken when we plan to force millions of couples to “decouple” from their natural instincts and put in place policies like the “one child policy” in China. The fact that some demographers still believe it was a success (it is now being abandoned), because otherwise China might today have some 400 million more inhabitants than its current 1.3 billion, is not sufficient evidence that policies of this type go to the heart of population overshoot. They are only a temporary delay.

I am contrary, absolutely contrary, to sending missionaries or doctors to impoverished countries to sterilize women without their informed consent, under the alibi of attending them in childbirth, or to dropping condoms with how-to-use pamphlets from planes onto populations that are instead waiting for vaccines, aspirin, potable water or the means to obtain it, anti-diarrhea pills, or just a barefoot doctor. First things first.

If we are in overshoot, we first need to know HOW MUCH we are in overshoot.

Then, in second place, we would need to understand WHY something that was not needed for 2 million years (legal coercion of reproduction) is now needed or badly needed, and whether the proposed measures make any sense or have a minimum possibility of succeeding.

The most probable answer to this second question is that the core of the population overshoot lies more in our reversible industrial and technological way of living (and even in our agricultural way of living) than in our irreversible natural way of reproduction in itself.

This then brings up the question of what way of living we aspire to for a given population.

Nate projects a chart in some of his presentations on the evolution of living beings since the Earth was formed (credit: William Stanton). To the best of anthropological knowledge, we had never before been in global overshoot. Perhaps we were in overshoot in some specific, limited areas or regions, for limited human groups, and for very limited periods of time.

We managed to live without overshoot through the entire period in which we evolved from habilis to erectus, then to sapiens and sapiens-sapiens, once clearly differentiated from the non-human great apes. During all this long period of time, we managed very well to survive as a species and also managed, in all our absolute savageness (Rousseau), to respect other species (though of course not the individuals of other species that we hunted or gathered for our own survival).


And we kept, very well, to a stationary population level of a few hundred thousand, or at most a very few million, individuals of our particular species.


What Stanton called the “human population spike” is a modern development, dating from when we began to practice agriculture and domesticate animals about 10,000 years ago. Then we had a spike within the spike with fossil-fuel-powered machines about 150 years ago, and finally a hyper-spike within the spike about 30-50 years ago, with the advent of modern technology.

I am very much convinced that we are going to return, sooner rather than later, to a stationary population level of the order of magnitude that prevailed before the Stanton population spike. Difficult and dramatic as it may seem, I hope and wish humans will at least be able to hold at the original levels, rather than disappearing.

That is why I cannot understand very well the worries and concerns of many in developed countries, about global population growth per se.

The drama of falling from 7.2 or perhaps 8 billion, depending on population inertia, to a few hundred thousand or a few million is going to last perhaps one or two generations. That is all (and it will not be little in terms of human suffering). Then we will return to normality for another, I hope, long period of time.

The present global population status is an obvious anomaly, for which we cannot blame the Yanomami in Amazonia, the Jivaro in the upper Amazon, or the Asmat in Irian Jaya, or hunter-gatherers having as many offspring as they can, when nature takes the decision to balance populations to a sustainable level.

The anomaly, in my opinion, resides more in the model of society we have created (one that even most environmentalists hate to abandon or give up) than in our natural exponential reproductive capabilities. Trying to stay in this ultra-consumerist society while correcting or suppressing those capabilities is an attempt to avoid the unavoidable.

Posted in Pedro Prieto, Population

Peak oil in 2015 for Russia & America means peak world

by Ron Patterson on March 5, 2015

Since around 2005 many countries have increased their oil production, but more have decreased. The combined production of the United States and Russia, however, has kept the world on a slight uptrend since that time.

World since 2000

World oil production jumped in 2011, hardly moved at all in 2013, but was up by more than 1.5 million barrels per day in 2014. After such a huge gain, everyone and their brother were singing “peak oil is dead.” But if you scroll down through the 37 major world oil producers, it becomes obvious that a majority of nations have peaked and most of them are in steep decline.

The above chart is EIA data; however, the next four charts below are JODI data, with the last data point being February 2015. The data on all charts are in thousand barrels per day.

However, over the last decade it has been two of the world’s three largest oil producers, the USA and Russia, that have kept us from peak oil.

USA and Russia

Russia grew like gangbusters in the first six years of this century but has slowed down considerably in the last five years or so while the US, due to the shale revolution, has had four years of dramatic growth.

Less Russia & USA

Using a stacked, zero-based chart, it looks like nothing much has happened since early 2005. And that is correct: the USA and Russia have kept production inching slightly upward while the rest of the world slightly declines.

World less USA & Russia

Here we get an amplified view of the World less USA & Russia. The peak was in February 2006 and February 2015 is over 2,600,000 barrels per day below that point.

We have discussed, in several posts, why many of us believe that the USA has peaked, or will peak this year. But what about Russia? Is Russia at her peak also?

I have taken another look at the Global and Russian Energy Outlook to 2040 by two Russian think tanks, The Energy Research Institute of the Russian Academy of Sciences and The Analytical Center for the Government of the Russian Federation that was published last year. I never noticed it before but they actually predict peak oil. On page 35 of this study they say:

Conventional oil (excluding NGL) production will drop to 3.1 billion tonnes by 2040 from the current 3.4 billion tonnes, and the long-discussed ‘conventional oil peak’ will occur in the period from 2015 to 2020. The drop in its extraction will be due to the gradual working-out of reserves of the largest existing fields.

3.4 billion tons per year works out to be 68,000,000 barrels per day and world C+C was about 10 million barrels per day above that number so I don’t know what they are counting, perhaps crude only.
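Ron's conversion can be checked with the commonly used average of about 7.33 barrels per tonne of crude (the exact factor varies with crude density, so this is an approximation):

```python
# Convert annual oil production in tonnes to barrels per day.
BBL_PER_TONNE = 7.33  # common average conversion factor for crude oil

def tonnes_per_year_to_bpd(tonnes_per_year):
    """Annual tonnes -> average barrels per day."""
    return tonnes_per_year * BBL_PER_TONNE / 365

print(f"{tonnes_per_year_to_bpd(3.4e9):,.0f} bpd")  # roughly 68 million bpd
```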

They predict that Russian exports of all petroleum products will peak in 2015. Page 111:

Exports of petroleum products will peak in 2015 and will then gradually decrease until they reach 2010 levels by as early as 2040, mainly due to the decrease in exports of fuel oil and non-marketed petroleum products.

Then on pages 132 and 133 they predict the peak C+C for Russia

In Outlook 2014’s Baseline Scenario, production of oil and gas condensate in the Russian Federation reaches a peak and gradually declines, from 523 million tonnes in 2013 to 522 million tonnes by 2015, after which it continues to decline, right up to the end of the period, to a level of 468 million tonnes. This reduction in production is, for the most part, brought about by the working out of already exploited deposits in the key oil producing regions of the country (in Western Siberia).

They said that Russia peaked in 2013, the year before this document was published, at 10.46 million bpd, then declines to 10.44 million bpd by 2015. That is a tiny, almost imperceptible decline of 20,000 bpd, or 0.2%, over two years. They could have said that Russia would plateau in 2013 and remain on that plateau through 2015, which is exactly what has happened… so far anyway.
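The quoted 523-to-522-million-tonne drop converts to roughly the 20,000 bpd / 0.2% figures above, again assuming about 7.33 barrels per tonne:

```python
BBL_PER_TONNE = 7.33  # approximate average for crude oil

def to_bpd(tonnes_per_year):
    """Annual tonnes -> average barrels per day."""
    return tonnes_per_year * BBL_PER_TONNE / 365

peak_2013 = to_bpd(523e6)  # about 10.5 million bpd
by_2015 = to_bpd(522e6)
decline = peak_2013 - by_2015  # ~20,000 bpd, ~0.2% of the peak
print(f"decline: {decline:,.0f} bpd ({decline / peak_2013:.2%})")
```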

Russia from Jan 10

Here is Russian C+C production through February 2015. It appears now that the peak will be 2014 and 2015 which means that they are at peak right now. The spikes in 2011 were likely caused by the huge Western Siberian wildfires they had that year.

It is interesting to note that both JODI and the EIA report Russian C+C production at about half a million barrels per day less than what the Russian official web site CDU TEK reports.

Figure 3.23

Here they have the peak in 2015 but only a slow decline from here on out. Notice that about 60 percent of all Russian oil comes from those very old Western Siberian supergiant fields, with that percentage declining only very slightly in the future. How can that be? They drilled 8,688 new wells in Russia last year, most of them infill wells in Western Siberia. Do they really expect to poke more holes in those old fields and continue to get oil from them for another 25 years… or more?

Well yes and no. The chart below shows where they expect all that new oil to come from.

Reserve Growth

As you can see from the shrinking red column they expect those old fields to decline rather dramatically. But at the same time they expect them to grow. By 2040 they expect fully half of their production to come from “reserves growth”. And they are not bashful in admitting such:

One should point out the significant role that will need to be played by geological exploration during the forecast period, since by 2040 more than 50 per cent of production in all scenarios will need to come from growth in reserves, and final reconnaissance of fields resulting in category C2 reserves becoming category C1.

So those tired old Western Siberian fields will shrink but at the same time they will grow. But in all fairness they will not grow quite as fast as they shrink.

Despite the fall in production in the Baseline Scenario, even at the end of the forecast period the key production capacities of the country will continue to be concentrated in the Tyumen region, with its share accounting for 51 per cent of all crude oil and gas condensate production by 2040 (compared with 61 per cent in 2010).

The Tyumen region and the areas surrounding it are the areas in Western Siberia where their oil fields are. So the share of Russian oil production from this region will go from 61% in 2010 to 51% in 2040. Because those tired old fields are gonna grow!

Bottom line, USA peaks in 2015 and Russia peaks in 2015 which means the world peaks in 2015. Also many other nations that have increased production over the last few years are also at peak and will be declining soon. And Russia will be declining just a whole lot faster than those two think tanks believe they will. Those old reserves are not going to grow nearly as much as they think they will.

Of course there is a possibility that the peak could actually be in 2014 or even 2016, but I am firmly convinced that we are at peak oil right now.  If you have a counter argument I would love to hear it so please post it in the comments below.

I have published a new page, World Oil Yearly Production Charts with annual data charts for all the world’s major oil producers.

Posted in Peak Oil, Peak Oil Barrel

The Banking System is a House of Cards

Lynn Parramore. April 21, 2015. Our Banking System is a Giant House of Cards. It Could Fall On You. Money & Banking.

Anat Admati teaches finance and economics at the Stanford Graduate School of Business and is co-author of The Bankers’ New Clothes, a classic account of the problem of Too Big to Fail banks.

On May 6th, at the Finance and Society Conference sponsored by the Institute for New Economic Thinking, she will join Brooksley Born, former Chair of the Commodity Futures Trading Commission, to discuss how effective financial regulation can make the system work better for society.

Seven years after the worst financial crisis since the Great Depression, Admati warns that we are not doing nearly enough to confront a bloated, inefficient, and dangerous financial system. The system can’t fix itself. Here’s what you need to know.

Lynn Parramore: How would you describe the problem of Too Big to Fail banks? Why does it matter to an ordinary person?

Anat Admati: Too Big to Fail is a license for recklessness. These institutions defy notions of fairness, accountability, and responsibility. They are the largest, most complex, and most indebted corporations in the entire economy.

We all have to be really alarmed by the fact that not only do we still have such institutions, but many of them are ever-larger and more complex and at least as dangerous, if not more so, than they were before the financial crisis.

They are too big to manage and control. They take enormous risks that endanger everybody. They benefit from the upside and expose the rest of us to the downside of their decisions. These banks are too powerful politically as well.

As they seek profits, they can make wasteful and inefficient loans that harm ordinary people, and at the same time they might refuse to make certain business loans that can help the economy. They can even break the laws and regulations without the people responsible being held accountable. Effectively we’re hostages because their failure would be so harmful. They’re likely to be bailed out if their risks don’t turn out well.

Ordinary people continue to suffer from a recession that was greatly exacerbated or even caused by recklessness in the financial system and failed regulation. But the largest institutions, especially their leaders — even in the failed ones — have suffered the least. They’re thriving again and arguably benefiting the most from efforts to stimulate the economy.

So there’s something wrong with this picture. And there’s also increasing recognition that bloated banks and a bloated financial system – these huge institutions—are a drag on the economy.

LP: Have we made any progress in dealing with the problem?

AA: The progress has been totally unfocused and insufficient. Dodd-Frank claims to have solved the problem and it gives plenty of tools to regulators to do what needs to be done (many of these tools they actually already had before). But this law is really complex and the implementation of it is very messy. The lobbying by the financial industry is a large part of the reason that the law has been implemented so poorly and inefficiently with so much difficulty. We are failing to take simple steps and at the same time undertaking extremely costly steps with doubtful benefits.

So we’ve had far from enough progress. We are told things are better but they are nowhere near what we should expect and demand. Much more can be done right now.

LP: Banks, compared to other businesses, finance an enormous portion of their assets with borrowed money, or debt – as much as 95%. Yet bankers often claim that this is perfectly fine, and if we make them depend less on debt they will be forced to lend less. What is your view? Would asking banks to rely more on unborrowed money, or equity, somehow hurt the economy?

AA: Sometimes when I don’t have time to unpack everything I use a quote from a book called Payoff: Why Wall Street Always Wins by Jeff Connaughton. He relates something Paul Volcker once said to Senator Ted Kaufman: “You know, just about whatever anyone proposes, no matter what it is, the banks will come out and claim that it will restrict credit and harm the economy…It’s all bullshit.”

Here’s one obvious reason such claims are, in Volcker’s vocabulary, bullshit: Lending suffered most when banks didn’t have enough equity to absorb their losses in the crisis — and then we had to bail them out. The loss they suffered on the subprime fiasco was relatively small by comparison to losses to investors when the Internet bubble burst, but there was so much debt throughout the system, and indeed in the housing markets, and so much interconnection that the entire financial system almost collapsed. That’s when lending suffered. So lending and growth suffer when the banks have too little equity, not too much.

Now, banks naturally have some debt, like deposits. But they don’t feel indebted even when they rely on 95% debt to finance their assets. No other healthy company lives like that, and nobody, even banks, needs to live like that — that’s the key. Normally, the market would not allow this to go on; those who are as heavily indebted feel the burden in many ways. The terms of the debt become too burdensome for corporations, and reflect the inefficient investment decisions made by heavily indebted companies. But banks have much nicer creditors, like depositors, and with many explicit and implicit guarantees, banks don’t face trouble or harsh terms. They only have to convince the regulators to let them get away with it. And they do.
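The fragility Admati describes can be shown with simple balance-sheet arithmetic (the numbers below are illustrative round figures, not drawn from any particular bank):

```python
# With 95% debt financing, a small drop in asset values wipes out equity.
def loss_that_wipes_equity(assets, debt):
    """Fractional fall in asset value that leaves the firm insolvent."""
    equity = assets - debt
    return equity / assets

bank = loss_that_wipes_equity(assets=100, debt=95)     # 95% debt-financed
typical = loss_that_wipes_equity(assets=100, debt=50)  # 50% debt-financed
print(f"95%-levered bank fails after a {bank:.0%} asset loss")     # 5%
print(f"50%-levered firm fails after a {typical:.0%} asset loss")  # 50%
```

This is why Admati argues more equity makes everything banks do more reliable: the equity cushion is the margin for error.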

So the abnormality of this incredible indebtedness is that they get away with it. There’s nothing good about it for society. If they had more equity then they could do everything that they do better —more consistently, more reliably, in a less distorted fashion.

Today’s credit market is distorted. A key reason is that bankers love the high risk and chase returns. They are less fond of some of the lending where they are needed the most — like business lending, for example. Instead, most people get many credit cards in the mail and too many people live on expensive revolving credit. Effectively, the poor may end up subsidizing the credit card of the person who pays on time and has zero interest (and we all end up paying the enormous fees merchants are charged). So we can have too much or too little lending and live through inefficient booms and busts. Part of the reason for that is that banks are continually living on the edge in a way that nobody else in the economy would, and regulations meant to correct it are insufficient and flawed in their design.

LP: Banking has been a very profitable business. Is it profitable because the risks are borne by the taxpayer? Do you think the bank bonus system is part of the problem?

AA: Yes, banking is partly profitable because of subsidies from taxpayers. There are probably other reasons, and not all of them good ones, in terms of the way competition works and other things. The bonus system encourages recklessness, and recklessness increases the value of the subsidies from taxpayers. Bankers are effectively paid to gamble.

It is profitable for the banks to become big even when this is inefficient, because they can do so with subsidized borrowing on easy terms. Guarantees, explicit and implicit, are a form of free or subsidized insurance. We don’t control whether what banks do with the cheap funding benefits the economy or just bankers and some of their investors. We must reduce these large subsidies that end up rewarding recklessness and harming us. (See Admati’s July 2014 testimony before Congress on bank subsidies).

LP: We often hear about financial innovations that helped bring the global economy to its knees in 2008. Back in December, Congress rolled back a key taxpayer protection concerning derivatives, which Robert Lenzner of Forbes Magazine called a “Christmas present for the banks.” What do Americans need to know about derivatives? How do they affect the Too Big to Fail problem?

AA: The Christmas present was just one more small thing in a much bigger problem. The largest financial firms in America can hide an enormous amount of risk in derivatives. That’s very dangerous because it makes banks more interconnected, since much of the derivatives trading happens within the financial system. It creates a house of cards — a very fragile system.

We also have bankruptcy laws in this country that perversely give unusual priority to derivatives contracts and other reckless practices.

Derivatives exacerbate Too Big to Fail dramatically because there’s so much opacity in the system. Policy-makers get scared into bailing out or guaranteeing a lot of the commitments made in those markets because they won’t quite know the consequences of letting them fail. It’s very intimately related to Too Big to Fail. It’s as if they hold a gun to your head. You don’t know whether they have bullets so you may get scared into paying the ransom.

LP: Is breaking up the banks a solution?

AA: People say those words but what does it mean? How would you do it? That’s the big problem. Banks are multiple times bigger than most of the corporations you think of as big. I once made a mistake rushing through a HuffPost piece in 2010 saying that Jamie Dimon wants to be as big as Walmart. Well, at the time, JP Morgan was already 10 times bigger than Walmart by assets! When it comes to the financial sector, big is really big. People don’t even appreciate how big we’re talking about. Nobody else gets to be as big, and in fact, in other parts of the economy, companies that get so big often break up on their own. But that doesn’t happen in banking partly because of the perverse subsidies taxpayers provide.

The most sensible approach is to force banks and other financial institutions to have more equity, which is actually going to expose their inefficiencies and bring more investor pressure for a break-up to happen naturally without us doing it actively. Regulators can also put significantly more pressure on banks to simplify their structure and divest unnecessary lines of businesses such as commodities (energy, aluminum, etc.). The size appears unmanageable and makes regulation difficult.

LP: What would make banking regulation more effective?

AA: First of all there could be simpler regulation in some places, and some cost-ineffective things could be used a bit less. Right now, we know too little about the risk and we have too little margin for error. We must reduce the opacity and increase the safety margins dramatically. Regulators make it complicated because we are unnecessarily living at the edge of a cliff all the time. We live so dangerously! There’s no need for that. We are told that we have to live like that, but that’s completely false. The system has to be made a lot more resilient. Then we can worry less and sleep better.

In addition to making things simpler, it’s very important that we are able to see more of the risk and then to enforce much stronger and simpler rules. And, of course, regulators need to be watching where the risks are going. They should not believe that just because the risks are off the accounting balance sheets that they are gone. That was a trick to get around regulations and get around accounting rules in cases like Enron. A lot of the risks were hiding — but they can be traced. Some laws that are counterproductive and make regulation harder should also be examined, including the tax code that encourages debt over equity, and the bankruptcy law that overly protects certain financial practices.

LP: If we don’t deal with the problem of Too Big to Fail, what happens?

AA: An ordinary person doesn’t realize it, but the impact of this unhealthy system on them happens every day. It doesn’t feel as acute as something like leakage from a nuclear facility because harm from the financial system is a little more abstract. You only see it when it blows. But it’s an unhealthy, inefficient, bloated and dangerous system. Because this system is so fragile, it can implode again, and our options next time will be dire again. We will either suffer a lot or bail out the system to suffer a little bit less.

I recently shared with my students a quote by the Rothschild brothers of London, writing to associates in New York in 1863: “The few who understand the system will either be so interested in its profits or be so dependent upon its favours that there will be no opposition from that class, while the great body of people, mentally incapable of comprehending the tremendous advantage that capital derives from the system, will bear its burdens without complaint, and perhaps without even suspecting that the system is inimical to their interests.”

This is a great quote! We get tricked into thinking that we have a great financial system because we have our credit cards and whatnot. We don’t see the enormous risks that are taken in derivatives markets and some of the other practices that can topple the entire system again and which extract fees and bonuses. The truth is that we can have a safer system that serves the economy and society better. But getting there requires that better laws and regulations are implemented and enforced. The system will not correct itself; we must demand that policymakers do a better job for the public.


Posted in Banking

David Hughes: Has Well Productivity Peaked in the Nation’s Largest Shale Gas Play, the Marcellus?

[This is big news since other major plays have already peaked. It looks like the Marcellus is on track for Hughes’s predicted 2018 peak, according to the latest published data]

Has Well Productivity Peaked in the Nation’s Largest Shale Gas Play?

By David Hughes, April 28, 2015.

The Marcellus shale gas play of Pennsylvania and West Virginia came onto the scene in 2007 in a big way and has grown to become the nation’s largest. It has accounted for much of the growth of U.S. shale gas production, and made up for declines in former shale gas giants like the Haynesville and Barnett plays of Louisiana and eastern Texas. Companies have scrambled to build pipeline infrastructure to connect the Marcellus to consumers in the U.S. northeast. Canadians, once supplied by gas from western Canada, are also looking to the Marcellus (and the much smaller Utica play in Ohio) for future supply; the pipelines that delivered gas to the east might be converted to instead deliver bitumen from the western tar sands. Companies in both the northeastern U.S. and eastern Canada are looking to build LNG terminals to export the shale gas bounty, and the first LNG export terminal on the Gulf coast will open later this year.

The prognosis for the Marcellus is therefore very important, as it is being counted on to supply abundant cheap gas to the northeast and elsewhere for decades to come. One of the big problems in figuring out what is happening with the Marcellus is the tardiness with which the states provide production data to the general public and to data vendors such as Drillinginfo, which I utilize extensively to analyze shale plays. West Virginia provides data in one-year chunks, and won’t release what happened in 2014 until mid-2015. Pennsylvania is somewhat better, releasing data in six-month chunks. In the absence of recent accurate production data, there has been much speculation on Marcellus production using proxies such as pipeline receipts and algorithms to estimate what production might be. Pennsylvania’s recent release of data from the last half of 2014, however, provides an opportunity to take an updated look at the Marcellus, considering that Pennsylvania comprises 85% of Marcellus production.

In my recent Drilling Deeper report, I looked at Marcellus data through mid-2014 with a view to determining what future production might look like. Critical observations included:

  • Field decline averages 32% per year without drilling, requiring about 1000 wells per year in Pennsylvania and West Virginia to offset.
  • Core counties occupy a relatively small proportion of the total play area and are the current focus of drilling.
  • Average well productivity in most counties is increasing as operators apply better technology and focus drilling on sweet spots.
  • Production in the “most likely” drilling rate case is likely to peak by 2018 at 25% above the levels in mid-2014 and will cumulatively produce about what the Energy Information Administration (EIA) projected through 2040. However, production levels will be higher in early years and lower in later years than the EIA projected, which is critical information for ongoing infrastructure development plans.
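The drilling treadmill implied by the first bullet can be sketched with rough numbers. The per-well rate below is a hypothetical average first-year rate, chosen only to show how a 32% field decline translates into a well count of the order Hughes cites:

```python
# Legacy field decline vs. new-well replacement in the Marcellus.
field_production_bcfd = 13.7  # total play production, bcf/d (per Figure 1)
annual_decline_rate = 0.32    # 32%/yr field decline without new drilling

lost_bcfd = field_production_bcfd * annual_decline_rate  # ~4.4 bcf/d lost/yr
avg_new_well_mmcfd = 4.4      # hypothetical average per new well, mmcf/d
wells_needed = lost_bcfd * 1000 / avg_new_well_mmcfd     # bcf/d -> mmcf/d
print(f"~{wells_needed:.0f} new wells per year just to hold production flat")
```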

The following analysis provides updates using all available production data for 2014 and reveals:

  • The EIA overestimates Marcellus production by between 6% and 18%, for its “Natural Gas Weekly” and “Drilling Productivity” reports, respectively.
  • Five out of more than 70 counties account for two-thirds of production. Eighty-five percent of production is from Pennsylvania, 15% from West Virginia and very small amounts from Ohio and New York. (The EIA has published maps of the depth, thickness and distribution of the Marcellus shale, which are helpful in understanding the variability of the play.)
  • The increase in well productivity over time reported in Drilling Deeper has now peaked in several of the top counties and is declining. This means that better technology is no longer increasing average well productivity in these counties, a result of either drilling in poorer locations and/or well interference resulting in one well cannibalizing another well’s recoverable gas. This declining well productivity is significant, yet expected, as top counties become saturated with wells, and will degrade the economics which have allowed operators to sell into Appalachian gas hubs at a significant discount to Henry Hub gas prices.
  • The backlog of wells awaiting completion (aka “fracklog”) was reduced from nearly a thousand wells in early 2012 to very few in mid-2013, but has increased to more than 500 in late 2014. This means there is a cushion of wells waiting on completion which can maintain or increase overall play production as they are connected, even if the rig count drops further.
  • Current drilling rates are sufficient to keep Marcellus production growing on track for its projected 2018 peak (“most likely” case in Drilling Deeper).


Figure 1 illustrates production from the Marcellus through December 2014 compared to estimates from the EIA. Total production was about 13.7 billion cubic feet per day (bcf/d). Estimates from the EIA for December are 14.5 and 16.1 bcf/d for its “Natural Gas Weekly” and “Drilling Productivity” reports, respectively. The Utica play of Ohio and western Pennsylvania also added an estimated 1.9 bcf/d to northeast supply in December, for a total of 15.6 bcf/d. This compares to claims of more than 19 bcf/d by some analysts, an overestimate of 22%.
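The overestimate percentages quoted above are simple arithmetic and can be checked directly (all figures taken from the paragraph itself):

```python
# Checking the overestimate percentages quoted above.
actual = 13.7             # bcf/d, Marcellus, Dec 2014 (from well data)
weekly, dpr = 14.5, 16.1  # EIA "Natural Gas Weekly" / "Drilling Productivity"
claimed_ne = 19.0         # analyst claim for total northeast supply
actual_ne = 13.7 + 1.9    # Marcellus plus estimated Utica contribution

def pct_over(estimate, actual):
    """Percent by which an estimate exceeds the actual figure."""
    return round((estimate - actual) / actual * 100)

print(pct_over(weekly, actual))      # 6  -> the 6% overestimate
print(pct_over(dpr, actual))         # 18 -> the 18% overestimate
print(pct_over(claimed_ne, actual_ne))  # 22 -> the 22% overestimate
```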

Marcellus Production Figure01

Figure 1 – Production from the Marcellus based on well production data through December, 2014, for Pennsylvania, and well production data through December, 2013, for West Virginia (2014 WV production is estimated assuming the continuation of growth rates observed in the latter half of 2013). Also shown for comparison are EIA estimates from its “Natural Gas Weekly” and “Drilling Productivity” reports, and the number of producing wells.

Marcellus gas production is highly concentrated in a few counties out of the more than 70 counties that have some production, as illustrated in Figure 2. Three counties account for nearly half of the play’s production, five counties account for two-thirds, and 12 counties account for 90%. Drilling is concentrated in the top counties which have the greatest economic payback; the cheapest gas is being produced, now leaving the expensive gas for later.

Marcellus Production Figure02

Figure 2 – Production by county for the top 15 counties in the Marcellus play illustrating the highly concentrated nature of production in sweet spots. The total number of counties with at least some Marcellus production is more than 70.

New maps published by the EIA (reproduced in Figure 3) illustrate two key Marcellus play parameters: elevation (from which one can determine depth) and thickness (an indicator of potential reservoir volume and gas concentration per unit area). These maps show the variability over the play’s extent and why it is not surprising that production is concentrated in relatively small core areas. Other parameters which factor into productivity include organic matter content, thermal maturity, presence of natural fractures, sediment composition in terms of its ability to propagate fractures, pressure, gas saturation, permeability and porosity.

Sweet spots have the most favorable combination of these parameters and are clearly concentrated in northeast Pennsylvania, northern West Virginia and southwest Pennsylvania, as illustrated in Figure 4. The play’s future production trajectory depends on drilling rates and trends in well productivity as sweet spots become saturated with wells and drilling moves into lower-quality rock.

Marcellus Production Figure03

Figure 3 – Elevation (top) and thickness (bottom) of the Marcellus shale. The thickness controls the volume of the potential reservoir and potential gas distribution and the elevation determines reservoir depth which controls pressure and other critical parameters. From EIA, April, 2015.

Marcellus Production Figure04

Figure 4 – Distribution of wells in Marcellus play as of mid-2014, illustrating highest one-month gas production (from Drilling Deeper Figure 3-80).

Drilling Rates

Drilling rates and well productivity are key factors in offsetting the unceasing decline of existing wells (field decline), which in the Marcellus amounts to 32% of the play’s production that needs to be replaced each year through more drilling. At the time of publication of Drilling Deeper, offsetting field decline in the Marcellus as a whole required 1,003 new wells per year, and in Pennsylvania 899 wells per year. Production is now about 10% higher but the productivity of wells has also increased on average such that the number of wells needed to offset field decline in Pennsylvania is still just over 900 per year.
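The field-decline arithmetic above can be sketched as follows. The production figure is rounded from the text (85% of ~13.7 bcf/d in Pennsylvania), while the average first-year rate per new well is a hypothetical illustrative value, not a number from the report:

```python
# Back-of-envelope check of the field-decline arithmetic.
# Assumed inputs: PA production ~11.6 bcf/d (85% of 13.7 bcf/d),
# field decline 32%/yr, and each new well adding ~4.1 mmcf/d on
# average (a hypothetical figure for this sketch).
pa_production_bcfd = 11.6
field_decline = 0.32
avg_new_well_mmcfd = 4.1

decline_bcfd = pa_production_bcfd * field_decline        # gas lost per year
wells_needed = decline_bcfd * 1000 / avg_new_well_mmcfd  # bcf/d -> mmcf/d
print(round(wells_needed))  # ~905 wells/yr, consistent with "just over 900"
```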

Figure 5 illustrates the drilling rate in Pennsylvania and the average amount of gas added to the play’s production for each new well drilled. The current rate of about 1,050 wells per year is sufficient, should it continue, to see production rise overall until it is 15% above current levels.

Marcellus Production Figure05

Figure 5 – Annual number of producing wells added and Marcellus play production added per new well from 2007 through December 2014 in Pennsylvania.

Another important factor is wells that have been drilled but not completed, or wells waiting on a pipeline connection. A major issue in the Marcellus has been the lack of takeaway capacity as gathering pipelines must be constructed to new wells and larger pipelines constructed to markets. Another strategy, purportedly widely used, is to drill wells without completing them, putting them in abeyance until prices are higher; as the cost of hydraulic fracturing is half or more of the total cost of a well, this practice allows producers to get a head start while rig costs are low due to the number of rigs looking for work as a result of the drop in oil prices.

Figure 6 illustrates a comparison of the rate at which wells are spudded (based on well permits) and the rate at which new producing wells are added to the play. From this it is apparent that a large backlog of drilled but not connected wells was worked off in late 2012 through 2013; the backlog has since grown again, however, with 550 more wells drilled than connected in the 12 months prior to December, 2014. The current rig count in the Marcellus is down 50% from its highs in early 2012, so even with improved rig efficiency and the ability to drill more wells per unit of time, falling rig count will ultimately limit drilling rates and impact play production.

Marcellus Production Figure06

Figure 6 – Comparison of the annual rate of wells spudded to producing wells added each year to the Marcellus in Pennsylvania, illustrating the backlog of wells waiting to be connected to production infrastructure.

Well Productivity

Well productivity is key to maintaining and growing Marcellus production, which is a function of new well productivity times drilling rate minus field decline, and to the economics of drilling the wells. We were told the following at this year’s Ceraweek conference by a Marcellus operator:

Not only are we drilling longer wells, not only are we drilling them more cheaply, but we’re getting more recovery. Today, we’re drilling the best wells we’ve ever drilled. Some of it is advances in technology. Some of it is advances in our ability, and some of it is geology — we’re in areas where we have the ability to drill longer laterals. So you can see the problem. As gas prices come off… we continue to do better and better at drilling wells.

Is this really true? The answer varies depending on where an operator’s land holdings lie and the maturity of their development. Figure 7 illustrates daily well productivity over the highest month, first 6 months, first 12 months and first 24 months for all producing wells drilled in the Marcellus in Pennsylvania from January 2010 to December 2014. It is certainly true that productivity has increased markedly over this period up to early 2014 but has decreased since, suggesting the tide has turned in the inexorable battle between technology and geology.

Marcellus Production Figure07

Figure 7 – Average daily production for all wells drilled in the Marcellus play between 2010 and December 2014. Dots indicate production over each well’s highest month (red), first 6 months (yellow), first 12 months (green), and first 24 months (blue). Lines indicate average daily production for all wells at these same points in time of well life (polynomial best-fit trend lines).

Figure 8 accentuates trends in well productivity in the Marcellus by fitting a moving average to the data. This figure indicates that:

  • Well productivity increased by 70% between early 2012 and early 2014. This is a testament to both better technology and focusing on sweet spots.
  • Productivity peaked in mid-2014 and has fallen in the last half of 2014.
  • Highest month productivity is a key indicator of productivity over the longer term, as it is reflected in 6 month-, 12 month- and 24 month-average production.

Marcellus Production Figure08

Figure 8 – Average daily production for all wells drilled in the Marcellus play between 2011 and December 2014. Data have been fitted with a trailing 150-well moving average to make the trends more apparent. Dots indicate production over each well’s highest month (red), first 6 months (yellow), first 12 months (green), and first 24 months (blue). Lines indicate average daily production for all wells at these same points in time of well life (polynomial best-fit trend lines).
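The trailing moving average used to smooth Figures 8 through 12 can be sketched as below (a 150-well window in Figure 8, smaller windows for individual counties; toy data here):

```python
# A trailing moving average like the one used to smooth Figure 8:
# each point is the average of the current value and up to
# (window - 1) preceding values.
def trailing_avg(values, window):
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        out.append(sum(values[lo:i + 1]) / (i - lo + 1))
    return out

print(trailing_avg([4, 8, 6, 10], 3))  # [4.0, 6.0, 6.0, 8.0]
```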

It is instructive to examine comparable data for the most productive and densely drilled counties, as these will be the first to experience saturation of available drilling locations (the top four counties are analyzed below).

Susquehanna County

Susquehanna is the top producing county and also has the highest average well productivity. It is second only to Bradford County in terms of the number of wells drilled. Figure 9 illustrates average daily well production in Susquehanna County over various periods. This figure indicates that:

  • Well productivity nearly doubled from early 2012 until late 2013 but has declined by nearly 20% during 2014. Susquehanna wells are still the top producers in the play.
  • The fall-off in well productivity is likely due to the density of current drilling (resulting in early signs of well interference) and to drilling more marginal parts of the county.
  • Geology appears to be trumping technology in Susquehanna County, the most productive county in the play. Well density was 1.48 wells per square mile in mid-2014 (see Table 3-5 in Drilling Deeper) with the assumption that 4.3 wells per square mile could be drilled; this may be overly optimistic.

Marcellus Production Figure09

Figure 9 – Average daily production for all wells drilled in Susquehanna County between 2011 and December 2014. Data have been fitted with a trailing 50-well moving average to make the trends more apparent. Dots indicate production over each well’s highest month (red), first 6 months (yellow), first 12 months (green), and first 24 months (blue). Lines indicate average daily production for all wells at these same points in time of well life (polynomial best-fit trend lines).

Bradford County

Figure 10 illustrates average daily well production in Bradford County over various periods. This figure indicates that:

  • Well productivity increased by 50% from early 2012 until early 2014 but has declined by more than 10% since then. Bradford County has the most producing wells in the play.
  • The fall-off in well productivity is likely due to the density of current drilling, resulting in early signs of well interference, and to drilling more marginal parts of the county.
  • Geology appears to be trumping technology in Bradford County. Well density was 1.22 wells per square mile in mid-2014 with the assumption that 4.3 wells per square mile could be drilled; this may be overly optimistic.

Marcellus Production Figure10

Figure 10 – Average daily production for all wells drilled in Bradford County between 2011 and December 2014. Data have been fitted with a trailing 50-well moving average to make the trends more apparent. Dots indicate production over each well’s highest month (red), first 6 months (yellow), first 12 months (green), and first 24 months (blue). Lines indicate average daily production for all wells at these same points in time of well life (polynomial best-fit trend lines).

Lycoming County

Figure 11 illustrates average daily well production in Lycoming County over various periods. This figure indicates that:

  • Well productivity increased by 100% from early 2011 through 2014 but appears to be decreasing as of late 2014. Lycoming County is now second only to Susquehanna in average well productivity and third overall in production.
  • It is too early to say if growth in well productivity in Lycoming County has stalled out but it appears likely. Well density was 1.35 wells per square mile in mid-2014.

Marcellus Production Figure11

Figure 11 – Average daily production for all wells drilled in Lycoming County between 2011 and December 2014. Data have been fitted with a trailing 30-well moving average to make the trends more apparent. Dots indicate production over each well’s highest month (red), first 6 months (yellow), first 12 months (green), and first 24 months (blue). Lines indicate average daily production for all wells at these same points in time of well life (polynomial best-fit trend lines).

Washington County

Figure 12 illustrates average daily well production in Washington County over various periods. This figure indicates that:

  • Well productivity increased by 100% from late 2012 until early 2014 but has declined by about 10% since then. Washington County produces wet gas and most of the liquids in the Marcellus. It has generally lower well productivity than the top three counties but the liquids production has bolstered the economics.
  • The fall off in well productivity is likely due to the density of current drilling, resulting in early signs of well interference, and to drilling more marginal parts of the county.
  • Geology appears to be trumping technology in Washington County. Well density was 1.28 wells per square mile in mid-2014 with the assumption that 4.3 wells per square mile could be drilled; this may be overly optimistic.

Marcellus Production Figure12

Figure 12 – Average daily production for all wells drilled in Washington County between 2011 and December 2014. Data have been fitted with a trailing 50-well moving average to make the trends more apparent. Dots indicate production over each well’s highest month (red), first 6 months (yellow), first 12 months (green), and first 24 months (blue). Lines indicate average daily production for all wells at these same points in time of well life (polynomial best-fit trend lines).

Future Production

At this point there is little reason to change the “most likely” production trajectory for the Marcellus published in Drilling Deeper (reproduced in Figure 13), other than to say it may be a bit too optimistic, as drilling rates including West Virginia have fallen below the 1,320 wells per year assumed (although not by much). If drilling rates fall significantly from current levels the play will peak sooner and declines will lessen after peak. Also, the observed decline in well productivity in top counties suggests that drilling densities of 4.3 wells per square mile may be too optimistic which, if so, would serve to further reduce ultimate recovery, result in more rapid declines in well productivity than assumed, and further lower production rates after peak.

Marcellus Production Figure13

Figure 13 – Projection of future Marcellus production (“most likely” case from Drilling Deeper Figure 3-99). Cumulative recovery through 2040 is estimated at 129 tcf, which is more than 10 times the 12.6 tcf recovered to date.

Summary and Implications

Central points from this analysis include:

  • The northeastern U.S. and eastern Canada are counting on abundant cheap gas from the Marcellus and Utica plays for the foreseeable future, based in part on rosy projections from the EIA that expects production to grow well into the next decade. Large investments are being made in infrastructure to transport and use this gas, including pipelines, processing plants, LNG export terminals, gas-fired generation and other residential, commercial and industrial uses. Like other shale gas plays, Marcellus wells exhibit steep production declines and the play has a field decline of 32% per year that must be offset by continuous drilling. In Pennsylvania this amounts to more than 900 wells per year simply to maintain production at current levels.
  • The Marcellus play, although very large, has two-thirds of its production concentrated in 5 of 70+ counties. Although top counties have generally seen impressive increases in new well productivity over the past three years due to improved technology, most have exhibited declines in well productivity in 2014, with the top county, Susquehanna, down 20%. This is likely a result of well interference and/or moving to poorer quality locations within these counties, which suggests the assumption of a final well density of 4.3 wells per square mile may be too optimistic.
  • Although production will likely grow over the next two years, barring a radical reduction in drilling rates from current levels, projections of a peak in 2018 appear on track, followed by a terminal decline (which assumes gradual increases in price; sudden major increases in price could temporarily check this decline if reflected in significantly increased drilling rates). The backlog of wells waiting on connection to infrastructure will shield production from falling for some months should there be a large drop in rig count.
  • Industry invariably drills its best prospects first, hence the cheapest gas is being exploited now. Infinite faith in technology cannot make up for the realities of geology. These realities are showing up now in the most productive counties.
  • As for the massive investments in infrastructure on the assumption of cheap and abundant gas for the foreseeable future – CAVEAT EMPTOR.

Electric vehicle overview

Most, but not all of what follows comes from the NPC paper below and I’ve reworded or interjected some comments as well. As you can see in Figure 1 below, light cars and trucks are the main guzzlers of petroleum.  When oil shocks hit, a rational, well-run society would ration fuel so that freight, agricultural, delivery, infrastructure maintenance, and other essential medium and heavy duty trucks could keep society running.  But time is running out. We’re probably going to be stuck with our existing, extremely wasteful, inefficient fleet of cars and trucks when oil shocks hit starting sometime between 2016 and 2024 depending on geological depletion, social unrest and war, declining exports from producing nations, financial crashes, etc.

NPC. 2012. Advancing Technology for America’s Transportation Future. Chapter 13. National Petroleum Council, 69 pages

All-electric (BEV) trucks make sense only for a small subset: those that stop often and travel less than 100 miles a day, such as delivery trucks in classes 4 and 5 and garbage trucks in class 8a, which can capture regenerative braking energy.

Figure 1. Percentage of Total Transportation Fuel Use. Source: NAS. 2010. Technologies and Approaches to Reducing the Fuel Consumption of Medium- and Heavy-Duty Vehicles. National Academy of Sciences.


At current rates of purchase it will take over 35,000 years to replace America’s fleet of 247 million gasoline cars with plug-in vehicles 

[Figure: plug-in vehicle sales through February 2014]


The difference between a diesel combustion engine car and an all-electric car

Component (VW Lupo 3L 1.2 TDI)                       | Diesel ICE (kg / lbs) | Battery Electric (kg / lbs)
Vehicle chassis minus power train                    | 595 / 1312            | 595 / 1312
  engine + gearbox + drive shafts                    | 180 / 397             | 85 / 187
  cooling (radiator, hoses, coolant, etc.)           | 10 / 22               | 7 / 15
  exhaust                                            | 15 / 33               | -
  power electronics (inverter, charger, DC-DC conv.) | -                     | 20 / 44
  fuel tank + cooler + filter                        | 9 / 20                | -
  diesel (7 L)                                       | 6 / 13                | -
  battery pack (kWh: front 8.3, center 7.7, rear 11) | -                     | 273 / 602
Total for powertrain                                 | 245 / 540             | 435 / 959
Curb weight                                          | 840 / 1852            | 1030 / 2271

Table 1. As you can see, the powertrain of the ICE version is 29% of curb weight, but in the BEV it is 42%, with the battery alone 26.5% of curb weight in this tiny diesel car converted to all-electric. Source: Besselink, I.J.M., et al. 2010. Design of an efficient, low weight battery electric vehicle based on a VW Lupo 3L. 25th World Battery, Hybrid and Fuel Cell Electric Vehicle Symposium & Exhibition.
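The weight fractions in the caption can be verified directly from the table’s totals:

```python
# Verifying the weight fractions quoted for the VW Lupo comparison.
ice_powertrain, ice_curb = 245, 840    # kg
bev_powertrain, bev_curb = 435, 1030   # kg
battery_pack = 273                     # kg

print(round(ice_powertrain / ice_curb * 100))   # 29 -> ICE powertrain share
print(round(bev_powertrain / bev_curb * 100))   # 42 -> BEV powertrain share
print(round(battery_pack / bev_curb * 100, 1))  # 26.5 -> battery share
```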

When all-electric cars and trucks break down

A key lesson learned during the previous period when electric vehicles were on the market from 1996–2003 is that the inability to diagnose and remedy a situation when the vehicle fails to charge is a source of tremendous customer dissatisfaction. This is most important for BEVs, as they do not have any gasoline or other secondary power source on-board.

The critical need is the ability to identify whether the fault lies with the charging device (EVSE) or the vehicle, without having to send a technician. Enabling this requires multiple components: a “smart” EVSE that includes communications technology, a service entity to either receive an automatic message or take the customer inquiry, and remote access to the EVSE by the service entity to diagnose the fault. Ideally, the result is the dispatch of either an EVSE service technician, if the fault is with the EVSE, or a tow truck, if the fault is with the vehicle.
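The triage logic described above reduces to a single decision once the remote diagnosis is in hand. A minimal sketch (the function name is hypothetical, not from the NPC paper):

```python
# A minimal sketch of the fault-triage logic: once the service entity
# has remotely diagnosed the charging failure, it dispatches the right
# resource without sending a diagnostic technician first.
def dispatch_for(evse_fault_detected: bool) -> str:
    # If the EVSE itself is at fault, send an EVSE service technician;
    # otherwise the vehicle failed to charge, so send a tow truck.
    return "EVSE technician" if evse_fault_detected else "tow truck"

print(dispatch_for(True))   # EVSE technician
print(dispatch_for(False))  # tow truck
```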

Plug-in hybrid (PHEV) and battery-only electric vehicle (BEV) configurations

Vehicle types and configurations


Degradation & Longevity

There are two facets to battery longevity.

  1. The actual calendar life of the battery. It is currently unknown whether batteries used in plug-in electric vehicles (PEVs) will last for the life of the vehicle, and battery replacement is likely to remain a significant expense.
  2. The degradation of power and energy storage capacity that occurs over time.

The gasoline engine in PHEVs can compensate for this, but BEVs will experience reduced power and vehicle range.

Battery innovation (improved energy density, reduced degradation, and a predictable calendar life) is therefore necessary for the wide-scale adoption of BEVs.

PHEVs can easily recharge the battery overnight using a standard 110V outlet, but BEVs will most likely need to charge at a higher power level (240V). This requires the purchase and installation of a separate charging unit, which could be a barrier to vehicle purchase if the expense is high (for example, if new panel capacity is needed, or there is no existing 240V connection in the garage).

No garage: for drivers in urban areas with on-street parking, and for drivers who live in multiple-dwelling units such as apartments, both types of charging (110V and 240V) will be difficult to realize, as the installation cost can be high and the driver typically lacks the authority to install a charging unit.

A pickup or large SUV would require a very large battery, resulting in a very-high-priced vehicle. Additionally, a vehicle with limited range would not align with the typical duty cycles and use cases for these vehicles, such as carrying passengers and long-distance recreational driving.

Battery System

To scale up to the hundreds of volts required for automotive powertrain use, many cells are assembled in series to form a battery pack, which connects and contains the cells, and also includes a battery management system (BMS)—an electronic control system that uses various sensors to monitor the state of each cell within the pack (e.g., for voltage, temperature, internal resistance) and to control electrical flows to and from the battery. Most packs include an integrated thermal management system to moderate the pack temperature. The most basic thermal management uses passive air-cooling of the pack from the outside, while more sophisticated systems employ liquid or refrigerant cooling. These pack-level systems provide attributes that are critical to vehicles that use grid-supplied power for driving.
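A toy sketch of the per-cell monitoring a BMS performs; the thresholds and sample readings here are illustrative, not from any particular pack design:

```python
# Toy sketch of BMS cell monitoring: check each cell's voltage and
# temperature against limits and flag out-of-range cells for the
# control logic to act on. All numbers are made up for illustration.
cells = [  # (voltage V, temperature C) per cell -- sample data
    (3.7, 30), (3.6, 31), (3.2, 45), (3.7, 29),
]
V_MIN, V_MAX, T_MAX = 3.3, 4.2, 40  # hypothetical operating limits

flagged = [i for i, (v, t) in enumerate(cells)
           if not (V_MIN <= v <= V_MAX) or t > T_MAX]
print(flagged)  # [2] -- cell 2 is both under-voltage and over-temperature
```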

Source: Alexander Otto, Fraunhofer Institute for Electronic Nano Systems ENAS, presentation of May 30, 2012, "Battery Management Network for Fully Electrical Vehicles Featuring Smart Systems at Cell and Pack Level.”



Battery systems add a lot of extra weight, for example:

  1. If a battery system has a cell mass of 40 kg and a non-cell mass of 40 kg, the mass burden is 100%. Therefore, if the cell-based specific energy is 200 Wh/kg, the actual metric the vehicle designer must consider is 200/2.00 or 100 Wh/kg. It is not unusual for a battery system volume burden to be well in excess of 100%.
  2. A doubling of battery pack gravimetric performance requires not only a 2X improvement in cell based specific energy but also a 50% reduction in the mass of non-cell components.
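The mass-burden arithmetic in item 1 above works out as follows:

```python
# The "mass burden" arithmetic from the example above: non-cell mass
# equal to cell mass gives a 100% burden, halving the usable
# specific energy at the pack level.
cell_mass_kg = 40.0
non_cell_mass_kg = 40.0
burden = non_cell_mass_kg / cell_mass_kg        # 1.0 -> a "100%" burden

cell_specific_energy = 200.0                    # Wh/kg at cell level
pack_specific_energy = cell_specific_energy / (1 + burden)
print(burden * 100, pack_specific_energy)       # 100.0 100.0
```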

As you can see below, there is no winning battery.  Not one has all of the essential performance, cost, and safety characteristics required.  Cycle life is essential — it must be high for the battery to last long enough to be acceptable.  Energy and power density must also be high: energy to move the vehicle for hours, and power for acceleration.  Cost must be low to be affordable.

Cathode                                | Anode                  | Abbrev.  | Energy Density | Power Density | Cycle Life | Safety    | Cost
Lithium Cobalt Oxide                   | Graphite               | LCO      | High           | Fair          | Fair       | Fair      | High
Nickel Cobalt Aluminum Oxide           | Graphite               | NCA      | High           | High          | Fair       | Fair      | High
Lithium Iron Phosphate                 | Graphite               | LFP      | Low            | High          | High       | Very good | Fair
Lithium Manganese Oxide                | Graphite               | LMO      | High           | High          | Fair       | Very good | Fair
Lithium Manganese Oxide Spinel         | Graphite               | LMO      | High           | High          | Fair       | Good      | Low
Lithium Manganese Oxide Spinel Polymer | Graphite               | LMO      | High           | High          | Fair       | Good      | Low
Manganese Nickel Cobalt Oxide          | Graphite               | MNC      | High           | Fair          | Low        | Fair      | High
Lithium Manganese Oxide Spinel         | Lithium Titanate Oxide | LMO-LTO  | Low            | Low           | High       | Good      | High
Lithium Nickel Oxide                   | Graphite               | LNO      | High           | Fair          | Fair       | Fair      | Fair
Lithium Manganese Nickel Oxide Spinel  | Graphite               | LMNS     | High           | High          | Fair       | Fair      | Low
Lithium Manganese Nickel Oxide Spinel  | Lithium Titanate Oxide | LMNS-LTO | Fair           | High          | High       | Good      | Low
Source: Shmuel De-Leon, “High Power Rechargeable Lithium Battery Market,” presented at IFCBC Meeting, February 4, 2010.

Below are the batteries used in autos as of February 2012:

[Figure: batteries used in autos as of February 2012]
Battery Energy Density

[Figure: Ragone plot of different battery chemistries]
Energy Density is often used as a generic reference to the mass or volume that the cells or battery system occupy within the vehicle compared to the gross number of energy units, typically watt-hours (Wh), that can be stored in them. In reality, 2 different energy metrics must be considered when selecting the appropriate electrochemistry and battery size for a particular application, as either metric can be the key design driver.

Battery designs strive to strike a balance between the energy and power requirements of the battery. In a vehicle application, power density is needed to provide sufficient acceleration as well as optimal ability to capture regenerative braking energy, while specific energy is the primary determinant of the vehicle’s all-electric range. Both specific energy and power density are fundamental to optimal battery pack design. Depending on the context of the discussion, power density may reference power delivered by mass or by volume, measured in watts per kilogram and watts per liter, respectively. The former, specific power, is typically referenced when discussing battery performance while the latter is relevant in the context of vehicle packaging.

As the gasoline energy provides the necessary driving range, the battery packs of PHEVs with shorter all-electric range or with blended-mode operation, are generally optimized to meet peak power demands. Conversely, BEV packs are optimized around the vehicle’s energy demands. This may, however, come at the expense of power. Specific energy, rather than specific power, is the primary determinant of total battery cost.

  1. Specific, or gravimetric, energy refers to the amount of stored energy per unit mass, typically watt-hours per kilogram
  2. Energy, or volumetric, density refers to the amount of stored energy per unit volume, typically watt-hours per liter. In PHEVs, battery volume matters most because there is little room for a battery pack, electric motor(s), and power control electronics, which are crammed into existing car models that already have an internal combustion engine, transaxle, and fuel tank
  3. In BEVs, however, the battery can become the primary constraint as it becomes a substantial fraction of the total vehicle mass, significantly affecting energy requirements and overall vehicle dynamics.
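The distinction between the two metrics can be illustrated with a hypothetical pack (all numbers made up for the example):

```python
# Distinguishing the two energy metrics for a hypothetical pack.
energy_wh = 24_000   # a 24 kWh pack (illustrative)
mass_kg = 240.0
volume_l = 160.0

print(energy_wh / mass_kg)   # specific (gravimetric) energy: 100.0 Wh/kg
print(energy_wh / volume_l)  # (volumetric) energy density:   150.0 Wh/L
```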

The total amount of energy that a particular battery technology can deliver or accept is a function of the rate (power) requirement on discharge, recharge, or during energy recuperation—e.g., regenerative braking events. Therefore, in addition to understanding the energy density versus specific energy capabilities, the vehicle designer must also understand the functionality between power and energy for a particular battery type (chemistry). A conventional graphic method used to illustrate the relationship between power and energy for a particular battery type is the Ragone plot, which is a logarithmic curve of the energy available versus power demand. The Ragone relationship can be expressed on a gravimetric or volumetric basis. As shown above, compared to other chemistries, lithium-ion is relatively insensitive to power demand, while for LiM Polymer, increasing specific energy comes with a significant decrease in specific power.

Ragone plots express cell level performance. The mass and volume of the cells and the incremental mass and volume, or burden, of the non-cell battery system components (battery casing, thermal management system, etc.), however, are extremely important in determining whether or not the targeted goals for battery system energy density and specific energy are achievable. Although there is no universally accepted convention for calculating burden, it is typically defined as the ratio of the non-cell mass or volume to the cell mass or volume.

Batteries are very heavy

To go 350 miles, a battery needs to be so huge that the vehicle weighs about 3 times more than a gas car

[Figure: range and weight as battery size increases]
The battery has to be over-sized to reach the range promised by the manufacturer. A typical vehicle battery is controlled so that it never discharges fully, thus the total installed, or nominal, capacity is greater than the capacity actually used. This approach extends battery life, allows for sufficient power at low states of charge (important for BEVs), mitigates the risk associated with cell-to-cell variation in high-voltage packs, and builds in engineering margin that enables the original equipment manufacturer (OEM) to promise a certain driving range for a certain time period.

Currently, the depth of discharge for a BEV is in the 70% range, with the battery’s state of charge (SOC) ranging from, for example, 20% to 90%. The effect is that a cost figure based on useable capacity is greater than one based on total capacity. A $600/kWh cost based on total capacity translates to an $857/kWh cost based on useable capacity when used in a BEV with a 70% SOC swing ($600 divided by 0.7).
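The useable-capacity cost conversion is just a division by the SOC swing:

```python
# Translating total-capacity cost into useable-capacity cost.
cost_total = 600.0   # $/kWh based on installed (nominal) capacity
soc_swing = 0.70     # BEV uses ~70% of nominal capacity (e.g. 20-90% SOC)

cost_useable = cost_total / soc_swing
print(round(cost_useable))  # 857 -> $857/kWh on useable capacity
```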

Battery Degradation and Longevity

All batteries experience power and capacity fade over time as functions of cycling, time, and temperature. The mechanisms that degrade battery power and capacity vary with battery chemistry, the operating profile and ambient conditions. Instead of calendar life—the age of the battery in years—battery life is typically described by the number of times the battery can be charged and discharged, referred to as cycle life. Some battery chemistries are more sensitive than others to the number of charge-discharge cycles.

The cycle life of a battery is fundamentally determined by the reversibility of the electrochemical reaction(s) responsible for the energy storage function. In other words, the degradation of battery life is the result of loss of electrochemical reaction reversibility upon charge-discharge cycling. The key factors responsible for cycle life, which are more pronounced at elevated temperatures, are:

1) Mechanical/structural fatigue or failure of the active materials, especially at the microscopic level. Materials deteriorate due to stress, especially cyclic stress-induced fatigue upon charge-discharge cycling.
2) Side reactions of positive and negative electrodes with the electrolyte, which results in the formation of highly resistive interfacial layers that impede the electrochemical reaction(s) and a loss of active materials (lithium, anode material, cathode material, and electrolyte), which leads to loss of capacity.

In most commercial lithium-ion chemistries, the primary cause of the decrease in battery function is the undesirable side reactions between the electrolyte and active materials on the electrodes. These reactions consume lithium, thereby limiting the lithium available to participate in the desirable discharge/ recharge reactions. These irreversible side reactions also result in the formation of films on the active materials that impede ionic and interfacial transfer.

In laboratory “accelerated cycle” testing, current lithium-ion battery technologies have been shown to provide several thousand deep cycles. The number of cycles before end of life differs for each chemistry: Lithium Manganese Oxide, Lithium Iron Phosphate, and Lithium-Nickel Cobalt Aluminate have demonstrated over 1,000, 5,000, and 4,000 cycles respectively. If used in a BEV100, these levels of cycle life could translate into 3 to 15 years of useful battery life. In real-world driving, however, it is difficult to apply the notion of a “cycle.” In addition to the broad charge-discharge cycles, there are also millions of “micro” cycles associated with regeneration and acceleration, which in some chemistries are equally impactful in degrading battery performance.
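How a cycle-life figure translates into years of service depends on assumptions about range per cycle and annual mileage, which the text does not specify. A minimal sketch with hypothetical values:

```python
def battery_life_years(cycle_life, miles_per_cycle, annual_miles,
                       calendar_life_years=15):
    """Rough translation of cycle life into years of service.
    All inputs here are illustrative assumptions, not values from the text."""
    years_from_cycling = cycle_life * miles_per_cycle / annual_miles
    # Calendar aging limits useful life even if cycles remain.
    return min(years_from_cycling, calendar_life_years)

# Hypothetical BEV100 (100 miles per full cycle) driven 12,000 miles/year
print(round(battery_life_years(1000, 100, 12000), 1))  # -> 8.3
print(battery_life_years(5000, 100, 12000))            # -> 15 (calendar-limited)
```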

The vehicle performance ramifications of year-to-year decreases in capacity and power are most significant for BEVs. As battery capacity decreases over time, the allowable SOC swing must increase in order for the battery to deliver the same number of all-electric miles. As explained in the previous section, expanding the SOC swing can accelerate the degradation and decrease battery life. Further, at a low SOC, the battery may not be able to deliver sufficient power to the vehicle.

Example: Consider a 20 kWh battery with a 60% SOC swing at beginning of life, which equates to 12 kWh of useable energy. Assuming 3 miles of electric driving range per kWh, 12 kWh of useable energy would provide 36 miles of electric driving range. If 10 years later, the total capacity has degraded by 30% to 14 kWh, in order for the battery to deliver the same 36 miles of driving range, the SOC swing must increase to over 85% (12 kWh needed capacity for 36 miles, divided by the total capacity of 14 kWh).
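The example’s arithmetic can be checked in a few lines, using only the values quoted above:

```python
nominal_kwh = 20.0                   # capacity at beginning of life
useable_kwh = nominal_kwh * 0.60     # 60% SOC swing -> 12 kWh
range_miles = useable_kwh * 3        # 3 miles per kWh -> 36 miles

degraded_kwh = nominal_kwh * (1 - 0.30)   # 30% capacity fade -> 14 kWh
# SOC swing now required to deliver the same 36 miles
required_swing = useable_kwh / degraded_kwh
print(range_miles, round(required_swing, 3))  # -> 36.0 0.857
```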

Temperature Effects. Battery life is extremely sensitive to the time–temperature characteristics of the vehicle environment during both operation and storage, i.e., when the vehicle is parked. Battery life versus temperature is functionally described by the Arrhenius equation, which is exponential. A “rule of thumb” employed by battery engineers is that for every 10°C (18°F) increase in average temperature, battery life will be reduced by 50%.

Example: A battery operated at an average temperature of 70°F (21°C) that demonstrates 15 years of calendar life will, if operated at an average temperature of 106°F (41°C), yield at best 3.8 years of calendar life. It is imperative that the battery cells within a pack be exposed to the same thermal history. If some cells in a string degrade more rapidly than others due to temperature non-uniformities, the entire pack will be compromised and life will be negatively affected. Further, extreme heat or sustained temperatures over 120°F can be fatal to the battery.
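The rule of thumb amounts to a halving of life for each 10°C rise, which reproduces the example; this is a simplified stand-in for the full Arrhenius equation, not the equation itself:

```python
def life_at_temperature(base_life_years, base_temp_c, temp_c):
    """Rule of thumb: battery life halves for every 10 deg C rise in
    average temperature (an engineering approximation of Arrhenius-type
    aging, not the full equation)."""
    return base_life_years * 0.5 ** ((temp_c - base_temp_c) / 10.0)

# 15 years at 21 C (70 F), operated instead at 41 C (106 F)
print(round(life_at_temperature(15, 21, 41), 2))  # -> 3.75
```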

Limiting SOC Swing. The predominant countermeasure being employed by automakers to ensure battery longevity is to counter the expected degradation by increasing the nominal (total) capacity of the battery and limiting the SOC swing, as discussed in the previous section on total energy versus useable energy.

While sufficient laboratory cycle life has been demonstrated for some of the commonly used chemistries for batteries in automotive use, much uncertainty remains about the calendar life of these batteries when used in real-world driving and conditions. Battery management systems, power controls, and thermal management techniques can extend the life of the battery, but substantial investment in research and development is needed to accurately evaluate and improve the calendar life of batteries.

Battery performance is also degraded by extreme cold. In extreme cold weather, the power capability of batteries decreases. This occurs because the ionic and chemical processes that govern the battery’s internal workings are thermally activated: the key chemical reactions and ionic transport mechanisms happen more slowly at lower temperatures, limiting both the amount of instantaneous power available and the overall amount of energy that can be delivered. The driver experiences reduced power and greatly reduced vehicle range. This issue is most significant for BEVs, as they are entirely dependent on the battery for propulsion.

Battery Cost

The generic term “battery cost” is imprecise. Many cost references are at the individual cell level, but moving from the cell to the module to the battery pack increases the cost—anywhere from 25 to 65%.7 Vehicle OEMs generally expect to purchase a battery pack from a supplier, integrate this battery with the vehicle, and provide it to their retail sales channel.8 Consequently, in addition to understanding the basic cost components of a battery pack, it is important to differentiate whether “costs” are based on the manufacturing costs for the battery cells, the costs for the battery pack supplier, the price the vehicle OEM would pay the battery supplier for the pack, or the amount of cost the vehicle OEM passes on to the retailers or end consumers, as each level within the supply chain adds costs. The “battery” could be a cell or a pack; the “cost” could be the cost to the OEM, the retailer, or the end consumer; the “cost” per kilowatt-hour could refer to either useable capacity or total capacity; and the pack may or may not include an active thermal management system.

From this point forward, “battery cost” will refer to the complete battery pack, including the battery management system but not including an active thermal management system, at the battery supplier-to-vehicle OEM level, on the basis of total capacity.

Vehicle Charging Infrastructure

Level 1 Charging—Level 1 (L1) charging is low-power charging at 120 volts (V) of alternating current (AC) at a rate of approximately 1.4 kW.

Level 2 Charging—Level 2 (L2) charging is medium-power charging at 240V AC at a rate of approximately 3 kW, up to 19 kW. An L2 EVSE is larger than an L1 EVSE, and includes more sophisticated circuitry in order to ensure safety. (See Figure 13-12.)

Level 3 DC (Direct Current) Fast Charging —“DC Fast Charging” is high-power charging, with the electricity supplied by an off-board (off-vehicle) charger at variable DC voltages and currents. There are different power levels possible with DC, but for vehicle charging, the power is typically supplied at 200–450V, at a rate of up to 90 kW, depending on the requirements of the vehicle. Charging is controlled by the vehicle’s battery management system, which ensures that the battery is not charged at a rate in excess of predetermined limits.

Approximate Charge Times at Different Power Rates

Notes: Level 1 charge time assumes a rate of 1–2 kW, Level 2 assumes a rate between 3–10 kW, and DC Fast Charging assumes a charging rate between 11–90 kW. The vehicle efficiencies used are 290 Wh/mile for the PHEV10, 360 Wh/mile for the PHEV40, and 340 Wh/mile for the BEV100. (EPA fuel economy label values for the Toyota Prius Plug-in Hybrid, Chevrolet Volt, and Nissan LEAF, respectively.) Source: Electric Power Research Institute calculation.


The charging times above are for a trip of 25 miles. Trucks need to go at least twice that far and weigh far more, so their batteries would need to be much larger and take much longer to charge. And since fast charging may shorten and degrade battery life, it isn’t necessarily a solution.
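Charge time is simply trip energy divided by charger power. A sketch using the 340 Wh/mile BEV100 efficiency cited in the notes; the charger power levels are illustrative, and charging losses and the taper near full charge are ignored:

```python
def charge_time_hours(miles, wh_per_mile, charger_kw):
    """Hours to replace the energy used on a trip (no losses, no taper)."""
    kwh_needed = miles * wh_per_mile / 1000.0
    return kwh_needed / charger_kw

# 25-mile trip in a BEV100 at 340 Wh/mile = 8.5 kWh to replace
for level, kw in [("L1", 1.4), ("L2", 6.0), ("DC fast", 50.0)]:
    print(level, round(charge_time_hours(25, 340, kw), 2), "hours")
# -> L1 6.07 hours / L2 1.42 hours / DC fast 0.17 hours
```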

The benefits of DC Fast Charging have led some to conclude that even higher charging rates would be beneficial, so that charge times could be reduced to be similar to refueling of current gasoline vehicles, about 5 minutes. In addition to the vehicle- and battery-related issues that would need to be solved, much higher charge rates would pose significant challenges to the grid. Ultra-fast charging of a 25 kWh battery pack in 5 minutes would require a power flow rate of approximately 300 kW, which is approximately equivalent to the peak power requirements for a 100,000 square foot office building. The power use for this load would have a sharp profile similar to an industrial load like a sawmill. Achieving the same charge time for larger batteries, such as those for a heavier or longer-range BEV, would require even more power. Although loads of this type can be provisioned, the equipment to supply this load without disrupting surrounding loads would be bulky and expensive, and the low utilization of these assets would make cost recovery difficult except in very high-traffic areas. It would be possible to use on-site energy storage to reduce the grid demands for this load, but this equipment is also bulky and expensive, particularly if the station is designed to accommodate back-to-back recharges.
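The 300 kW figure follows directly from dividing pack energy by charge time:

```python
def charge_power_kw(pack_kwh, minutes):
    """Average power required to deliver pack_kwh in the given minutes."""
    return pack_kwh / (minutes / 60.0)

print(round(charge_power_kw(25, 5)))  # -> 300 (kW)
```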

One of the largest uncertainties for vehicle manufacturers is the impact of DC Fast Charging on battery life. Charging at high rates will likely increase the rate of battery degradation for near-term chemistries, which will increase the likelihood of warranty replacements and negative customer perceptions. Additionally, potential charge patterns, such as fast charging at high ambient temperatures or charging multiple times per day will likely have an additional negative impact on battery life. Real world data will be needed to measure the magnitude of the impacts from fast charging.

Where is battery money coming from?

The American Recovery and Reinvestment Act of 2009 (ARRA):

  1. $2.4 billion in loans to three electric vehicle factories in Tennessee, Delaware, and California.
  2. $2 billion in grants (with 100% matching funds by private industry) to support 30 factories that produce batteries, motors, and other electric vehicle components. These grants are intended to build the capacity to produce 50,000 BEV/PHEV batteries annually by the end of 2011 and 500,000 batteries annually by December 2014.
  3. $400 million in grants (with 100% matching funds by private industry and/or state/municipal entities) to install 22,000 EVSEs in 20 U.S. cities.
  4. $2,500 to $7,500 per vehicle tax credits for the purchase of BEVs and PHEVs. The amount of the credit is based on the size of the battery.
  5. Funded the “EV Project,” which included approximately 400 DC fast chargers.

Over 40 U.S. states have adopted other measures promoting electric-drive vehicle usage, including access to high occupancy vehicle lanes (with a single occupant BEV or PHEV), waived emission inspections, tax credits, rebates, and other programs.

The U.S. government and some U.S. state governments also fund extensive R&D efforts on batteries and other electric vehicle components.

Other regulatory programs, such as the Zero-Emission Vehicle program that applies in over 10 U.S. states, mandate the sale of substantial numbers of BEVs, PHEVs, and Fuel Cell Electric Vehicles (FCEVs).

What follows is from: Status and Issues for Plug-in Electric Vehicles and Hybrid Electric Vehicles in the United States. Alternative Fuel and Advanced Vehicle Technology Market Trends. Argonne National Laboratory, February 2015.

Subsidies & incentives, state and private for PEV:

subsidies state and private incentives


Cold Weather effects on driving range

Extreme losses of range for BEVs parked outdoors off the grid during cold snaps combined with snowstorms are particularly problematic.

Evidence of the negative effect of cold weather on BEV range is receiving attention (Allen, Lohse-Busch, Rosack, Santini, Yuksel). The Volt Stats! website (Rosack), which includes an option to plot monthly mpg, shows a 14% drop in Volt electric drive “mpg equivalent” from the best month (September) to the worst (January), on a national average basis. Allen shows that range declines as great as 50% (relative to results in fall or spring) are possible for BEVs when temperatures are well below freezing. Coldest-day drops for a BEV in a snowstorm in a cold state are the worst case (Santini). This case may be worse than any of the above analyses have estimated, since driving in a snowstorm was not examined.

Allen, M. 2014. Electric Range for the Nissan Leaf & Chevrolet Volt in Cold Weather. Fleet Carma, Waterloo, Canada.

Lohse-Busch, H., et al. 2012. Advanced Powertrain Research Facility AVTA Nissan Leaf Testing and Analysis. Argonne National Laboratory.

Rosack, M. 2015. Volt Stats!

Santini et al. 2014. Daytime Charging–What is the Hierarchy of Opportunities and Customer Needs? A Case Study Based on Atlanta Commute Data. TRB 14-5337. Presented at the Annual Meeting of the Transportation Research Board, Washington, DC.

Yuksel, T., et al. 2015. Effects of Regional Temperature on Electric Vehicle Efficiency, Range, and Emissions in the United States. Working paper, Carnegie Mellon University. Preliminary results presented at the Annual Meeting of the Transportation Research Board, Washington, DC, January.


Natural Gas limits: 20-40% of recoverable resources are low EROI Sour Gas

SBC Energy Institute. October 2014. Natural Gas Factbook.

Sour gas
There are 855 trillion cubic meters (tcm) of technically recoverable natural gas resources, meaning gas that could be produced regardless of cost, which would last about 240 years at the current production rate of 3.5 tcm per year. But 20 to 40% of that gas is sour, and so expensive and difficult to produce that the readily producible resource falls to roughly 145 to 195 years of supply.
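The supply-years arithmetic is a simple static reserves-to-production ratio, which assumes production stays flat rather than following a depletion curve:

```python
def resource_years(resource_tcm, production_tcm_per_year, usable_fraction=1.0):
    """Years of supply at a constant production rate -- a static
    R/P ratio, not a depletion forecast."""
    return resource_tcm * usable_fraction / production_tcm_per_year

total = resource_years(855, 3.5)            # all technically recoverable gas
low   = resource_years(855, 3.5, 1 - 0.40)  # if 40% is sour and excluded
high  = resource_years(855, 3.5, 1 - 0.20)  # if 20% is sour and excluded
print(round(total), round(low), round(high))  # -> 244 147 195
```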

In some fields, contaminants can be found in very high concentrations. This increases investment needs and production costs to the extent that production may even be rendered uneconomic. Natural gas rich in hydrogen sulfide (H2S) or carbon dioxide (CO2) is called sour gas or acid gas. CO2 and H2S are both extremely corrosive and H2S is also toxic. When these gases are present, special equipment is needed (e.g. special alloys for tubing and piping) to ensure that the natural gas can be safely transported and processed, prior to being sold.

20-40% of global recoverable gas resources could be considered, to varying degrees, to be sour gas, especially in the Middle East and Central Asia, but also in North America, Australia and Russia. Even though sour gas fields have a long history of successful development in several places, lowering the costs of sour-gas operation is essential if this potential is to be fully tapped. This could come through innovation in the gas-separation technologies used in processing plants, or through more advanced deployment of capture and re-injection, including enhanced oil recovery.


Coal or Biomass & Coal to Liquids: CTL, DCL, & CBTL technology

Gray, D., et al. August 1, 2012.  Topic Paper #8 Production of Alternative Liquid Hydrocarbon Transportation Fuels from Natural Gas, Coal, and Coal and Biomass (XTL). National Petroleum Council

CTL plant configuration



With national energy security still being a dominant concern because of increasing dependence on imported oil, there is interest in producing more of our oil from domestic sources. By far the largest single supplier of oil to the U.S. is Canada. But, we must be concerned that an equal amount of imports comes, in total, from four countries wracked by instability or with governments hostile to the U.S.: Algeria, Angola, Iraq, and Venezuela.

In addition, the global trade in oil means that, even though the U.S. imports no oil from Iran, and little from Libya, if further unrest in the Middle East should happen to take Iranian and/or Libyan crude off the world market for a time, global oil prices would skyrocket, directly impacting the American economy. Oil is truly the life blood of any industrialized society. Without it, continued and sustained economic growth and social stability would be impossible. Oil provides us with transportation fuels that give us the freedom of personal mobility. About two-thirds of petroleum consumption in the U.S. is in the transportation sector; from the other perspective, some 95-97% of transportation energy derives from petroleum. A second aspect of the vital importance of petroleum is that it provides key petrochemicals for plastics, urethanes, and synthetic fibers. This application accounts for an estimated 16% of petroleum used in the U.S., and over 25% of petroleum processed in the Gulf Coast region.

XTL is coal and/or biomass liquefaction via Fischer-Tropsch synthesis

XTL is the conversion of carbonaceous feedstocks to a mixture of hydrogen and carbon monoxide, called synthesis gas, followed by the separate step of producing liquid hydrocarbon fuels from the gas via Fischer-Tropsch synthesis. In principle, any carbonaceous feedstock could be used (given appropriate technology for its conversion to synthesis gas), including biomass, coal, coal/biomass blends, natural gas, municipal solid waste, natural bitumens and heavy oils, and waste tires. Synthesis gas conversion technologies also offer potential routes to hydrogen, substitute natural gas, and various solvents or intermediates such as alcohols and aldehydes.

How DCL differs from XTL

The principal alternative to XTL is direct coal liquefaction (DCL), which is the conversion of coal to liquids without the intervening step of producing synthesis gas. The primary DCL technology is hydroliquefaction, the reaction of coal with hydrogen and/or a hydrogen-donor solvent, usually in the presence of a catalyst. Liquids can also be obtained from coal by pyrolysis, and by solvent extraction with various solvents in the sub- or supercritical regimes. Some work has been done on the co-liquefaction of coal blended with materials such as scrap plastic, scrap rubber, or heavy oils. A second major difference between DCL and XTL is that usually XTL products are clean liquids that can be used as transportation fuels with minimal refining, whereas the primary liquids from DCL are usually aromatic with nitrogen, oxygen, and/or sulfur incorporated, so will require substantial downstream refining to meet performance and environmental requirements for transportation fuel usage.

The Shenhua Process

The world’s only commercial-scale hydroliquefaction plant is the so-called Shenhua plant, built by the Shenhua Group Corporation in Majata, Inner Mongolia. The Shenhua process represents evolutionary development of earlier work, beginning with the H-Coal process (Hydrocarbon Research, Inc.), with further improvements by Hydrocarbon Technology Inc. and Headwaters. Bituminous coal is slurried with recycle solvent and catalyst. The slurry is fed to a liquefaction reactor (the largest one ever built, with a 6,000 ton/day capacity), followed by solid-liquid separation. The primary liquids are hydrotreated to produce primarily diesel fuel and naphtha, amounting to 24,000 barrels per day. On an annual basis, the Shenhua plant expects to utilize about 3.5 million metric tons of coal, producing 715,000 metric tons of diesel fuel, 250,000 metric tons of naphtha, 120,000 metric tons of LPG, and about 3,500 metric tons of phenols. On a dry, ash-free basis, about 57% of the coal is converted to liquids.
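As a rough sanity check, the quoted annual tonnages are consistent with the 24,000 barrels per day figure, assuming typical product densities; the densities and the 330 operating days per year are my assumptions, not values from the paper:

```python
BBL_M3 = 0.159  # cubic meters per barrel

# Annual tonnages from the text; densities (t/m3) are assumed typical values
products = {"diesel": (715_000, 0.84), "naphtha": (250_000, 0.70)}

annual_bbl = sum(tons / density / BBL_M3 for tons, density in products.values())
print(round(annual_bbl / 330))  # -> 23029, close to the 24,000 bbl/day cited
```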

There are no Coal/Biomass CBTL plants

CBTL process


The concept of gasifying mixtures of coal and biomass together in the same plant to produce liquid fuels is novel and no such plant currently exists. There are many gasifiers that can gasify biomass but most of these are usually small scale, use air instead of oxygen, operate at lower temperatures thus producing tars, and operate at low or atmospheric pressure. All of those characteristics would make them unsuitable for producing FT liquid fuels.

CTL technology has a proven track record and is technically viable. However, although Sasol has successful commercial plants in operation, the integration of modern entrained-flow coal gasification with advanced FT synthesis has yet to be demonstrated commercially. No commercial or even small-scale plants are currently in operation to convert mixtures of coal and biomass to liquid fuels.

If a CBTL plant did exist it would work like this

The plant would operate just like a CTL plant except that biomass is gasified in addition to the coal. Separate gasifiers could theoretically be used for the biomass and the coal; however, it may be more efficient and less costly if the same gasifier could convert both feeds simultaneously. This would be similar to the situation at NUON, where the Shell gasifier was able to gasify both coal and biomass. In this conceptual plant, high-pressure, entrained-flow gasification with oxygen is used to convert the coal and biomass into synthesis gas. This synthesis gas is cleaned using conventional gas-cleaning technology. Slurry-phase FT reactors are used to convert the clean synthesis gas into raw FT products. Hydrotreating and hydrocracking/hydroisomerization are used to convert the raw FT products into naphtha and diesel. All power required in the plants is generated on-site. Unfortunately, there is very little data in the literature on the gasification of biomass in entrained-flow, high-pressure gasifiers. Because of the fibrous nature of most biomass sources, the material is very difficult to pretreat and feed into a high-pressure gasifier. Typical problems include clumping and bridging. However, the successful demonstration at the NUON plant does indicate that co-gasification is technically feasible, provided that the biomass receives the appropriate pretreatment and preparation.

Barriers to XTL plants being built

Although the United States still imports about 11 MMBPD of oil from the unstable Middle East and other potentially hostile countries and world oil prices are currently hovering around $90 to $100 per barrel, no commercial U.S. XTL plants are being built. This is because of the considerable number of barriers to deployment of XTL. These barriers can be classified as technical, economic, environmental, commercial, and social.

Under economic barriers, the uncertainties about future oil prices are a significant barrier. The high capital expenditures needed for large-scale CTL plants are another major barrier. It is anticipated that the capital cost for large (greater than 50,000 BPD) CTL plants will be over $150,000-$160,000 per daily barrel. Therefore, a 50,000 BPD FT CTL plant could cost over $8 billion. The investment risk for such a large sum is considerable. For GTL the capital cost is lower, but a 50,000 BPD plant would still require an investment of over $3.5 billion. For CBTL, the cost of delivered biomass is also very high.
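The capital cost estimates are straightforward multiplications of capacity by capital intensity; the $70,000 per daily barrel GTL intensity is implied by the text's $3.5 billion figure, not stated directly:

```python
def plant_capex_billion(capacity_bpd, dollars_per_daily_barrel):
    """Total plant capital cost in billions of dollars."""
    return capacity_bpd * dollars_per_daily_barrel / 1e9

print(plant_capex_billion(50_000, 160_000))  # -> 8.0 (the ">$8 billion" CTL case)
print(plant_capex_billion(50_000, 70_000))   # -> 3.5 (implied GTL intensity)
```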

Water use in CTL plants is also an important environmental issue particularly in geographical areas of low rainfall.

Significant deployment of CTL facilities will require the use of large quantities of coal and this will mean an expansion of the coal mining industry. For example, a 50,000 BPD CTL plant will use approximately 7 million tons of coal per annum. There is considerable opposition to increased coal mining. Another issue concerns actual commercial deployment of CTL. Who would take the lead in commercial deployment of XTL technologies?

If many XTL plants were built worldwide at the same time, there would be competition for critical process equipment and for engineering and labor skills. There is already evidence that this bottleneck is being encountered worldwide because of the large number of simultaneous construction projects. Finally, there are the issues of permitting and the usual public reluctance to accept the need for new facilities, especially coal-based plants. Particular barriers to deployment of CBTL technologies include the high cost of biomass feedstock, the availability of sustainable quantities of biomass feedstock, the GHG and energy penalties associated with the cultivation, harvesting, and preparation of the biomass feed, the high cost of biomass transport, and the technical problems with feeding biomass to high-pressure gasification systems.

If water availability presents no problems and water cooling is used for all applications the expected use would be in the range 7-10 barrels of water per barrel of liquid fuel product for CTL and CBTL plants. On the other hand, if water is scarce, in Western locations for example, then maximum use of air cooling could be made.

Because no FT CTL plants have been built since Sasol II and III in South Africa in the early 1980s, it is very difficult to accurately estimate the capital costs of new FT CTL plants that would be built in the U.S. in today’s economic climate. The tight EPC (engineering, procurement, and construction) market has resulted in large escalations of capital costs for major projects. For example, costs for new IGCC plants are estimated to be over $4,000/kW, compared with estimates of around $2,500/kW just a few years ago. Likewise, costs for new oil sands projects in Canada have escalated by 70% or more.

DCL deployment faces many of the same barriers that have already been identified and discussed in the XTL section of this white paper. These include the significant technical risks (especially given only one commercial-scale DCL plant running in the world, and that only for about two years) with the attendant question of who would take the lead in building the first plant(s); the very high capital expense, at least for hydroliquefaction, and the related investment risk; questions of permitting, which will be made all the more complicated by the antipathy of the public and many NGOs to coal; likely shortages of process equipment and skilled labor; the need for substantial expansion of the mining industry; and a need to deal with CO2 and other environmental issues.

The primary liquid from hydroliquefaction, carbonization, or solvent extraction is likely to be highly aromatic, and will also contain various compounds of oxygen, nitrogen, and sulfur. It will require significant downstream refining to produce liquid fuels that meet market and environmental specifications. These additional downstream processes add capital and operating costs. These processes, especially hydroliquefaction, consume substantial amounts of hydrogen. The likely way of obtaining hydrogen is via coal gasification. Not only does this add to capex and opex, it implies that all of the various operations of a gasification plant must be embedded inside a hydroliquefaction plant. If one needs to install gasifiers and ancillary equipment anyway, perhaps XTL would be a better choice. Especially with low-temperature carbonization, and somewhat with solvent extraction, there will inevitably be a solid product containing unreacted or partially reacted coal and ash. Unless a use exists for this solid, collecting and disposing of it in an environmentally acceptable manner will be a major cost.

Barriers to economically successful, commercial-scale direct liquefaction of coal include:

  • Selection of materials of construction for reactor vessels and ancillary equipment, to withstand high-temperature, high-pressure hydrogen environments and abrasive coal or mineral slurries.
  • Finding an inexpensive and convenient source of process heat.
  • Finding an inexpensive source of hydrogen, ideally one that does not contribute to the carbon footprint.
  • Separation of coal mineral matter and unreacted or partially reacted coal particles from the process stream.
  • Subsequent post-liquefaction upgrading and refining of the “synthetic crude oil” from liquefaction into commercial-quality, marketable liquid fuels. It has been presumed that the primary liquids would be treated in the standard unit operations of an oil refinery, but there seems to be little verification of this. A related issue is that the final, upgraded products of DCL have been assumed to be fungible with the comparable petroleum-derived products. This point does not seem to be fully demonstrated either.

Estimated Economics for DCL Plants

It has been nearly twenty years since a detailed economic analysis was done for hydroliquefaction, and possibly much longer for solvent extraction or carbonization. A hydroliquefaction plant capital cost, for coal being converted to clean, specification-grade transportation fuels, is likely in the range of $120,000 per daily barrel of capacity. The estimated cost of the finished liquid products is $0.20 per gallon higher than from a CTL plant. It should be noted that the estimated cost of $120,000 per daily barrel is about double the claimed cost of the Shenhua plant ($62,500). The figure for the Shenhua plant was based on 2008 dollars; the world has seen significant increases in capital equipment prices since then. In addition, it is not clear what basis was used for conversion of yuan to US dollars. Therefore, this is not to say that one figure or the other is grossly in error, but they probably can be taken as “bookends” for the cost of a plant.
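The "about double" comparison is a one-line ratio of the two per-barrel capital intensities quoted above:

```python
us_estimate_per_daily_bbl = 120_000  # recent U.S. hydroliquefaction estimate
shenhua_per_daily_bbl = 62_500       # claimed Shenhua cost, 2008 dollars

print(round(us_estimate_per_daily_bbl / shenhua_per_daily_bbl, 2))  # -> 1.92
```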



Trucks running on CNG or LNG

National Petroleum Council 2012

  1. Chapter Fourteen Natural Gas.
  2. Topic Paper #21 An Initial Qualitative Discussion on Safety Considerations for LNG Use in Transportation.

Below is an overview of obstacles to using Compressed natural gas (CNG) or Liquefied Natural Gas (LNG) in transportation:

NPC chapter 14 obstacles to truck CNG LNG

NPC chapter 14 obstacles to truck CNG LNG 2

One large objection trucks and railroads have to CNG and LNG is their low energy density (shown below) compared to diesel fuel; in addition, natural gas’s volatility and low energy density make handling difficult. Whatever the technology, gas conditioning incurs high handling costs and has limited flexibility. Unlike oil, which is fungible, natural gas relies on heavy infrastructure (pressurized pipelines, storage caverns, or cryogenic carriers).

The volumetric energy density of chemical fuels in MJ/liter

The Energy Information Administration (EIA) Annual Energy Outlook (AEO) 2010 estimates natural gas resources at 2,575 trillion cubic feet (tcf) with technically recoverable shale gas resources estimated at 368 tcf and annual production rates of approximately 23 tcf. In the AEO2011 outlook, technically recoverable shale reserves increased to 827 tcf. Shale gas and coalbed methane are forecast to account for 57% of U.S. production by 2030.

My note: But this is awfully optimistic, as Mason Inman showed in Nature and as many others have written. Boldly moving toward a natural gas CNG or LNG fleet, when this energy resource may decline even more precipitously than oil, would be reckless without building more LNG import facilities that will not be affected by sea-level rise.

Brief Review Of LNG As A Transportation Fuel

LNG has been used as a transportation fuel since the 1970s, although in limited volumes for heavy-duty and fleet applications. In 2001, LNG vehicles accounted for only about 7.6 million gallons (about 2%) of the 366 million gallons of alternative fuels consumed in the United States, and a fraction of the 30 billion gallons of diesel consumed by freight trucks annually.

There are an estimated 7,000 vehicles with LNG fuel tanks operating in the U.S. today; public transit systems operate hundreds of LNG-fueled buses in Dallas, Phoenix, El Paso, Austin, Los Angeles, and Orange County. LNG is also established and growing quickly as a transport fuel for short-haul, heavy-duty fleets. For example, in June 2010 the Ports of Los Angeles and Long Beach announced the replacement of 800+ diesel drayage trucks with LNG trucks, and in April 2011 they ordered 200 LNG vehicles for water services operations.

Mining and refuse collection vehicles also represent major existing applications. LNG has also been used to fuel LNG vessels engaged in international trade and in 20 other marine vessel applications (as of 2010), such as ferries, offshore supply vessels, and patrol vessels, outside of the U.S., predominantly in Norway. A future increase in the use of LNG as a marine fuel on inland waterways and in near-sea shipping is expected.

Large vehicles with frame rail mounted tanks can hold up to 300 gallons of LNG. Most natural gas engines can use either LNG or CNG as a fuel source. LNG is typically used in medium/heavy duty applications where the higher fuel density compared to CNG maximizes driving range while minimizing weight and space required for fuel storage.

LNG for transportation fuel is currently supplied throughout the U.S. by imports or local production. Producers contract to haul the fuel in purpose-built cryogenic trailers to approximately 65 refueling sites across the country. There are an estimated 170 LNG transportation trailer trucks operating in North America, and each truck has the capacity to deliver 9,000–13,000 gallons per load, limited by maximum payload.

Currently LNG vehicle use is heavily concentrated in California with 71% of US refueling facilities located in the state. It is estimated that at least 200,000 gallons/day of LNG were trucked into California in 2006. National consumption in transportation has continued to increase with the addition of new LNG production sites such as Clean Energy’s plant in Boron, CA which produces 160,000 gallons of LNG per day.

Refueling sites are almost all owned and used by transit fleet vehicles.

CNG lower mileage, heavy and expensive tanks

Mileage will not be nearly as good, not only because the energy density of CNG and LNG is much less than diesel, but the tanks to store CNG are very heavy and expensive:

Classification and Comparisons of Light-Duty CNG Cylinder Options

The primary natural gas Heavy-Duty market hurdles that need to be overcome include:

  • High vehicle costs due to limited volumes of factory finished vehicles and engines, and low volume of demand for natural gas systems.
  • Limited refueling infrastructure currently in-place.
  • A broader range of engine options is required to meet the wide variety of HD vehicle applications.

Natural gas retail refueling infrastructure is at an early stage of development and will require major expansion and investment to meet the growing demand for natural gas transportation fuel as the industry commercializes. As of March 2012, there were 988 CNG stations (compared to ~160,000 retail gasoline stations) and 47 LNG stations serving HD vehicles. The transition to a fully scaled and mature retail infrastructure system serving the light- and heavy-duty markets will take time and investment.

The technology opportunities for infrastructure include:

  • Improvements in modular CNG dispensing systems to improve the cost effectiveness of retail station upgrades.
  • Improved cost and performance of CNG compressor systems.
  • Small-scale LNG technology to support localized HD fleets.

There are approximately 500 trucks distributing LNG in specialized cryogenic tank trailers. Major LNG tanker firms move the product for two markets: peak shaving facilities in the Northeast and the heavy-duty transportation market in the Southwest. The economics of LNG distribution are at a disadvantage relative to diesel, as a typical trailer carries 10,000 gallons of LNG, or 6,700 diesel equivalent gallons (DEG), compared to 9,000 gallons for a diesel trailer.
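That trailer-load disadvantage can be checked with the report's own numbers; this sketch just derives the implied DEG conversion factor and the resulting energy shortfall per load:

```python
# Energy delivered per trailer load, using only the numbers in the
# paragraph above: 10,000 gallons of LNG = 6,700 diesel equivalent
# gallons (DEG), versus 9,000 gallons for a diesel trailer.
LNG_GALLONS_PER_LOAD = 10_000
DEG_PER_LNG_GALLON = 6_700 / 10_000  # implied conversion factor, 0.67

lng_deg = LNG_GALLONS_PER_LOAD * DEG_PER_LNG_GALLON  # 6,700 DEG
diesel_deg = 9_000

shortfall = 1 - lng_deg / diesel_deg
print(f"LNG trailer delivers {shortfall:.0%} less energy per load")
```

So each LNG trailer load delivers roughly a quarter less usable energy than a diesel load, which means more trucks and trips for the same delivered energy.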

CNG stations are designed to accept incoming fuel from the distribution system, and then compress that incoming gas to the dispensing pressures of approximately 3,600 pounds per square inch (psi). On-site equipment typically includes dryers to remove moisture from the natural gas, multistage compressors to boost natural gas from distribution/transmission pressures to 4,500 to 5,000 psi, high-pressure storage cylinders to act as pressure buffers for pressure filling vehicles, and dispensers to transfer fuel to vehicles. CNG is pressure transferred from storage to the lower pressure of the vehicle, which is typically 3,600 psi at full fill. Incremental land requirements for CNG stations are minimal when compared to gasoline stations since large volumes of fuel are not required to be stored due to the interconnection with the distribution system.

There are less than 10,000 truck stops across the nation providing diesel fuel to the heavy-duty truck fleet. These truck stops sell approximately 32 billion gallons of diesel for on-road heavy-duty trucks.

The majority of the engines in the medium and heavy categories tend to be certified using the Diesel engine provisions as they are based on diesel engine platforms. One of the key distinguishing features of the alternative pathways is the useful life.

  • For gasoline Otto engines, this is 10 years or 110,000 miles, whichever occurs first, across all categories.
  • For Diesel engines, the useful life for medium heavy-duty diesel engines is 10 years or 185,000 miles, whichever occurs first, and for heavy heavy-duty diesel engines, useful life rises to 10 years, 435,000 miles, or 22,000 hours, whichever occurs first.

For Class 8b combination trucks running high annual mileage, U.C. Davis estimates fuel can be up to 40% of the total cost.  In an industry with small operating margins, managing the cost of fuel is a key strategic activity, and hence the drive to improve fuel economy or minimize the purchased cost of fuel.

Some of the critical technical pathways for natural gas systems in HD vehicles include:

  • Combustion strategy
  • Torque and power
  • Fuel economy and fuel strategies
  • Complexity of changes to the base diesel engine
  • Aftertreatment
  • Fuel storage (CNG and LNG)
  • System incremental cost

Compared to the baseline diesel engine, the natural gas variants typically have reduced thermal efficiency due to throttling and a lower compression ratio, resulting in approximately 7 to 10% lower fuel economy in current applications.

Adapting diesel engines to operate on natural gas using spark ignition technologies, similar to gasoline engines, has been the prevalent approach to date. The adaptation involves lowering the compression ratio, modifying cylinder heads to incorporate spark plugs, and adding a throttle to modulate airflow, often accompanied by a smaller turbocharger because of the lower air demands relative to diesel.

Typical Operating Cost Breakdown of Class 8b Truck. American Trucking Association, "Is Natural Gas a Viable Alternative to Diesel for the Trucking Industry?"

Because of the low energy density of natural gas compared to diesel, CNG has largely been restricted to vehicle applications that either require only modest operating range or that can accommodate significant numbers of cylinders such as transit buses and refuse collection.

Cost of Renewable Natural Gas (RNG)


U.S. Department of Energy, Alternative Fuels and Advanced Vehicles Data Center (website), “Alternative Fueling Station Total Counts by State and Fuel Type,” 2012, fuels/stations_counts.html


Posted in Natural Gas Vehicles, Trucks

Better truck fuel efficiency to delay the end of the oil age

NAS. 2010. Technologies and Approaches to Reducing the Fuel Consumption of Medium- and Heavy-Duty Vehicles (National Academy of Sciences Study)

This free, 251 page document is one of the best I’ve seen on the myriad ways trucks could double their miles-per-gallon.  Below are some of the main overview charts. If you don’t have the time to read this paper, the NPC Chapter 10 Heavy-duty Engines & Vehicles is just 26 pages long and explains some of the unfamiliar terms well (see their excellent chart of priorities and obstacles at the end of this post).

Figure 14-31. Energy Balance of a Fully Loaded Class 8 Tractor-Trailer on a Level Road at 65 mph

fuel eff consumption by class

The first thing to know about trucks is that they are custom built to perform specific duties, so what's good for a long-haul truck might not do as much for an urban delivery truck, as shown in Figure S-1. For example, all trucks and buses benefit from better engines, but long-distance trucks and coach buses don't stop often enough to benefit as much from hybrid batteries that store braking energy as delivery and refuse trucks do; on the other hand, they benefit more than other trucks from aerodynamic improvements.


FIGURE S-1 Comparison of 2015-2020 new-vehicle potential fuel-saving technologies for seven vehicle types: tractor trailer (TT), Class 3-6 box (box), Class 3-6 bucket (bucket), Class 8 refuse (refuse), transit bus (bus), motor coach (coach), and Class 2b pickups and vans (2b).  Optimal driver management and coaching would also help but this has never been quantified.

FIGURE 2-7 Energy "loss" range of vehicle attributes as impacted by duty cycle, on a level road


According to the Transportation Energy Data Book, trucks move over 8.7 billion tons of freight annually in the United States, accounting for more than two-thirds of national freight transport. There are over 8 million Class 3–8 trucks on the road, according to the American Trucking Association. A significant share of trucking companies are small businesses, with 96% operating fewer than 20 trucks and nearly 88% operating six trucks or fewer. Consequently, trucking is a highly fragmented industry with intense competition and low profit margins.

Class 7 & 8 trucks account for over 4.5 million units. These heavy working trucks typically consume 6,000–8,000 gallons of fuel per year for Class 7 and 10,000–13,000 gallons per year for Class 8a. Class 8b trucks are typically long-haul trucks weighing more than 33,000 pounds that pull one or more trailers for flatbed, van, refrigerated, and liquid bulk freight. Class 7 represents some 200,000 vehicles, while Classes 8a and 8b consist of 430,000 and 1,720,000, respectively. Class 8b trucks typically consume 19,000–27,000 gallons of fuel per year and account for more than 50% of the total freight tonnage moved by trucks.

The Class 8 truck market is 98% controlled by 6 brands owned by 4 companies: Freightliner, International, Peterbilt, Kenworth, Volvo, and Mack. Many of them also make Class 3-6 trucks and buses. These companies either develop their own engine platforms or buy them from independent engine manufacturers, dominated by a small number of players: Cummins, Detroit Diesel, Navistar, and Volvo Powertrain, with GM holding a key position in the Class 3-6 truck space.

TABLE S-1 Fuel Consumption Reduction Potential for Power Train Technologies
Diesel engines:  15 to 21%
Gasoline engines: Up to 24%
Diesel over gasoline engines: 6 to 24%
Improved transmissions: 4 to 8%
Hybrid power trains: 5 to 50%

TABLE S-2 Fuel Consumption Reduction Potential for Vehicle Technologies
Aerodynamics: 3 to 15%
Auxiliary loads: 1 to 2.5%
Rolling resistance (tires): 4.5 to 9%
Mass (weight) reduction: 2 to 5%
Idle reduction: 5 to 9%
Intelligent vehicle: 8 to 15%

TABLE S-3 Overall Fuel Consumption Reduction Potential for Typical New Vehicles, 2015-2020
51% Tractor-trailer
47% Class 6 box truck
50% Class 6 bucket truck
45% Class 2b pickup
38%  Refuse truck
48%  Transit bus
32% Motor coach

fuel eff NPC technology pct by truck class


fuel eff NPC technology price by truck class

Class Applications Gross Wt Range (lb)
1c Cars only 3200-6000
1t Minivans, small SUVs, small pick-ups 4000-6000
2a Large SUVs, standard pick-ups 6001-8500
2b Large pick-ups, Utility Van, Multi-purpose, Mini-bus, Step Van 8501-10000
3 Utility Van, Multi-purpose, Minibus, Step Van 10001-14000
4 City Delivery, Parcel Delivery, Large Walk-in, Bucket, Landscaping 14001-16000
5 City Delivery, Parcel Delivery, Large Walk-in, Bucket 16001-19500
6 City Delivery, School Bus, Large Walk-in, Bucket 19501-26000
7 Furniture, refuse, concrete, tow, fire engine, tractor-trailer 26001-33000
8a Dump, refuse, concrete, furniture, tow, fire engine, city bus 33001-80000
8b Tractor-trailer, Bulk Tanker, Flat bed 33001-80000


The 6 miles per gallon (mpg) fuel economy of a line-haul truck seems paltry compared to the 40 mpg of a car. But when the fuel used to move cargo weight is considered, the big truck looks pretty good. A large Class 8 truck carrying 42,000 pounds (21 tons) of goods at 6 mpg is moving 126 ton-miles per gallon of fuel, or 126 ton-mpg. But a car with 500 pounds of people and luggage is only getting 10 ton-mpg, less than 10% of the large truck.
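The ton-mpg arithmetic above can be sketched directly (numbers from the paragraph; 2,000 lb per short ton):

```python
# Freight efficiency in ton-mpg: tons of cargo moved one mile per
# gallon of fuel, using the numbers from the paragraph above.
def ton_mpg(payload_lb: float, mpg: float) -> float:
    return (payload_lb / 2000) * mpg  # 2,000 lb per short ton

truck = ton_mpg(42_000, 6)  # Class 8 truck: 21 tons x 6 mpg = 126 ton-mpg
car = ton_mpg(500, 40)      # car: 0.25 tons x 40 mpg = 10 ton-mpg
```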

So when diesel and gasoline are rationed, someone needs to figure out the situations where it makes more sense to deliver food and other essential items to homes rather than having each household drive to stores.

Class  Gross Weight Range (lb)  Empty Weight Range (lb)  Typical Payload (cargo) Weight (lb)  Payload Capacity Max (% of Empty)  2006 Unit Sales Volume  Typical mpg Range  2007 Avg ton-mpg
1c 3200-6000 2400-5000 250-1000 10-20 7,781,000 25-33 15
1t 4000-6000 3200-4500 250-1500 8-33 6,148,000 20-25 17
2a 6001-8500 4500-6000 250-2500 6-40 3,020,000 20-21 26
2b 8501-10000 5000-6300 3700 60 545,000 10-15 26
3 10001-14000 7650-8750 5250 60 137,000 8-13 30
4 14001-16000 7650-8750 7250 80 48,000 7-12 42
5 16001-19500 9500-10800 8700 80 41,000 6-12 39
6 19501-26000 11500-14500 11500 80 65,000 5-12 49
7 26001-33000 11500-14500 18500 125 82,411 4-8 55
8a 33001-80000 20000-34000 20000-50000 100-150 45,600 2.5-6 115
8b 33001-80000 23500-80000 40000-54000 125-200 182,395 4-7.5 155


FIGURE 3-7 Some aerodynamic technologies

FIGURE 4-15 Battery type versus specific power and energy. SOURCE: Kalhammer et al. (2007).

It’s easy to see why lithium-ion batteries are winning out over other battery technologies — they have both higher power (I want it NOW) and higher energy (long distance for hours) and weigh less. But li-ion batteries perform poorly, and don’t achieve their optimal driving range, below 32°F or above 95°F. Hot temperatures also shorten li-ion battery life.

% of weight  lbs  Major category  Description
24 4080 Powertrain Engine and cooling system, transmission, accessories
19 3230 Truck body structure Cab-in-white, sleeper unit, hood & fairings, Interior & glass
18 3060 Misc Accessories/systems Batteries, fuel system, exhaust hardware
17 2890 Drivetrain & Suspension Drive axles, steer axle, suspension system
12 2040 Chassis / Frame Frame rails & crossmembers, Fifth wheel and brackets
10 1700 Wheels and Tires Set of 10 aluminum wheel + tire

Figure 5-32 Weight distribution of major component categories in Class 8 tractors. SOURCE: Smith and Eberle (2003).


Potential for Lightweighting Trucks

Class 8 trucks, per 1,000 pounds lighter, get up to 1% better fuel efficiency on level ground, 1.6% better in stop-and-go traffic, and up to 2.4% better going uphill (Table 5-16, not shown).

Trucks, trailers, and buses are benefiting from greater use of lightweight materials and structures. Components already making use of aluminum include the cab structure, wheels, fifth wheel, bellhousing, and more (see Table 5-17). Aluminum composite panels have been introduced on trailers, and the use of wood in trailers is diminishing. The barrier to additional use of aluminum or carbon composites is primarily cost effectiveness, with carbon fiber composites, for example, costing several times more per unit mass than aluminum. Some technical and cost-effectiveness issues with carbon composites have been studied in DOE programs with industry (Rini, 2005).

While progress is being made in weight reduction through materials and design, certain weight-adding components have been necessary. Emissions control components add roughly 400 lb, and aerodynamic devices another 200 lb, though the latter are deemed a positive tradeoff for their drag reduction. Similarly, the weight added by efficiency technologies such as waste heat recovery is projected to provide net benefits. In hybrid applications, batteries and other hybrid components add 300 to 1,000 lb for trucks, and even more in bus applications.

FIGURE 5-38 Weight reduction opportunities with aluminum.

TABLE 6-1 Technologies and Vehicle Classes Likely to See Benefits
Technologies Class 8 Class 3-7 Refuse Truck
Trailer aerodynamics X
Cab aerodynamics X X
Tires and Wheels X X X
Weight reduction X X X
Transmission & driveline X X X
Accessory electrification X X
Overnight idle reduction X
Idle reduction X X
Engine efficiency X X X
Waste heat recapture X
Hybridization X X X
Dieselization (from gasoline) X

TABLE 6-2 Fuel Consumption Reduction (percentage) by Application and Vehicle Type
Technology Class 8 Class 3-7 Refuse Truck
Engine 20 11-14 14
Aerodynamics 11.5 6 0
Rolling Resistance 11 3 1.5
Transmission & driveline 7 4 4
Hybrids 10 30-40 35
Weight 1.25 4 1

fuel eff NPC technology pct tech improvement

fuel eff NPC technology pct cost improvement

fuel eff NPC cruise cntrl pct by truck class

fuel eff NPC cruise cntrl cost by truck class

fuel eff NPC obstacles


Tractors designed with aerodynamics in mind have been on the market for almost 30 years. A relatively wide range of aero-related improvements have been implemented on modern truck tractors, which has substantially improved their fuel economy. Figure 10-7 shows a summary of aerodynamic improvements.

More on aerodynamic design

[I think that this will be less of an issue as roads crumble and long-haul trucks can’t go fast, and hopefully railroads will be carrying a larger share of long-distance freight, since they’ll have plenty of room once a great depression hits and fewer goods are traveling]

Some measures, such as roof fairings and deflectors, have been widely adopted throughout the trucking industry, while others are less prevalent. Improvements in fuel economy on the order of 10% have already been documented, owing to a combination of tractor aerodynamic measures.

The prospect of high fuel prices has renewed industry interest in active aerodynamics. Examples of active aerodynamic systems include the following:

  • Grille shutters to close off the grille when active engine cooling is not needed.
  • Active ride height control to lower the tractor and trailer at highway speeds. This technology lowers the total vehicle by 0.75 to 1.0 inch, which reduces overall form drag by reducing frontal area.
  • Deployable mirrors or in-cabin vision systems to take over mirror functionality at highway speeds, when the mirrors would be stowed for improved aerodynamics. Current safety regulations, which require fixed mirrors, prevent this technology from being deployed.

Trailer side skirts, mounted under the trailer to deflect airflow from sweeping the trailer underside, have been shown to have a substantial aerodynamic effect; fuel economy improvements of between 3.8 and 5.2% have been reported for such devices. However, while aerodynamically compelling, these features cause a wide variety of problems for fleet operators. Service, inspection, tire storage, and tire maintenance are all hindered by lack of easy access to the trailer underside, and skirts are prone to damage and breakage in the harsh environments where trailers must operate: work sites, areas around fork trucks, ice and snow, steep loading docks, and the like.

The rear of the trailer can be optimized for low drag using a “boat-tail” or similar device to reduce the massive separation bubble that follows the trailer back surface. Improvements in fuel economy ranging from 2.9 to 5.0% have been reported. As with side skirts, however, such devices have been resisted by the truck-buying fleets due to practical concerns.

Generally speaking, aerodynamic improvements to trailers have been slower and less noticeable than those on the tractor. This is largely due to the different ownership models of tractors versus trailers. Tractors are specified meticulously, and represent a major investment for their owners. A trailer costs much less, and is often seen as an interchangeable commodity with substantial cost pressure. Further, trailers are far more numerous than tractors, by a factor of 4:1 in a typical fleet. Many trailers are therefore sitting idle at any given time; the net result is a much longer payback time for investments in trailer efficiency. And finally, in some cases, the trailer is not owned by the same entity that owns the tractor and pays for the fuel. This misalignment of incentives is a hurdle to more aggressive implementation of trailer aerodynamic measures.


Rolling resistance accounts for roughly one-third of the power required to move a heavy truck over a level road at highway speeds. Rolling resistance comes primarily from inelastic deformation of the tire as it rotates. This deformation is a complex function of the load level, tire materials, tire and tread design, inflation levels, and the road surface itself. Generally speaking, the resistive force is proportional to the weight of the vehicle. In terms of energy consumption, the impact of rolling resistance is directly proportional to vehicle speed. Opportunities for reducing tire resistance are highly dependent on application as discussed below.

Wide-Base Single Tires. In Class 8 line-haul applications, operation is exclusively on-road and most time is spent at higher speeds, which provides several opportunities for optimization. The most significant development is the so-called “New-Generation Wide-Base Single” (NGWBS) tire, which employs a wider tread to replace two traditional truck tires with a single tire. Studies show fuel economy improvements in the range of 5 to 10% for the use of NGWBS tires in line-haul applications. These gains must be traded off against several downsides of such tires, including an added capital cost of around $3,600 per vehicle, and a perception of reduced safety.
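A back-of-envelope payback sketch for NGWBS tires: the $3,600 upgrade cost and the 5% savings (low end of the 5–10% range) come from the study, while the annual fuel volume (taken from the Class 8b range quoted earlier) and the diesel price are illustrative assumptions:

```python
# Rough payback estimate for New-Generation Wide-Base Single tires.
# The $3,600 cost and 5% savings are from the study; the annual fuel
# volume and diesel price below are illustrative assumptions.
TIRE_UPGRADE_COST = 3_600   # $ per vehicle, from the study
ANNUAL_FUEL_GAL = 20_000    # assumed, within the 19,000-27,000 gal/yr Class 8b range
DIESEL_USD_PER_GAL = 4.00   # assumed price
FUEL_SAVINGS = 0.05         # low end of the reported 5-10% range

annual_savings = ANNUAL_FUEL_GAL * DIESEL_USD_PER_GAL * FUEL_SAVINGS
payback_years = TIRE_UPGRADE_COST / annual_savings
```

Under these assumptions the tires pay for themselves in under a year, which helps explain why high-mileage line-haul fleets adopt them despite the perception of reduced safety.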

NGWBS tires are not the only means to reduce tire rolling resistance. Proper inflation and alignment can also contribute to better fuel economy. Maintenance of proper inflation levels can be improved by tire pressure monitoring, and in some cases by the use of nitrogen gas in the place of air. The total effect of such changes is around 1.5 to 3%. However, with the exception of pressure monitoring, these modifications are very low-cost options, requiring only basic service and attention to the vehicle. The cost of such activity is estimated at only around $300 per vehicle, for both Class 3-6 vehicles and vocational Class 8 trucks. These improvements are particularly relevant for non-line-haul vehicles, where NGWBS tires are often not an option.

Vehicle Weight

Vehicle weight is a significant factor in fuel economy, and it also stresses roads, causing billions of dollars in needed repairs every year. It affects the power required to accelerate and the power dissipated in braking.

Vehicle weight also impacts such factors as rolling resistance and transmission performance, so that weight is an ever-present factor in truck fuel economy. Its effect is most pronounced for vehicles with frequent changes in speed, which dissipate more energy in braking than constant-speed driving does.

The benefit of lower weight has been studied for a wide class of vehicle types, with varying results.

For line-haul trucks over level terrain, a benefit of between 0.4 and 1.0% in fuel economy is reported per 1,000 pounds of weight reduction. The benefit improves to 1.5– 2.0% for uphill climbing routes, where more energy is invested in pulling the weight of the vehicle to higher elevation. Data on other types of vehicle are less consistent, with results generally in the low single-digits of fuel economy improvement, depending on vehicle class and duty cycle.
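A minimal sketch of that weight-reduction arithmetic, using the reported per-1,000-lb fuel-economy gains (level terrain: 0.4–1.0%, uphill: 1.5–2.0%); the 2,000 lb weight cut is just an example figure:

```python
# Percent fuel-economy improvement for a given weight cut, using the
# reported gains per 1,000 lb removed (level: 0.4-1.0%, uphill: 1.5-2.0%).
def fuel_economy_gain(weight_cut_lb: float, pct_per_1000_lb: float) -> float:
    return (weight_cut_lb / 1000) * pct_per_1000_lb

best_level = fuel_economy_gain(2000, 1.0)   # 2,000 lb cut, level terrain, best case: 2%
worst_level = fuel_economy_gain(2000, 0.4)  # same cut, conservative case: 0.8%
```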

All-electric truck battery weight

The battery weight for a Class 8 truck is roughly 55,000 lbs while the max cargo weight is 59,000 lbs, so clearly it is impossible to electrify Class 7-8 trucks unless battery energy density increases at least 10-fold.

All-electric truck batteries weigh about 22 kg per kWh, and a typical Class 3-6 battery is 80 kWh, so 1,760 kg or about 3,880 lbs. That weight not only lowers the distance the truck can go, but also the amount of cargo it can carry.
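The ~3,880 lb figure follows directly from the stated 22 kg/kWh; a quick sketch, where the kg-to-lb conversion factor is the only added constant:

```python
# Battery pack weight from the figure above: ~22 kg per kWh of capacity.
KG_PER_KWH = 22       # from the text
LB_PER_KG = 2.2046    # standard conversion factor

def battery_weight_lb(kwh: float) -> float:
    return kwh * KG_PER_KWH * LB_PER_KG

class_3_6_pack = battery_weight_lb(80)  # 80 kWh pack -> ~3,880 lb
```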

Idling. Line-haul truck engines spend many hours idling in a given 24-hour period. Engine idling is used for a variety of functions when the truck is stationary, such as powering air-conditioners, providing electrical power for TVs, laptops, kitchenettes, etc., providing cabin heat in cold temperatures, and maintaining engine temperature. Though an idling diesel engine is not efficient, it is a simple and easy way to provide these functions to the typical long-haul trucker.

Reducing a single truck’s idling time by 15 minutes per day can save hundreds of dollars per year in fuel costs.
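A back-of-envelope check on that claim; the idle fuel burn rate and diesel price below are illustrative assumptions, not figures from the report:

```python
# Back-of-envelope check on the idling-savings claim. The burn rate
# and fuel price are illustrative assumptions, not report figures.
IDLE_GAL_PER_HR = 0.8      # assumed heavy-duty diesel idle burn rate
DIESEL_USD_PER_GAL = 4.00  # assumed fuel price

def annual_idle_savings(minutes_per_day: float) -> float:
    """Dollars saved per year by cutting idle time by `minutes_per_day`."""
    return (minutes_per_day / 60) * IDLE_GAL_PER_HR * DIESEL_USD_PER_GAL * 365
```

Fifteen minutes a day works out to roughly $290 a year under these assumptions, consistent with the "hundreds of dollars" claim, and more with larger idle cuts or higher fuel prices.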

Telematic systems provide information to fleet managers and truckers with the primary objective of improving fleet efficiencies and fuel economy. Idle reduction and route management are examples of telematic applications. Idle time can be reduced using telematics in multiple ways. For example, with real-time knowledge of truck location and route traffic, a fleet manager can direct drivers to nearby trucks stops with hotel-load capacity or similar idle-elimination capabilities. By using telematic technologies to keep trucks on-route, fewer loads are delayed through unplanned route changes, and more trucks arrive at their destination on-time without overnight stops. These advantages can have a sizeable impact on fleet fuel economy.

Telematic technologies are also instrumental in route management. This includes both planning of routes based on past history of truck routes and active real-time management of truck route following. A study conducted for a report by TIAX in 2009 found that route optimization software was able to reduce pick-up and delivery fleet mileage between 5 and 10% per year. For regional and line-haul fleets, which spend relatively less time in traffic, the fuel savings potential was approximately 1% per year according to the NRC.

fuel eff NPC obstacles 2


Posted in Transportation, Trucks

New land converted to cropland to grow biofuel crops equal to 34 coal-fired power plants

Summary of article below: Between 2008 and 2012, over 7 million acres of new land, much of it grasslands, were converted to croplands, damaging native ecosystems and mimicking the extreme land-use change that led up to the Dust Bowl in the 1930s. Because most new cropland was planted to corn that may ultimately fill our gas tanks, we could be, in a sense, plowing up prairies with each mile we drive. The researchers also found that most new croplands were on marginal lands not well suited for agriculture and often prone to heightened risks of erosion, flooding, and drought.

Read the full paper at: Lark, T. L., et al. 2015. Cropland expansion outpaces agricultural and biofuel policies in the United States. Environ. Res. Lett. 10

Tyrrell, K. A. Apr 2, 2015. Plowing prairies for grains: Biofuel crops replace grasslands nationwide. 

Clearing grasslands to make way for biofuels may seem counterproductive, but University of Wisconsin-Madison researchers show in a study today (April 2, 2015) that crops, including the corn and soy commonly used for biofuels, expanded onto 7 million acres of new land in the U.S. over a recent four-year period, replacing millions of acres of grasslands.

The study—from UW-Madison graduate student Tyler Lark, geography Professor Holly Gibbs, and postdoctoral researcher Meghan Salmon—is published in the journal Environmental Research Letters and addresses the debate over whether the recent boom in demand for common biofuel crops has led to the carbon-emitting conversion of natural areas. It also reveals loopholes in U.S. policies that may contribute to these unintended consequences.

“We realized there was remarkably limited information about how croplands have expanded across the United States in recent years,” says Lark, the lead author of the study. “Our results are surprising because they show large-scale conversion of new landscapes, which most people didn’t expect.”

The conversion to corn and soy alone, the researchers say, could have emitted as much carbon dioxide into the atmosphere as 34 coal-fired power plants operating for one year—the equivalent of 28 million more cars on the road.

The study is the first comprehensive analysis of land-use change across the U.S. between 2008 and 2012, in the “critical time period” following passage of the federal Renewable Fuel Standard (RFS), and during a “new era” of agriculture and biofuel demand, Lark and Gibbs say. The results may aid policymakers as Congress debates whether to reform or repeal parts of the RFS, which requires blending of gasoline with biofuels that are supposed to be grown only on pre-existing cropland, in order to minimize land-use change and its associated greenhouse gas emissions.

Lark recently visited Washington, D.C., to present the findings to the Environmental Protection Agency and the White House Office of Management and Budget, which share responsibility for rule-making and review of the RFS.

For instance, the study found that 3.5 million acres of corn and soy grown during this time period was produced on new, rather than pre-existing, cropland, rendering it potentially ineligible for renewable fuel production under the RFS. However, this went undetected due to limitations in current federal monitoring, which captures only national-level, aggregate land-use change rather than the high-resolution changes found in the study.

The study also showed that expanding the geographic scope of another policy, the Sodsaver provision of the 2014 Farm Bill, could better prevent widespread tilling of new soils. This policy reduces federal subsidies to farmers who grow on previously uncultivated land, but it applies in only six Northern Plains states. The researchers say the findings suggest a nationwide Sodsaver is needed to protect remaining native ecosystems, since roughly two-thirds of new cropland conversion occurred outside of these states.

Using high-resolution satellite imagery data collected over the last 40 years by the U.S. Department of Agriculture and the U.S. Geological Survey, the researchers identified where land had been converted to cropland, to what extent conversion had occurred, and the nature of the conversion—for instance, whether wetlands were converted for soy, or grasslands were turned into cornfields.

Grasslands are home to a diversity of species and store an abundance of carbon in their soils; yet the researchers found that nearly 80 percent of cropland expansion replaced grasslands, including 1.6 million acres of undisturbed natural grassland, an area equivalent to the state of Delaware.

Though not included in the study, the researchers estimate this conversion emitted as much carbon dioxide as 23 coal-fired power plants running for a year.

In fact, nearly a quarter of all land converted for crop production came from these long-standing prairies and ranges, much of it within the Central Plains from North Dakota to Texas. “It mimics the extreme land-use change that led up to the Dust Bowl in the 1930s,” Lark says.

Because most new cropland was planted to corn that may ultimately fill our gas tanks, he added, “we could be, in a sense, plowing up prairies with each mile we drive.”

The researchers also found that most new croplands were on marginal lands not well suited for agriculture and often prone to heightened risks of erosion, flooding and drought.

“There could be severe environmental consequences for bringing this land into crop production,” Lark says.

Gibbs, also a professor in the UW-Madison Nelson Institute Center for Sustainability and the Global Environment, believes the findings present an opportunity to address the shortcomings in existing U.S. policies while also facilitating a more climate-friendly approach to biofuels.

“The good news is that our existing policies could be refined to help improve conservation,” she says. “By closing the gaps in the existing Sodsaver and RFS, we could better protect our nation’s grasslands and prairies.”
