From the “it’s OK, we used other people’s money” department, via Andrew Follett at The Daily Caller:
Two wind turbines on the Lake Land College campus in Mattoon, IL. Image: Google Earth
Lake Land College recently announced plans to tear down broken wind
turbines on campus, after the school got $987,697.20 in taxpayer support
for wind power.
The turbines were funded by a $2.5 million grant from the U.S. Department of Labor, but the turbines lasted for less than four years and were incredibly costly to maintain.
“Since the installation in 2012, the college has spent $240,000 in
parts and labor to maintain the turbines,” Kelly Allee, Director of
Public Relations at Lake Land College, told The Daily Caller News
Foundation.
The college estimates it would take another $100,000 in repairs to make the turbines function again
after one of them was struck by lightning and likely suffered
electrical damage last summer. School officials originally estimated
the turbines would save the college $44,000 in electricity annually,
far more than the $8,500 worth of power they actually generated over their entire lifetime. Under the original
optimistic scenario, the turbines would have to last for 22.5 years just
to recoup the costs, not accounting for inflation. If viewed as an
investment, the turbines had a return of negative 99.14 percent.
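For anyone who wants to check the arithmetic, here is a quick back-of-envelope sketch using the figures quoted above (the rounding is mine):

```python
# Payback and return figures implied by the numbers in the article.
taxpayer_cost = 987_697.20        # total taxpayer support for the turbines
claimed_annual_savings = 44_000   # the college's original savings estimate per year
actual_lifetime_output = 8_500    # value of the electricity actually generated

payback_years = taxpayer_cost / claimed_annual_savings
roi = (actual_lifetime_output - taxpayer_cost) / taxpayer_cost

print(f"Payback under the optimistic scenario: {payback_years:.1f} years")  # ~22.4, i.e. roughly 22.5
print(f"Return on the 'investment': {roi:.2%}")                             # ~-99.14%
```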
“While they have been an excellent teaching tool for students, they
have only generated $8,500 in power in their lifetime,” she said. “One
of the reasons for the lower than expected energy power is that the
turbines often need to be repaired. They are not a good teaching tool if
they are not working.”
Even though the college wants to tear down one of the turbines, they
are federal assets and “there is a process that has to be followed”
according to Allee.
The turbines became operational in 2012 after a five-year building campaign intended to reduce the college’s carbon dioxide (CO2) emissions to fight global warming. Even though the turbines cost almost $1 million, the college repeatedly claimed they’d save money in the long run.
“It is becoming more and more difficult for us financially to maintain the turbines,” Josh Bullock, the college’s president, told the Journal Gazette and Times-Courier last week. “I
think it was an extremely worthy experiment when they were installed,
but they just have not performed to our expectations to this point.”
Bullock states that the turbines simply haven’t been able to power
the campus’ buildings and that most of the electricity wasn’t
effectively used.
Lake Land plans to replace the two failed turbines with a solar power system paid for by a government grant. “[T]he photovoltaic panels are expected to save the college between $50,000 and $60,000 this year,” Allee told the DCNF.
Globally, less than 30 percent of total wind power capacity is actually utilized, as the intermittent and irregular nature of wind power makes it hard to use. Power demand is relatively predictable, but the output of a wind turbine is quite variable over time and generally doesn’t coincide with the times when power is most needed. Thus, wind power systems require conventional backups to provide power during lulls in wind output. Since the output of wind turbines cannot be predicted with high accuracy by forecasts, grid operators need to keep excess conventional power systems running.
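To put that sub-30-percent figure in concrete terms, here is a small illustration using the Lake Land projections quoted later in this post (the global figure above is an average, not specific to these machines):

```python
# Capacity factor = energy actually (or projected to be) produced / (nameplate power x hours in a year).
nameplate_kw = 2 * 100            # two 100 kW turbines
hours_per_year = 8_760
projected_kwh_per_year = 440_000  # the college's own projection for both turbines

capacity_factor = projected_kwh_per_year / (nameplate_kw * hours_per_year)
print(f"Projected capacity factor: {capacity_factor:.0%}")  # ~25%, even before any breakdowns
```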
Wind power accounted for only 4.4 percent of electricity generated in
America in 2014, according to the Energy Information Administration.
Lake Land College (LLC) wind turbine history can be seen here.

2007-2010 COST BREAKDOWN:

2007: Wind feasibility study completed for $30,000.

2010: LLC provided $500,000 from Illinois DCEO to “build one turbine.”

2010: LLC provided 18% of $2,542,762 from US Dept of Labor for “green job training program and related equipment including a 100 kW turbine.” (The turbine portion of this US DoL grant calculates to $457,697.20 per the small print details.)

WHAT DO WE HAVE TO SHOW for the $987,697.20 in taxpayer money spent to build these boondoggles? https://www.lakelandcollege.edu/as/tec/sustain/documents/Sustainability%20Media%20Guide%202012.pdf

Operation date: 2012: http://jg-tc.com/news/lake-land-wind-turbines-are-up-running/article_c011d95a-0f6b-11e2-9f3a-0019bb2963f4.html
(Read the comments from “gringa” in 2012; she makes a good point about the turbines never paying for themselves.)
“No mention of payback periods in this article. Seems like LLC
would include the economic effectiveness of this investment in any
discussion of it. After all, isn’t this all about return on investment?
Maybe not. Wind is free, but the land and equipment and maintenance to
that equipment is NOT free.”

In 2014, another article was written touting the “savings”: http://jg-tc.com/news/lake-land-college-saves-with-green-energy/article_e9f30825-81cb-5f7c-b3c7-f4fa0e7673e4.html
LLC should update its college website “infomercial” found here, since the turbines never delivered the $44,000 per year promised in the over-optimistic claims: https://www.lakeland.cc.il.us/as/tec/green_jobs/documents/EAR%20INSERT%20LLC%20May%202013.pdf
Just for fun: IF the turbines saved $44,000 per year, these two junkers would have to last 22.5 years just to pay back their cost, but they only lasted a shameful FOUR YEARS!

The Lake Land College Newsletter was full of praise in 2012:
Source: Laker Low Down eNewsletter published January 26, 2012
Despite the bad weather, the first of the college’s two
100 kW wind turbines has been installed. This project is made possible
by American Recovery and Reinvestment Act funding via the Illinois
Department of Commerce and Economic Opportunity and a Community-Based
Job Training Grant from the U. S. Department of Labor.
The new turbines will offer students advanced training for
large-scale turbine maintenance and energy production. They will also
power buildings on campus with alternative energy, further reducing the
cost of utilities for Lake Land College. Because Lake Land College officials and experts worked with the manufacturer to create customized turbines for Class Two wind speeds, it is projected that there will be a significant return on this investment, making these turbines a very affordable option for the college.
The two turbines are estimated to produce more than 220,000 kilowatt
hours each per year, thereby reducing the number of kilowatt hours of
electricity needed by 440,000. The college estimates that the initial
energy savings will be around $44,000 annually.
What exactly does this mean? Here’s a real-world example: the average
Illinois home consumes 1,100 kilowatt hours each month. The two
turbines should produce 36,667 kilowatt hours each month. Based
on this information, the two turbines could produce enough energy to
power the average Illinois home for just over 33 months, or 2.8 years!
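The newsletter’s arithmetic does check out against its own numbers; a quick sketch:

```python
# Checking the newsletter's figures.
annual_kwh = 440_000              # combined projected output of the two turbines
monthly_kwh = annual_kwh / 12     # ~36,667 kWh per month, as stated
avg_home_monthly_kwh = 1_100      # average Illinois home, per the newsletter

home_months = monthly_kwh / avg_home_monthly_kwh
print(f"Monthly output: {monthly_kwh:,.0f} kWh")
print(f"Enough for one average home for {home_months:.1f} months (~{home_months / 12:.1f} years)")
```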
Carbon dioxide emissions from industrial society have driven a huge growth in trees and other plants.
A
new study says that if the extra green leaves prompted by rising CO2
levels were laid in a carpet, it would cover twice the continental USA.
Climate sceptics argue the findings show that the extra CO2 is actually benefiting the planet.
But the researchers say the fertilisation effect diminishes over time.
They warn the positives of CO2 are likely to be outweighed by the negatives.
The
lead author, Prof Ranga Myneni from Boston University, told BBC News
the extra tree growth would not compensate for global warming, rising
sea levels, melting glaciers, ocean acidification, the loss of Arctic
sea ice, and the prediction of more severe tropical storms.
The new study is published in the journal Nature Climate Change by a team of 32 authors from 24 institutions in eight countries.
It
is called Greening of the Earth and its Drivers, and it is based on
data from the Modis and AVHRR instruments which have been carried on
American satellites over the past 33 years. The sensors show significant
greening of something between 25% and 50% of the Earth's vegetated land,
which in turn is slowing the pace of climate change as the plants are
drawing CO2 from the atmosphere.
Just 4% of vegetated land has suffered from plant loss.
The extra growth is the equivalent of more than four billion giant sequoias – the biggest trees on Earth.
This is in line with the Gaia thesis promoted by the
maverick scientist James Lovelock who proposed that the atmosphere,
rocks, seas and plants work together as a self-regulating organism.
Mainstream science calls such mechanisms "feedbacks".
The
scientists say several factors play a part in the plant boom, including
climate change (8%), more nitrogen in the environment (9%), and shifts
in land management (4%).
But the main factor, they say, is plants using extra CO2 from human society to fertilise their growth (70%).
Harnessing energy from the sun, green leaves grow by using CO2, water, and nutrients from soil.
"The
greening reported in this study has the ability to fundamentally change
the cycling of water and carbon in the climate system," said a lead
author, Dr Zaichun Zhu, from Peking University, Beijing, China.
The
authors note that the beneficial aspects of CO2 fertilisation have
previously been cited by contrarians to argue that carbon emissions need
not be reduced.
Co-author Dr Philippe Ciais, from the Laboratory
of Climate and Environmental Sciences in Gif-sur‑Yvette, France (also
an IPCC author), said: "The fallacy of the contrarian argument is
two-fold. First, the many negative aspects of climate change are not
acknowledged.
"Second, studies have shown that plants acclimatise
to rising CO2 concentration and the fertilisation effect diminishes
over time." Future growth is also limited by other factors, such as lack
of water or nutrients.
Co-author Prof Pierre Friedlingstein, from Exeter
University, UK, told BBC News that carbon uptake from plants was
factored into Intergovernmental Panel on Climate Change (IPCC) models,
but was one of the main sources of uncertainty in future climate
forecasts.
Warming the Earth releases CO2 by increasing
decomposition of soil organic matter, thawing of permafrost, drying of
soils, and reduced photosynthesis - potentially leading to tropical
vegetation dieback.
He said: "Carbon sinks (such as forests, where
carbon is stored) would become sources if carbon loss from warming
becomes larger than carbon gain from fertilisation.
"But we can't
be certain yet when that would happen. Hopefully, the world will follow
the Paris agreement objectives and limit warming below 2C."
Nic
Lewis, an independent scientist often critical of the IPCC, told BBC
News: "The magnitude of the increase in vegetation appears to be
considerably larger than suggested by previous studies.
"This
suggests that projected atmospheric CO2 levels in IPCC scenarios are
significantly too high, which implies that global temperature rises
projected by IPCC models are also too high, even if the climate is as
sensitive to CO2 increases as the models imply."
And Prof Judith
Curry, the former chair of Earth and atmospheric sciences at the Georgia
Institute of Technology, added: "It is inappropriate to dismiss the
arguments of the so-called contrarians, since their disagreement with
the consensus reflects conflicts of values and a preference for the
empirical (i.e. what has been observed) versus the hypothetical (i.e.
what is projected from climate models).
"These disagreements are
at the heart of the public debate on climate change, and these issues
should be debated, not dismissed."
The dropping of two atomic bombs on the Japanese cities of Hiroshima
and Nagasaki in August 1945 remains the only wartime use of nuclear
weapons in history.
No one knows exactly how many Japanese citizens were killed by
the two American bombs. A macabre guess is around 140,000. The atomic
attacks finally shocked Emperor Hirohito and the Japanese militarists
into surrendering.
John Kerry recently visited Hiroshima. He became the first Secretary
of State to do so -- purportedly as a precursor to a planned visit next
month by President Obama, who is rumored to be considering an apology to
Japan for America's dropping of the bombs 71 years ago.
The horrific bombings are inexplicable without examining the context in which they occurred.
In 1943, President Franklin Roosevelt and British Prime Minister
Winston Churchill insisted on the unconditional surrender of Axis
aggressors. The bomb was originally envisioned as a way to force the
Axis leader, Nazi Germany, to cease fighting. But the Third Reich had
already collapsed by July 1945 when the bomb was ready for use, leaving
Imperial Japan as the sole surviving Axis target.
Japan had just demonstrated with its nihilistic defense of Okinawa --
where more than 12,000 Americans died and more than 50,000 were
wounded, along with perhaps 200,000 Japanese military and civilian
casualties -- that it could make the Americans pay so high a price for
victory that they might negotiate an armistice rather than demand
surrender.
Tens of thousands of Americans had already died in taking the Pacific
islands as a way to get close enough to bomb Japan. On March 9-10,
1945, B-29 bombers dropped an estimated 1,665 tons of napalm on Tokyo,
causing at least as many deaths as later at Hiroshima.
Over the next three months, American attacks leveled huge swaths of
urban Japan. U.S. planes dropped about 60 million leaflets on Japanese
cities, telling citizens to evacuate and to call upon their leaders to
cease the war.
Japan still refused to surrender and upped its resistance with
thousands of Kamikaze airstrikes. By the time of the atomic bombings,
the U.S. Air Force was planning to transfer from Europe much of the idle
British and American bombing fleet to join the B-29s in the Pacific.
Perhaps 5,000 Allied bombers would have saturated Japan with napalm.
The atomic bombings prevented such a nightmarish incendiary storm.
The bombs also cut short plans for an invasion of Japan -- an
operation that might well have cost 1 million Allied lives, and at least
three to four times that number of well-prepared, well-supplied
Japanese defenders.
There were also some 2 million Japanese soldiers fighting throughout
the Pacific, China and Burma -- and hundreds of thousands of Allied
prisoners and Asian civilians being held in Japanese prisoner of war and
slave labor camps. Thousands of civilians were dying every day at the
hands of Japanese barbarism. The bombs stopped that carnage as well.
The Soviet Union, which signed a non-aggression pact with Japan in
1941, had opportunistically attacked Japan on the very day of the
Nagasaki bombing.
By cutting short the Soviet invasion, the bombings saved not only
millions more lives, but kept the Soviets out of postwar Japan, which
otherwise might have experienced a catastrophe similar to the subsequent
Korean War.
World War II was the most deadly event in human history. Some 60
million people perished in the six years between Germany's surprise
invasion of Poland on Sept. 1, 1939, and the official Japanese surrender
on Sept. 2, 1945. No natural disaster -- neither the flu pandemic of
1918 nor even the 14th-century bubonic plague that killed nearly
two-thirds of Europe's population -- came close to the death toll of
World War II.
Perhaps 80 percent of the dead were civilians, mostly Russians and
Chinese who died at the hands of Nazi Germany and Imperial Japan. Both
aggressors deliberately executed and starved to death millions of
innocents.
World War II was also one of the few wars in history in which the
losers, Japan and Germany, lost far fewer lives than did the winners.
There were roughly five times as many deaths on the Allied side, both
military and civilian, as on the Axis side.
It is fine for Secretary of State Kerry and President Obama to honor
the Hiroshima and Nagasaki victims. But in a historical and moral sense,
any such commemoration must be offered in the context of Japanese and
German aggression.
Nazi Germany and Imperial Japan started the respective European and
Pacific theaters of World War II with surprise attacks on neutral
nations. Their uniquely barbaric war-making led to the deaths of some 50
million Allied soldiers, civilians and neutrals -- a toll more than 500
times as high as that of Hiroshima.
This spring we should also remember those 50 million -- and who was responsible for their deaths.
Why 70 percent of companies paid zero in corporate taxes: They had zero profits
Two claims from the recently released Government Accountability Office (GAO) report on corporate income taxes are receiving widespread attention in the media. The first is that in 2012, 70% of all active companies paid zero corporate taxes
and 20% of “profitable” companies had no tax liability. The second is
that the effective tax rate for large profitable corporations amounted
to about 16% of their “pre-tax income.” As it turns out, a closer look
at the underlying IRS tax data shows that these claims rely heavily on
how we define “profitable” and “pre-tax income.” Here’s why.
Corporations with zero tax liability

The IRS Statistics of Income provides data on the total returns of all active U.S. corporations. The GAO study relies heavily on data provided in the Corporation Complete Reports.
Looking more closely at the 2012 data (Table 18) and restricting the
sample to the types of companies that the GAO included in its report, my
own analysis finds the following. In 2012, out of 1.6 million corporate
tax returns, only 51% were returns that had positive “net incomes,” and
only 32% were returns that had positive “incomes subject to tax.” These
are both measures of pre-tax income. However, while “net income”
generally refers to the net profit or loss after allowing for certain
usual deductions (such as depreciation allowances, compensation payments
and interest), “income subject to tax” allows companies with positive
“net incomes” to claim an additional deduction as a result of prior-year
operating losses. These losses can be carried forward to offset taxable
incomes in years when firms are making a profit or have positive net
incomes; this is known as a net operating loss deduction (NOLD). For
2012, the data show that approximately 20% of companies with positive
“net incomes” (or profits) claimed a net operating loss deduction
resulting in a zero tax liability.
So my analysis of the data
shows two things: first, the GAO claim that 70% of companies paid no
income tax is largely because more than 50% of these companies had zero
profits or net incomes, and therefore they had zero tax liability. Even
tax reform is unlikely to get us to the point where we would start
taxing unprofitable companies.
Secondly, some of these currently
“profitable” (positive net income) companies have experienced large
losses in prior years. For these companies, the NOL deduction allowed
them to reduce their tax liability to zero. This trend explains the 20%
of currently “profitable” companies that are paying zero taxes. Again,
the NOLD provision is a largely sensible provision in the tax code. As
per a recent report
by the Congressional Research Service, having this provision in the tax
code improves economic efficiency and reduces the distorting effect of
taxation on investment decisions. The intent of this provision is, for
example, to avoid a company with 5 years of consecutive operating losses
of $20 million each having to pay income tax in year 6 simply because
it realizes income of $20 million in that year. The principle underlying
NOLD is intended to allow companies to get out of the hole of
accumulated losses before the government can start claiming its fair
share of the company’s income.

Effective Tax Rates

The
16% tax rate that is receiving wide attention is calculated by the GAO
as total taxes paid divided by “net income.” However, the real taxable
income or tax base (as intended in the tax code) is different from “net
income” because, as discussed in the previous paragraph, it allows
companies with positive net incomes to further write off losses incurred
in previous years, using the NOLD. Only after accounting for the loss
carryforwards and certain special deductions, do we get to the taxable
income of the company, or what the IRS defines as “income subject to
tax.” So the difference in effective tax rates (ETR) also comes from
whether we define “net income” or “income subject to tax” as the pre-tax
income. In a recent post, the Tax Foundation explains the many reasons why net income may differ from taxable income. A series
of earlier Tax Foundation studies using data for 1997-2008 compute the
ETR using the “income subject to tax” in the denominator, and find that in 2008 firms with asset sizes between $5 and $100 million had an the average ETR of 32.5%.
In the table below, using IRS data (Table 2), I compute the effective tax rate using both measures of the tax base for large companies in 2012.

Effective Tax Rates, 2012

Asset Size (Millions) | ETR using “net income” in denominator | ETR using “income subject to tax” in denominator
$5-$10 | 9.19% | 32.26%
$10-$25 | 11.51% | 32.32%
$25-$50 | 14.11% | 32.51%
$50-$100 | 18.68% | 32.34%
$100-$250 | 19.99% | 31.67%
These
differences are striking. With pre-tax income defined as “income
subject to tax,” the effective tax rate is nearly 20 percentage points
higher than what we get using “net income” as the tax base. One reason this divergence matters now is that, over the course of the recession, many companies likely accumulated operating losses, and their ability to use those losses in later years lowers their “income subject to tax.”
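To make the mechanics concrete, here is a hypothetical illustration (the dollar figures are invented for the example, not drawn from the IRS data) of how the two denominators pull the effective tax rate apart:

```python
# A firm with a prior-year loss carried forward (NOLD) pays the statutory rate on its
# taxable income, but looks lightly taxed when measured against net income.
net_income = 100.0                          # current-year profit
nold = 60.0                                 # net operating loss deduction carried forward
income_subject_to_tax = net_income - nold
tax_paid = 0.35 * income_subject_to_tax     # statutory 35% rate applied to taxable income

etr_vs_net_income = tax_paid / net_income
etr_vs_taxable_income = tax_paid / income_subject_to_tax

print(f"ETR using net income:            {etr_vs_net_income:.0%}")      # 14%
print(f"ETR using income subject to tax: {etr_vs_taxable_income:.0%}")  # 35%
```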
Of course, there are many different ways to define average effective tax rates. A January 2016 paper
by the Office of Tax Analysis shows that there is wide divergence in
tax rates depending upon the methodology used and particularly depending
on the treatment of loss making firms.
A 2006 NBER paper
by Alan Auerbach constructs a measure of average taxes paid and shows
that average tax rates were above 45% in 2003, primarily because of the
way losses are treated in the corporate tax code. So it is important to
bear in mind that the 16% rate is the consequence of a chosen
methodology and is at the lower end of comparable estimates.

Sum Up

In fairness to the GAO, page 10 of the report
highlights several reasons why firms may pay no federal tax. They claim
that in “each of the years from 2008 to 2012, between approximately 49
to 54 percent of all active corporations had negative net tax income.”
They also note that “corporations had positive net tax income that was
completely offset by net operating loss deductions carried forward from
prior tax years. In each year from 2008 to 2012, approximately 15 to 19
percent of all active corporations had their income completely offset in
this manner.” Finally, they add that “the use of federal tax credits
appears to have had little effect on the number of corporations that
paid no tax in each year.”
So, the GAO clearly acknowledges that
the reason 70% of companies are paying no taxes is because they are
either not currently profitable or they are able to offset taxes because
of prior-year losses. Further, using taxable income to compute ETRs, I
find that effective tax rates are fairly close to the statutory rate of
35%, at least when using aggregate data.
As usual, headlines are
more about hype than substance in this election season. We cannot
simultaneously worry about U.S. companies inverting
to low-tax countries to take advantage of low tax rates, while also
claiming that U.S. companies already pay really low taxes. Clearly, the
devil is in the details.
The Obama administration is establishing new limits on public land uses in 10 Western states to protect the habitat of the ground-dwelling
greater sage grouse. The federal efforts are an attempt to prevent a
continued decline in sage grouse numbers which could necessitate listing
the sage grouse as threatened or endangered. The new limits are the
federal government’s biggest land-planning effort to date to conserve a
single species.
The proposal would affect energy development. Among other actions,
the regulations would require oil and gas wells to be clustered in
groups of a half-dozen or more to avoid scattering them across habitat
of the greater sage grouse. Drilling near breeding areas would be
prohibited during mating season, and power lines would be moved away
from prime habitat to avoid serving as perches for raptors who prey on
sage grouse.

State Plans, Industry Concerns Ignored
The new federal limits undermine plans Western state governments developed in recent years in conjunction with federal land management agencies, with input from affected industries, federal permit holders and environmental groups.
Kathleen Sgamma, vice president of government and public affairs with the Western Energy Alliance, told the Wall Street Journal she believes state efforts adequately protect the sage grouse and that “Western Energy Alliance will protest all land amendments that fail to conform with state plan, and will continue to support actions by Congress to delay these use plans and a final listing decision.” Sgamma told the Associated Press, “The economic impact of sage-grouse restrictions on just the oil and natural gas industry will be between 9,170 and 18,250 jobs and $2.4 billion to $4.8 billion of annual economic impact across Colorado, Montana, Utah and Wyoming.”

Congressional Republicans Object
Congressional Republicans criticized the new regulations as
unnecessary federal overreach. For instance, Rep. Rob Bishop (R-UT),
chairman of the House Natural Resources Committee, said, “This is just
flat out wrong.”
“The state plans work,” said Bishop. “This proposal is only about controlling land, not saving the bird.”
Shortly after Interior Secretary Sally Jewell announced the sage
grouse rule changes, language sponsored by Rep. Bishop blocking the
listing of the greater sage grouse passed the House as part of the
Fiscal Year 2016 National Defense Authorization Act. The Public Lands
Council and the National Cattlemen’s Beef Association supported House
efforts to prevent the sage grouse from being listed as endangered.
“Livestock grazing and wildlife habitat conservation go hand-in-hand,
and ranchers have historically proven themselves to be the best
stewards of the land,” said Brenda Richards, PLC president and NCBA
member in a statement. “If sage grouse are designated for protection
under the ESA, many ranchers may no longer be permitted to allow
livestock to graze on or near sage grouse habitat, habitat which spans
across 11 western states and encompasses 186 million acres of both
federal and private land. This decision would not only destroy the
ranching industry in the west, which is the backbone of many rural
communities, it would also halt the conservation efforts currently
underway by ranchers.”
Bishop’s provision would also prohibit the federal government from instituting its own management plans on federal lands that go further than state plans already in place, thus countermanding the rule changes announced by Sec. Jewell if the provision becomes law.
Richards, who ranches in Idaho, explains in her statement, “The state
plans that are already in place focus on improving sage grouse habitat,
through decisions based on-the-ground where impacts to the bird can be
best dealt.
“Ranchers in particular have consistently lived and operated in
harmony with the sage grouse for many decades, and in fact, the core
habitat areas are thriving largely due to a long history of well-managed
grazing,” continues the statement. “It is a known fact that livestock
grazing is the most cost effective and efficient method of removing fine
fuel loads, such as grass, from the range thus preventing wildfire,
which is one of the primary threats to the sage grouse. We must allow
time for these state plans, orchestrated by folks closest to the land
and to the issue at hand, to be fully implemented and to accomplish
their goal of protecting this bird.”

Bette Grande (governmentrelations@heartland) is a Heartland Institute research fellow and former North Dakota state legislator.
A mistake in climate model architecture changes everything. Heat
trapped by increasing carbon dioxide just reroutes to space from water
vapor instead.
The scare over carbon dioxide was just due to a simple modelling
error. A whole category of feedbacks was omitted, which greatly
exaggerated the calculated sensitivity to carbon dioxide.
Main Messages
The scientists who believe in the carbon dioxide theory of global warming do so essentially because of the application of “basic physics” to climate, by a model that is ubiquitous and traditional in climate science. This model is rarely named, but is sometimes referred to as the “forcing-feedback framework/paradigm.” Explicitly called the “forcing-feedback model” (FFM) here, this pen-and-paper model estimates the sensitivity of the global temperature to increasing carbon dioxide.[1]

The FFM has serious architectural errors.[2] It contains crucial features dating back to the very first model in 1896, when the greenhouse effect was not properly understood. Fixing the architecture, while keeping the physics, shows that future warming due to increasing carbon dioxide will be a fifth to a tenth of current official estimates. Less than 20% of the global warming since 1973 was due to increasing carbon dioxide.

The large computerized climate models (GCMs) are indirectly tailored to compute the same sensitivity to carbon dioxide as the FFM. Both explain 20th century warming as driven mostly by increasing carbon dioxide.[3]

Increasing carbon dioxide traps more heat. But that heat mainly just reroutes to space from water vapor instead. This all happens high in the atmosphere, so it has little effect on the Earth’s surface, where we live. Current climate models omit this rerouting. Rerouting cannot occur in the FFM, due to its architecture—rerouting is in its blindspot.[4]

The alarm over carbon dioxide can be traced back to an erroneous assumption implicitly made in 1896 and never corrected—that there are no significant feedbacks in response to increasing carbon dioxide rather than to surface warming. The rerouting feedback is such a feedback. The FFM introduced another erroneous assumption—that the heat blocked from leaving to space by increasing carbon dioxide causes the same surface warming as if, instead, absorbed sunlight is increased by the same amount,[5] or more generally, surface warming is proportional to the sum of all radiative forcings. These assumptions are built into the architecture of the FFM, and are echoed in the GCMs.

Increasing carbon dioxide causes warming in the upper troposphere, because it blocks some heat from escaping to space from there. In the GCMs that heat travels down to warm the surface, where it is like heat from increased absorbed sunlight — due to water vapor amplification of surface warming, less heat is then radiated to space from water vapor. In reality that heat mainly reroutes, radiating to space from water vapor molecules instead. Crucial observations from the last few decades indicate that the heat radiated to space from water vapor has been increasing slightly, suggesting that the effect of rerouting (which lowers the water vapor emission layer) was outweighed by the effect of water vapor amplification due to the surface warming (which raises it).
Synopsis (26 pages, last update 17 Feb 2016—new pictorial on atmosphere, pages 17 - 21).
Spreadsheet
(Excel, 250 KB). Contains the alternative model to the FFM, with the
same physics but the fixed architecture, applied using the data from
recent decades. Also contains the OLR (outgoing longwave radiation)
model, and a computation of the Planck sensitivity/feedback.
This material was introduced in a series of blog posts on Joanne's
blog. Note that the forcing-feedback model (FFM) was called the
“conventional basic climate model” in these posts (omitting some words
for brevity where context allowed). Those with a climate science
background will likely find the posts tagged in red of more interest.
New Science 1: Introduction to the Series.
The conventional basic model (now forcing-feedback model, FFM) of
climate is the application of “basic physics” to climate. The idea that
“it’s the physics” makes the CO2 theory impregnable in the minds of the
establishment. Despite the numerous mismatches between theory and
climate observations to date, many climate scientists remain firm in
their belief in the danger of carbon dioxide essentially because of the
conventional basic model, rather than because of huge opaque computer
models. The basic model ignited concern about carbon dioxide; without it
we probably wouldn’t be too worried.
New Science 2: The Conventional Basic Climate Model — Simple.
Presenting the conventional basic climate model, in its simplest
configuration—the only input is the change in carbon dioxide level, and
there are no feedbacks. Computes the no-feedbacks equilibrium climate
sensitivity as 1.2 °C.
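As a rough cross-check (my own back-of-envelope estimate, not the post’s calculation), the no-feedbacks warming can be approximated from the Stefan-Boltzmann law at the Earth’s radiating temperature; it lands in the same ballpark as the 1.2 °C quoted:

```python
# Crude no-feedbacks sensitivity estimate from the Stefan-Boltzmann law.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T_RADIATING = 255.0      # K, approximate radiating temperature of the Earth
FORCING_2XCO2 = 3.7      # W m^-2, commonly quoted forcing for doubled CO2

planck_response = 4 * SIGMA * T_RADIATING ** 3     # extra W m^-2 of OLR per K of warming
no_feedback_warming = FORCING_2XCO2 / planck_response
print(f"No-feedbacks warming per CO2 doubling: {no_feedback_warming:.2f} K")  # ~1.0 K
```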
New Science 3: The Conventional Basic Climate Model — In Full.
Presenting the conventional basic model (FFM) of climate, in
full—multiple inputs, and feedbacks. Computes the equilibrium climate
sensitivity (ECS) as 2.5 °C.
New Science 4: Error 1: Partial Derivatives.
The conventional basic model relies heavily on partial derivatives. A
partial derivative is the ratio of the changes in two variables, when everything apart from those two variables is held constant.
But in climate everything depends on everything, so it is not possible
to hold everything constant except for only two variables, as required
for a partial derivative to exist. The partial derivatives are not
empirically verifiable, so employing them in a climate model incurs
unknown approximations.
New Science 5: Error 2: Omitting Feedbacks that are not Temperature-Dependent.
In the conventional basic model every “feedback” (something that
affects what caused it) is in response to surface warming—directly
dependent on the surface temperature, but not on the climate drivers or
on other feedbacks. Feedbacks rule the climate. Due to its architecture,
if feedbacks to climate drivers exist (such as the rerouting
feedback in post 7 below), the model omits them.
New Science 6: How the Greenhouse Effect Works.
Heat radiated to space (outgoing longwave radiation, or OLR) is mostly
emitted by four disparate emissions layers: the water vapor emissions
layer, the CO2 emissions layer, cloud tops, and the surface. The hotter a
layer, the more it emits. The so-called greenhouse effect exists
because OLR is emitted from an emission layer high in the atmosphere,
where it is cold, rather than from the surface, where it is warm. The
total emissions must equal the heat absorbed from the Sun and has to be
emitted somehow, so the surface is much warmer than it would be if most
of the OLR wasn’t emitted from high in the cold atmosphere.
New Science 7: The Rerouting Feedback.
We propose the “rerouting feedback”, in which OLR blocked by an
increasing CO2 concentration is mostly just rerouted to space via
emission from water vapor and cloud tops instead. Occurring high in the
atmosphere, this feedback to increasing CO2 is omitted from the
conventional basic climate model, which can only contain feedbacks in
response to surface warming. Increasing CO2 warms the upper troposphere,
because less OLR is emitted from there by CO2 molecules. This heats
neighboring molecules, including water vapor molecules in the water
vapor emissions layer (WVEL), so more OLR is emitted by water vapor
molecules. Because the WVEL emits more it must be at a higher average
temperature. The average height of the WVEL declines, because the upper
troposphere is more stable and convection is less vigorous. Humidity
builds up and clouds condense at lower levels, suggesting the average
height of the cloud top emission layer would also decline, and more OLR
is emitted from cloud tops.
New Science 8: Applying the Stefan-Boltzmann Law to Earth.
The Stefan-Boltzmann equation only applies to a solid isothermal surface,
so it cannot be literally applied to Earth. However it can effectively
be applied to the Earth as seen from space if the Earth's temperature is
considered to be its “radiating temperature”, defined simply as the
temperature that satisfies the Stefan-Boltzmann equation with the OLR
and emissivity (~0.995) of the Earth.
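Written out, that definition amounts to the following (the global-mean OLR of roughly 239 W/m² is a commonly quoted value assumed here for illustration; it is not a figure given in the post):

\[
T_R = \left(\frac{\mathrm{OLR}}{\epsilon\,\sigma}\right)^{1/4}
\approx \left(\frac{239\ \mathrm{W\,m^{-2}}}{0.995 \times 5.67\times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}}\right)^{1/4}
\approx 255\ \mathrm{K}.
\]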
New Science 9: Error 3: All Radiation Imbalances Treated the Same.
The response of any climate model to increased absorbed solar radiation
(ASR) is its “solar response”. Due to its architecture, the
conventional basic model applies its solar response to the radiation
imbalance caused by any influence on climate, even a radiation imbalance
due to increased CO2—one size fits all. However increased ASR causes
increased OLR, whereas increased CO2 does not change the total OLR (when
steady state resumes, ignoring minor surface albedo feedbacks). Also,
increased ASR mainly adds energy to the surface, but increased CO2
blocks energy leaving Earth from the upper atmosphere. So it is
physically unrealistic to apply the solar response to the influence of
extra CO2.
New Science 10: Externally-Driven Albedo (EDA).
Albedo is the fraction of incoming radiation reflected back out to
space without heating the Earth, about 30%. Externally-driven albedo
(EDA) is the albedo other than that due to feedback in response to
surface warming—presumably it is caused by external influences. Here we
show that EDA has at least twice as much influence on surface warming,
and maybe much more than that, as the direct effect of variations in the
total solar irradiance (TSI).
New Science 11: An Alternative Modeling Strategy.
The road-map for building an alternative model without the problems of
the conventional basic model. A paradigm shift from summing forcings to
summing warmings is proposed. Each climate influence has its own
response (sensitivity and feedbacks), instead of all using the solar
response as in the conventional basic model. Radiation must still
balance, so this constraint is applied to the sum-of-warmings model. An
OLR model based on physical parameters of emission layers estimates the
change in OLR, leaving only the CO2 response parameter as an unknown
when the sum-of-warmings model is joined to the OLR model to form the
alternative model. Observations over a period allow the CO2 response
parameter to be estimated, and thus the sensitivity to CO2.
New Science 12: Modeling the Thermal Inertia of the Earth.
The relationship between absorbed solar radiation (ASR) and the
radiating temperature is a low pass filter. This is at the heart of the
solar response in the sum-of-warmings model within the alternative
model.
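A minimal sketch of what such a low-pass relationship could look like (the first-order form and the time constant below are illustrative assumptions; the post does not specify them):

```python
# First-order low-pass filter: the radiating temperature relaxes toward the
# equilibrium value implied by the absorbed solar radiation (ASR).
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
EMISSIVITY = 0.995   # effective emissivity quoted in post 8

def equilibrium_temp(asr):
    """Radiating temperature that would balance the absorbed solar radiation."""
    return (asr / (EMISSIVITY * SIGMA)) ** 0.25

def low_pass_response(asr_series, tau_steps=5.0):
    """Relax the radiating temperature toward equilibrium with time constant tau_steps."""
    temp = equilibrium_temp(asr_series[0])
    out = []
    for asr in asr_series:
        temp += (equilibrium_temp(asr) - temp) / tau_steps
        out.append(temp)
    return out

# A step in ASR from 240 to 241 W/m^2: the radiating temperature approaches
# the new equilibrium (about +0.27 K) gradually rather than instantly.
print(low_pass_response([240.0] * 3 + [241.0] * 12))
```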
New Science 13: The Sum-of-Warmings Model.
The sum-of-warmings model independently calculates the surface warming
due to each climate driver (such as increasing absorbed solar radiation,
or increasing carbon dioxide), then adds them. This allows each climate
driver to have its own specific response, including feedbacks.
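Schematically, the sum-of-warmings idea might be sketched like this (the driver names and response values below are placeholders for illustration only, not the fitted values of the alternative model):

```python
# Each climate driver gets its own response parameter; the surface warmings are then summed.
responses = {
    "asr_w_m2": 0.3,        # K of surface warming per W/m^2 of extra absorbed solar radiation (placeholder)
    "co2_doublings": 0.4,   # K of surface warming per doubling of CO2 (placeholder)
}
driver_changes = {
    "asr_w_m2": 0.5,        # hypothetical change in ASR over the period
    "co2_doublings": 0.5,   # hypothetical half-doubling of CO2
}

delta_t_surface = sum(responses[d] * driver_changes[d] for d in responses)
print(f"Schematic surface warming: {delta_t_surface:.2f} K")
```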
New Science 14: Emission Layer Parameters.
Basic information about the layers that emit OLR—such as how much OLR
comes from each emission layer, and the heights of the emissions layers.
New Science 15: The OLR Model.
The OLR model estimates how much the outgoing longwave radiation (OLR)
to space changes with changes to the heights of the emission layers, the
lapse rate, the surface temperature, the cloud fraction, and the CO2
concentration.
New Science 16: The Alternative Basic Climate Model. The sum-of-warmings model (post 13) and the OLR model (post 15) are joined together to form the alternative basic model.
New Science 17: Solving the Mystery of the Missing “Hotspot”.
In the conventional models (including the GCMs), surface warming for
any reason causes the water vapor emissions layer (WVEL) to ascend,
creating “the hotspot”. In the alternative model, surface warming and
the solar response both cause the WVEL to ascend, while the CO2 response
(how the planet reacts to increased CO2) causes the WVEL to
descend—which is consistent with the rerouting feedback. The last few
decades saw surface warming, increased ASR, and increased CO2, while the
empirical data from the radiosondes and the better satellite analysis
showed that the WVEL did not ascend and may have descended. The
conventional models (including the GCMs) are wrong—they apply the solar
response to both increased ASR and increased CO2, so they say all the
forces on the WVEL were causing it to ascend. The alternative model
resolves the data—there were opposing forces acting on the WVEL, the
hotspot is indeed missing, and the CO2 response was stronger than the
solar response over the last few decades.
New Science 18: Calculating the ECS Using the Alternative Model.
Fitting the data to the alternative model, we conclude that the
equilibrium climate sensitivity (ECS), the surface warming per doubling
of the CO2 concentration, might be almost zero, is likely less than 0.25
°C, and most likely less than 0.5 °C. Most likely, less than 20% of the
global warming since 1970 is due to increasing carbon dioxide. The CO2
response is less than a third as strong as the solar response—both
measured in degrees of surface warming per unit of radiation imbalance.
Lucia has a Bad Day with Partial Derivatives.
Over at the Blackboard, Lucia thought David had made some errors with
partial derivatives in post 3, and was talking about GCMs in post 4.
This post is a reply, showing her how to do partial differentiation, and
correcting her misconception.
Lucia has a Bad Week on Partial Derivatives.
Over at the Blackboard, Lucia dug a deeper hole, this time focusing on
the existence of the partial derivatives in the conventional basic
model. This post is a reply, showing that her alternative development
was mere notational trickery. Having read carefully through Lucia’s two
posts and their comments, we are still waiting for Lucia to find any
mistakes in our posts above or even to make any informed criticism of them.
[1] The physicists got it right; the climate scientists
got it wrong. It’s the application to climate that is problematic, not
the physics. That application is called the “forcing-feedback model”
(FFM) here so that it can be discussed explicitly. The FFM is ubiquitous
in climate science, embedded in the conversation. It is the basic
expression of the feedback-forcing paradigm or framework, which
underlies much of climate science. It’s the basic mental model, so
pervasive that one might overlook it because it is everywhere. One can
construct the FFM just from what “everyone knows” in climate science.
Yet it does not have a formal name, perhaps because it has been
omnipresent for decades.

[2] The errors presumably went unnoticed because critics
focused on the values of the parameter values in the model, such as how
much heat is trapped by increasing carbon dioxide, rather than on how
the model combines those parameters to estimate future warming. Also,
for some of the last century, the model seemed to explain the
temperature trend.

[3] While the GCMs obviously do not treat extra carbon
dioxide and extra absorbed sunlight identically, they treat them
essentially the same—the GCMs warm the surface by about the same amount
for a given forcing of either, and in both cases the GCMs reduce the heat radiated to space by water vapor (due to “water vapor amplification” of the surface warming).
The GCMs are bottom-up models that try to produce observable macro
trends by modelling masses of minor details; many details are not known
exactly, so some scaling and tweaking is necessary. However the GCMs
are effectively tailored to produce the same sensitivity to carbon
dioxide as the forcing-feedback model (FFM), in three steps:
The FFM estimates the equilibrium climate sensitivity (ECS) to carbon dioxide as ~2.5 °C.
A sensitivity of ~2.5 °C very roughly accounts for observed
warming since 1910. To believers in the FFM, this confirms that
increasing carbon dioxide is mostly responsible for 20th century
warming.
So GCMs use increasing carbon dioxide as the dominant driver to
reproduce 20th century warming. GCMs that do not succeed in this task
are not published (see p. 32 here).
But this ECS estimate is too large: fixing the faulty architecture
shows it is less than 0.5 °C. So the GCMs omit the main driver(s) of
global warming, and are doomed to never be able to explain global
warming properly. Notice how they cannot explain global surface
temperatures outside the period 1910 to 2000, and how they have not
narrowed the ECS estimate by the FFM in the Charney Report of 1979
(namely 1.6 to 4.5 °C)—despite all the effort, computing power, and
money spent since 1979, the ECS estimate in AR5 is 1.5 to 4.5 °C.

[4] The rerouting feedback may have evaded notice because
it cannot exist in the conventional architecture: the conventional
basic model only includes feedbacks in response to surface warming. But
the rerouting feedback is a response to increased carbon dioxide, which
is not a “feedback” as the term is traditionally used in climate
science. The Glossary of the IPCC's 5th Assessment Report (2013), while
acknowledging the usual meaning of “feedback”, defines a “feedback” more
narrowly as a response to surface warming. Some feedbacks that are not
in response to surface warming have started appearing in the GCMs, but
they are minor.
Including the rerouting feedback in the GCMs would greatly lower
their estimate of sensitivity of surface temperature to increasing
carbon dioxide—presumably to less than 20% of current estimates, as per
the alternative basic model here. This would mean the GCMs could not
account for recent warming (either from 1910 or from 1970) with
increased carbon dioxide. This is politically difficult, perhaps
unthinkable.

[5] This assumption is obviously wrong. Extra absorbed
sunlight changes the total heat radiated by the Earth, but extra carbon
dioxide does not (ignoring the minor surface albedo changes due to
surface warming)—because total outflow is just equal to the inflow (once
steady state resumes). Increasing carbon dioxide merely redistributes
the emissions between the various emitters to space: water vapor,
carbon dioxide, the surface, cloud tops, etc. Ever since 1896, climate
scientists have been convincing themselves that a decrease in heat
outflow is equivalent to a matching increase in heat inflow, as assumed
in the FFM. While it is equivalent with respect to the amount of heat on
Earth, it is not equivalent in terms of how the outgoing heat is
distributed between the various emitters—which is what matters, because
surface warming is determined only by the change in emissions from the
surface (a warmer surface emits more to space). Externally-driven albedo involving the Sun is the main cause of warming, but it is omitted from all current climate models.
“We Muslims are one family even though we live under different governments and in various regions.” – Ayatullah Ruhollah Khomeini, leader of Iran’s revolution
Thirty-seven years ago, Time magazine dedicated its cover to “Islam, The Militant Revival,” and published a lengthy article, “The World of Islam,”
in which John A. Meyer wrote, “We want to examine Islam’s resurgence,
not simply as another faith but as a political force and potent third
ideology competing with Marxism and Western culture in the world today.”
It was April 16, 1979.
The editor, Marguerite Johnson, penned the cover story because “the
Iranian revolution has made it especially important for Westerners to
understand the driving energy and devotion Islam commands from so many.”
Senior editor John Elson indicated that “Islam has been frequently
misunderstood, partly because so many people have tried to apply terms
from Christianity and Judaism to it.” In writing this article, the
editors have attempted to draw a picture of Islam for what it really is,
“a way of ordering society.”
According to Time magazine, there were 750 million Muslims and 985
million Christians in 1979, a large group ready to assert the “political
power of the Islamic way of life.” The people of Iran apparently voted
overwhelmingly to create an Islamic republic, the nation’s first
“government of God,” as Ayatullah Khomeini declared.
This theocratic and freedom-stifling government replaced, after one
year of revolution, “a dynastic autocrat who dreamed of turning his
country into a Western-style industrial and secular state.” Changing a
westernized society into a government by religious mullahs was described
as “a new dawn for the Islamic people.”
Time magazine quoted the Cairo magazine Al Da’wah (The Call): “The Muslims are coming, despite Jewish cunning, Christian hatred, and the Communist storm.”
And it came to pass - the Muslims are really coming, by the millions,
invading Europe and America, welcomed with open arms by a senescent and
suicidal Europe ruled by technocrat elitists who are only interested in
failed multi-culturalism, their power, control of the emerging tower of
Babel, and their bank accounts.
Time magazine described the revival of Islam in 70 countries
around the world, reflected in the hajj, the pilgrimage to Mecca,
unchanged for 14 centuries; this alleged revival took place among the
young who desired sharia law, burkas, hijab and other forms of enslaving
women to their half-person status, genital mutilation, and harsh
punishments in cases of rape, divorce, adultery, and abortion.
Time magazine assumed that this alleged revival had taken place for
decades because “Islam is no Friday-go-to-mosque kind of religion. It is
a code of honor, a system of law and an all-encompassing way of life.”
This resurgence was inspired and fueled by a “quest for stability and
roots,” a deep-seated hatred for Western values, and “the population
explosion in those Islamic nations where birth control is little
practiced.”
Marvin Zonis is quoted as saying that “Islam is being used as a vehicle for
striking back at the West, in the sense of people trying to reclaim a
very greatly damaged sense of self-esteem. They feel that for the past
150 years the West has totally overpowered them culturally, and in the
process their own institutions and way of life have become second rate.”
“Islam is a political faith with a yearning for expansion,” said
Marguerite Johnson. And the history of its expansionist desires is quite
telling, an expansionism that necessitated the Christian Crusades in
order to regain territories occupied by Islam.
Arabs raided and conquered many lands and their traders carried Islam
with them; the Persian Empire, the Byzantine Empire, North Africa into
Spain, the Middle East, Malaysia, Indonesia, Singapore, the Philippines,
the black tribes in Africa south of the Sahara Desert, were all forced
into submission and conversion to Islam. Time magazine added that, “On
the Indian subcontinent, in Southeast Asia, in Africa and the Pacific,
millions of Muslims were under colonial rule.”
Time magazine remarked that “Islam is frequently stereotyped as
unmitigatedly harsh in its code of law, intolerant of other religions,
repressive toward women and incompatible with progress.” No mention is
made of the Koranic quotes which advise their faithful to kill the
infidels.
Salem Azzam, then Saudi secretary-general of the Islamic Council of
Europe, was quoted as saying that seeing this Islamic resurgence in a
negative light is nothing but a “return to colonialism – indirect but of
a more profound type.”
Other Muslims and their defenders claimed that “Islam is not
monolithic, that it is compatible with various social and economic
systems, and that far from being a return to the Dark Ages, it is wholly
consonant with progress.” The reality is that the Taliban had oppressed
and regressed a thriving Afghan society.
The “war refugees” from Syria, who have invaded welfare-generous
European countries and are raping and pillaging the host societies, have
gravely affected the very tolerant and multi-cultural nations who
foolishly invited them in with open arms.
Devout Muslims are described in this article as opposed to the
“materialism of the West and the atheism of Communism.” But they welcome
“individual initiative, respect private property, and tolerate
profits.” But moderation and “communal responsibility” are most
important. Usury is forbidden but interest is allowed if used for the
“common good.” Community and the common good are communist tenets. A zakat of 2.5%, levied against individual assets, was allowed for the benefit of the community.
A devout Muslim objects to the evils associated with modern life but
enjoys everything free that the west must provide, such as welfare,
housing, TVs, cell phones, cable, cars, electricity, A/C, etc. The Time
article stated that Islam objects to “liquor factories,” to the
“breakdown of the family structure, the lowering of moral standards,
[and] the appeal of easygoing secular life-styles.” But Muslims “are
demanding the best of the West: schools, hospitals, technology,
agricultural and water development techniques.”
Devout Muslims, no matter where they live, are required to abide by
Sharia Law, “the path to follow.” The consensus of Islamic scholars in
charge and the deeds and sayings of Muhammad become an “all embracing
code of ethics, morality and religious duties.” It is thus a complete
control of one’s life.
Once entrenched in the western civilization, Muslims start chiseling
at its foundation in an effort to turn it into Sharia-compliance;
everything that made western schools, hospitals, agriculture, military,
and technology great in the first place, will be replaced with what is
approved by the Islamic theocracy.
No matter how the media spins Islam then and now, Sharia Law and our
U.S. Constitution are not compatible. Moral relativism and tolerance to
the point of ignorance will result in national suicide.
When
you think of a recent example of wartime oppression, typically the
Holocaust comes to mind. How could it not? The atrocious death toll, the
inhumane death camps and the victorious end to World War II are still
prevalent subjects in books, film, and on college campuses.
But what about Soviet Communism? Some estimate that under Joseph
Stalin’s regime, 20 million people were murdered. Even more
mind-boggling: Technology gurus had been making telephone calls from mobile phones for ten years while Soviet Communists were still forcing millions of people to work in slave labor camps under unimaginably brutal conditions.
What of those communism touched? The Victims of Communism Memorial Foundation preserves
the stories of those who’ve lived under communist regimes, now in the
form of several short online videos—and they are as horrifying as they
are hopeful.
What Life Is Like in Communist Countries
Founded in 1994, the foundation’s mission is simple but powerful: To
educate this and future generations about the ideology, history, and
legacy of communism. Through its Truman-Reagan Medal of Freedom, over
the years the foundation has honored dissidents against communism,
including recipients such as Pope John Paul II. Two years after
President George W. Bush dedicated a memorial to remembering the victims
of communism, the foundation launched several exhibits online. These
showcase various expert-authored essays on communism, maps, timelines,
and, most recently, a 3D interactive Gulag, demonstrating what life
might have been like in a camp.
The Foundation estimates there are 100,000,000 victims of communism,
but until now, it was nearly impossible to convey their experience in
one place, save for a few books and documentaries here and there, and especially hard to do so in brief, simple, layman’s terms. Many high-school textbooks are
leaving out or rearranging big chunks of communist history. The
Foundation’s Witness Project fills those gaps entirely, and in a way
that’s as appealing to older generations as it is to younger.
The Witness Project is a collection of online videos that marry
history textbook and novel, war movie and documentary. The roughly
five-minute videos tell one person’s specific experience dealing with
communism. Men and women, wealthy and poor, they hail from Hungary,
China, Vietnam, and Ukraine, among others. Their stories are as
unbelievable as they are fascinating, as chilling as they are powerful.
Their goal is as obvious as the truth in the stories they tell: People
must understand the horrors of communism to avoid repeating them.
Survivors of Communism Tell Their Stories
One woman’s story is particularly brutal. Jinhye Jo was born in North
Korea. With no food to feed his starving family, her father attempted
to escape to China. North Korean officials caught and killed him, then
lied to her family about how he died. Later, her mother was nearly beaten to death for speaking poorly of the North Korean government. The video
concludes with Jo’s sad words: “I want to say to the American people.
Don’t think the North Korean government is like you or I…They are like
the devil.”
The project’s first video is about the life of Daniel Magay.
He gives us a brief but compelling glimpse into what life was like when
shaped by communism in Hungary. Magay grew up in a wealthy family, the
son of a landowner. But then Russia occupied the country and, he says,
“Everything turned upside down.”
Magay describes the regime this way: “Under the communist system,
they started developing the Hungarian KGB. The people who were willing
to compromise themselves became a part of the system. My father did not
compromise himself.” His father paid a price for this courage. His
family was forced from their home and “friends” of the government
watched him night and day. Magay won a gold medal for fencing in the
1956 Olympics in Australia, and decided not to return home, but to seek
refuge in the United States. Still, the scars of communism remain. “I
hated the system. I felt it took my soul.”
Short but powerful history lessons at the touch of a fingertip: what
better way to educate the next generation about a political system that
destroyed so many people? Hope, it seems, appears in many different
ways.
Guest essay by Tom D. Tamarkin
Abstract
AGW or climate change is not the big problem many claim. The perceived
scare of AGW has been used by sub-groups in the United Nations to bluff
wealthy industrialized nations into transferring money to poor, oftentimes corrupt nations. Monies gained from this mechanism have not been invested in the root cause of AGW (if in fact one exists). At the same
time over $1 trillion USD is spent worldwide annually on climate change
studies, consultants, related government agencies, and the rapidly
growing but totally ineffective green and renewable energy industry.
This has also led to the emergence of the carbon trading brokerage
industry. This is based on fraudulent science as CO2 has an
extremely small “greenhouse” effect far exceeded by water vapor from the
oceans. Only fossil hydrocarbon fuels and nuclear energy can supply
material amounts of energy due to their many orders of magnitude higher
energy flux densities than so-called renewables. Well over half the
world’s economically viable recoverable fossil fuels have been consumed
while over 3 billion human inhabitants live in "energy poverty," more than 1.5 billion of them without electricity. Once fossil fuels are depleted beyond
the point of economically viable production there is only one energy
source available to provide the Earth’s energy needs. That is the
conversion of matter into energy as formulated in the equation E = mc².
Man must learn to generate energy based on his knowledge of the laws of
physics and the interchangeability of matter and energy. Today we have
started with the baby step of nuclear fission. Fission is practical and
works today but is unsustainable due to radioactive waste issues.
Therefore, we must immediately invest in the experimental understanding
of the science leading to the successful demonstration of controlled
atomic fusion followed by the R&D needed to commercialize it. Fusion
is 100% safe, uses virtually unlimited fuel cycle non-radioactive light
element components, and produces no significant radioactive byproducts.
In the alternative, man will run out of fossil fuels. AGW then becomes a moot point because hydrocarbon fuels will no longer be burned in material quantities. Under these conditions worldwide population will shrink to pre-industrial-revolution levels of about 10% of today's population, roughly 700 to 750 million people worldwide.
Anthropogenic Global Warming (AGW) or climate change is not the BIG
problem its advocates make it out to be. Even if it could be proved that
man is creating it through his use of hydro-carbon fossil fuels, it is
not the truly BIG problem.
Climate change has always been a part of the Earth’s dynamic
atmospheric system. During the last 2 billion years the Earth’s climate
has alternated between a frigid “Ice House” climate, today’s moderate
climate, and a steaming "Hot House" climate, as in the time of the
dinosaurs.
Principal contributing factors to the variability of the Earth’s
median temperature and climate are the Earth’s complex orbit in the
solar system as defined by the Milankovitch cycles,
the sun’s variable radiated energy output, and geological factors on
Earth such as undersea volcanic activity leading to inconsistent
temperature gradients in the oceans.
This chart shows how global climate has changed over geological time.
Unfortunately, the potential threat of predicted future climate change has been used to transfer enormous amounts of money from wealthy nations to poor nations [1].
This has fed the survival instincts of the climate change community, which includes governments, consultants, and scientific researchers who simply study the perceived problem and generate academic journal articles and reports. The ineffective green energy solutions manufacturing and service industry also owes its life…and its government subsidies…to the climate change scare. No serious money raised by the
“climate scare” has been spent on solving the BIG problem.
The BIG problem is the fact that man was provided with about 400
years’ worth of hydrocarbon based fossil fuels which took several
hundred million years to be created on Earth. The energy came from the Sun[2].
Integrated over large amounts of geological time, daily Sun energy was
converted into chemicals through plant photosynthesis. These chemicals
can, in turn, be ignited to release the stored energy through an oxidation-reduction reaction with oxygen [3]. Once they are gone, they are gone in human life-cycle terms.
What is energy? A physicist will answer by saying "the ability to
perform work.” They will elaborate by saying: “energy is a property of
objects which can be transferred to other objects or converted into
different forms, but cannot be created or destroyed.”
A housewife will say energy is what moves our cars, powers our
airplanes, cooks our food, and keeps us warm in the winter – cool in the
summer.
You cannot power a world estimated to have 9 billion people by 2060 on energy produced from solar cells and wind turbines.
They are not sustainable, meaning they cannot create enough energy quickly enough to reproduce themselves (build more) while still providing energy to man. The reason is that the energy received from the Sun is far "too dilute": only a small amount of energy arrives per unit of surface area, and only for relatively short periods of time given the day-night cycle and weather conditions [4].
Wind energy is a secondary effect of solar energy because wind is
created by the atmosphere’s absorption of the Sun’s thermal energy in
combination with the Coriolis effect [5]. This arises from the rotation of the Earth coupled with atmospheric pressure differences relating to elevation, mountains, and the like [6].
Hydro power from dammed rivers is also a secondary effect of solar
energy. The movement of water in the Earth’s vast system of rivers
occurs because of solar energy. This happens as seawater is evaporated,
forms clouds, and ultimately water is released as rain and snow keeping
our rivers full and flowing out to sea from higher elevations propelled
by gravity. Unlike solar and wind, hydropower can consistently produce
material but limited amounts of energy.
The above illustration shows energy flux density in million Joules
per liter on the left-hand vertical axis with a scale spanning 10 to
the 16th power in scientific notation. The horizontal axis depicts time
on the top row from 0 years Common Era to 2200. The bottom row depicts
worldwide population which is directly controlled by available energy to
produce food, potable water and to provide for man’s comfort. As can be
seen, once fossil hydro-carbon fuels are no longer available in
quantity, fusion energy must be developed or worldwide population will
contract to that of the preindustrial age in the 1600s. Energy flux
density refers to how much energy is contained per unit volume of an
energy source. Appendix 1 below provides tabulation for various energy sources.
We must begin to turn to what Dr. Steve Cowley in the UK calls "energy from knowledge": the conversion of mass into energy [7]. Albert Einstein formulated the relationship between energy and mass (matter) in his famous equation E = mc². This means that a very small amount of mass is equal to a very large amount of energy, as explained by Dr. Einstein in his own voice [8].
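To make the scale concrete, here is a minimal back-of-the-envelope sketch in Python (illustrative round numbers only, not figures from the essay) of the energy locked in a single gram of matter:

# E = m * c^2 with rounded constants, purely for illustration.
c = 3.0e8             # speed of light, in meters per second
m = 0.001             # one gram of matter, expressed in kilograms

energy_joules = m * c ** 2            # about 9.0e13 joules
energy_gwh = energy_joules / 3.6e12   # 1 GWh = 3.6e12 joules

print(f"{energy_joules:.1e} J = about {energy_gwh:.0f} GWh")
# Roughly 25 GWh: complete conversion of one gram of matter would rival
# the daily output of a large power plant.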
We must solve energy for the long term through the conversion of matter into energy. No other energy source has a suitable energy flux density
to provide our electricity, transportation, potable water and
agricultural needs once fossil hydro-carbon fuels are no longer
economically viable to recover due to depletion [9].
We must begin now because it will take several decades to master the
science. We began this journey when we developed nuclear fission power.
However, nuclear fission is not a long-term solution for several reasons,
most notably the long-term radioactive waste it produces.
The next step is the development of nuclear fusion. Fusion is much different from fission [10, 11, 12]. It uses light elements in the fuel cycle, is fail-safe, and can do no environmental harm. It has the highest energy flux density of any energy source short of matter-antimatter annihilation.
It will take several more years
of pure experimental scientific research to demonstrate a sustained
fusion reaction in the laboratory producing a net energy gain, meaning
more energy is produced than was “pumped in” to start the energy
production [13].
Once controlled fusion is proven in a controlled environment,
regardless of how expensive and complicated the reactor mechanism and
facility is, man’s ingenuity will take over in the private sector. The
complexities and costs will be driven down just as turn of the 20th
century vacuum tubes gave way to transistors and later
microcomputers-on-a-chip.
That is the BIG problem. If we do not solve this, in 50 to 100 years our coal, oil, and natural gas resources will no longer be economically and environmentally recoverable [14].
Then mankind reverts to life in the 16th century. If we do not solve energy, the entire argument of being good environmental stewards of
the Earth is moot. Why? Because in less than 100 years we will no
longer be burning fossil hydro-carbon fuels. Global warming and climate
change caused by man will no longer be an issue. The problem takes care of
itself. In a few thousand years the processes of nature…geological and
geo-chemical…will erase most signs of our past industrialized existence.
If there are not sizable numbers of cognitively intelligent humans
capable of thinking and distinguishing beauty, it is an inconsequential point, as aliens are not flocking to our planet. No one and nothing will ever know the difference. Which begs the question: "Is there intelligent life on Earth?" This author believes so. As Bill & Melinda Gates recently stated in their foundation's annual open letter, our youth needs to be challenged to produce what they called an "energy miracle" [15].
This is the biggest problem man faces. Climate change…if caused by
man…automatically reverses itself over the next 100 years. But if we do
not solve energy, mankind's population will contract by a factor greater
than 10 over the course of the following 100 years. Collectively, we as a
species must recognize this reality and begin the energy race today.
Energy Flux Density Comparisons
Energy density is the amount of energy stored in a given system or
region of space per unit volume. Specific energy is the amount of energy
stored per unit mass (weight). Only the useful or extractable energy is
measured. It is useful to compare the energy densities of various
energy sources. At the top of the list is fusion followed by nuclear
fission and then hydrocarbon fuels derived from petroleum, coal and
natural gas. At the bottom of the list are batteries which either
generate energy or store energy, as well as "renewable energy" such as solar.
1 kg of Deuterium fused with 1.5 kg of Tritium can produce 87.4 GWh of electricity.
Here are the underlying calculations supporting the statement above:
The energy released by the fusion of 1 atom of Deuterium with 1 atom of Tritium is 17.6 MeV = 2.8 × 10⁻¹² Joules.
The energy liberated by the fusion of 1 kg of Deuterium with 1.5 kg of Tritium is 2.8 × 10⁻¹² × 2.99 × 10²⁶ = 8.3 × 10¹⁴ Joules = (8.3 × 10¹⁴) / (3.6 × 10¹²) = 230 GWh.
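As a quick sanity check, the same figures can be reproduced with a short, illustrative Python sketch (constants rounded as in the text; the 38% steam-plant efficiency applied in the next sentence is included at the end):

# Back-of-the-envelope check of the Deuterium-Tritium figures quoted above.
MEV_TO_JOULES = 1.602e-13                    # joules per MeV

energy_per_reaction = 17.6 * MEV_TO_JOULES   # ~2.8e-12 J per D-T fusion event
atoms_in_1kg_deuterium = 2.99e26             # ~(1000 g / 2 g per mol) * Avogadro's number

thermal_joules = energy_per_reaction * atoms_in_1kg_deuterium   # ~8.4e14 J of heat
thermal_gwh = thermal_joules / 3.6e12                           # ~230 GWh of heat
electric_gwh = thermal_gwh * 0.38                               # ~87-89 GWh of electricity

print(round(thermal_gwh), round(electric_gwh))
# Small differences from the 230 GWh / 87.4 GWh quoted in the text are rounding.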
This energy is released as heat. A conventional steam turbine power plant with an efficiency of 38% would produce 87.4 GWh of electricity.
1. Deuterium is a naturally occurring isotope of hydrogen readily available from sea water.
2. Tritium is produced in the fusion reactor from Lithium as part of the fuel cycle and energy exchange process. Lithium is an abundant, naturally occurring element.
Comparison of conventional fuel energy density
Comparison of "renewable" energy density
1. How much solar power per cubic meter is there? The
volume of the space between a one-meter-square patch on Earth and the
center of our orbit around the sun is 50 billion cubic meters (the earth
is 150 billion meters from the sun, or 4,000 earth circumferences).
Dividing the usable 100 watts per square meter by this volume yields two-billionths of a watt per cubic meter. Sunlight takes about eight minutes (499 seconds) to reach the earth. Multiplying 499 seconds by this power density reveals that solar radiation has an energy density on the order of 1.5 microjoules per cubic meter (1.5 × 10⁻⁶ J/m³).
2. The only way to extract thermal energy from the
atmosphere is to construct an insulated pipe between it and a reservoir
at lower temperature (preferably a much lower one). This is how
geothermal heat pumps work. Typical ground temperature is 52°F (284 K). On a 90°F day, such a system has a peak efficiency of 7%, and a power density of only 0.05 mW/m³ (Stopa and Wojnarowski 2006): typical surface power fluxes for geothermal wells are on the order of 50 mW/m²
and have typical depths of 1 km. To find the energy density, a
characteristic time must be included. The time used should be the time required for water pumped into the ground to circulate through the system once. This number is on the order of ten days (Sanjuan et al. 2006). The resulting energy density is 0.05 J/m³, or roughly two to three orders of magnitude lower than wind or waves.
3. Wind is driven by changes in weather patterns, which in
turn are driven by thermal gradients. Tides are driven by fluctuations
in gravity caused by lunar revolutions. The energy densities of wind and
water systems are proportional to the mass, m, moving through them, and
the square of the speed, v, of this mass, or ½mv². At sea
level, air with a density of about one kilogram per cubic meter moving
at five meters per second (ten miles per hour) has a kinetic energy of
12.5 joules per cubic meter. Applying Betz’s Law, which limits
efficiency to 59% (Betz 1926), yields about seven joules per cubic
meter. Thus, wind energy on a moderately windy day is over a million
times more energy-dense than solar energy.
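The wind figures in note 3 are easy to reproduce. A minimal sketch (illustrative Python, using the same rounded values as the text) recomputes the kinetic energy density, applies the Betz limit, and compares the result with the solar figure from note 1:

# Kinetic energy density of moving air: 0.5 * rho * v^2, in joules per cubic meter.
rho_air = 1.0          # kg per cubic meter at sea level (rounded, as in the text)
v_wind = 5.0           # meters per second, a moderately windy day

wind_density = 0.5 * rho_air * v_wind ** 2     # 12.5 J/m^3
betz_limited = wind_density * 0.59             # ~7.4 J/m^3 extractable (Betz limit)

solar_density = 1.5e-6                         # J/m^3, the figure quoted in note 1
print(betz_limited / solar_density)            # ~5 million: "over a million times"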
There are two prevalent mechanisms for extracting tidal energy. In
one system, barrages move up and down, extracting energy with the rise
and fall of the tides. In the second strategy, tidal stream systems
act more like underwater wind turbines, extracting energy from tidal
waters as they move past. As with wind, the energy of a moving volume of
water is also ½mv². Tidal systems have the advantage over wind systems
in that water is approximately one thousand times denser than air. Their
disadvantage lies in generally low tidal velocities of only ten
centimeters per second to one meter per second. Thus, a cubic meter of
water, with a mass of about 1000 kg, yields an energy density of about
five joules per cubic meter for slow water¹ and five hundred joules per cubic meter for fast water².
These are also subject to Betz’s law and represent only peak values, so
the average energy densities are closer to one-half of a joule per
cubic meter to fifty joules per cubic meter, or about the same as wind.
1. kinetic energy (tidal low velocity) = ½mv² = ½ · 1000 kg · (0.1 m/s)² = 5 joules.
2. kinetic energy (tidal high velocity) = ½mv² = ½ · 1000 kg · (1 m/s)² = 500 joules.
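The tidal footnotes follow from the same ½mv² relationship; here is a minimal sketch with the text's round numbers:

# Kinetic energy per cubic meter of seawater: 0.5 * rho * v^2.
rho_water = 1000.0                       # kg per cubic meter

slow_tide = 0.5 * rho_water * 0.1 ** 2   # 5 J/m^3 at 0.1 m/s
fast_tide = 0.5 * rho_water * 1.0 ** 2   # 500 J/m^3 at 1 m/s
print(slow_tide, fast_tide)              # -> 5.0 500.0

# Applying the Betz limit and averaging over the tidal cycle brings these peak
# values down to roughly 0.5-50 J/m^3, comparable to wind, as noted above.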