More Info

Friday, April 10, 2009

Cetacean Conservation Center

The Cetacean Conservation Center (Centro de Conservación Cetacea or CCC) is a Chilean NGO dedicated to the conservation of cetaceans and other marine mammals that inhabit the coastal waters of Chile. The CCC also engages in public education and information campaigns at the national and regional level.


Centro de Conservación Cetacea (CCC) is a Chilean non-governmental, non-profit organization that works actively and effectively on the conservation of marine mammals and their aquatic ecosystems in Chilean waters. Its goals in support of this mission are:

1. Promote effective marine conservation and management policies.

2. Conduct management and conservation related research on cetacean species and their ecosystems, with special focus on studying endangered species.

3. Identify and assess anthropogenic impacts on marine wildlife and propose mitigation measures.

4. Promote the sustainable development of coastal communities through responsible marine wildlife watching activities.

5. Increase public awareness and promote the informed and active participation of people/government on marine biodiversity conservation as well as encourage the reduction of human impacts.

6. Strengthen national and international cooperation in marine conservation strategies.

History

Since 2001, Centro de Conservación Cetacea has conducted national marine conservation projects spanning scientific research, public education, coastal community development, and the strengthening of marine conservation policies:

“CITES CHILE 2002, Our Opportunity to Protect Life”, which reached over 140,000 people across the country with an educational exhibit of life-size inflatable whales and sharks. As a result, during the 12th Conference of the Parties of CITES, held in Santiago de Chile, the government of Chile strongly opposed proposals to down-list whale species from Appendix I and supported proposals to include whale sharks and basking sharks in CITES Appendix II.

“Southern Right Whale Project/Chile”, a project conducted since 2003 with the official support of the Chilean Navy and the cooperation of leading right whale conservation organizations from Argentina, Brazil and Uruguay. In 2008 the southeast Pacific population of this species was classified as critically endangered by the International Union for Conservation of Nature (IUCN), and the Chilean Navy granted the species its maximum level of protection.

“National Marine Mammal Sighting Network”, which has worked effectively thanks to the support and cooperation of the Chilean Navy and more than 500 members who actively participate in recording cetacean sightings and stranding events along the Chilean coast.

“Alfaguara Project, Conservation of Blue Whales”, which has been consolidated as a scientific and coastal sustainable-development project of national interest, with the official support of the Chilean Navy, the Ministry of Foreign Affairs and the Ministry of Education of Chile. The project has identified the area with the highest sighting rate of blue whales in the Southern Hemisphere (northwestern Chiloé Island) and has raised international awareness of the health of this blue whale population by documenting emaciated blue whales and skin lesions associated with coastal pollution from the salmon farming industry.

“Chile 2008, A Whale Sanctuary”, conducted over eight months in conjunction with Centro Ecoceanos and the National Confederation of Artisan Fishers of Chile. The sanctuary was achieved in only eight months, with the Chilean Congress unanimously passing a law that bans all types of whaling operations in Chilean jurisdictional waters (EEZ) and lays the basis for a consolidated national marine conservation policy. It is the first marine protection policy adopted in Chilean history and the most important measure taken to date in the country for the effective conservation of cetacean species and their marine environment.

Carbon tax

A carbon tax is an environmental tax on emissions of carbon dioxide and other greenhouse gases. It is an example of a pollution tax.

Carbon atoms are present in every fossil fuel (coal, petroleum, and natural gas) and are released as CO2 when they are burnt. In contrast, non-combustion energy sources—wind, sunlight, hydropower, and nuclear—do not convert hydrocarbons to carbon dioxide. Accordingly, a carbon tax is effectively a tax on the use of fossil fuels, and only fossil fuels. Some schemes also include other greenhouse gases; the global warming potential is an internationally accepted scale of equivalence for other greenhouse gases in units of tonnes of carbon dioxide equivalent.
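The tonnes-of-CO2-equivalent scale mentioned above works by multiplying each gas's mass by its global warming potential (GWP). A minimal sketch, using the commonly cited IPCC AR4 100-year GWP values as an assumption (they are not quoted in this article):

```python
# Convert greenhouse gas masses to tonnes of CO2 equivalent (tCO2e)
# using 100-year global warming potentials. The GWP values below are
# the commonly cited IPCC AR4 figures, assumed for illustration.
GWP_100YR = {
    "CO2": 1,
    "CH4": 25,   # methane
    "N2O": 298,  # nitrous oxide
}

def to_co2e(tonnes: float, gas: str) -> float:
    """Return the CO2-equivalent mass in tonnes for `tonnes` of `gas`."""
    return tonnes * GWP_100YR[gas]

print(to_co2e(10, "CH4"))  # 10 t of methane -> 250 tCO2e
```

A scheme covering gases beyond CO2 would tax each source in proportion to this CO2e figure rather than its raw emitted mass.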

Because of the link with global warming, a carbon tax is sometimes assumed to require an internationally administered scheme. However, that is not intrinsic to the principle. The European Union considered a carbon tax covering its member states prior to starting its emissions trading scheme in 2005. The UK has unilaterally introduced a range of carbon taxes and levies to accompany the EU ETS trading regime. Note that emissions trading systems do not constitute a Pigovian tax, because they entail the creation of a property right.

The purpose of a carbon tax is to protect the environment by reducing emissions of carbon dioxide and thereby slow climate change. It can be implemented by taxing the burning of fossil fuels—coal, petroleum products such as gasoline and aviation fuel, and natural gas—in proportion to their carbon content. Unlike other approaches such as carbon cap-and-trade systems, direct taxation has the benefit of being easily understood and can be popular with the public if the revenue from the tax is returned by reducing other taxes. Alternatively, it may be used to fund environmental projects.


In economic theory, pollution is considered a negative externality because it has a negative effect on a party not directly involved in a transaction. To confront parties with the issue, the economist Arthur Pigou proposed taxing the goods (in this case fossil fuels) which were the source of the negative externality (carbon dioxide) so as to accurately reflect the cost of the goods' production to society, thereby internalizing the costs associated with the goods' production. A tax on a negative externality is termed a Pigovian tax, and should equal the marginal damage costs.

A carbon tax is an indirect tax—a tax on a transaction—as opposed to a direct tax, which taxes income. As a result, some American conservatives have supported such a carbon tax because it taxes at a fixed rate, independent of income, which complements their support of a flat tax.

Prices of carbon (fossil) fuels are expected to continue increasing as more countries industrialize and add to the demand on fuel supplies. In addition to creating incentives for energy conservation, a carbon tax would put renewable energy sources such as wind, solar and geothermal on a more competitive footing, stimulating their growth. Former Federal Reserve chairman Paul Volcker suggested (February 6, 2007) that "it would be wiser to impose a tax on oil, for example, than to wait for the market to drive up oil prices."

Social cost of carbon

Many estimates are now available of the social cost of carbon (SCC): the aggregate net economic cost of climate change damages and benefits across the globe, expressed as future net costs and benefits discounted to the present. Peer-reviewed estimates of the SCC for 2005 have an average value of US$43 per tonne of carbon (tC) (i.e., US$12 per tonne of carbon dioxide), but the range around this mean is large. For example, in a survey of 100 estimates, the values ran from US$−10/tC (US$−3 per tonne of carbon dioxide) up to US$350/tC (US$95 per tonne of carbon dioxide).

Care is needed when comparing weights of carbon and carbon dioxide, since carbon makes up only 27.29% (12.0107 / [12.0107 + 2 × 15.9994])[4] of the mass of carbon dioxide. In simple terms, there are only about 27 tonnes of carbon in 100 tonnes of carbon dioxide.
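The conversion between per-tonne-of-carbon and per-tonne-of-CO2 prices is just this mass fraction. A quick check against the SCC figures quoted above:

```python
# Mass fraction of carbon in CO2, and conversion of a price per tonne
# of carbon ($/tC) to a price per tonne of CO2 ($/tCO2), matching the
# figures quoted in the text.
M_C = 12.0107          # atomic mass of carbon
M_O = 15.9994          # atomic mass of oxygen
M_CO2 = M_C + 2 * M_O  # molar mass of CO2 (44.0095)

carbon_fraction = M_C / M_CO2
print(round(carbon_fraction * 100, 2))  # 27.29 (%)

def per_tC_to_per_tCO2(price_per_tC: float) -> float:
    """A price per tonne of carbon scales down by the carbon fraction."""
    return price_per_tC * carbon_fraction

print(round(per_tC_to_per_tCO2(43)))   # ~12 $/tCO2, the mean SCC above
print(round(per_tC_to_per_tCO2(350)))  # ~96 $/tCO2 (the article rounds to 95)
```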

The October 2006 Stern Review, by Nicholas Stern, then an HM Treasury official and former Chief Economist and Senior Vice-President of the World Bank, states that unchecked climate change could cut global growth by one-fifth unless drastic action is taken.[5] Stern warned that an investment of one percent of global GDP is required to mitigate the effects of climate change, and that failure to act could risk a recession worth up to twenty percent of global GDP. His report suggests that climate change threatens to be the greatest and widest-ranging market failure ever seen. The report has had significant political effects: two days after its release, Australia announced that it would allot AU$60 million to projects to help cut greenhouse gas emissions. The Stern Review has been criticized by some economists, who argue that Stern did not consider costs past 2200, that he used an incorrect discount rate in his calculations, and that stopping or significantly slowing climate change would require deep emission cuts everywhere.

According to a 2005 report from the Association of British Insurers, limiting carbon emissions could avoid 80% of the projected additional annual cost of tropical cyclones by the 2080s. A June 2004 report by the same association declared, "Climate change is not a remote issue for future generations to deal with. It is, in various forms, here already, impacting on insurers' businesses now." It noted that weather risks for UK households and property were already increasing by 2–4% per year due to changing weather, and that claims for storm and flood damage in the UK had doubled to over £6 billion over the period 1998–2003, compared with the previous five years. As a result, insurance premiums are rising. In the UK the insurance industry normally offers insurance against natural disasters; however, there is a risk that flood insurance will become unaffordable for some, and it has been mooted that cover may be withdrawn entirely in some areas unless there is government backing.

In the U.S., according to Choi and Fisher (2003) each 1% increase in annual precipitation could enlarge catastrophe loss by as much as 2.8%. Financial institutions, including the world's two largest insurance companies, Munich Re and Swiss Re, warned in a 2002 study that "the increasing frequency of severe climatic events, coupled with social trends" could cost almost US$150 billion each year in the next decade. These costs would, through increased costs related to insurance and disaster relief, burden customers, taxpayers, and industry alike.

Border Issues

Concerns have been raised about carbon leakage: the tendency of energy-intensive industries to migrate from nations with a carbon tax to nations without one, some of which may be less energy-efficient. A possible antidote is for carbon-taxing countries to levy carbon-equivalent fees on imports from non-taxing nations.

Petroleum (motor gasoline, diesel, jet fuel)

Many OECD countries have taxed fuel directly for many years for some applications; for example, the UK imposes duty directly on vehicle hydrocarbon oils, including petrol and diesel fuel. The duty is adjusted so that the carbon content of different fuels is taxed equivalently.

While a direct tax should send a clear signal to the consumer, its use as an efficient mechanism to influence consumers' fuel use has been challenged in some areas:

* There may be delays of a decade or more as inefficient vehicles are replaced by newer models and the older models filter through the 'fleet'.
* There may be practical political reasons that deter policy makers from imposing a new range of charges on their electorate.
* There is some evidence that consumers' decisions on fuel economy are not entirely aligned to the price of fuel. In turn, this can deter manufacturers from producing vehicles that they judge have lower sales potential. Other efforts, such as imposing efficiency standards on manufacturers, or changing the income tax rules on taxable benefits, may be at least as significant.
* In many countries fuel is already taxed both to influence transport behavior and to raise other public revenues. Historically, governments have used fuel taxes as a source of general revenue, since the price elasticity of fuel demand is low and increased fuel taxation therefore has only a slight impact on their economies. In these circumstances, however, the policy behind a carbon tax may be unclear.

Some also note that a suitably priced tax on vehicle fuel may also counterbalance the "rebound effect" that has been observed when vehicle fuel consumption has improved through the imposition of efficiency standards. Rather than reduce their overall consumption of fuel, consumers have been seen to make additional journeys or purchase heavier and more powerful vehicles.

Carbon offset

A carbon offset is a financial instrument aimed at a reduction in greenhouse gas emissions. Carbon offsets are measured in metric tons of carbon dioxide-equivalent (CO2e) and may represent six primary categories of greenhouse gases. One carbon offset represents the reduction of one metric ton of carbon dioxide or its equivalent in other greenhouse gases.

There are two markets for carbon offsets. In the larger compliance market, companies, governments, or other entities buy carbon offsets in order to comply with caps on the total amount of carbon dioxide they are allowed to emit. In 2006, about $5.5 billion of carbon offsets were purchased in the compliance market, representing about 1.6 billion metric tons of CO2e reductions.

In the much smaller voluntary market, individuals, companies, or governments purchase carbon offsets to mitigate their own greenhouse gas emissions from transportation, electricity use, and other sources. For example, an individual might purchase carbon offsets to compensate for the greenhouse gas emissions caused by personal air travel. In 2006, about $91 million of carbon offsets were purchased in the voluntary market, representing about 24 million metric tons of CO2e reductions.
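The 2006 figures quoted above imply an average price per tonne in each market, which makes the size difference between the two concrete:

```python
# Implied average price per tonne of CO2e in the two offset markets,
# computed from the 2006 dollar and tonnage figures quoted in the text.
compliance_usd = 5.5e9      # $5.5 billion purchased
compliance_tonnes = 1.6e9   # 1.6 billion tCO2e of reductions

voluntary_usd = 91e6        # $91 million purchased
voluntary_tonnes = 24e6     # 24 million tCO2e of reductions

print(round(compliance_usd / compliance_tonnes, 2))  # 3.44 ($/tCO2e)
print(round(voluntary_usd / voluntary_tonnes, 2))    # 3.79 ($/tCO2e)
```

So despite the voluntary market being roughly sixty times smaller in dollar terms, the implied average prices per tonne in the two markets were broadly similar that year.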

Offsets are typically achieved through financial support of projects that reduce the emission of greenhouse gases in the short- or long-term. The most common project type is renewable energy, such as wind farms, biomass energy, or hydroelectric dams. Others include energy efficiency projects, the destruction of industrial pollutants or agricultural byproducts, destruction of landfill methane, and forestry projects.

Carbon offsetting has gained some appeal and momentum mainly among consumers in western countries who have become aware and concerned about the potentially negative environmental effects of energy-intensive lifestyles and economies. The Kyoto Protocol has sanctioned offsets as a way for governments and private companies to earn carbon credits which can be traded on a marketplace. The protocol established the Clean Development Mechanism (CDM), which validates and measures projects to ensure they produce authentic benefits and are genuinely "additional" activities that would not otherwise have been undertaken. Organizations that are unable to meet their emissions quota can offset their emissions by buying CDM-approved Certified Emissions Reductions.

Offsets may be cheaper or more convenient alternatives to reducing one's own fossil-fuel consumption. However, some critics object to carbon offsets, and question the benefits of certain types of offsets.

Features of carbon offsets

Carbon offsets have several common features:

* Vintage. The vintage is the year in which the carbon reduction takes place.
* Source. The source refers to the project or technology used in offsetting the carbon emissions. Projects can include land-use, methane, biomass, renewable energy and industrial energy efficiency. Projects may also have secondary benefits (co-benefits). For example, projects that reduce agricultural greenhouse gas emissions may improve water quality by reducing fertilizer usage.
* Certification regime. The certification regime describes the systems and procedures that are used to certify and register carbon offsets. Different methodologies are used for measuring and verifying emissions reductions, depending on project type, size and location. For example, the Chicago Climate Exchange uses one set of protocols, while the CDM uses another. In the voluntary market, a variety of industry standards exist; these include the Voluntary Carbon Standard and the CDM Gold Standard, which provide third-party verification of carbon offset projects.

Carbon nanofoam

Carbon nanofoam is an allotrope of carbon discovered in 1997 by Andrei V. Rode and co-workers at the Australian National University in Canberra. It consists of a low-density cluster-assembly of carbon atoms strung together in a loose three-dimensional web.

Each cluster is about 6 nanometers wide and consists of about 4000 carbon atoms linked in graphite-like sheets that are given negative curvature by the inclusion of heptagons among the regular hexagonal pattern. This is the opposite of what happens in the case of buckminsterfullerenes, in which carbon sheets are given positive curvature by the inclusion of pentagons.

The large-scale structure of carbon nanofoam is similar to that of an aerogel, but with 1% of the density of previously produced carbon aerogels—or only a few times the density of air at sea level. Unlike carbon aerogels, carbon nanofoam is a poor electrical conductor. The nanofoam contains numerous unpaired electrons, which Rode and colleagues propose is due to carbon atoms with only three bonds that are found at topological and bonding defects. This gives rise to what is perhaps carbon nanofoam's most unusual feature: it is attracted to magnets, and below −183 °C can itself be made magnetic.

Carbon sequestration

Carbon sequestration is the storage of carbon dioxide (usually captured from the atmosphere) through biological, chemical or physical processes, for the mitigation of global warming. Most projects can be regarded as geoengineering. It has been proposed as a way to mitigate the accumulation of greenhouse gases in the atmosphere released by the burning of fossil fuels.

Where CO2 is captured as a pure by-product of processes related to petroleum refining (upgrading), or from flue gases from power generation, CO2 sequestration can be seen as synonymous with the storage part of carbon capture and storage, a term that refers to the large-scale, permanent artificial capture and sequestration of industrially produced CO2 using subsurface saline aquifers, reservoirs, ocean water, aging oil fields, or other sinks.

The first large-scale CO2 sequestration project, Sleipner (1996), is located in the North Sea, where Norway's StatoilHydro strips carbon dioxide from natural gas with amine solvents and disposes of it in a deep saline aquifer. In 2000, a coal-fueled synthetic natural gas plant in Beulah, North Dakota, became the world's first coal-using plant to capture and store carbon dioxide; the stored CO2 is tracked by the Weyburn-Midale CO2 Project, the world's first CO2 measuring, monitoring and verification initiative, run by the Petroleum Technology Research Centre.

Biological processes

Biological processes have a huge effect on the global carbon cycle. Major climatic fluctuations have been driven by these processes in the past, such as the Azolla event, which started the current Arctic climate. Fossil fuel formation results from such processes, as does the formation of clathrates and limestone. By manipulating such processes, geoengineers seek to enhance sequestration; ocean iron fertilization is one example of such a technique.

Ocean iron fertilization

Iron fertilization of the ocean encourages plankton growth, which removes carbon from the atmosphere on a temporary, or arguably permanent, basis. The technique is controversial because of the difficulty of predicting its effect on the marine ecosystem and the potential for side effects or large deviations from expected efficacy; possible effects include the release of nitrogen oxides and disruption of the ocean's nutrient balance. Iron fertilization is a natural process; it is the enhancement of this process that constitutes the geoengineering technique.

Ocean urea fertilisation

Ocean urea fertilisation, proposed by Ian Jones, aims to fertilize the ocean with urea, a nitrogen-rich substance, to encourage phytoplankton growth.

The Australian company Ocean Nourishment Corporation (ONC) plans to sink hundreds of tonnes of urea into the ocean to boost the growth of CO2-absorbing phytoplankton as a way to combat climate change. In 2007, Sydney-based ONC completed an experiment involving 1 tonne of nitrogen in the Sulu Sea off the Philippines.


Reforestation

Reforestation of marginal crop and pasture lands transfers CO2 from the atmosphere to new biomass. It is essential to ensure that the carbon does not return to the atmosphere through burning or rotting when the trees die; to this end, such forests must either be managed in perpetuity or their wood used for biochar, BECS (see below) or landfill. This technique can give 0.27 W/m² of globally averaged negative forcing, sufficient to reverse the warming effect of 1/6 of current levels of anthropogenic CO2 emissions. Note, however, that CO2 levels will have risen by the time this could be achieved.

Peat production

Peat bogs are a very important store of carbon. By creating new bogs, or enhancing existing ones, carbon sequestration can be achieved.

Ocean mixing

Encouraging various layers of the ocean to mix can move nutrients and dissolved gases around, and thus acts as a geoengineering approach. Placing large vertical pipes in the oceans brings nutrient-rich water to the surface, triggering algal blooms that store carbon when they die, a mechanism somewhat similar to ocean iron fertilization. The technique may cause a short-term rise in atmospheric CO2, which limits its attractiveness. Forced upwelling can give 0.28 W/m² of globally averaged negative forcing, sufficient to reverse the warming effect of about 1/6 of current levels of anthropogenic CO2 emissions; an alternative forced downwelling approach can give 0.16 W/m², sufficient to reverse about 1/10. Note, however, that CO2 levels will have risen by the time this could be achieved.

Biochar burial

Biochar is charcoal created by pyrolysis of biomass. The resulting charcoal-like material can be landfilled or used as a soil improver to create terra preta. Biogenic carbon is normally recycled in the carbon cycle; pyrolysing it to biochar renders it inert, sequestering it in the soil. The amended soil also accumulates new organic matter, which gives an additional sequestration benefit.

The carbon contained in the soil is therefore unavailable for oxidation to CO2 and release to the atmosphere, so the radiative forcing potential of the avoided CO2 is removed from the planet's energy balance. The technique is advocated by the prominent scientist James Lovelock, creator of the Gaia hypothesis. It can give 0.52 W/m² of globally averaged negative forcing, sufficient to reverse the warming effect of about 1/3 of current levels of anthropogenic CO2 emissions. Note, however, that CO2 levels will have risen by the time this could be achieved. According to Simon Shackley, "I would say people are talking more about something in the range of one to two billion tonnes a year."
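The "fraction of current warming reversed" figures quoted for reforestation, ocean mixing and biochar follow from dividing each technique's negative forcing by the present-day radiative forcing of anthropogenic CO2. A sanity-check sketch, assuming the IPCC AR4 value of roughly 1.66 W/m² for that forcing (a value not stated in this article):

```python
# Check the quoted fractions: each technique's negative forcing (W/m^2)
# divided by present-day anthropogenic CO2 radiative forcing, assumed
# here to be the IPCC AR4 figure of ~1.66 W/m^2.
CO2_FORCING = 1.66  # W/m^2 (assumption)

techniques = {
    "reforestation":      0.27,  # quoted as ~1/6
    "forced upwelling":   0.28,  # quoted as ~1/6
    "forced downwelling": 0.16,  # quoted as ~1/10
    "biochar burial":     0.52,  # quoted as ~1/3
}

for name, forcing in techniques.items():
    print(f"{name}: {forcing / CO2_FORCING:.2f} of CO2 forcing")
```

Under that assumption the ratios come out near 0.16, 0.17, 0.10 and 0.31, consistent with the 1/6, 1/10 and 1/3 fractions given in the text.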

The mechanism underlying the carbon sequestration properties of biochar is referred to as bio-energy with carbon storage (BECS).


BECCS

The term BECCS refers to bio-energy with carbon capture and storage: burning biomass in power stations and boilers that utilise carbon capture and storage.[25] Using this technology with sustainably produced biomass would result in net-negative carbon emissions, as the carbon sequestered during the growth of the biomass would be captured and stored, thus removing carbon dioxide from the atmosphere.

This technology is sometimes referred to as bio-energy with carbon storage, BECS, though this term can also refer to the carbon sequestration potential in other technologies, such as biochar.

Biomass burial

Burying biomass (such as trees) directly sequesters the carbon in the ground rather than allowing it to escape, mimicking the natural processes that created fossil fuels. Landfill of trash also represents a physical method of sequestration.

Biomass ocean storage

The formation of fossil fuels is a natural process that often involves the burial of biomass in the ocean, frequently near river mouths, which carry large quantities of nutrients and dead material out to sea. Transporting material such as crop waste out to sea and allowing it to sink into the deep ocean has been proposed as a means of carbon sequestration.

Carbon Capture and Storage

CO2 can be injected into old oil wells and other geological features, or can be stored in pure form in the deep ocean.

CO2 has been used extensively in enhanced crude oil recovery operations in the United States since 1972. There are over 10,000 CO2 wells in the state of Texas alone. The gas comes in part from anthropogenic sources, but principally from large naturally occurring geologic formations of CO2, and is transported to the oil-producing fields through a network of over 5,000 kilometres (3,100 mi) of CO2 pipelines. The use of CO2 for enhanced oil recovery (EOR) in heavy oil reservoirs of the Western Canadian Sedimentary Basin (WCSB) has also been proposed, but the cost of transport remains an important hurdle: the WCSB has no pipeline system comparable to that of Texas to connect the main Canadian CO2 sources, the mining and upgrading operations in the Athabasca oil sands, with the subsurface heavy oil reservoirs hundreds of kilometres to the south that could most benefit from CO2 injection.

Carbon tetrachloride

Carbon tetrachloride, also known by many other names (see Table), is the organic compound with the formula CCl4. It is a reagent in synthetic chemistry and was formerly widely used in fire extinguishers, as a precursor to refrigerants, and as a cleaning agent. It is a colourless liquid with a "sweet" smell that can be detected at low levels.

Both carbon tetrachloride and tetrachloromethane are acceptable names under IUPAC nomenclature. Colloquially, it is called "carbon tet".

History and synthesis

The production of carbon tetrachloride has declined steeply since the 1980s due to environmental concerns and the decreased demand for CFCs, which were derived from carbon tetrachloride. In 1992, combined production in the United States, Europe and Japan was estimated at 720,000 tonnes.

Carbon tetrachloride was originally synthesised in 1839 by the French chemist Henri Victor Regnault, by the reaction of chloroform with chlorine, but now it is mainly synthesized from methane:

CH4 + 4 Cl2 → CCl4 + 4 HCl

The production often utilizes by-products of other chlorination reactions, such as the syntheses of dichloromethane and chloroform. Higher chlorocarbons are also subjected to "chlorinolysis:"

C2Cl6 + Cl2 → 2 CCl4

Prior to the 1950s, carbon tetrachloride was manufactured by the chlorination of carbon disulfide at 105 to 130 °C:

CS2 + 3 Cl2 → CCl4 + S2Cl2
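All three routes above conserve atoms, which can be checked mechanically. A small sketch (the `atoms`/`balanced` helpers are illustrative, not from any chemistry library) that counts atoms on each side of an equation:

```python
# Verify that the three CCl4 syntheses above are balanced by counting
# atoms on each side of each equation.
from collections import Counter
import re

def atoms(formula: str, count: int = 1) -> Counter:
    """Count atoms in a simple formula like 'CCl4' (no parentheses)."""
    c = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:
            c[elem] += (int(num) if num else 1) * count
    return c

def balanced(lhs, rhs) -> bool:
    """lhs/rhs are lists of (coefficient, formula) pairs."""
    total = lambda side: sum((atoms(f, n) for n, f in side), Counter())
    return total(lhs) == total(rhs)

print(balanced([(1, "CH4"), (4, "Cl2")], [(1, "CCl4"), (4, "HCl")]))    # True
print(balanced([(1, "C2Cl6"), (1, "Cl2")], [(2, "CCl4")]))              # True
print(balanced([(1, "CS2"), (3, "Cl2")], [(1, "CCl4"), (1, "S2Cl2")]))  # True
```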


In the carbon tetrachloride molecule, four chlorine atoms are positioned symmetrically as corners of a tetrahedron, joined to a central carbon atom by single covalent bonds. Because of this symmetrical geometry, the molecule has no net dipole moment; that is, CCl4 is non-polar. Methane gas has the same structure, making carbon tetrachloride a halomethane. As a solvent, it is well suited to dissolving other non-polar compounds, fats and oils. It can also dissolve iodine. It is somewhat volatile, giving off vapors with a smell characteristic of chlorinated solvents, somewhat similar to the tetrachloroethylene smell of dry cleaners' shops.

Solid tetrachloromethane has two polymorphs: crystalline II below -47.5 °C (225.6 K) and crystalline I above -47.5 °C.

At -47.3 °C it has a monoclinic crystal structure with space group C2/c and lattice constants a = 20.3, b = 11.6, c = 19.9 (×10⁻¹ nm), β = 111°.


In the early 20th century, carbon tetrachloride was widely used as a dry cleaning solvent, as a refrigerant, and in lava lamps. Small carbon tetrachloride fire extinguishers were widely used; they took the form of a brass bottle with a hand pump to expel the liquid.

However, once it became apparent that carbon tetrachloride exposure had severe adverse health effects, safer alternatives such as tetrachloroethylene were found for these applications, and its use in these roles declined from about 1940 onward. Carbon tetrachloride persisted as a pesticide to kill insects in stored grain, but in 1970, it was banned in consumer products in the United States.

One specialty use of "carbon tet" was by stamp collectors to reveal watermarks on the backs of postage stamps. A small amount of the liquid was placed on the back of a stamp sitting in a black glass or obsidian tray. The letters or design of the watermark could then be clearly detected.

Prior to the Montreal Protocol, large quantities of carbon tetrachloride were used to produce the freon refrigerants R-11 (trichlorofluoromethane) and R-12 (dichlorodifluoromethane). However, these refrigerants are now believed to play a role in ozone depletion and have been phased out. Carbon tetrachloride is still used to manufacture less destructive refrigerants.

Carbon tetrachloride has also been used in the detection of neutrinos. It is one of the most potent hepatotoxins (toxic to the liver) and is widely used in scientific research to evaluate hepatoprotective agents.[7][8]


Carbon tetrachloride has practically no flammability at lower temperatures. Under high temperatures in air, it forms poisonous phosgene.

Because it has no C-H bonds, carbon tetrachloride does not easily undergo free-radical reactions. Hence it is a useful solvent for halogenations either by the elemental halogen, or by a halogenation reagent such as N-bromosuccinimide.

In organic chemistry, carbon tetrachloride serves as a source of chlorine in the Appel reaction.


It is used as a solvent in synthetic chemistry research, but because of its adverse health effects it is no longer commonly used, and chemists generally try to replace it with other solvents.[citation needed] It is sometimes useful as a solvent for infrared spectroscopy because it has no significant absorption bands above 1600 cm⁻¹. Because carbon tetrachloride has no hydrogen atoms, it was historically used in proton NMR spectroscopy; however, it is toxic and its dissolving power is low. Its use has been largely superseded by deuterated solvents, which offer superior solvating properties and allow the spectrometer to maintain a deuterium lock. Its use in the determination of oil has also been replaced by various other solvents.

Chemical weapon designation

Chemical, biological, and radiological warfare agents are sometimes assigned what is termed a military symbol. Military symbols evolved among the British during the First World War, partly for secrecy and partly to simplify reference to chemicals by something other than a chemical name. These symbols are sometimes applied as markings on weapons to indicate the agent contents.

Military symbols change constantly and have transitory definitions. For example, mustard gas was originally assigned the military symbol HS, for "Hun Stuff". Later in the First World War, the S in HS signified mustard gas that had about 25% solvent added to it; this usage was confined to England, as the United States adopted HS as the symbol for crude mustard. In the Second World War the purity of mustard gas was improved through distillation, and this purified chemical warfare agent was designated HD. When it was mixed with a thickener (Agent VV), it was given the symbol HV. Today mustard gas is indicated by the single capital letter H, but HD is still in common use.

Military symbols can also reflect the name of the place where a chemical agent was manufactured. For example, chloropicrin has the symbol PS, derived from the British town in which it was manufactured during the First World War: Port Sunlight. Another device in assigning military symbols is to honor the person who devised the agent, as with Agent TZ (saxitoxin), derived from the name of its principal investigator, Dr. Edward Schantz.

Numbers are occasionally added to military symbols to reflect particular preparations. With riot control agents, a 1 signifies micropulverized (e.g., CS1) and a 2 signifies microencapsulated (e.g., CS2). With biological agents, a 1 signifies a wet-type agent (e.g., UL1) and a 2 signifies a dry-type agent (e.g., UL2). Binary chemical weapons are signified by adding a 2, as in binary sarin (GB2).

Other formulations have their own designations. When the tear agent CS is formulated in a solvent, it is signified by CSX. When agents are thickened with the addition of a polymer, a T is usually added to the beginning of the symbol (e.g., thickened soman is TGD). The tear agent Mace, or Agent CN, has been formulated in several solvent forms, indicated by CNB (with benzene), CNC (with chloroform), and CNS (with chloropicrin and chloroform). Mixtures of agents have been identified either with a hyphen (e.g., CN-DM) or by combining letters of the two agents (e.g., HD mixed with L is HL). Furthermore, one strain of the biological agent tularemia has the symbol SR (lethal Schu strain), while another strain has JT (incapacitant 452 strain).

Military symbols for agents also change from time to time for administrative reasons. To preserve secrecy, tularemia's symbols UL1 and UL2 were at one time changed to TT and ZZ, and later to SR. During the Second World War cyanogen chloride's symbol was changed from CK to CC; when it became apparent that CC-marked munitions might be mistaken for CG (phosgene), the symbol was changed back.
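The suffix and prefix conventions described above are regular enough to sketch in code. The following decoder is purely illustrative — the agent table is a small hypothetical subset, not a real registry, and as noted above a digit suffix is ambiguous without knowing the agent class:

```python
# Illustrative decoder for the suffix/prefix conventions described above.
# BASE_AGENTS is a hypothetical subset for demonstration only; the digit
# suffix meaning depends on agent class (riot control, biological, binary),
# which a bare symbol does not reveal.
BASE_AGENTS = {"CS": "tear agent CS", "GB": "sarin", "GD": "soman", "UL": "tularemia"}

def describe(symbol: str) -> str:
    notes = []
    base = symbol
    if base.startswith("T") and base[1:] in BASE_AGENTS:
        notes.append("thickened")          # T prefix: polymer-thickened agent
        base = base[1:]
    if base and base[-1] in "12":
        digit, base = base[-1], base[:-1]  # numeric suffix: preparation type
        notes.append({"1": "micropulverized / wet-type",
                      "2": "microencapsulated / dry-type / binary"}[digit])
    name = BASE_AGENTS.get(base, "unknown agent")
    return f"{symbol}: {name}" + (f" ({', '.join(notes)})" if notes else "")

print(describe("TGD"))  # → TGD: soman (thickened)
print(describe("CS1"))  # → CS1: tear agent CS (micropulverized / wet-type)
print(describe("GB2"))  # → GB2: sarin (microencapsulated / dry-type / binary)
```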

The following designations are, or have been, used by the United States:

Blood Agents

* AC - hydrogen cyanide
* CK - cyanogen chloride
* SA - Arsine

Choking Agents

* CL - chlorine
* CG - phosgene
* DP - diphosgene
* PS - chloropicrin
* Z -

Blister Agents

* H - mustard gas
* HD - distilled mustard gas
* T - O-mustard
* Q - sesquimustard
* L - Lewisite
* HL - mustard-lewisite mixture
* HT - mustard-T mixture
* HQ - mustard-Q mixture
* HN - nitrogen mustard
* ED - ethyl dichloroarsine
* MD - methyl dichloroarsine
* PD - phenyl dichloroarsine

Tear Agents

* CA - camite
* CN - mace
* CNB - mace-benzene mixture
* CNC - mace-chloroform mixture
* CNS - mace-chloropicrin-chloroform mixture
* CS - CS gas
* CS1 - micropulverized CS
* CS2 - microencapsulated CS
* CR - CR gas
* CH -

Vomiting Agents

* DA - diphenylchloroarsine
* DC - diphenylcyanoarsine
* DM - Adamsite

Psycho Agents

* BZ - 3-quinuclidinyl benzilate
* SN - sernyl (PCP)
* K - lysergic acid diethylamide (LSD)

Nerve Agents

* GA - tabun [EA1205]
* GB - sarin [EA1208]
* GB2 - sarin as a binary agent from mixing OPA (isopropyl alcohol+isopropyl amine) + DF [EA5823]
* GD - soman [EA1210]
* GF - cyclosarin [EA1212]
* GE - ethyl sarin
* GH - O-isopentyl sarin [EA1221]
* GS - S-butyl sarin [EA1255]
* VE - VE nerve agent [EA1517]
* VM - Edemo [EA1664]
* VS - [EA1677]
* VP - (3-pyridyl 3,3,5-trimethylcyclohexyl methylphosphonate) [EA1511]
* VR - (O-isobutyl S-(2-diethylaminoethyl) methylphosphonothioate)
* VX - VX nerve agent [EA1701]
* GV - (dimethylaminoethyl phosphorodimethyl amidoylfluoridate) [EA5365]
* VG - Amiton (O,O-diethyl-S-[2-(diethylamino)ethyl] phosphorothioate) [EA1508]

Mycotic Biological Agents

* OC - coccidioidomycosis (Coccidioides)

Bacterial Biological Agents

* N - anthrax
* TR - anthrax
* LE - plague
* UL - tularemia (schu S4)
* TT - wet-type UL
* ZZ - dry-type UL
* SR - tularemia
* JT - tularemia (425)
* HO - cholera
* AB - bovine brucellosis
* US - porcine brucellosis
* NX - porcine brucellosis
* AM - caprine brucellosis
* BX - caprine brucellosis
* Y - bacterial dysentery
* LA - Glanders
* HI - Melioidosis
* DK - diphtheria
* TQ - listeriosis

Chlamydial Biological Agents

* SI - psittacosis

Rickettsial Biological Agents

* RI - Rocky Mountain spotted fever
* UY - Rocky Mountain spotted fever
* OU - Q fever
* MN - wet-type OU
* NT - dry-type OU
* YE - human typhus
* AV - murine typhus

Viral Biological Agents

* OJ - yellow fever
* UT - yellow fever
* LU - yellow fever
* FA - Rift Valley fever
* NU - Venezuelan equine encephalitis virus
* TD - Venezuelan equine encephalitis virus
* FX - Venezuelan equine encephalitis virus
* ZX - Eastern equine encephalitis virus
* ZL - smallpox
* AN - Japanese B encephalitis

Biological Vectors

* AP - Aedes aegypti mosquito

Biological Toxins

* X - botulinum toxin A
* XR - partially purified botulinum toxin A
* W - ricin toxin
* WA - ricin toxin
* UC - staphylococcal enterotoxin B
* PG - staphylococcal enterotoxin B
* TZ - saxitoxin
* SS - saxitoxin
* PP - tetrodotoxin


Biological Simulants

* MR - molasses residuum
* BG - Bacillus globigii
* BS - Bacillus globigii
* U - Bacillus globigii
* SM - Serratia marcescens
* P - Serratia marcescens
* AF - Aspergillus fumigatus mutant C-2
* EC - Escherichia coli
* BT - Bacillus thuringiensis
* EH - Erwinia herbicola
* FP - fluorescent particle


The Canary (Serinus canaria), also called the Island Canary, Atlantic Canary or Common Canary, is a small passerine bird belonging to the genus Serinus in the finch family, Fringillidae. It is native to the Azores, the Canary Islands, and Madeira. Wild birds are mostly yellow-green, with brownish streaking on the back. The species is common in captivity and a number of different colour varieties have been bred.


It is 12.5 cm long, with a wingspan of 20-23 cm and a weight of 15-20 g. The male has a largely yellow-green head and underparts with a yellower forehead, face and supercilium. The lower belly and undertail-coverts are whitish and there are some dark streaks on the sides. The upperparts are grey-green with dark streaks and the rump is dull yellow. The female is similar to the male but duller with a greyer head and breast and less yellow underparts. Juvenile birds are largely brown with dark streaks.

It is about 10% larger, longer and less contrasted than its relative the Serin, and has more grey and brown in its plumage and relatively shorter wings.

The song is a silvery twittering similar to the songs of the Serin and Citril Finch.


The species was scientifically described by Carolus Linnaeus in his Systema Naturae. He named it Fringilla canaria, but it was later moved to the genus Serinus. Its closest relative is the European Serin, and the two can sometimes produce fertile hybrids.


The bird is named after the Canary Islands, not the other way around, derived from the Latin name canariae insulae ("islands of dogs") used by Arnobius, referring to the large dogs kept by the inhabitants of the islands. The colour canary yellow is in turn named after the yellow Domestic Canary.

Distribution and habitat

It is endemic to the Canary Islands, Azores and Madeira in the region known as Macaronesia in the eastern Atlantic Ocean. In the Canary Islands it is common on La Gomera, La Palma and El Hierro but more local on Gran Canaria and rare on Lanzarote and Fuerteventura where it has only recently begun breeding. It is common in Madeira including Porto Santo and the Desertas Islands and has been recorded on the Salvage Islands. In the Azores it is common on all islands. The population has been estimated at 80,000-90,000 pairs in the Canary Islands, 30,000-60,000 pairs in the Azores and 4,000-5,000 pairs in Madeira.

It occurs in a wide variety of habitats from pine and laurel forests to sand dunes. It is most common in semi-open areas with small trees such as orchards and copses. It frequently occurs in man-made habitats such as parks and gardens. It is found from sea-level up to at least 760 m in Madeira, 1100 m in the Azores and to above 1500 m in the Canary Islands.

It has become established on Midway Atoll in the north-west Hawaiian Islands where it was first introduced in 1911. It was also introduced to neighbouring Kure Atoll but failed to become established.[8] Birds were introduced to Bermuda in 1930 and quickly started breeding but they began to decline in the 1940s after scale insects devastated the population of Bermuda cedar and by the 1960s they had died out. The species also occurs in Puerto Rico but is not yet established there.



It is a gregarious bird which often nests in groups with each pair defending a small territory. The cup-shaped nest is built 1-6 m above the ground in a tree or bush, most commonly at 3-4 m. It is well-hidden amongst leaves, often at the end of a branch or in a fork. It is made of twigs, grass, moss and other plant material and lined with soft material including hair and feathers.

The eggs are laid between January and July in the Canary Islands, from March to June with a peak of April and May in Madeira and from March to July with a peak of May and June in the Azores. They are pale blue or blue-green with violet or reddish markings concentrated at the broad end. A clutch contains 3 to 4 or occasionally 5 eggs and 2-3 broods are raised each year. The eggs are incubated for 13-14 days and the young birds fledge after 14-21 days, most commonly after 15-17 days.

It typically feeds in flocks, foraging on the ground or amongst low vegetation. It mainly feeds on seeds such as those of weeds, grasses and figs. It also feeds on other plant material and small insects.

Relationship with humans

This species is often kept as a pet; see Domestic Canary for details. Selective breeding has produced many varieties, differing in colour and shape. Yellow birds are particularly common while red birds have been produced by interbreeding with the Red Siskin. Canaries were formerly used by miners to warn of dangerous gases. The bird is also widely used in scientific research. Canaries are often depicted in the media with Tweety Bird being a well-known example.

Calnev Pipeline

The Calnev Pipeline is a 550-mile (885 km) buried oil pipeline in the United States that carries gasoline, jet fuel, and diesel fuel from Los Angeles refineries in California to Nellis Air Force Base on the northeast side of Las Vegas, Nevada. It carries approximately 128,000 barrels per day (5,380,000 gallons or 20.4 megaliters). Jet fuel from the pipeline is also sent to McCarran International Airport in Las Vegas. There are two pipes, with diameters of 14 inches (356 mm) and 8 inches (203 mm). The pipeline is owned by Kinder Morgan Energy Partners.

On May 25, 1989, the Calnev Pipeline ruptured in a San Bernardino, California neighborhood due to damage from the cleanup of a train derailment that occurred thirteen days earlier. The resulting gasoline fire killed two people and destroyed eleven homes.

On July 23, 2007, Kinder Morgan Energy Partners announced that it would expand the pipeline by constructing an additional 16-inch (406 mm) pipeline alongside the existing one. The expansion will increase the pipeline's capacity to 200,000 barrels per day (32,000 m³/d), and with additional pumping stations to over 300,000 barrels per day (48,000 m³/d).
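The capacity figures above follow from the standard conversion of 1 US oil barrel = 42 US gallons ≈ 158.987 litres; a quick sketch to check them:

```python
# Check of the published capacity figures, assuming the standard US oil
# barrel: 1 bbl = 42 US gallons ≈ 158.987 litres.
BBL_TO_GAL = 42
BBL_TO_L = 158.987

def bbl_per_day(bbl):
    """Return (gallons/day, litres/day, cubic metres/day) for a bbl/day rate."""
    litres = bbl * BBL_TO_L
    return bbl * BBL_TO_GAL, litres, litres / 1000

gal, litres, m3 = bbl_per_day(128_000)
print(f"{gal:,.0f} gal/day, {litres / 1e6:.1f} ML/day")  # the ~5,380,000 gal and 20.4 ML figures
print(f"{bbl_per_day(200_000)[2]:,.0f} m³/day")          # the ~32,000 m³/day figure
print(f"{bbl_per_day(300_000)[2]:,.0f} m³/day")          # the ~48,000 m³/day figure
```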

Buglife - The Invertebrate Conservation Trust

Buglife - The Invertebrate Conservation Trust (usually referred to simply as Buglife) is a British nature conservation charity based in Cambridgeshire, England. Its aim is to prevent invertebrate extinctions and to maintain sustainable populations of invertebrates in the United Kingdom.

Activities undertaken by Buglife fall into the following areas:

* Undertaking and promoting study and research
* Promoting habitat management aimed at maintaining and enhancing invertebrate biodiversity
* Publicising invertebrates

Wednesday, April 8, 2009


A biocoenosis (alternatively, biocoenose or biocenose), a term coined by Karl Möbius in 1877, describes all the interacting organisms living together in a specific habitat (or biotope). Biotic community, biological community, and ecological community are more common synonyms of biocenosis, all of which represent the same concept. Three related descriptors are zoocoenosis for the faunal community, phytocoenosis for the floral community and microbiocoenosis for the microbial community within an ecosystem. The extent or geographical area of a biocenose is limited only by the requirement of a more or less uniform species composition.

An ecosystem, as originally defined by Tansley (1935), is a biotic community (or biocoenosis) along with its physical environment (the biotope, as defined by many ecologists).

The importance of the biocoenosis concept in ecology is its emphasis on the interrelationships among species in a geographical area. These interactions are as important as the physical factors to which each species is adapted and responding. In a very real sense, it is the specific biological community or biocoenosis that is adapted to conditions that prevail in a given place.

Biotic communities may be of varying sizes, and larger ones may contain smaller ones. The interactions between species are especially evident in food or feeding relationships. Therefore, a practical method of delineating biotic communities is to map the food network to identify which species feed upon which others and then determine the system boundary as the one that can be drawn through the fewest consumption links relative to the number of species within the boundary.

Mapping biotic communities is particularly important when identifying sites in need of environmental protection such as the British Site of Special Scientific Interest (SSSIs). The Australian Department of the Environment and Heritage maintains a register of Threatened Species and Threatened Ecological Communities under the Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act).


A BioBlitz is a 24-hour inventory of all living organisms in a given area, often an urban park. The term "BioBlitz" was coined by National Park Service naturalist Susan Rudy while assisting with the first BioBlitz, held at Kenilworth Aquatic Gardens, Washington, D.C., on May 31 - June 1, 1996. Approximately 1000 species were identified at this event. This early BioBlitz was conceived and organised by Sam Droege (USGS) and Dan Roddy (NPS), and inspired many other organisations to do the same. The BioBlitz name and concept is not registered, copyrighted, or trademarked; it is an idea that any group may freely use, adapt, and modify for their own purposes. The next year, 1997, the Carnegie Museum of Natural History conducted a BioBlitz in one of the Pittsburgh parks. They added a public component, inviting the public to see what the scientists were doing. At about the same time, Harvard biologist E.O. Wilson and Massachusetts wildlife expert Peter Alden developed a program to catalog the organisms around Walden Pond, which led to a state-wide program known as Biodiversity Days.

A bioblitz has the dual aims of establishing the degree of biodiversity in an area and popularising science. Botanists, mycologists and entomologists all play a role. Some BioBlitzes are an annual event.

Scientists establish a base at a point close to the area and provide expertise in identifying organisms found by the public as well as doing their own inspection of the area.

A full BioBlitz must take place over a full 24-hour period as different organisms are likely to be found at different times of day. Schools may organise BioBlitzes over a shorter period of time, but the results will less accurately show the variety of species in the area.

The First Annual Blogger BioBlitz was planned for the week of 21 - 29 April 2007. Participants pledged to conduct individual BioBlitzes, with the results compiled and mapped. Unlike traditional BioBlitzes, these surveys were not likely to be deep across many taxonomic groups; however, they served to raise awareness about biological diversity and provided a broad snapshot of spring diversity in many locations.

Biodynamic agriculture

Biodynamic agriculture, a method of organic farming with its basis in a spiritual world-view (anthroposophy, first propounded by Rudolf Steiner), treats farms as unified and individual organisms, emphasizing the holistic development and interrelationship of the soil, plants and animals as a closed, self-nourishing system. Regarded by some proponents as the first modern ecological farming system, biodynamic farming shares organic agriculture's emphasis on manures and composts and its exclusion of artificial chemicals on soil and plants. Methods unique to the biodynamic approach include the use of fermented herbal and mineral preparations as compost additives and field sprays, and the use of an astronomical sowing and planting calendar.


The development of biodynamic agriculture began in 1924 with a series of eight lectures on agriculture given by Rudolf Steiner at Schloss Koberwitz in what was then Silesia, Germany (now in Poland, near Wrocław). The course was held in response to a request by farmers who had noticed degraded soil conditions and a deterioration in the health and quality of crops and livestock resulting from the use of chemical fertilizers. An agricultural research group was subsequently formed to test the effects of biodynamic methods on the life and health of soil, plants and animals. In the United States, the Biodynamic Farming & Gardening Association was founded in 1938 as a New York state corporation.

In Australia, the first biodynamic preparations were made by Ernesto Genoni in Melbourne in 1927 and by Bob Williams in Sydney in 1939. Since the 1950s, research work has continued at the Biodynamic Research Institute (BDRI) in Powelltown, near Melbourne, Australia, under the direction of Alexei Podolinsky. In 1989 Biodynamic Agriculture Australia was established as a not-for-profit association. It has well over 1100 members, with local and regional groups throughout Australia. It publishes the quarterly biodynamic journal News Leaf and is the largest organic growers' association in Australia.

Today biodynamics is practiced in more than 50 countries worldwide. The University of Kassel has a dedicated Department of Biodynamic Agriculture.

Biodynamic method of farming

Biodynamic agriculture conceives of the farm as an organism, a self-contained entity with its own individuality. "Emphasis is placed on the integration of crops and livestock, recycling of nutrients, maintenance of soil, and the health and well being of crops and animals; the farmer too is part of the whole." Cover crops, green manures and crop rotations are used extensively.

Biodynamic preparations

Steiner prescribed nine different preparations to aid fertilization which are the cornerstone of biodynamic agriculture, and described how these were to be prepared. The prepared substances are numbered 500 through 508, where the first two are used for preparing fields whereas the latter seven are used for making compost. Though most studies have shown relatively little direct effect of the preparations on soil structure, or on compost development beyond accelerating the initial phase of composting, some positive effects have been noted:

* The field sprays contain substances that stimulate plant growth, including cytokinins.
* Some improvement in the nutrient content of compost.

Field preparations

Field preparations, for stimulating humus formation:

* 500: (horn-manure) a humus mixture prepared by filling the horn of a cow with cow manure and burying it in the ground (40–60 cm below the surface) in the autumn. It is left to decompose during the winter and recovered for use the following spring.
* 501: (horn-silica) crushed powdered quartz prepared by stuffing it into a cow horn, burying it in the ground in spring and digging it up in autumn. It can be mixed with 500 but is usually prepared on its own (a mixture of 1 tablespoon of quartz powder to 250 liters of water). The mixture is sprayed under very low pressure over the crop during the wet season to prevent fungal diseases. It should be sprayed on an overcast day or early in the morning to prevent burning of the leaves.

Both 500 and 501 are used on fields by stirring about one teaspoon of the contents of a horn in 40–60 liters of water for an hour and whirling it in different directions every second minute.

Compost preparations

Compost preparations, used for preparing compost, employ herbs which are frequently used in medicinal remedies:

* 502: Yarrow blossoms (Achillea millefolium) are stuffed into urinary bladders from Red Deer (Cervus elaphus), placed in the sun during summer, buried in earth during winter and retrieved in the spring.
* 503: Chamomile blossoms (Matricaria recutita) are stuffed into small intestines of cattle, buried in humus-rich earth in the autumn and retrieved in the spring.
* 504: Stinging nettle (Urtica dioica) plants in full bloom are packed together underground, surrounded on all sides by peat, for a year.
* 505: Oak bark (Quercus robur) is chopped in small pieces, placed inside the skull of a domesticated animal, surrounded by peat and buried in earth in a place where lots of rain water runs past.
* 506: Dandelion flowers (Taraxacum officinale) are stuffed into the peritoneum of cattle, buried in earth during winter and retrieved in the spring.
* 507: Valerian flowers (Valeriana officinalis) are extracted into water.
* 508: Horsetail (Equisetum)

One to three grams (a teaspoon) of each preparation is added to a dung heap by digging 50 cm deep holes with a distance of 2 meters from each other, except for the 507 preparation, which is stirred into 5 liters of water and sprayed over the entire compost surface. All preparations are thus used in homeopathic quantities. Each compost preparation is designed to guide a particular decomposition process in the composting mass.

One study found that the oak bark preparation improved disease resistance in zucchini.

Astronomical planting calendar

The approach considers that there are astronomical influences on soil and plant development, specifying, for example, what phase of the moon is most appropriate for planting, cultivating or harvesting various kinds of crops. This aspect of biodynamics has been termed "astrological" in nature.

Treatment of pests and weeds

Biodynamic agriculture sees the basis of pest and disease control as arising from a strong, healthy, balanced farm organism. Where this is not yet achieved, it uses techniques analogous to fertilization for pest control and weed control. Most of these techniques involve using the ashes of a pest or weed that has been trapped or picked from the fields and burnt. A biodynamic farmer perceives weeds and plant vulnerability to pests as a result of imbalances in the soil.

* Pests such as insects or field mice (Apodemus) have more complex processes associated with them, depending on what pest is to be targeted. For example field mice are to be countered by deploying ashes prepared from field mice skin when Venus is in the Scorpius constellation.

* Weeds are combated (besides the usual mechanical methods) by collecting seeds from the weeds and burning them above a wooden flame that was kindled by the weeds. The ashes from the seeds are then spread on the fields and lightly sprayed with the clear urine of a sterile cow (the urine having been exposed to the full moon for six hours); this is intended to block the influence of the full moon on the particular weed and make it infertile.

Seed production

Biodynamic agriculture has focused on open pollination of seeds (permitting farmers to grow their own seed) and the development of locally adapted varieties. The seed stock is not controlled by large, multinational seed companies.

Trademark protection of term biodynamic

The term Biodynamic is a trademark held by the Demeter association of biodynamic farmers for the purpose of maintaining production standards used both in farming and in processing foodstuffs (in New Zealand, however, it is not a privately held trademark). The trademark is intended to protect both the consumer and the producers of biodynamic produce. Demeter International is an organization of member countries; each country has its own Demeter organization, which is required to meet international production standards (but can also exceed them). The original Demeter organization was founded in 1928; the U.S. Demeter Association was formed in the 1980s and certified its first farm in 1982. In France, Biodivin certifies biodynamic wine. In Egypt, SEKEM has created the Egyptian Biodynamic Association (EBDA), an association that provides training for farmers to become certified.

Basel Convention

The Basel Convention (in full: the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal) is an international treaty that was designed to reduce the movements of hazardous waste between nations, and specifically to prevent transfer of hazardous waste from developed to less developed countries (LDCs). It does not, however, address the movement of radioactive waste. The Convention is also intended to minimize the amount and toxicity of wastes generated, to ensure their environmentally sound management as closely as possible to the source of generation, and to assist LDCs in environmentally sound management of the hazardous and other wastes they generate.

The Convention was opened for signature on March 22, 1989, and entered into force on May 5, 1992. A list of parties to the Convention, and their ratification status, can be found on the Basel Secretariat's web page. Of the 170 parties to the Convention, Afghanistan, Haiti, and the United States have signed the Convention but have not yet ratified it.


With the tightening of environmental laws (e.g., RCRA) in developed nations in the 1970s, disposal costs for hazardous waste rose dramatically. At the same time, globalization of shipping made transboundary movement of waste more accessible, and many LDCs were desperate for foreign currency. Consequently, the trade in hazardous waste, particularly to LDCs, grew rapidly.

One of the incidents which led to the creation of the Basel Convention was the Khian Sea waste disposal incident: a ship carrying incinerator ash from the city of Philadelphia in the United States dumped half of its load on a beach in Haiti before being forced away, and then sailed for many months, changing its name several times. Unable to unload the cargo in any port, the crew was believed to have dumped much of it at sea.

Another is the 1988 Koko case in which 5 ships transported 8,000 barrels of hazardous waste from Italy to the small town of Koko in Nigeria in exchange for $100 monthly rent which was paid to a Nigerian for the use of his farmland.

These practices have been deemed "Toxic Colonialism" by many developing countries.

At its most recent meeting, November 27–December 1, 2006, the Conference of the Parties of the Basel Agreement focused on issues of electronic waste and the dismantling of ships.

According to Maureen Walsh in "The global trade in hazardous wastes: domestic and international attempts to cope with a growing crisis in waste management", 42 Cath. U. Law Review 103 (1992), only around 4% of hazardous wastes that come from OECD countries are actually shipped across international borders. These wastes include, among others, chemical waste, radioactive waste, municipal solid waste, asbestos, incinerator ash, and old tires. Of internationally shipped waste that comes from developed countries, more than half is shipped for recovery and the remainder for final disposal.

Increased trade in recyclable materials has led to a growing market for used products such as computers. This market is valued in the billions of dollars. At issue is the point at which used computers stop being a "commodity" and become a "waste".

Definition of hazardous waste

A waste falls under the scope of the Convention if it is within a category of wastes listed in Annex I of the Convention and it exhibits one of the hazardous characteristics contained in Annex III. In other words, it must both be listed and possess a characteristic such as being explosive, flammable, toxic, or corrosive. The other way a waste may fall under the scope of the Convention is if it is defined as or considered to be a hazardous waste under the laws of the exporting country, the importing country, or any of the countries of transit.
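The scoping rule can be summarized as a simple predicate. The sketch below is illustrative only — the annex entries shown are hypothetical placeholders, not the Convention's actual lists:

```python
# Sketch of the Convention's scoping rule. The annex entries below are
# hypothetical placeholders for illustration, not the actual annex lists.
ANNEX_I = {"Y1 clinical wastes", "Y26 cadmium compounds"}     # listed waste streams
ANNEX_III = {"explosive", "flammable", "toxic", "corrosive"}  # hazardous characteristics

def covered_by_convention(waste_categories: set, characteristics: set,
                          national_law_says_hazardous: bool = False) -> bool:
    """In scope if listed in Annex I AND exhibiting an Annex III characteristic,
    OR if the exporting, importing, or transit state's law deems it hazardous."""
    listed = bool(waste_categories & ANNEX_I)
    hazardous = bool(characteristics & ANNEX_III)
    return (listed and hazardous) or national_law_says_hazardous

print(covered_by_convention({"Y26 cadmium compounds"}, {"toxic"}))  # → True
print(covered_by_convention({"Y26 cadmium compounds"}, {"inert"}))  # → False (listed, but no Annex III trait)
print(covered_by_convention(set(), set(), True))                    # → True (via national law)
```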

The term disposal is defined in Article 2(4) simply by reference to Annex IV, which gives a list of operations understood as disposal or recovery. The examples of disposal are broad, also including recovery, recycling and reuse.

Annex II lists other wastes such as household wastes and residue that comes from incinerating household waste.

Radioactive waste that is covered under other international control systems, and wastes from the normal operation of ships, are not covered.

Annex IX attempts to define "commodities" which are not considered wastes and which would be excluded.


In addition to conditions on the import and export of the above wastes, there are stringent requirements for notice, consent and tracking for movement of wastes across national boundaries. It is of note that the Convention places a general prohibition on the exportation or importation of wastes between Parties and non-Parties. The exception to this rule is where the waste is subject to another treaty that does not take away from the Basel Convention. The United States is a notable non-Party to the Convention and has a number of such agreements for allowing the shipping of hazardous wastes to Basel Party countries.

The OECD Council also has its own control system that governs the trans-boundary movement of hazardous materials between OECD member countries. This allows, among other things, the OECD countries to continue trading in wastes with countries like the United States that have not ratified the Basel Convention.

Parties to the Convention must honor import bans of other Parties.

Article 4 of the Basel Convention calls for an overall reduction of waste generation. By encouraging countries to keep wastes within their boundaries, as close as possible to the source of generation, the Convention creates internal pressures that should provide incentives for waste reduction and pollution prevention.

The Convention states that illegal hazardous waste traffic is criminal but contains no enforcement provisions.

According to Article 12, Parties are directed to adopt a protocol that establishes liability rules and procedures that are appropriate for damage that comes from the movement of hazardous waste across borders.

Basel Ban Amendment

After the initial adoption of the Convention, some LDCs and environmental organizations argued that it did not go far enough. Many nations and NGOs argued for a total ban on shipment of all hazardous waste to LDCs. In particular, the original Convention did not prohibit waste exports to any location except Antarctica but merely required a notification and consent system known as "prior informed consent" or PIC. Further, many waste traders sought to exploit the good name of recycling and began to justify all exports as moving to recycling destinations. Many believed a full ban was needed, including on exports for recycling. These concerns led to several regional waste trade bans, including the Bamako Convention.

Lobbying at the 1995 Basel conference by LDCs, Greenpeace and key European countries such as Denmark led to a decision to adopt the Basel Ban Amendment to the Basel Convention. Not yet in force, but considered morally binding by signatories, the Amendment prohibits the export of hazardous waste from a list of developed (mostly OECD) countries to developing countries. The Basel Ban applies to export for any reason, including recycling. An area of special concern for advocates of the Amendment was the sale of ships for salvage, known as shipbreaking. The Ban Amendment was strenuously opposed by a number of industry groups as well as nations including the United States and Canada. As of late 2005, 63 nations had ratified the Basel Ban Amendment; 62 are required for it to enter into force. The status of the amendment ratifications can be found on the Basel Secretariat's web page. The European Union has fully implemented the Basel Ban in its Waste Shipment Regulation (EWSR), making it legally binding in all EU member states.

Bali roadmap

After the 2007 United Nations Climate Change Conference, held on the island of Bali, Indonesia, in December 2007, the participating nations adopted the Bali Roadmap (also known as the Bali Action Plan), a two-year process intended to finalize a binding agreement in Denmark in 2009.

Cutting emissions

The nations acknowledge that the evidence for global warming is unequivocal and that humans must reduce emissions to reduce the risks of "severe climate change impacts". There was strong consensus on updated commitments for both developed and developing countries. Although no specific emission-reduction figures were agreed upon, many countries agreed that there was a need for "deep cuts in global emissions" and that "developed country emissions must fall 10-40% by 2020".

Charges of hypocrisy

The December 2007 global warming conference in Bali contributed to global warming in the following ways:

* A November 25, 2007 article in Times Online reported that it was estimated that that year's conference would release the equivalent of 100,000 tons of carbon dioxide.

* A December 18, 2007 article in the Sydney Morning Herald revealed information that raised this total even higher. According to the article, a custom air-conditioning system was installed specifically for the conference. The system used hydrochlorofluorocarbons, an outdated refrigerant gas with a high global-warming potential, and the article estimated that the air conditioning used during the conference released the equivalent of 48,000 tons of carbon dioxide. The article stated, "... the refrigerant is a potent greenhouse gas, with each kilogram at least as damaging as 1.7 tonnes of carbon dioxide. Investigators at the Balinese resort complex at Nusa Dua counted 700 cylinders of the gas, each of them weighing 13.5 kilograms, and the system was visibly leaking."
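The cylinder count quoted above gives a rough lower bound on the refrigerant's warming potential. This is a hypothetical back-of-envelope sketch, not from the article; the article's 48,000-tonne total presumably also covers gas consumed and leaked over the whole event, beyond what was counted on site at one time.

```python
# Hypothetical back-of-envelope check of the quoted refrigerant figures.
cylinders = 700            # cylinders counted at the Nusa Dua venue
kg_per_cylinder = 13.5     # refrigerant per cylinder, kg
co2e_t_per_kg = 1.7        # "at least as damaging as 1.7 tonnes" of CO2 per kg

gas_kg = cylinders * kg_per_cylinder     # total refrigerant counted on site: 9,450 kg
co2e_tonnes = gas_kg * co2e_t_per_kg     # CO2-equivalent if fully released: ~16,000 t
```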


Deforestation

The nations pledge "policy approaches and positive incentives" to protect forests.


Adaptation

The nations opt for enhanced co-operation to "support urgent implementation" of measures to protect poorer countries against climate change impacts.

Technology transfer

The nations will consider how to facilitate the transfer of clean technologies from industrialised nations to the developing countries.


Timetable

Work on the Bali Roadmap will begin as soon as possible. Four major UNFCCC meetings to implement the Bali Roadmap are planned for 2008: the first in March or April, the second in June, the third in August or September, and a major meeting in Poznan, Poland, in December 2008. The negotiation process is scheduled to conclude in 2009 at a major summit in Copenhagen, Denmark.

Baku-Tbilisi-Ceyhan pipeline

The Baku-Tbilisi-Ceyhan pipeline is a 1,768 kilometres (1,099 mi) long crude oil pipeline from the Azeri-Chirag-Guneshli oil field in the Caspian Sea to the Mediterranean Sea. It connects Baku, the capital of Azerbaijan; Tbilisi, the capital of Georgia; and Ceyhan, a port on the south-eastern Mediterranean coast of Turkey, hence its name. It is the second-longest oil pipeline in the world after the Druzhba pipeline. The first oil, pumped from the Baku end of the pipeline on 10 May 2005, reached Ceyhan on 28 May 2006.



The Caspian Sea lies above one of the world's largest groups of oil and gas fields. As the Caspian Sea is landlocked, transporting oil to Western markets is complicated. During Soviet times, all transportation routes from the Caspian region were built through Russia. The collapse of the Soviet Union inspired a search for new routes. Russia first insisted that the new pipeline should pass through Russian territory, then declined to participate. A pipeline through Iran from the Caspian Sea to the Persian Gulf is the shortest route from a geographic standpoint, but Iran was considered an undesirable partner for a number of reasons: its theocratic government, concerns about its nuclear program, and United States sanctions that restrict U.S. companies' investment in the country.

In the spring of 1992, the Turkish Prime Minister Süleyman Demirel proposed to Central Asian countries and Azerbaijan that the pipeline run through Turkey. The first document on the construction of the Baku-Tbilisi-Ceyhan pipeline was signed between Azerbaijan and Turkey on 9 March 1993 in Ankara.

The Turkish route meant a pipeline from Azerbaijan through either Georgia or Armenia. A route through Armenia was inconvenient, due to regional tensions over Turkey's refusal to recognize the Armenian Genocide, as well as the unresolved military conflict between Armenia and Azerbaijan over Nagorno-Karabakh. This left the circuitous Azerbaijan-Georgia-Turkey route as politically most expedient for the major parties, although it was longer and more expensive to build than the other options.

The BTC pipeline project gained momentum following the Ankara Declaration, adopted on 29 October 1998 by President of Azerbaijan Heydar Aliyev, President of Georgia Eduard Shevardnadze, President of Kazakhstan Nursultan Nazarbayev, President of Turkey Süleyman Demirel, and President of Uzbekistan Islom Karimov. The declaration was witnessed by the United States Secretary of Energy Bill Richardson, who expressed strong support for the BTC pipeline. The intergovernmental agreement in support of the BTC pipeline was signed by Azerbaijan, Georgia, and Turkey on 18 November 1999, during a meeting of the Organization for Security and Cooperation in Europe (OSCE) in Istanbul, Turkey.


The Baku-Tbilisi-Ceyhan Pipeline Company (BTC Co.) was established in London on 1 August 2002. The ceremony launching construction of the pipeline was held on 18 September 2002. Construction began in April 2003 and was completed in 2005. The Azerbaijan section was constructed by Consolidated Contractors International of Greece, and the Georgian section by a joint venture of France's Spie Capag and US Petrofac International. The Turkish section was constructed by BOTAŞ. Bechtel was the main contractor for engineering, procurement and construction.


On 25 May 2005, the pipeline was inaugurated at the Sangachal Terminal by President Ilham Aliyev of Azerbaijan, President Mikheil Saakashvili of Georgia and President Ahmet Sezer of Turkey, joined by President Nursultan Nazarbayev of Kazakhstan, as well as United States Secretary of Energy Samuel Bodman. The inauguration of the Georgian section of the pipeline was hosted by President Saakashvili at the BTC pumping station near Gardabani on 12 October 2005. The inauguration ceremony at the Ceyhan terminal was held on 13 July 2006.

Pumping began on 10 May 2005, and the oil reached Ceyhan on 28 May 2006. The first oil was loaded at the Ceyhan Marine Terminal (Haydar Aliyev Terminal) onto a tanker named British Hawthorn, which sailed from the port on 4 June 2006 with about 600,000 barrels (95,000 m3) of crude oil.

Description of the pipeline


The 1,768 kilometres (1,099 mi) long pipeline starts at the Sangachal Terminal near Baku in Azerbaijan, crosses Azerbaijan, Georgia and Turkey and terminates at the Ceyhan Marine Terminal (Haydar Aliyev Terminal) on the south-eastern Mediterranean coast of Turkey. 443 kilometres (275 mi) of the pipeline lie in Azerbaijan, 249 kilometres (155 mi) in Georgia and 1,076 kilometres (669 mi) in Turkey. It crosses several mountain ranges at altitudes of up to 2,830 metres (9,300 ft). It also traverses 3,000 roads, railways, and utility lines—both overground and underground—as well as 1,500 watercourses of up to 500 metres (1,600 ft) wide (in the case of the Ceyhan River in Turkey). The pipeline occupies a corridor eight metres wide and is buried along its entire length at a depth of no less than one metre. The BTC pipeline runs parallel to the South Caucasus Gas Pipeline, which transports natural gas from the Sangachal Terminal to Erzurum in Turkey. From Sarız to Ceyhan, the Samsun-Ceyhan oil pipeline will be laid parallel to the BTC pipeline.

Technical features

The pipeline has a projected lifespan of 40 years and, when working at normal capacity, transports 1 million barrels (160,000 m3) of oil per day. Filling the pipeline requires 10 million barrels (1,600,000 m3) of oil. Oil flows through the pipeline at a speed of 2 metres (6.6 ft) per second. There are eight pump stations along the pipeline route (two in Azerbaijan, two in Georgia, four in Turkey). The project also includes the Ceyhan Marine Terminal (officially the Haydar Aliyev Terminal, named after the late Azerbaijani president Heydar Aliyev), two intermediate pigging stations, one pressure-reduction station, and 101 small block valves. It was constructed from 150,000 individual joints of line pipe, each measuring 12 metres (39 ft) in length, corresponding to a total weight of 655,000 short tons (594,000 metric tons). The pipeline is 1,070 mm (42 inches) in diameter for most of its length, narrowing to 865 mm (34 inches) as it nears Ceyhan.
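As a quick consistency check of the figures above (a hypothetical sketch, not from the source): at 2 m/s the oil takes roughly ten days to traverse the line, which agrees with the fill volume divided by the daily throughput, and 150,000 joints of 12 m pipe indeed add up to approximately the stated length.

```python
# Hypothetical consistency check of the stated BTC pipeline figures.
length_m = 1_768_000      # stated pipeline length, metres
speed_m_s = 2.0           # stated oil flow speed, m/s
fill_bbl = 10_000_000     # barrels needed to fill the line
daily_bbl = 1_000_000     # barrels transported per day at normal capacity

transit_days = length_m / speed_m_s / 86_400   # ~10.2 days for oil to traverse the line
fill_days = fill_bbl / daily_bbl               # fill volume in days of throughput: 10
joints_km = 150_000 * 12 / 1000                # total length of 12 m joints: 1,800 km
```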

Cost and financing

The pipeline cost US$3.9 billion. Its construction created 10,000 short-term jobs, and operating the pipeline requires about 1,000 long-term employees over its 40-year lifespan. 70% of BTC costs are funded by third parties, including the World Bank's International Finance Corporation, the European Bank for Reconstruction and Development, export credit agencies of seven countries, and a syndicate of 15 commercial banks.

Source of supply

The BTC pipeline is supplied with oil from Azerbaijan's Azeri-Chirag-Guneshli oil field in the Caspian Sea via the Sangachal Terminal. It may also transport oil from Kazakhstan's Kashagan oil field, as well as from other oil fields in Central Asia.[3] The government of Kazakhstan announced that it would build a trans-Caspian oil pipeline from the Kazakhstani port of Aktau to Baku, but because of opposition from both Russia and Iran, it began transporting oil to the BTC pipeline by tanker across the Caspian Sea instead.

Possible transhipment via Israel

It has been proposed that oil from the BTC pipeline be transported to eastern Asia via the Israeli oil terminals at Ashkelon and Eilat, the overland trans-Israel sector being bridged by the Eilat-Ashkelon Pipeline owned by the Eilat Ashkelon Pipeline Company (EAPC).

Tuesday, April 7, 2009

Atomic physics

Atomic physics (or atom physics) is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. It is primarily concerned with the arrangement of electrons around the nucleus and the processes by which these arrangements change. This includes ions as well as neutral atoms and, unless otherwise stated, for the purposes of this discussion it should be assumed that the term atom includes ions.

The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. However, physicists distinguish between atomic physics, which deals with the atom as a system comprising a nucleus and electrons, and nuclear physics, which considers atomic nuclei alone.

As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.

Isolated atoms

Atomic physics always considers atoms in isolation. Atomic models will consist of a single nucleus which may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical) nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles.

While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that we are concerned with. This means that the individual atoms can be treated as if each were in isolation because for the vast majority of the time they are. By this consideration atomic physics provides the underlying theory in plasma physics and atmospheric physics even though both deal with huge numbers of atoms.

Electronic configuration

Electrons form notional shells around the nucleus. These are naturally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically other electrons). The excited electron may still be bound to the nucleus and should, after a certain period of time, decay back to the original ground state. The energy is released as a photon. There are strict selection rules as to the electronic configurations that can be reached by excitation by light—however there are no such rules for excitation by collision processes.

An electron may be excited sufficiently to break free of the nucleus, so that it is no longer part of the atom. The remaining system is an ion, and the atom is said to have been ionized, having been left in a charged state.
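As a concrete illustration of excitation and decay, the energy and wavelength of the photon released when a hydrogen electron decays from the n=2 excited state to the n=1 ground state can be computed from the standard hydrogen energy levels. This is a minimal sketch using textbook constants, not material from the source.

```python
# Hypothetical illustration: the photon emitted when an excited hydrogen
# electron decays from the n=2 level back to the n=1 ground state.
RYDBERG_EV = 13.6    # hydrogen ground-state binding energy, eV (approximate)
HC_EV_NM = 1240.0    # Planck constant times speed of light, eV*nm (approximate)

def transition_energy_ev(n_upper, n_lower):
    """Energy (eV) released by a hydrogen n_upper -> n_lower decay."""
    return RYDBERG_EV * (1.0 / n_lower**2 - 1.0 / n_upper**2)

energy = transition_energy_ev(2, 1)    # 10.2 eV
wavelength_nm = HC_EV_NM / energy      # ~121.6 nm, the ultraviolet Lyman-alpha line
```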

History and developments

The majority of fields in physics can be divided between theoretical work and experimental work and atomic physics is no exception. It is usually the case, but not always, that progress goes in alternate cycles from an experimental observation, through to a theoretical explanation followed by some predictions which may or may not be confirmed by experiment, and so on. Of course, the current state of technology at any given time can put limitations on what can be achieved experimentally and theoretically so it may take considerable time for theory to be refined.
Main article: Atomic theory

Clearly the earliest step towards atomic physics was the recognition that matter is composed of atoms, in the modern sense of the basic unit of a chemical element. This theory was developed by the British chemist and physicist John Dalton in the early 19th century. At this stage, it wasn't clear what atoms were, although they could be described and classified by their properties (in bulk) in a periodic table.
Main article: Basics of quantum mechanics

The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics itself. In seeking to explain atomic spectra an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry (quantum chemistry) and spectroscopy.

Since the Second World War, both theoretical and experimental fields have advanced at a great pace. This can be attributed to progress in computing technology which has allowed bigger and more sophisticated models of atomic structure and associated collision processes. Similar technological advances in accelerators, detectors, magnetic field generation and lasers have greatly assisted experimental work.

Monday, April 6, 2009

Compressed-air engine

A compressed-air engine is a pneumatic actuator that creates useful work by expanding compressed air. Such engines have existed in many forms over the past two centuries, ranging in size from hand-held turbines up to several hundred horsepower. Some types rely on pistons and cylinders; others use turbines. Many compressed-air engines improve their performance by heating the incoming air, or the engine itself. Some took this a stage further and burned fuel in the cylinder or turbine, forming a type of internal combustion engine.


Impact wrenches, drills, die grinders, dental drills and other pneumatic tools use a variety of air engines or motors. These include vane type pumps, turbines and pistons.


The most successful early forms of self-propelled torpedo used high-pressure compressed air, although this was superseded by internal or external combustion engines, steam engines, or electric motors.


Compressed-air engines were used in trams and shunters, and eventually found a successful niche in mining locomotives, although they were eventually replaced underground by electric trains. Over the years, designs increased in complexity, culminating in a triple-expansion engine with air-to-air reheaters between each stage.


Transport category airplanes, such as commercial airliners, use compressed air starters to start the main engines. The air is supplied by the load compressor of the aircraft's auxiliary power unit, or by ground equipment.


There is currently some interest in developing air cars. Several engines have been proposed for these, although none have demonstrated the performance and long life needed for personal transport.


The Energine Corporation is a South Korean company that delivers fully-assembled cars running on a hybrid compressed air and electric engine. The compressed-air engine is used to activate an alternator, which extends the autonomous operating capacity of the car.


EngineAir, an Australian company, is making a rotary engine powered by compressed air, called the Di Pietro motor. The Di Pietro motor concept is based on a rotary piston. Unlike existing rotary engines, the Di Pietro motor uses a simple cylindrical rotary piston (shaft driver) which rolls, with little friction, inside the cylindrical stator.

It can be used in boats, cars, carriers and other vehicles. Only 1 psi (≈ 6.9 kPa) of pressure is needed to overcome friction.


K'Airmobiles vehicles use a compressed-air engine known as the K'Air, developed in France by a small group of researchers.

These engines consume less than 120 L/min of compressed air while developing a thrust of up to 4 kN.

The technical concept of the K'Air pneumatic engine rests on the direct conversion of the fundamental characteristic of compressed air, its pushing force:

* the pushing force of the compressed air is exploited exclusively by conversion into translational kinetic energy,
* which is simultaneously converted into rotation of the engine axis,
* giving the engine a particularly large torque while requiring only a very low "fuel" consumption.

To simplify, one can compare the principle to that of rotary actuators (rotary jacks):

* the energy of the fluid (compressed air) is transformed directly into rotational movement;
* the double-acting jacks drive a rack-and-pinion system;
* the cyclic angle of rotation can vary between 90 and 360°;
* the design supports hydraulic supercharging systems.


In the original Nègre air engine, one piston compresses air from the atmosphere to mix with the stored compressed air (which cools drastically as it expands). This mixture drives the second piston, providing the actual engine power. MDI's engine works with constant torque, and the only way to change the torque at the wheels is to use a continuously variable pulley transmission, losing some efficiency. When the vehicle is stopped, MDI's engine has to keep running, losing energy. In 2001-2004, MDI switched to a design similar to that described in Regusci's patents (see below), which date back to 1990.


The Pneumatic Quasiturbine engine is a compressed air pistonless rotary engine using a rhomboidal-shaped rotor whose sides are hinged at the vertices.

The Quasiturbine has been demonstrated as a pneumatic engine running on stored compressed air.

It can also take advantage of the energy amplification possible from using available external heat, such as solar energy.

The Quasiturbine rotates from pressure as low as 0.1 atm.

Since the Quasiturbine is a pure expansion engine (which the Wankel is not, nor are most other rotary engines), it is well suited for use as a compressed-fluid engine, i.e. an air engine or air motor.


Armando Regusci's version of the air engine couples the transmission system directly to the wheel, and has variable torque from zero to the maximum, enhancing efficiency. Regusci's patents date back to 1990.

Team Psycho-Active

Psycho-Active is developing a multi-fuel/air-hybrid chassis intended to serve as the foundation for a line of automobiles. Claimed performance is 50 hp/litre.


At least one kart has been powered by a Quasiturbine.

Efficient air engines

Compressed-air engines could be made much more efficient than they are now (about 15%) by, for example:

* using the heat energy from the compressor (nearly all of the energy used to run the compressor ends up as heat),
* cutting off the air supply partway through each stroke ("cutoff") and letting the remaining expansion do work,
* expanding the air in several stages and reheating it between expansions with ordinary ambient air (in a heat exchanger).
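The benefit of staged expansion with reheating can be sketched with ideal-gas thermodynamics. This is a hypothetical illustration with assumed numbers (a 300:1 pressure ratio and ambient reheat temperature are my choices, not figures from the source): a single adiabatic expansion chills the air and recovers only about half of the isothermal limit, while three reheated stages recover noticeably more.

```python
import math

# Hypothetical ideal-gas sketch of why multi-stage expansion with
# reheating between stages recovers more work from compressed air.
R = 287.0                       # J/(kg*K), specific gas constant of air
GAMMA = 1.4                     # heat capacity ratio of air
CP = GAMMA * R / (GAMMA - 1)    # ~1004.5 J/(kg*K), specific heat at constant pressure
T_AMBIENT = 293.0               # K, assumed storage/reheat temperature
P_RATIO = 300.0                 # assumed overall expansion pressure ratio

def staged_work(n_stages):
    """Specific turbine work (J/kg) for n equal adiabatic stages,
    with the air reheated back to ambient temperature between stages."""
    r = P_RATIO ** (1.0 / n_stages)                 # per-stage pressure ratio
    temp_drop = 1.0 - r ** (-(GAMMA - 1) / GAMMA)   # fractional temperature drop per stage
    return n_stages * CP * T_AMBIENT * temp_drop

# Isothermal expansion is the theoretical upper limit on recoverable work.
isothermal = R * T_AMBIENT * math.log(P_RATIO)

single = staged_work(1) / isothermal    # one stage: roughly half the limit
triple = staged_work(3) / isothermal    # three reheated stages: noticeably more
```

As the number of reheated stages grows, the recovered work approaches the isothermal limit, which is the thermodynamic motivation behind the reheater designs mentioned above.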

Aviation and the environment

Aviation impacts the environment because aircraft engines emit noise, particulates and gases, contributing to climate change and global dimming. Despite emission reductions from automobiles and more fuel-efficient and less-polluting turbofan and turboprop engines, the rapid growth of air travel in recent years has contributed to an increase in total pollution attributable to aviation. In the EU, greenhouse gas emissions from aviation increased by 87% between 1990 and 2006.

There is an ongoing debate about possible taxation of air travel and the inclusion of aviation in an emissions trading scheme, with a view to ensuring that the total external costs of aviation are taken into account.

Climate change

Like all human activities involving combustion, most forms of aviation release carbon dioxide (CO2) into the Earth's atmosphere, very likely contributing to the acceleration of global warming.

In addition to the CO2 released by most aircraft in flight through the burning of fuels such as Jet-A (turbine aircraft) or Avgas (piston aircraft), the aviation industry also contributes greenhouse gas emissions from ground airport vehicles and those used by passengers and staff to access airports, as well as through emissions generated by the production of energy used in airport buildings, the manufacture of aircraft and the construction of airport infrastructure.

While the principal greenhouse gas emission from powered aircraft in flight is CO2, other emissions may include nitric oxide and nitrogen dioxide, (together termed oxides of nitrogen or NOx), water vapour and particulates (soot and sulfate particles), sulfur oxides, carbon monoxide (which bonds with oxygen to become CO2 immediately upon release), incompletely-burned hydrocarbons, tetra-ethyl lead (piston aircraft only), and radicals such as hydroxyl, depending on the type of aircraft in use.[citation needed]

The contribution of civil aircraft-in-flight to global CO2 emissions has been estimated at around 2%. However, in the case of high-altitude airliners which frequently fly near or in the stratosphere, non-CO2 altitude-sensitive effects may increase the total impact on anthropogenic (man-made) climate change significantly.[citation needed] This problem is not present for aircraft that routinely operate at lower altitudes well inside the troposphere, such as balloons, airships, helicopters, most light aircraft, and many commuter aircraft.


Subsonic aircraft-in-flight contribute to climate change in four ways:

Carbon dioxide (CO2)
CO2 emissions from aircraft-in-flight are the most significant and best understood element of aviation's total contribution to climate change. The level and effects of CO2 emissions are currently believed to be broadly the same regardless of altitude (i.e. they have the same atmospheric effects as ground-based emissions). In 1992, emissions of CO2 from aircraft were estimated at around 2% of all such anthropogenic emissions, though the CO2 concentration attributable to aviation in 1992 was around 1% of the total anthropogenic increase, because emissions had occurred only over the preceding 50 years.

Oxides of nitrogen (NOx)
At the high altitudes flown by large jet airliners around the tropopause, emissions of NOx are particularly effective in forming ozone (O3) in the upper troposphere. High-altitude (8-13 km) NOx emissions result in greater concentrations of O3 than surface NOx emissions, and these in turn have a greater global warming effect. The effects of O3 concentrations are regional and local (as opposed to CO2 emissions, which are global).

NOx emissions also reduce ambient levels of methane, another greenhouse gas, resulting in a climate cooling effect. This effect does not, however, offset the O3 forming effect of NOx emissions. It is now believed that aircraft sulfur and water emissions in the stratosphere tend to deplete O3, partially offsetting the NOx-induced O3 increases. These effects have not been quantified. This problem does not apply to aircraft that fly lower in the troposphere, such as light aircraft or many commuter aircraft.

Water vapor (H2O)


Aircraft in flight at high altitude emit water vapour, a greenhouse gas, which under certain atmospheric conditions forms condensation trails, or contrails. Contrails are visible line clouds that form in cold, humid atmospheres and are thought to have a global warming effect (though one less significant than either CO2 emissions or NOx-induced effects). Contrails are extremely rare from lower-altitude aircraft, or from propeller aircraft or rotorcraft.

Cirrus clouds have been observed to develop after the persistent formation of contrails and have been found to have a global warming effect over-and-above that of contrail formation alone. There is a degree of scientific uncertainty over the contribution of contrail and cirrus cloud formation to global warming and attempts to estimate aviation's overall climate change contribution do not tend to include its effects on cirrus cloud enhancement.

Least significant is the release of soot and sulfate particles. Soot absorbs heat and has a warming effect; sulfate particles reflect radiation and have a small cooling effect. In addition, they can influence the formation and properties of clouds. All aircraft powered by combustion will release some amount of soot.

Total effect

In attempting to aggregate and quantify these effects the Intergovernmental Panel on Climate Change (IPCC) has estimated that aviation’s total climate impact is some 2-4 times that of its CO2 emissions alone (excluding the potential impact of cirrus cloud enhancement). This is measured as radiative forcing. While there is uncertainty about the exact level of impact of NOx and water vapour, governments have accepted the broad scientific view that they do have an effect. Accordingly, more recent UK government policy statements have stressed the need for aviation to address its total climate change impacts and not simply the impact of CO2.

The IPCC has estimated that aviation is responsible for around 3.5% of anthropogenic climate change, a figure which includes both CO2 and non-CO2 induced effects. The IPCC has produced scenarios estimating what this figure could be in 2050. The central estimate is that aviation's contribution could grow to 5% of the total by 2050 if action is not taken to tackle these emissions, though the highest scenario is 15%[7]. Moreover, if other industries achieve significant cuts in their own greenhouse gas emissions, aviation's share as a proportion of the remaining emissions could also rise. Per passenger kilometre, figures from British Airways suggest carbon dioxide emissions of 0.1 kg for large jet airliners (a figure which does not account for the production of other pollutants or condensation trails).
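The per-passenger-kilometre figure above can be turned into a rough trip estimate. This is a hypothetical sketch using only the 0.1 kg/passenger-km figure quoted from British Airways; like that figure, it ignores non-CO2 effects and contrails.

```python
# Hypothetical illustration of the quoted per-passenger-kilometre figure.
CO2_KG_PER_PKM = 0.1   # kg CO2 per passenger-kilometre, large jet airliners

def trip_co2_kg(distance_km, passengers=1):
    """Approximate CO2 (kg) for a flight, CO2 only (no contrails or NOx effects)."""
    return CO2_KG_PER_PKM * distance_km * passengers

short_haul = trip_co2_kg(1000)   # roughly 100 kg CO2 per passenger on a 1,000 km flight
```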

Potential reductions

Modern jet aircraft are significantly more fuel-efficient (and thus emit less CO2 in particular) than they were 30 years ago. Moreover, manufacturers have forecast and are committed to achieving reductions in both CO2 and NOx emissions with each new generation of aircraft and engine design. The accelerated introduction of more modern aircraft therefore represents a major opportunity to reduce emissions per passenger kilometre flown.[citation needed]

Other opportunities arise from the optimisation of airline timetables, route networks and flight frequencies to increase load factors (minimise the number of empty seats flown), together with the optimisation of airspace.

Another possible reduction of the climate-change impact is the limitation of the cruise altitude of aircraft. This would lead to a significant reduction in high-altitude contrails for a marginal trade-off of increased flight time and an estimated 4% increase in CO2 emissions. Drawbacks of this solution include very limited airspace capacity to do this, especially in Europe and North America, and increased fuel burn because jet aircraft are less efficient at lower cruise altitudes.

However, the total number of passenger kilometres is growing at a faster rate than manufacturers can reduce emissions, and at present there is no readily available alternative to burning kerosene. The growth of the aviation sector is therefore likely to continue generating an increasing volume of greenhouse gas emissions. Some scientists and companies, such as GE Aviation and Virgin Fuels, are nonetheless researching biofuel technology for use in jet aircraft. As part of this research, Virgin Atlantic Airways flew a Boeing 747 from London Heathrow Airport to Amsterdam Schiphol Airport on 24 February 2008 with one engine burning a combination of coconut oil and babassu oil. Greenpeace's chief scientist Doug Parr said that the flight was "high-altitude greenwash" and that producing organic oils to make biofuel could lead to deforestation and a large increase in greenhouse gas emissions.

The majority of the world's aircraft are not large jetliners but smaller piston aircraft, and many are capable of using ethanol as a fuel with major modifications. While ethanol also releases CO2 during combustion, the plants cultivated to make it draw that same CO2 out of the atmosphere while they are growing, making the fuel closer to climate-change-neutral. A drawback is the US government's choice of corn-based ethanol: it takes more energy to produce than it returns, displaces food crops (raising the price of food), and causes soil degradation.

While they are not suitable for long-haul or transoceanic flights, turboprop aircraft used for commuter flights bring two significant benefits: they often burn considerably less fuel per passenger mile, and they typically fly at lower altitudes, well inside the tropopause, where there are no concerns about ozone or contrail production. For even shorter flights, air taxi service using newer, fuel-efficient four- or six-seat light piston aircraft could provide an even lower environmental impact.

Reducing travel

An alternative method for reducing the environmental impact of aviation is to constrain demand for air travel. The UK study Predict and Decide - Aviation, climate change and UK policy, notes that a 10 per cent increase in fares generates a 5 to 15 per cent reduction in demand, and recommends that the British government should manage demand rather than provide for it. This would be accomplished via a strategy that presumes "… against the expansion of UK airport capacity" and constrains demand by the use of economic instruments to price air travel less attractively. A study published by the campaign group Aviation Environment Federation (AEF) concludes that by levying £9 billion of additional taxes the annual rate of growth in demand in the UK for air travel would be reduced to 2 per cent. The ninth report of the House of Commons Environmental Audit Select Committee, published in July 2006, recommends that the British government rethinks its airport expansion policy and considers ways, particularly via increased taxation, in which future demand can be managed in line with industry performance in achieving fuel efficiencies, so that emissions are not allowed to increase in absolute terms.
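The fare figures quoted in the study above can be read as a price elasticity of demand of roughly -0.5 to -1.5. The following is a hypothetical sketch of that relationship, not a calculation from the study itself.

```python
# Hypothetical sketch: a 10% fare increase cutting demand by 5-15%
# corresponds to a price elasticity of demand of about -0.5 to -1.5.
def demand_change(fare_increase, elasticity):
    """Approximate fractional change in demand for a fractional fare change."""
    return elasticity * fare_increase

# A 10% fare rise at the three quoted elasticities: about -5%, -10%, -15%.
changes = [demand_change(0.10, e) for e in (-0.5, -1.0, -1.5)]
```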

Kyoto Protocol

Greenhouse gas emissions from fuel consumption in international aviation, in contrast to those from domestic aviation and from energy use by airports, are not assigned under the first round of the Kyoto Protocol; neither are the non-CO2 climate effects. Instead, governments agreed to work through the International Civil Aviation Organization (ICAO) to limit or reduce emissions and to find a solution to the allocation of emissions from international aviation in time for the second round of Kyoto negotiations in Copenhagen in 2009.

Emissions trading

As part of that process the ICAO has endorsed the adoption of an open emissions trading system to meet CO2 emissions reduction objectives. Guidelines for the adoption and implementation of a global scheme are currently being developed, and will be presented to the ICAO Assembly in 2007, although the prospects of a comprehensive inter-governmental agreement on the adoption of such a scheme are uncertain.

Within the European Union, however, the European Commission has resolved to incorporate aviation in the European Union Emissions Trading Scheme (ETS). A new directive was adopted by the European Parliament in July 2008 and approved by the Council in October 2008; it will enter into force on 1 January 2012.


Noise

Aircraft noise is seen by advocacy groups as an issue on which it is very hard to get attention and action. The fundamental issues are increased traffic at larger airports and airport expansion at smaller and regional airports.