History

Discovery

In 1896, Henri Becquerel was investigating phosphorescence in uranium salts when he discovered a new phenomenon which came to be called radioactivity.[1] He, Pierre Curie and Marie Curie began investigating the phenomenon. In the process they isolated the element radium, which is highly radioactive. They discovered that radioactive materials produce intense, penetrating rays of several distinct sorts, which they called alpha rays, beta rays and gamma rays. Some of these kinds of radiation could pass through ordinary matter, and all of them could cause damage in large amounts; all the early researchers received various radiation burns, much like sunburn, and thought little of it. The new phenomenon of radioactivity was seized upon by the manufacturers of quack medicine (as the discoveries of electricity and magnetism had been earlier), and any number of patent medicines and treatments involving radioactivity were put forward. Gradually it came to be realized that the radiation produced by radioactive decay was ionizing radiation, and that quantities too small to burn presented a severe long-term hazard. Many of the scientists working on radioactivity died of cancer as a result of their exposure. Radioactive patent medicines mostly disappeared, but other applications of radioactive materials persisted, such as the use of radium salts to produce glowing dials on meters. As the atom came to be better understood, the nature of radioactivity became clearer: some atomic nuclei are unstable, and can decay, releasing energy in the form of gamma rays (high-energy photons), alpha particles (a pair of protons and a pair of neutrons) and beta particles (high-energy electrons).
Nuclear fission

Radioactivity is generally a slow process that is difficult to control, and is unsuited to building a weapon. However, other nuclear reactions are possible. In particular, a sufficiently unstable nucleus can undergo nuclear fission, breaking into two smaller nuclei and releasing energy and some fast neutrons. These neutrons can, if captured by other nuclei, cause those nuclei to undergo fission as well. The process can then continue in a nuclear chain reaction. Such a chain reaction could release a vast amount of energy in a short amount of time. When discovered on the eve of World War II, this insight led multiple countries to begin programs investigating the possibility of constructing an atomic bomb: a weapon which utilized fission reactions to generate far more energy than could be created with chemical explosives. The Manhattan Project, run by the United States with the help of the United Kingdom and Canada, developed multiple fission weapons which were used against Japan in 1945. During the project, the first fission reactors were developed as well, though they were primarily for weapons manufacture and did not generate power.
Nuclear fusion
Main article: Timeline of nuclear fusion

Nuclear fusion technology was pursued only at the theoretical stage during World War II, when scientists on the Manhattan Project (led by Edward Teller) investigated the possibility of using the great power of a fission reaction to ignite fusion reactions. It took until 1952 for the first full detonation of a hydrogen bomb to take place, so called because it utilized reactions between deuterium and tritium, isotopes of hydrogen. Fusion reactions are much more energetic per unit mass of fuel, but igniting a fusion chain reaction is much more difficult than igniting fission. Research into using nuclear fusion for civilian power generation also began during the 1940s. Technical and theoretical difficulties have hindered the development of working civilian fusion technology, though research continues to this day around the world.
Nuclear Weapons

The design of a nuclear weapon is more complicated than it might seem; it is quite difficult to ensure that such a chain reaction consumes a significant fraction of the fuel before the device flies apart. Obtaining the fuel is also more difficult than it might seem, as no naturally occurring substance is sufficiently unstable for this process to occur in its natural form. One isotope of uranium, namely uranium-235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium-238. Thus a complicated and difficult process of isotope separation must be performed to obtain uranium-235. Alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. Plutonium does not occur naturally, so it must be manufactured in a nuclear reactor. Ultimately, the Manhattan Project manufactured nuclear weapons based on each of these elements. The first atomic bomb was detonated in a test code-named "Trinity", near Alamogordo, on July 16, 1945. After much debate on the morality of using such a horrifying weapon, two bombs were dropped on the Japanese cities of Hiroshima and Nagasaki, and the Japanese surrender followed shortly afterward. Several nations subsequently began nuclear weapons programs, developing ever more destructive bombs in an arms race to obtain what many called a nuclear deterrent. Nuclear weapons are the most destructive weapons known; they are the archetypal weapons of mass destruction. Throughout the Cold War, the opposing powers held huge nuclear arsenals, sufficient to kill hundreds of millions of people. Generations of people grew up under the shadow of nuclear devastation. However, the tremendous energy release in the detonation of a nuclear weapon also suggested the possibility of a new energy source.
Nuclear Power
Main article: Nuclear reactor technology
Types of nuclear reaction

Most natural nuclear reactions fall under the heading of radioactive decay, where a nucleus is unstable and decays after a random interval. The most common processes by which this occurs are alpha decay, beta decay, and gamma decay. Under suitable circumstances, a large unstable nucleus can break into two smaller nuclei, undergoing nuclear fission and releasing energy and fast neutrons. If these neutrons are captured by suitable nuclei, they can trigger further fissions, leading to a chain reaction. A mass of fissile material large enough (and in a suitable configuration) to sustain a chain reaction is called a critical mass. When a neutron is captured by a suitable nucleus, fission may occur immediately, or the nucleus may persist in an unstable state for a short time. If there are enough immediate fissions to carry on the chain reaction, the mass is said to be prompt critical, and the energy release will grow rapidly and uncontrollably, usually leading to an explosion. However, if the mass is critical only when the delayed neutrons are included, the reaction can be controlled, for example by the introduction or removal of neutron absorbers. This is what allows nuclear reactors to be built. Fast neutrons are not easily captured by nuclei; they must be slowed (becoming slow neutrons), generally by collision with the nuclei of a neutron moderator, before they can be easily captured. If nuclei are forced to collide, they can undergo nuclear fusion. This process may release or absorb energy. When the resulting nucleus is lighter than that of iron, energy is normally released; when it is heavier than that of iron, energy is generally absorbed.
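The distinction between subcritical, critical, and supercritical configurations can be illustrated with a toy calculation. This is a minimal sketch assuming a single effective multiplication factor k (the average number of new fission neutrons produced per neutron); real reactor kinetics also involve delayed-neutron fractions and generation times, which this model ignores.

```python
# Toy model of neutron population growth in a chain reaction.
# Each generation, every neutron yields on average k new neutrons.
# (Illustrative only; not a reactor kinetics model.)

def neutron_population(k, generations, n0=1000.0):
    """Return the neutron population after a number of generations."""
    n = n0
    for _ in range(generations):
        n *= k
    return n

# Subcritical (k < 1): the chain reaction dies out.
print(neutron_population(0.9, 50))
# Critical (k = 1): the population is steady, as in a controlled reactor.
print(neutron_population(1.0, 50))
# Supercritical (k > 1): the population grows exponentially.
print(neutron_population(1.1, 50))
```

The controlled-reactor regime described above corresponds to holding k at exactly 1 by moving neutron absorbers in and out.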
This process of fusion occurs in stars and results, through stellar nucleosynthesis, in the formation of the light elements, from lithium to calcium, as well as some of the heavy elements beyond iron and nickel, which cannot be created by fusion but arise via neutron capture (the s-process). The remaining abundance of heavy elements, from nickel to uranium and beyond, is due to supernova nucleosynthesis (the r-process). Of course, these natural astrophysical processes are not examples of nuclear technology. Because of the very strong electrical repulsion between nuclei, fusion is difficult to achieve in a controlled fashion. Hydrogen bombs obtain their enormous destructive power from fusion, but controlled fusion power has so far proved elusive. Controlled fusion can be achieved in particle accelerators; this is how many synthetic elements were produced. The Farnsworth-Hirsch fusor is a device which can produce controlled fusion (and which can be built as a high-school science project), albeit at a net energy loss. It is sold commercially as a neutron source. The vast majority of everyday phenomena, by contrast, do not involve nuclear reactions; most involve only gravity and electromagnetism. These are not the strongest of the fundamental forces of nature, but the other two, the strong nuclear force and the weak nuclear force, are essentially short-range forces and play no role outside the atomic nucleus. Atomic nuclei are generally kept apart because they carry positive electrical charges and therefore repel each other, so in ordinary circumstances they cannot meet.
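The strength of that repulsion can be estimated with a back-of-envelope Coulomb-barrier calculation: the electrostatic energy of two protons brought to roughly nuclear separation comes out near a million electron-volts, vastly more than thermal energies in ordinary matter. The 1 femtometre separation used here is an illustrative assumption.

```python
# Order-of-magnitude Coulomb barrier between two protons brought to
# nuclear distances. The ~1 fm separation is an illustrative assumption.

K = 8.988e9           # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, C
R = 1e-15             # assumed separation, m (~1 femtometre)

energy_joules = K * E_CHARGE ** 2 / R   # electrostatic energy U = k*q1*q2/r
energy_mev = energy_joules / 1.602e-13  # convert: 1 MeV = 1.602e-13 J
print(f"Coulomb barrier: about {energy_mev:.1f} MeV")
```

Room-temperature thermal energy is a few hundredths of an electron-volt, tens of millions of times smaller, which is why fusion requires bomb-like or stellar conditions.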
Nuclear Accidents

Main articles: List of civilian nuclear accidents and List of military nuclear accidents
Examples of Nuclear Technology

Nuclear Power

Further information: Nuclear Power and Nuclear reactor technology

Nuclear power is a type of nuclear technology involving the controlled use of nuclear fission to release energy for work including propulsion, heat, and the generation of electricity. Nuclear energy is produced by a controlled nuclear chain reaction which creates heat, which in turn is used to boil water, produce steam, and drive a steam turbine. The turbine is used to generate electricity and/or to do mechanical work. As of 2004, nuclear power provided approximately 15.7% of the world's electricity, and it is used to propel aircraft carriers, icebreakers and submarines (so far, economics and fears in some ports have prevented the use of nuclear power in transport ships).[2]
Medical Applications

The medical applications of nuclear technology are divided into diagnostics and radiation treatment. Imaging: medical and dental X-ray imagers use Cobalt-60 or other X-ray sources. Technetium-99m, attached to organic molecules, is used as a radioactive tracer in the human body before being excreted by the kidneys. Positron-emitting nuclides are used for high-resolution, short-time-span imaging in the application known as positron emission tomography. Radiation therapy is an effective treatment for cancer.
Industrial applications

Oil and Gas Exploration: Nuclear well logging is used to help predict the commercial viability of new or existing wells. The technology involves the use of a neutron or gamma-ray source and a radiation detector, which are lowered into boreholes to determine the properties of the surrounding rock, such as porosity and lithology.[1]
Road Construction: Nuclear moisture/density gauges are used to determine the density of soils, asphalt, and concrete. Typically a Cesium-137 source is used.
Commercial applications

An ionization smoke detector includes a tiny mass of radioactive americium-241, which is a source of alpha radiation. Tritium is used with phosphor in rifle sights to increase nighttime firing accuracy. Luminescent exit signs use the same technology.[3]
Food Processing and Agriculture
The Radura logo, used to show a food has been treated with ionizing radiation.

Food irradiation[4] is the process of exposing food to ionizing radiation in order to destroy microorganisms, bacteria, viruses, or insects that might be present in the food. The radiation sources used include radioisotope gamma-ray sources, X-ray generators and electron accelerators. Further applications include sprout inhibition, delay of ripening, increase of juice yield, and improvement of re-hydration. Irradiation is a more general term for the deliberate exposure of materials to radiation to achieve a technical goal (in this context 'ionizing radiation' is implied). As such it is also used on non-food items, such as medical hardware, plastics, tubes for gas pipelines, hoses for floor heating, shrink foils for food packaging, automobile parts, wires and cables (insulation), tires, and even gemstones. Compared to the amount of food irradiated, the volume of these everyday applications is huge, yet it goes unnoticed by the consumer. The genuine effect of processing food by ionizing radiation is damage to the DNA, the basic genetic information for life. Microorganisms can no longer proliferate and continue their malignant or pathogenic activities. Spoilage-causing microorganisms cannot continue their activities. Insects do not survive, or become incapable of procreation. Plants cannot continue the natural ripening or aging process. All these effects are beneficial to the consumer and the food industry alike.[4] The amount of energy imparted for effective food irradiation is low compared to cooking; even at a typical dose of 10 kGy, most food, which is (with regard to warming) physically equivalent to water, would warm by only about 2.5 °C.
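The warming figure above can be checked directly: an absorbed dose in gray is defined as joules per kilogram, so for water-like food the temperature rise is simply the dose divided by the specific heat capacity of water.

```python
# Check of the irradiation warming figure: a 10 kGy dose deposited in
# water-like food. 1 Gy = 1 J/kg by definition.

DOSE = 10_000   # 10 kGy, expressed in J/kg
C_WATER = 4186  # specific heat capacity of water, J/(kg*K)

delta_t = DOSE / C_WATER
print(f"Temperature rise: about {delta_t:.1f} degC")
```

The result, roughly 2.4 degrees, matches the "about 2.5 °C" quoted in the text.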
What distinguishes processing food by ionizing radiation is that the energy deposited per atomic transition is very high: it can cleave molecules and induce ionization (hence the name), which cannot be achieved by mere heating. This is the reason for new beneficial effects, but at the same time for new concerns. The treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids such as milk. However, the use of the term "cold pasteurization" to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. Food irradiation is currently permitted by over 40 countries, and volumes are estimated to exceed 500,000 metric tons annually worldwide.[5][6][7] Food irradiation is essentially a non-nuclear technology: it relies on ionizing radiation, which may be generated by electron accelerators (with conversion into bremsstrahlung) but may also use gamma rays from nuclear decay. There is a worldwide industry for processing by ionizing radiation, the majority, by number of facilities and by processing power, using accelerators. Food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc.
What are the Pros and Cons of Nuclear Energy?

Posted: Dec 13th, 2008

The application of nuclear reactors as our main power source for the future is a huge subject of debate, known as the nuclear debate. The generation of nuclear power from nuclear fuel for civilian purposes is a quest that 21 companies are taking on for the first time since 1973. The Nuclear Regulatory Commission reports they will seek permission to build 34 power plants from New York to Texas. Multibillion-dollar investments that were riding on the choice of an energy source are now being funneled into new nuclear energy projects costing several billion dollars per plant. Supporters claim new nuclear plants are needed to meet the variable demand for energy to be stored and released at different times, supplying what is known as base power. Hydroelectricity comes close, with its man-made dam control that allows us to release more power as needed, but since the natural conditions must be in place, the potential of stored nuclear power is much greater. Nuclear energy supporters claim backup sources are necessary with other forms of energy like wind and solar because they fail to produce the constant supply, or surplus, of energy that is offered by nuclear power. The primary environmental impacts of nuclear power come from uranium mining, radioactive emissions and waste heat. The greenhouse gas emissions produced through the nuclear fuel cycle are only a fraction of those produced by fossil fuels. However, new nuclear power plants are considered unfavorable by anti-nuclear organizations because of the initial cost of constructing them and the fact that a new plant will take ten years to build. Because each plant costs several billion US dollars, it is hard to imagine that money will be left over for research which
could make plants cheaper and more efficient. To get an idea of the scope of building that would be necessary if we wanted to count on getting 80% of our energy from nuclear fission: we would need thousands of new plants. Nuclear development on that scale is therefore conceivable only if it is backed by inappropriately large economic subsidies, in the form of taxpayer-funded research and development, risk guarantees, and public expenditures on security. The decommissioning of a nuclear facility has unforeseen potential costs, as we do not know what it may cost to dispose safely of the nuclear waste, and taxpayers might pay for this risk. With new nuclear plant building beginning again, advocates of alternative energy development are also worried about the lack of research and development for other power sources. Because of the massive power potential of nuclear energy, there is a danger of a lock-in effect, or the creation of market-entry barriers for other sources of energy like solar and wind. Other competing energy sources still receive large direct production subsidies and tax breaks in many nations. As long as subsidies continue to be given to alternative energy sources while we enter a new ten-year nuclear plant construction period, energy solutions can come from many alternative sources, both corporate and homespun, yet none with as much energy potential, or on as massive a scale, as nuclear energy development.
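The "thousands of new plants" figure can be reproduced with a back-of-envelope estimate. All inputs below are illustrative assumptions (a rough figure for world electricity use, a 1 GW plant, a 90% capacity factor), not sourced data.

```python
# Back-of-envelope: how many 1 GW reactors would it take to supply 80%
# of world electricity? All input figures are illustrative assumptions.

WORLD_ELECTRICITY_TWH = 20_000  # assumed annual world electricity use, TWh
TARGET_SHARE = 0.80             # fraction to be supplied by fission
PLANT_GW = 1.0                  # assumed plant size
CAPACITY_FACTOR = 0.90          # assumed fraction of the year at full output
HOURS_PER_YEAR = 8760

# Annual output of one plant, in TWh (GW * hours / 1000).
plant_twh = PLANT_GW * CAPACITY_FACTOR * HOURS_PER_YEAR / 1000
plants_needed = WORLD_ELECTRICITY_TWH * TARGET_SHARE / plant_twh
print(f"Roughly {plants_needed:.0f} plants")
```

Under these assumptions the answer lands around two thousand plants, consistent with the "thousands" claimed in the text.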
Nuclear Power From the Era of the Baby Boomer

Posted: Jan 16th, 2009
Many miss the point in attributing technological breakthroughs solely to the Information Age. In fact, most of the world's unprecedented discoveries occurred in the Baby Boomer period, when people escalated in number and economic output. Among the myriad scientific innovations the recent age benefits from is the field of energy, or power consumption. The baby boomer period contributed significantly when, for the first time, nuclear power enabled electricity generation. It was in the USSR that the world saw the debut of the first nuclear power plant to produce electricity for a power grid, with an electric capacity of around 5 megawatts. The project, the Obninsk Nuclear Power Plant, came eight years after the start of the boomer years in 1946, and was soon followed by others. The United Nations, still in its formative stage, held its "First Geneva Conference" in 1955 to tackle the nuclear power issue. A large host of scientists and engineers adept at nuclear
technology pursued initiatives to further scientific exploration. The combined efforts of world players led to the establishment of relevant global nuclear energy bodies. Two years after the first UN conference, EURATOM was established together with the European Economic Community, now known as the European Union, manifesting the increasing significance of nuclear power, an issue that was a sure-fire topic for the years to come with the creation of the International Atomic Energy Agency (IAEA) in 1957. Nuclear power stations began to flourish in the 1950s, starting with the world's first commercial nuclear power station at Sellafield, England: Calder Hall. When the station opened in 1956 it had a capacity of only 50 MW, but it later reached a capacity of 200 MW. In the United States, the nuclear power trend found its support with the foundation of the Shippingport Reactor, the first commercial nuclear generator, launched in 1957 and based in Pennsylvania. When nuclear power was developed, the U.S. Navy was the first to see the opportunity of tapping the promise of nuclear power for propelling submarines and aircraft carriers. Admiral Hyman Rickover, credited with pioneering nuclear marine propulsion, was actively engaged in the Shippingport Reactor. The electricity generated from nuclear power was one of the finest legacies left by the boomer period. With the sophisticated technology and incredible human resources of today, it is likely we can maximize the benefits of nuclear power and continue what the preceding generations started. OK, fellow boomers: how many of you remember our classroom drill where we were instructed to protect ourselves from an atomic attack by hiding under our desks? Hmm, that would have been very effective. Do you have a personal memory of the Cold War and the introduction of nuclear energy? We'd love to hear it.
Economics of Nuclear Technology

Posted: May 9th, 2007

The Economics of Nuclear Power Electricity Generation

Nuclear technology can also be used to produce electricity, which is very important for the economic condition of a country. A nuclear plant can produce more electricity than a thermal or hydroelectric plant, and isotopes produced using nuclear technology are used by many chemical and pharmaceutical companies.
1) Nuclear power is cost-competitive with other forms of electricity generation, except where there is direct access to low-cost fossil fuels.
2) Fuel costs for nuclear plants are a minor proportion of total generating costs, though capital costs are greater than those for coal-fired plants.
3) In assessing the cost-competitiveness of nuclear energy, decommissioning and waste disposal costs are taken into account.

The relative costs of generating electricity from coal, gas and nuclear plants vary considerably depending on location. Coal is, and will probably remain, economically attractive in countries such as China, the USA and Australia, with abundant and accessible domestic coal resources, as long as carbon emissions are cost-free. Gas is also competitive for base-load power in many places, particularly using combined-cycle plants, though rising gas prices have removed much of the advantage. Nuclear energy is, in many places, competitive with fossil fuel for electricity generation, despite relatively high capital costs and the need to internalise all waste disposal and decommissioning costs. If the social, health and environmental costs of fossil fuels are also taken into account, nuclear is outstanding.

External costs

The report of a major European study of the external costs of various fuel cycles, focusing on coal and nuclear, was released in mid-2001: ExternE. It shows that in clear cash terms nuclear energy incurs about one tenth of the costs of coal. The external costs are defined as those actually incurred in relation to health and the environment which are quantifiable but not built into the cost of the electricity. If these costs were in fact included, the EU price of electricity from coal would double and that from gas would increase by 30%. These figures do not attempt to include the effects of global warming. The European Commission launched the project in 1991 in collaboration with the US
Department of Energy, and it was the first research project of its kind "to put plausible financial figures against damage resulting from different forms of electricity production for the entire EU". The methodology considers emissions, dispersion and ultimate impact. With nuclear energy, the risk of accidents is factored in along with high estimates of radiological impacts from mine tailings (waste management and decommissioning being already within the cost to the consumer). Nuclear energy averages 0.4 euro cents/kWh, much the same as hydro; coal is over 4.0 cents (4.1-7.3); gas ranges 1.3-2.3 cents; and only wind shows up better than nuclear, at 0.1-0.2 cents/kWh average.

Fuel costs are one area of steadily increasing efficiency and cost reduction. For instance, in Spain nuclear electricity cost was reduced by 29% over 1995-2001. This involved boosting enrichment levels and burn-up to achieve a 40% fuel cost reduction. Prospectively, a further 8% increase in burn-up will give another 5% reduction in fuel cost.

The cost of fuel

From the outset the basic attraction of nuclear energy has been its low fuel costs compared with coal-, oil- and gas-fired plants. Uranium, however, has to be processed, enriched and fabricated into fuel elements, and about two thirds of the cost is due to enrichment and fabrication. Allowances must also be made for the management of radioactive spent fuel and the ultimate disposal of this spent fuel or the wastes separated from it. But even with these included, the total fuel costs of a nuclear power plant in the OECD are typically about a third of those for a coal-fired plant and between a quarter and a fifth of those for a gas combined-cycle plant.
Comparing electricity generation

For nuclear power plants, any cost figures normally include spent fuel management, plant decommissioning and final waste disposal. These costs, while usually external for other technologies, are internal for nuclear power. Decommissioning costs are estimated at 9-15% of the initial capital cost of a nuclear power plant. But when discounted, they contribute only a few percent to the investment cost and even less to the generation cost. In the USA they account for 0.1-0.2 cent/kWh, which is no more than 5% of the cost of the electricity produced. The back-end of the fuel cycle, including spent fuel storage or disposal in a waste repository, contributes up to another 10% to the overall costs per kWh, or less if there is direct disposal of spent fuel rather than reprocessing. The $18 billion US spent fuel program is funded by a 0.1 cent/kWh levy. French figures published in 2002 show (EUR cents/kWh): nuclear 3.20, gas 3.05-4.26, coal 3.81-4.57. Nuclear is favourable because of the large, standardised plants used. The cost of nuclear power generation has been dropping over the last decade. This is because
of declining fuel (including enrichment), operating and maintenance costs, while the plant concerned has been paid for, or at least is being paid off. In general the construction costs of nuclear power plants are significantly higher than for coal- or gas-fired plants because of the need to use special materials and to incorporate sophisticated safety features and back-up control equipment. These contribute much of the nuclear generation cost, but once the plant is built the variables are minor. In the past, long construction periods have pushed up financing costs. In Asia construction times have tended to be shorter; for instance, the new-generation 1300 MWe Japanese reactors which began operating in 1996 and 1997 were built in a little over four years. Overall, OECD studies in the 1990s showed a decreasing advantage of nuclear over coal. This trend was largely due to a decline in fossil fuel prices in the 1980s and easy access to low-cost, clean coal or gas. In the 1990s, gas combined-cycle technology with low fuel prices was often the lowest-cost option in Europe and North America. But the picture is changing.

Future cost competitiveness

The OECD does not expect investment costs for new nuclear generating plants to rise as advanced reactor designs become standardised. The future competitiveness of nuclear power will depend substantially on the additional costs which may accrue to coal generating plants. It is uncertain how the real costs of meeting targets for reducing sulphur dioxide and greenhouse gas emissions will be attributed to fossil fuel plants. Overall, and under current regulatory measures, the OECD expects nuclear to remain economically competitive with fossil fuel generation, except in regions where there is direct access to low-cost fossil fuels. In Australia, for example, coal-fired generating plants are close to both the mines supplying them and the main population centres, and large volumes of gas are available on low-cost, long-term contracts.
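The earlier observation that discounting shrinks decommissioning costs to a few percent of investment cost is easy to demonstrate. This sketch assumes a decommissioning charge of 12% of capital cost (within the 9-15% range quoted), falling due after an assumed 40-year plant life at an assumed 5% annual discount rate.

```python
# Present value of a decommissioning charge due at end of plant life.
# The 12% share is within the range quoted in the text; the 40-year
# life and 5% discount rate are illustrative assumptions.

def present_value(amount, rate, years):
    """Discount a future amount back to today at a constant annual rate."""
    return amount / (1 + rate) ** years

decommissioning_share = 0.12  # fraction of initial capital cost
pv_share = present_value(decommissioning_share, rate=0.05, years=40)
print(f"Discounted: {pv_share:.1%} of capital cost")
```

Under these assumptions the 12% charge discounts to under 2% of the capital cost, matching the "only a few percent" claim.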
A 1998 OECD comparative study showed that at a 5% discount rate, in 7 of 13 countries considering nuclear energy, it would be the preferred choice for new base-load capacity commissioned by 2010 (see table below). At a 10% discount rate the advantage over coal would be maintained only in France, Russia and China.

FACTORS FAVOURING URANIUM

Uranium has the advantage of being a highly concentrated source of energy which is easily and cheaply transportable. The quantities needed are very much less than for coal or oil. One kilogram of natural uranium will yield about 20,000 times as much energy as the same amount of coal. It is therefore intrinsically a very portable and tradeable commodity. The fuel's contribution to the overall cost of the electricity produced is relatively small, so even a large fuel price escalation will have relatively little effect. For instance, a doubling of the 2002 U3O8 price would increase the fuel cost for a light water reactor by 30% and the electricity cost by about 7% (whereas doubling the gas price would add 70% to the price of electricity).

REPROCESSING & MOX
There are other possible savings. For example, if spent fuel is reprocessed and the recovered plutonium and uranium are used in mixed oxide (MOX) fuel, more energy can be extracted. The costs of achieving this are large, but are offset by MOX fuel not needing enrichment, and particularly by the smaller amount of high-level waste produced at the end. Seven UO2 fuel assemblies give rise to one MOX assembly plus some vitrified high-level waste, resulting in only about 35% of the volume, mass and cost of disposal.

(Chart, not reproduced here: generation costs for different fuel costs (fossil fuels) or lead times (nuclear plants), assuming a 5% discount rate, 30-year life and 70% load factor. While the figures are out of date, the comparison remains relevant. The key factor for fossil fuels is the high or low cost of fuel (top portion of bars), whereas nuclear power has a low proportion of fuel cost in total electricity cost and the key factor is the short or long lead time in planning and construction, hence investment cost (bottom portion of bars). Increasing the load factor thus benefits nuclear more than coal, and both of these more than oil or gas. OECD IEA 1992.)
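The fuel-price sensitivity quoted earlier (doubling the U3O8 price raises a light water reactor's fuel cost by about 30% and its electricity cost by about 7%) follows from the small cost shares involved. The two shares below are assumptions back-calculated to be consistent with those figures, not sourced values.

```python
# Why fuel price swings barely move nuclear electricity prices.
# Both cost shares are assumptions chosen to be consistent with the
# quoted 30% / 7% sensitivity, not sourced data.

ORE_SHARE_OF_FUEL = 0.30   # uranium ore as a share of total fuel cost
FUEL_SHARE_OF_ELEC = 0.23  # fuel as a share of total electricity cost

ore_increase = 1.00        # doubling the ore price means +100%
fuel_increase = ore_increase * ORE_SHARE_OF_FUEL
elec_increase = fuel_increase * FUEL_SHARE_OF_ELEC
print(f"fuel cost +{fuel_increase:.0%}, electricity cost +{elec_increase:.1%}")
```

Because each step multiplies by a small share, a 100% ore price shock decays to a single-digit electricity price change; for gas, where fuel dominates generating cost, the same shock passes through almost directly.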
What You Should Know About Green Energy

Posted: May 15th, 2007

Green energy refers to the use of power that is not only more efficient than fossil fuel but friendly to the environment as well. Green energy is generally defined as coming from energy sources that don't pollute and are renewable. There are several categories of green energy: anaerobic digestion, wind power, geothermal power, small-scale hydropower, biomass power, solar power and wave power. Waste incineration can even be a source of green energy. Nuclear power plants claim that they produce green energy as well, though this source is fraught with controversy, as we all know. While nuclear energy may be sustainable, may be considered renewable and does not pollute the atmosphere while it is producing energy, its waste does pollute the biosphere as it is released. The transport, mining and other phases before and after the production of nuclear energy do produce and release carbon dioxide and similar destructive greenhouse gases. When we read of green energy, therefore, we rarely see nuclear power included. Those who support nuclear energy say that nuclear waste is not, in fact, released into our earth's biosphere during its normal production cycle. They stress as well that the carbon dioxide that nuclear energy production releases is comparable, in terms of each kilowatt-hour of electricity, to such sources of green energy as wind power. As an example of green energy production, the average wind turbine, such as the one in Reading, England, can produce enough energy daily to be the only energy source for 1,000 households. Many countries now offer household and commercial consumers the option of total use of green
energy. They do this in one of two ways. Consumers can buy their electricity from a company that uses only renewable green energy technology, or they can buy from their general supplier, such as the local utility company, which then buys from green energy resources only as much of a supply as consumers pay for. The latter is generally a more cost-efficient way of supplying a home or office with green energy, as the supplier can reap the economic benefits of a mass purchase. Green energy generally costs more per kilowatt-hour than standard fossil fuel energy. Consumers can also purchase green energy certificates, which are alternately referred to as green tags or green certificates. These are available in both Europe and the United States, and are the most convenient method for the average consumer to support green energy. More than 35 million European households and one million American households now buy these green energy certificates. While green energy is a great step in the direction of keeping our environment healthy and our air as pollutant-free as possible, it must be noted that no matter what the energy source, it will negatively impact the environment to some extent. Every energy source, green or otherwise, requires equipment and energy to produce, and that production creates pollution during manufacture. Green energy's impact is minimal, however.