Fall 2003

Number 108

CHALK RIVER
A History of Nuclear Energy in Canada

- Storing Light - pg 6
- Colour Television - pg 7
- The 64 Bit Revolution - pg 9
- Routers - An Introduction to Linux - pg 10

Nuclear Energy in Canada
by Bruce Torrie, Dept. of Physics, University of Waterloo

How did Canada come to have the first nuclear reactor outside the United States? Why was the reactor a heavy water-natural uranium type rather than one of the other possibilities? Why is half of the electrical power in Ontario generated by nuclear stations? Who were the driving forces behind this development? The following is mainly a historical account starting with scientific discoveries in the world at large and ending with the development of nuclear power in Canada.

First let me explain what heavy water and natural uranium are. Light water is H2O with the hydrogen atoms having nuclei consisting of a single proton. There are heavier isotopes of hydrogen and the one of interest here is deuterium, with a nucleus consisting of a proton plus a neutron. Heavy water is D2O, with the added weight coming from the two additional neutrons not found in light water. Ordinary water is made up of 1 part of heavy water in 6500 parts of light water. Natural uranium is made up of two main isotopes with either 143 or 146 neutrons and 92 protons, giving U235 (143+92=235) and U238 (146+92=238), with the former making up 0.7% of the total.

In the early 1930’s there were stockpiles of both heavy water and natural uranium but the knowledge to make use of them did not exist. Heavy water was concentrated at a hydroelectric plant in Norway as a by-product of fertiliser production. Uranium was separated from radium mined in northern Canada for use in the radiation treatment of cancers.

During the 1930’s a number of discoveries showed that nuclear fission was possible. The neutron was discovered by Chadwick, working in Cambridge, England, in 1932. By 1938 it was known that some of the isotopes at the high end of the periodic table were unstable or metastable and that they could become more stable by decaying to isotopes in the middle of the periodic table with the emission of neutrons and other particles, a process called fission. The resulting isotopes are generally radioactive and decay in one or more steps to stable isotopes. For our story the most significant discovery was that the neutrons emitted by uranium, a heavy element, could induce fission on collision with neighbouring uranium atoms. This occurs most effectively when the neutrons have been reduced in energy as a result of collisions with a moderating material, a discovery made by Lew Kowarski, Frédéric Joliot-Curie, Hans von Halban and Francis Perrin working in Paris. Kowarski and Halban are part of the Canadian story.


Where does the heavy water come into the story? The fission neutrons must be slowed down and basic collision theory tells us that the most effective moderator is made up of particles having the same mass as the neutrons. Therefore hydrogen should be the perfect moderator since a neutron and a proton have almost the same mass and the electrons found in an atom are very light. Unfortunately hydrogen readily absorbs neutrons, i.e. it has a high absorption cross-section for neutrons to use the proper jargon, and it has undesirable chemical properties, meaning it explodes readily. Since hydrogen isn’t satisfactory, a search was made through the light atoms and molecules resulting in the identification of carbon (graphite) and heavy water.
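The claim that the best moderator has nuclei about as heavy as the neutron itself follows from elastic collision kinematics. As a minimal illustration (my sketch, not part of the original article), the fraction of its energy a neutron keeps after a head-on elastic collision with a stationary nucleus of mass number A is ((A-1)/(A+1))^2; the short C++ program below evaluates this for hydrogen, deuterium and carbon. Real moderation averages over all collision angles, so these head-on numbers only indicate the trend.

    #include <cstdio>

    // Fraction of kinetic energy a neutron retains after one head-on elastic
    // collision with a stationary nucleus of mass number A (neutron mass = 1).
    double retained(double A) {
        return ((A - 1.0) / (A + 1.0)) * ((A - 1.0) / (A + 1.0));
    }

    int main() {
        const struct { const char* name; double A; } nuclei[] = {
            {"hydrogen (H)", 1.0}, {"deuterium (D)", 2.0}, {"carbon (C)", 12.0}
        };
        for (const auto& n : nuclei) {
            // Hydrogen can stop a neutron in a single head-on collision (retains ~0%),
            // deuterium retains ~11%, carbon ~72% -- lighter nuclei moderate faster.
            std::printf("%-14s retains %.1f%% of the neutron energy per head-on collision\n",
                        n.name, 100.0 * retained(n.A));
        }
        return 0;
    }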

Fig. 1: NRC Subcritical Assembly

Only the uranium isotope U235 is a fissile material. Fission of each nucleus produces 2.5 neutrons on average and one of those neutrons must be captured by another U235 nucleus to have a chain reaction. The first efforts to produce a chain reaction used natural uranium as the fuel and graphite as the moderator. Experiments took place in Germany, in Canada and in the U.S.A. Since this is a Canadian story, the Canadian pile or reactor at the National Research Council is shown in Fig. 1 and the principal investigator was George Laurence. The neutron flux inside the pile went up and down in response to the neutron source being moved in and out. If the flux rose to a high enough level then a chain reaction could occur and the pile would go ‘critical’. Both the Canadian and German efforts failed because the moderators contained impurities that absorbed too many neutrons but the Americans, under Enrico Fermi, succeeded because they were able to obtain high purity graphite. Criticality was achieved in December of 1942.

In the meantime, the French group was pursuing the heavy water route. Arrangements were made to fly the heavy water from Norway to France. This was a proper cloak and dagger operation with parallel flights going from Norway to Scotland and from Norway to Amsterdam.


The Nazis diverted the Amsterdam flight to Hamburg but the heavy water went safely to Scotland and on to Paris. It was soon realized that Paris was not a safe place so the journey continued to Bordeaux and on to England and Cambridge, where experiments continued with the involvement of Halban and Kowarski.

The Allies were concerned about the heavy water plant in Norway since the Nazis had overrun Norway in the early days of WWII. A British paratrooper group using gliders tried to knock out the plant in late 1942 but the lead glider crashed and the operation was generally a mess. A second attempt was made by a Norwegian group in February of 1943. They scaled the mountain to the plant, broke through the fence and left their explosives. This successful event is celebrated as a great achievement by the Norwegians, even to the present day, but the plant was back in operation in a very short time. General Groves, military head of the American nuclear program, was disgusted with the incompetence of the Europeans so he organized a massive bombing raid with the assistance of the British. 400 bombs were dropped with a minimum of damage and much loss of life, but the operation convinced the Nazis to move the heavy water production to Germany. The resulting time loss effectively put the Nazis out of the atomic bomb race, particularly since the last shipment of heavy water from Norway was sunk by a Norwegian saboteur.

Back in Britain there was a realization that the Americans were moving rapidly ahead and it would make sense to combine the American and British efforts. At first it was suggested that the British scientists join the American group in Chicago, but the British group contained a number of scientists from the occupied countries in Europe and there were security concerns. This was in spite of the fact that the American effort also involved people like Enrico Fermi, who came from Italy. Eventually it was decided to have a joint Canadian-British group in Montreal under the direction of Halban and with collaboration with the Americans. The Montreal project was part of the National Research Council of Canada and the NRC president, C.J. Mackenzie, was a great promoter of the project and was looking ahead to peacetime uses of nuclear energy. In Montreal, work continued on heavy water and uranium, and graphite and uranium, but the project had no clearly defined mission. Halban was not a satisfactory director, co-operation with the Americans was always iffy, and the Americans were still moving rapidly ahead, mostly on their own. A partial resolution of this dilemma came at the Quebec conference of August 1943 when Churchill and Roosevelt, the British and American leaders, got together. It was agreed that the Montreal labs would concentrate on the development of a natural uranium-heavy water reactor.


The Americans already had a 300 kW heavy water reactor in operation at the Argonne labs near Chicago but the new mission drew on the expertise of Halban and others in Montreal. Even at this time there was concern about whether there was enough uranium to support a large scale nuclear power program. Uranium is not the only fissile material. When U238 is bombarded with neutrons it converts into various isotopes of plutonium, with Pu239 being the most useful as a reactor fuel and for making bombs. Also the fissile isotope U233 can be produced by bombarding Th232 with neutrons. Thorium is much more plentiful than uranium so the supply of fissile material was deemed to be adequate, but there is the problem of separating the Pu239 and U233 from the radioactive material extracted from a reactor. Fortunately, at least for the short term, the supply of natural uranium is greater than expected and the messy extraction process can be left for another day, if you forget about producing bombs.

Once the decision was made to build a natural uranium-heavy water reactor, it was necessary to find a home for the reactor. Various locations were considered and Chalk River was chosen for its relative isolation, proximity to a rail line, a highway and water in the Ottawa River, as well as being fairly close to the NRC laboratories in Ottawa. The current Chalk River plant, with its beautiful setting, is shown on the front cover. A new director of the Montreal project, John Cockcroft, was appointed in April of 1944 and, by summer of that year, on-site work was started on both the Chalk River plant and the residential town of Deep River located a few miles up river. The new reactor was called NRX for National Research X-perimental and at an early stage it was decided that this pilot plant should have its own pilot plant called ZEEP, for ‘Zero Energy Experimental Pile’. Kowarski came to Canada to be in charge of ZEEP, which became the first reactor operating outside the U.S. in September of 1945. It produced the huge power of 1 watt but served as a testing facility for materials and ideas that contributed to the construction and development of NRX.

In July of 1947, NRX went critical and eventually attained a flux that was five times larger than that of any of the other reactors in existence at the time. This high flux became important in at least a couple of respects. Powerful beams of neutrons could be extracted from the reactor and these allowed Bertram Brockhouse and co-workers to start their work investigating the properties of materials that led to his award of the Nobel Prize in Physics many years later. Also the Americans, under Admiral Hyman Rickover, were interested in developing a small reactor to act as a heat source for naval ship engines. NRX contained ‘test loops’ which were ideal for testing components of this compact heat source and work began in the early 1950’s.


Security issues arose once again in connection with this testing. U.S. legislation specified total secrecy and Rickover was obsessed with security. On the other hand, it is not a good idea to stick unknown objects into a reactor, so the NRX operators x-rayed one of the elements to be tested and deemed it to be safe. At the same time they learned some of the secrets of the new device. This caused a row but reason eventually prevailed. The naval reactor was a prototype for the light water reactors that became the standard for nuclear power generation in the U.S. Increasing the U235 content of natural uranium to a few percent increases the neutron supply and overcomes neutron absorption by the light water.

Meantime, the Canadian program as a whole was progressing. In September of 1946, John Cockcroft returned to Britain to take charge of the nuclear projects there and he was replaced by W.B. Lewis as director of Chalk River. Lewis remained as director or equivalent for many years until his retirement in 1974. In April of 1952, Atomic Energy of Canada, a crown corporation, replaced NRC as the governing body of Chalk River. The vision of a world full of nuclear reactors drove the project forward and a bigger, better version of NRX called NRU, National Research Universal, was brought into operation in 1957. NRU had a higher flux, produced more power and could be fuelled continuously during operation. This feature, incorporated into the power reactors that followed, gave them a very high duty cycle.

A pilot project for the power reactors, NPD or Nuclear Power Demonstration, was built at Rolphton, near Deep River. The biggest change in design at this point was shifting from a vertical to a horizontal arrangement. Fuel bundles were shoved into channels mounted in a horizontal cylinder and the bundles moved through the channels and out the other side. Burnup rates are highest in the centre so this arrangement ensured a more efficient use of the fuel. At the time there was a debate about pressurizing the whole cylindrical vessel, or calandria, or pressurizing only the channels. A high pressure raises the boiling point of water and leads to the production of steam in the heat exchanger attached to the electrical generators; see Fig. 2 below. The final design had pressurized channels.

Fig. 2: CANDU


Ontario Hydro became a partner in the development of nuclear power because it anticipated a large increase in power consumption after WWII and there were no large convenient additional sources of hydro power. Expensive coal was being brought in from the U.S., with concerns about availability in a crisis. Nuclear power seemed like the answer and NPD was rapidly followed by the Douglas Point reactor, and the Bruce and Pickering nuclear power stations. Reactors were also built in Quebec and New Brunswick and a second experimental site, a smaller version of Chalk River, was built at Whiteshell in Manitoba. CANDU (Canadian Deuterium Uranium) reactors now supply approximately 50% of the electricity in Ontario and the reactors have been sold to South Korea, Romania, India, Pakistan, Argentina and China. In the last year, two of the latest CANDU’s, each generating 900 megawatts electrical, have started operation in China. For comparison, the wind generator on the Toronto waterfront generates 1.8 MW but, of course, only when the wind is blowing. In total there are over 400 power reactors of all types operating throughout the world.

Two questions are usually asked in connection with the future of CANDU’s and nuclear power generally. One has to do with the cost of nuclear electricity and the second with the disposal of radioactive waste. The answer to the first question is elusive since there are many components to the answer. Much government money has gone into the development of power reactors. More money has gone into correcting the teething and refurbishing problems with the early reactors. Reactors have low operating costs but high capital costs, so it is necessary to keep them running as much as possible to pay down the capital costs. For political and technical reasons this has not always happened. Wastes are currently stored at the nuclear power stations but at some time in the future they will need to be moved to more permanent storage in stable portions of the Canadian Shield. The reactors will also have to be decommissioned at some point and the radioactive components disposed of. My reading of the literature has led me to believe that if a new reactor was installed at the present time and kept running as much of the time as possible, then the electricity produced would be cheaper than that produced by any of the alternatives. It is doubtful if the huge initial investment will be recovered in the short term. The cost of waste disposal is small compared to other costs and waste disposal methods have been well researched. Public perception of risk, however, remains a problem.

References
Robert Bothwell, Nucleus: The History of Atomic Energy of Canada Limited
Atomic Energy of Canada Limited, Canada Enters the Nuclear Age


Video clip of a ball toss using a web camera showing parabolic free fall motion. Using video analysis, the position of the ball as a function of time can be extracted. Frame by frame, you can click on the ball and the x, y positions will be recorded automatically by Logger Pro. The overall scale of the video is set by dragging across a meter stick visible on the floor. Velocity and other quantities can also be graphed using the calculated columns feature of Logger Pro 3.2. Logger Pro 3.2 for LabPro offers video analysis tools and numerous enhancements.

• Analyze data directly from video.
• Synchronize and replay sensor data with movies.
• Includes all the features of Graphical Analysis.
• Draw a prediction before data collection for improved student learning.
• Collect Motion Detector data simultaneously with higher speed force data.
• Perform curve fits and modeling with user-entered functions.
• Save custom calibrations to your sensors for later automatic use.
• Collect simultaneous analog and rotary motion data.
• Import data from Texas Instruments and Palm OS® handhelds.
• Video analysis data can be synchronized with sensor data for comparison.
• Purchase of Logger Pro includes a school/college department site license.
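As an illustration of what a velocity "calculated column" does with the clicked-in ball positions, here is a minimal sketch (my own example, not Logger Pro code; the sample data and function name are invented) that estimates velocity from position-time pairs by central differences.

    #include <cstdio>
    #include <vector>

    // Estimate velocity at each interior sample with a central difference:
    // v[i] = (y[i+1] - y[i-1]) / (t[i+1] - t[i-1]).
    std::vector<double> velocity(const std::vector<double>& t, const std::vector<double>& y) {
        std::vector<double> v(t.size(), 0.0);
        for (size_t i = 1; i + 1 < t.size(); ++i)
            v[i] = (y[i + 1] - y[i - 1]) / (t[i + 1] - t[i - 1]);
        return v;
    }

    int main() {
        // Hypothetical ball-toss data: height y (m) sampled every 1/30 s (one video frame).
        std::vector<double> t, y;
        for (int i = 0; i <= 20; ++i) {
            double ti = i / 30.0;
            t.push_back(ti);
            y.push_back(1.0 + 3.0 * ti - 4.9 * ti * ti);   // y0 + v0*t - (g/2)*t^2
        }
        std::vector<double> v = velocity(t, y);
        for (size_t i = 1; i + 1 < t.size(); ++i)
            std::printf("t = %.3f s   v = %+.2f m/s\n", t[i], v[i]);
        // The printed velocity falls by about 9.8 m/s per second, as expected for free fall.
        return 0;
    }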



Storing Light
by Walter W. Duley, Dept. of Physics, University of Waterloo

The universe is a reservoir for radiation from near the beginning of time when, soon after the Big Bang, radiation and matter first became separate entities. Light left over from this epoch can now be detected only at radio wavelengths, as the expansion of the Universe over the intervening 12 billion years has shifted the electromagnetic spectrum of the Big Bang into the millimeter range. Today, this radio “noise” signaling the creation of the universe represents about ten percent of the total energy stored in the space between the stars in our galaxy.

With modern technology, it is now possible to create light at any wavelength, and the photons generated may find application in communication systems, as an enhancement to vision or as a medical therapy. Despite the ease with which light can be generated, there is still no way to store light and keep it in a container for future use. There are no bottles of light labeled “green”, “blue” and “red” whose photons can be extracted and mixed to form colours. Photons, once created, propagate away from their source and continue to travel until they are destroyed by absorption. At the Earth’s surface most illumination occurs with light from the Sun that is only eight minutes “old”. This sunlight disappears when it is absorbed in matter, which then re-emits part of the energy so gathered in the form of longer wavelength photons. This conversion of high-energy photons to “heat radiation” is a component of a cycle that leads to the inevitable increase in entropy in the Universe. Virtually everything we see, apart from the stars, is illuminated by light generated less than eight minutes ago and, if one is indoors, by light created within the last several nanoseconds. The measured properties of light do not seem to depend on how “old” light is!

The disappearance of a photon of light, together with its energy, on encountering matter is not a prediction of classical physics. Instead, classical theory predicts that light should not be exchanged in “packages”, but rather continuously via an energy-sharing process. By analogy, when a hot gas is enclosed in a low temperature container, collisions between gas atoms and the wall lead to an exchange of energy. The gas and container eventually come to an equilibrium, whereby both reach the same temperature and in which the gas has lost thermal energy. This energy is released incrementally as atoms transfer part of their kinetic energy to the walls on each collision, but there is no requirement that all excess energy be transferred in one collision. Unlike a gas atom, a photon hitting the wall would be absorbed, transmitted or reflected, so that its energy either disappears all at once or not at all.


Under these circumstances, the energy of individual photons is indivisible and, in contrast to an atomic gas, the photon itself disappears as energy is transferred to the wall. Any hope of storing light then means minimizing the interaction between light and matter. This can be accomplished, although rather inefficiently, by constructing a container whose walls are highly reflective. If the loss of photons on each reflection is minimized, storage can indeed be enhanced, but light waves reflecting back and forth inside such a shiny container still suffer an enormous number of reflections per second. In each interaction with the wall a tiny fraction of the total number of existing photons disappears and the light intensity will gradually decrease with each reflection. For example, if the sides of the container are separated by one meter, then a beam of light propagating back and forth between these walls would experience 300 million reflections per second. This results in an enormous attenuation. To illustrate, even if the walls had a reflectivity of 0.99999, a beam of light would be attenuated by a factor of about 1 in 10^1300 after this number of reflections!

Two other ways to enhance the storage capacity for light are to increase the distance between the walls or to slow light down in the intervening medium. The latter can be accomplished by filling the container with a nonabsorbing medium having a high refractive index, but this will introduce other types of losses. In the laboratory, optical delay lines can be constructed such that light pulses are stored between mirrors for up to several microseconds. Non-linear effects have also been developed that transiently slow the speed of pulses of light to less than 10 m/s.

Surprisingly, light can also be stored in the human body for a very small fraction of a second, and this has many potential uses in medical diagnostics. Many body tissues are translucent to visible and near infrared radiation, something one can verify by holding the hand close to a desk lamp. Light passing into the body is almost completely absorbed, but only after it has been scattered several times. When scattering occurs, photons become deflected from their original direction, effectively slowing their passage through tissue. In addition, the interaction between light and the various molecular chemical groups can be imprinted on this scattered light, so a spectral analysis of the time-delayed light contains information about the type of biological tissue encountered. If a light pulse of short duration is introduced through the skin or internally via an optical fiber, the time it takes this pulse to leave the body can be measured. Any time delay is a measure of the path taken by light inside the body and the composition of the materials along its trajectory. This information can be used as a diagnostic of tissue density and composition. Since photon storage times within the body are measured in nanoseconds, the initial pulse must be as short as possible for this technique to be effective.
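The attenuation figure quoted above can be checked with a few lines of arithmetic. Here is a minimal sketch (my illustration, not from the article) that works in base-10 logarithms so the absurdly small surviving fraction does not underflow a double.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double c = 3.0e8;            // speed of light, m/s
        const double L = 1.0;              // wall separation, m
        const double reflectivity = 0.99999;

        double reflections_per_second = c / L;   // about 3 x 10^8
        // Surviving fraction after N reflections is R^N; take log10 to avoid underflow.
        double log10_survival = reflections_per_second * std::log10(reflectivity);

        std::printf("reflections per second: %.3g\n", reflections_per_second);
        std::printf("log10 of surviving fraction after 1 s: %.0f\n", log10_survival);
        // Prints roughly -1303, i.e. the beam is attenuated by a factor of about 10^1300,
        // in line with the estimate in the article.
        return 0;
    }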


Colour Television
by Guenter Scholz, Dept. of Physics, University of Waterloo

In issue 98 of ‘Phys 13 news’ we described the DVD (Digital Versatile Disc) that has recently become increasingly ubiquitous. One probable reason for this is that most video experts claim: “In comparisons between the master tape and DVD, we see virtually no differences”. To understand the reason for this we need to understand how video information is stored and transmitted. In the past our predominant entertainment formats were, in increasing quality, broadcast TV, VHS, Beta, LaserDisc and S-VHS. After introducing the early history of black and white television in issue 93 of ‘Phys 13 news’, let’s continue as promised with the development of colour television and in particular the associated NTSC, National Television System Committee, format (a.k.a. ‘Never Twice the Same Colour’) developed in the USA and implemented in Canada in 1966.

Colour television differs from black and white in that inside the picture tube there are now three ‘guns’ firing electrons at the phosphor coated TV screen. The screen is divided into tiny individual ‘dot pitch’ areas that are separated by a physical grid called a ‘shadow mask’. In case you’re wondering why the term ‘dot pitch’ is used, I believe that the explanation is as follows. Remember that any kind of ‘field’ such as soccer or baseball is described in the Queen’s English as a ‘pitch’, and the field on a TV screen where the tiny dots of light are created is therefore a dot pitch. Each of these tiny screen areas in a colour TV is further subdivided into three areas containing different phosphors that are made to emit light either in the red (R), green (G) or blue (B) when struck by electrons. Each gun, therefore, targets only one of the ‘RGB’ colour phosphors within each dot pitch. A small area of a picture is called a picture element, or ‘pixel’ for short, and, normally, the dot pitch area is approximately matched to the pixel area for optimal picture resolution without wasting the costly dot pitch resolution that a screen has available. I am sure we’ve all had the experience on our computer monitor when the dot pitch and pixels were mismatched and produced an inappropriately resolved image. From far enough away, say about 1 foot, each of these primary RGB colour pixels (remember, they can be more or less than the dot pitch) that the electron guns excite in appropriate proportions will coalesce in our eyes and yield an endless colour palette of hues, of which we can differentiate about 7 million.

Generating the colour signal for each gun, however, gets a bit tricky. It would have been reasonable to record a colour picture with three cameras, each with a red, green or blue filter in front of the lens, and then transmit (i.e. broadcast or record) these three RGB colour channels to control, respectively, each of the three TV guns.


In fact, this is how your computer monitor operates and, for a quality colour picture, you will appreciate the importance of the RGB input on high-quality monitors after reading below about the electronic contortions required to make a colour picture black and white compatible.

Back in the 1950’s everyone had black and white TV’s and compatibility with these monochrome TV sets was one of the main concerns for the developers of colour TV. Consequently, rather than simply broadcast the RGB channels, the colour information had to be formatted somehow so that it was also compatible with the black and white TV standard. Three such formats were developed in various parts of the world. In North America it was the NTSC format, PAL was jointly developed in England and Germany, and SECAM in France. Both of these European formats were also implemented in the late 1960’s and create the picture with 625 horizontal scan lines at 50 fields per second. The PAL, Phase Alternating Line, format is the most widely used format in the world and is humorously also known as ‘Perfection At Last’ or, because of the cost of the enormously complex circuitry, as ‘Pay A Lot’. The SECAM, Système Électronique pour Couleur Avec Mémoire, is alternatively known as ‘Something Essentially Contrary to the American Method’ or ‘SEcond Colour Always Magenta’.

In essence, all of these formats are carefully set up in an attempt to preserve all the original monochrome information and then to add the colour information on top of that. The NTSC format scans the screen with 525 horizontal lines at 60 fields per second. The colour is added by placing three composite video signals into the 6 MHz bandwidth that was originally allocated to each of the black and white channels. These three composite video signals consist of one ‘Luminance’ and two ‘Chrominance’ channels. The luminance, or ‘Y’, channel contains the brightness and detail of the picture and, therefore, has all of the ‘monochrome’ information of the picture. It ideally occupies the first 4.3 MHz of the bandwidth and consists of a combination of RGB information weighted by our eye’s colour sensitivity according to:

Y = 0.30R + 0.59G + 0.11B

The two chrominance, or ‘Q’ and ‘I’, channels are colour difference signals and contain the colour information. Specifically:

Q = 0.21R - 0.52G + 0.31B
I = 0.60R - 0.28G - 0.32B

Both chrominance channels are modulated onto a separate 3.58 MHz sub-carrier frequency in quadrature to each other. Summing the Q and I signals yields the ‘C’ or chrominance signal.
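A minimal sketch of this weighting, using only the coefficients quoted above (the function and variable names are mine, not from any broadcast standard library): it converts an RGB triple into the Y, I and Q signals and also shows the saturation and hue interpretation of the chrominance pair discussed just below.

    #include <cmath>
    #include <cstdio>

    struct YIQ { double Y, I, Q; };

    // Weight an RGB triple (each component in 0..1) into luminance and chrominance
    // using the NTSC coefficients quoted in the article.
    YIQ rgb_to_yiq(double R, double G, double B) {
        YIQ s;
        s.Y = 0.30 * R + 0.59 * G + 0.11 * B;   // brightness seen by a black and white set
        s.I = 0.60 * R - 0.28 * G - 0.32 * B;   // colour difference signals...
        s.Q = 0.21 * R - 0.52 * G + 0.31 * B;   // ...modulated in quadrature on the sub-carrier
        return s;
    }

    int main() {
        YIQ s = rgb_to_yiq(1.0, 0.5, 0.2);                    // an arbitrary orange-ish pixel
        double saturation = std::sqrt(s.I * s.I + s.Q * s.Q); // magnitude of the chrominance
        double hue = std::atan2(s.Q, s.I);                    // phase of the chrominance
        std::printf("Y = %.3f  I = %.3f  Q = %.3f  saturation = %.3f  hue = %.3f rad\n",
                    s.Y, s.I, s.Q, saturation, hue);
        return 0;
    }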


The chrominance signal has the interesting property that its magnitude represents the picture’s colour saturation (the ‘colour’ adjust on your TV) and its phase represents the picture’s hue (the ‘tint’ adjust on your TV). A black and white TV receives only the luminance channel since this Y signal contains all the necessary information for the TV to display a monochrome picture. A colour TV receives all three channels (Y, Q and I) and then extracts the individual RGB information by appropriate (depending on the format) but always complicated electronic manipulation of these signals. Extracting the RGB information from the luminance and chrominance is difficult and the results are always less than perfect. The more sophisticated PAL system fares better in this respect when compared to NTSC and SECAM.

Early NTSC colour TV sets used a notch filter to limit the Y channel bandwidth from 4.3 MHz to 3.2 MHz to prevent chrominance from interfering with luminance information. One undesirable side effect of this filter is that it also reduces the realizable horizontal resolution from 330 to 240 lines. However, since video cameras in the early 1950’s only had a resolution of 225 lines, no one cared about the loss in resolution until later in the 1970’s. By the early 1980’s video cameras had improved so significantly, and consumer TV screens had become so much larger, that the resolution of consumer TV’s also needed to be improved to keep up with these developments. Consequently, simple ‘Comb Filters’ began to appear in up-market TV’s to provide a significantly improved, sharper TV picture.

Comb filters improve resolution by exploiting the fact that the sidebands containing chrominance and luminance information are of a different frequency and that a phase inversion of the chrominance signal occurs following every line scan. This allows colour and high frequency horizontal information (the ‘sharpness’ control on your TV) to be separated with, however, some loss of diagonal detail. In short, comb filters operate by subtracting the current line scan from a delayed subsequent line scan and thereby canceling out luminance information while preserving chrominance. If, on the other hand, the adjacent line scans are summed, the luminance is preserved and chrominance is canceled. This procedure works well as long as the adjacent lines have similar content. At colour edges, however, the information is radically different and annoying artifacts such as ‘hanging or crawling dots’ are created by the comb filter. Improved ‘Three-Line’ comb filters alleviate this annoyance by turning off the summing and subtraction of the line scans when adjacent lines are significantly different, and modern ‘3-D’ filters even look at information in the line scans of that area in the previous picture field. Today, the best comb filters are ‘Digital 3D’ filters that digitally store scan lines from the present and past picture fields for electronic manipulation and, although they do reduce NTSC artifacts tremendously, they never completely eliminate them.
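The sum-and-difference idea can be written down in a few lines. This is only a toy sketch of a two-line (1H) comb filter acting on already-digitized composite samples, with invented array names; a real TV does this with analog delay lines or, in the digital 3D filters mentioned above, with stored fields.

    #include <cstdio>
    #include <vector>

    // Two-line comb filter: because the NTSC chrominance sub-carrier inverts its phase
    // on each successive scan line, summing adjacent lines keeps luminance and cancels
    // chrominance, while differencing keeps chrominance and cancels luminance.
    void comb_filter(const std::vector<double>& line_prev,
                     const std::vector<double>& line_curr,
                     std::vector<double>& luma,
                     std::vector<double>& chroma) {
        luma.resize(line_curr.size());
        chroma.resize(line_curr.size());
        for (size_t i = 0; i < line_curr.size(); ++i) {
            luma[i]   = 0.5 * (line_curr[i] + line_prev[i]);
            chroma[i] = 0.5 * (line_curr[i] - line_prev[i]);
        }
    }

    int main() {
        // Hypothetical composite samples: constant luminance 0.6 plus a chroma component
        // that flips sign between the two lines (the phase inversion described above).
        std::vector<double> prev = {0.8, 0.4, 0.8, 0.4};
        std::vector<double> curr = {0.4, 0.8, 0.4, 0.8};
        std::vector<double> luma, chroma;
        comb_filter(prev, curr, luma, chroma);
        for (size_t i = 0; i < luma.size(); ++i)
            std::printf("sample %zu: luma = %.2f  chroma = %+.2f\n", i, luma[i], chroma[i]);
        // Prints luma = 0.60 for every sample and chroma of magnitude 0.20, as long as the
        // two lines carry similar picture content; at colour edges the assumption breaks down.
        return 0;
    }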


NTSC artifacts can only be avoided if the luminance and chrominance signals are kept separate, and that is exactly the purpose of the Super or ‘S’ output found on, for example, ‘S-VHS’ players and camcorders. Interestingly, most LaserDisc players use the composite NTSC signal format even though S-connectors are provided. This was a marketing ploy because, if the player is connected to the TV via the S-connector, it merely uses the LaserDisc player’s comb filter rather than the TV’s. In fairness, the player’s comb filter was often significantly better than that of a possibly older TV and therefore often improved the picture quality. In S-VHS players the chrominance and luminance signals are indeed recorded separately and the S-connector output to the TV will always give the best picture. Problems, however, still remain because even though luminance and chrominance are separate, they are modulated onto a single sub-carrier and consequently other NTSC weaknesses do still remain.

Ultimately it is desirable to avoid multiplexing information altogether and, instead, to transmit in component RGB form. The DVD does store video information digitally in component form and the superiority of the DVD picture, now completely free of noise (always present in analog media) and NTSC artifacts, is immediately obvious. As well, the DVD disk contains 5+1 channels of digital high quality sound. A DVD can provide 480 lines of resolution without any high frequency losses but, in practice, the output of consumer players is often steeply filtered and is down about 6 dB at 5.5 MHz (440 lines). The steep, ‘brick wall’ filter is needed to remove the quantization noise that results when the analog signal is reconstructed from the digital data. Higher quality oversampling video digital-to-analog converters (DAC’s) avoid the need for brick-wall filters and do provide the full 480 lines of resolution.

Further improvements to our video enjoyment will next come about with the advent of ‘HDTV’ (High Definition TV) sets, which will finally free us from the NTSC, black and white compatible, format limitations. As you might expect, HDTV broadcasts will provide a RGB signal. Moreover, HDTV will increase the picture resolution (pixel density) by a factor of almost 1000, but all of that is a story for a future ‘Phys 13 news’ issue.

For further, more detailed information and reading about colour TV’s, VCR’s and NTSC standards, as well as our eye’s perception of colour, visit the very comprehensive site at http://www.ntsc-tv.com. As well, visit the educational link there to http://www.williamson-labs.com/home.htm, which provides an elementary introduction to electronics (beginning with volts, amps and ohms all the way to transistors) in a fun and informative way by using everyday analogies and lots of ‘*.gif’s’ and ‘pop-ups’ that help in understanding.


The 64 Bit Revolution
by David Yevick, Dept. of Physics, University of Waterloo

While we will primarily focus in this series on tips and techniques for scientific programming, in this issue we consider the impending transition to 64 bit computing and look briefly at its impact on scientific software development. Recently several competing 64 bit strategies have emerged, with varying degrees of commercial success. IBM 64 bit PowerPC chips have been available in high-end RS/6000 UNIX workstations for several years and are employed in numerous scientific and industrial facilities – for example, a large $2M facility was donated several years ago to the Science Faculty at the University of Waterloo and has been expanded considerably over the last year.

Recently, a 64 bit RS/6000 chip has also become available in the G5 Apple personal computer (see Fig. 1), where it offers better performance than the Intel Xeon processors, which are designed for high-end systems. Other vendors such as SUN and SGI have also marketed 64 bit solutions. Intel’s high-end Itanium processors have been available for some time as well; however, even the new Itanium 2 processor, while displaying outstanding performance, does not offer 32 bit compatibility.

The promise of universal 64 bit computing is, however, only now being realized with the advent of the AMD Opteron and Athlon 64 chips. These will very shortly be available in inexpensive Windows and Linux-based desktop and notebook computers. This chip is fully 32 bit compatible with existing operating system and application software, unlike other 64 bit solutions that require new versions of both the operating system and all application programs. Even restricted to 32 bit software, published specifications indicate that the Athlon 64 chip is at least as powerful as the highest-rated 32 bit consumer alternatives.

To understand the impact of the AMD design, recall that products that take the low road of increased user accessibility over the high road of technological superiority almost always succeed in gaining consumer acceptance. The most relevant example here is Windows 3.1, which was clearly inferior to the preemptive multitasking, object-oriented OS/2 operating system. However, the incompatibility of OS/2 with the existing installed program base (which was small by today’s standards) ensured the success of Windows.

While numerous factors affect the effective speed of a processor, we will discuss below the most fundamental relationships between the number of processor bits and scientific programming. Our discussion employs the C++ language for reasons that will become clear in the next article, but similar comments apply to other computer languages as well.

Fig. 1: The 2 GHz Apple "Power Mac G5" personal computer based on the 64 bit RS/6000 chip. This is the fastest personal computer available at the moment outperforming a 3 GHz Pentium 4 based computer in many applications. 1,100 of these G5's have been connected in parallel to create the world's second most powerful computer.


A 64 bit processor acts on 64 bits of information, each of which has the value 0 or 1, in a single clock cycle. (This does not mean, however, that only 64 bits are processed in a cycle, since the processor may be able to perform operations on more than one data set simultaneously.) A unit of 8 bits is called a byte, a notation stemming from the early days of 8-bit computers; one byte can therefore store 2^8 or 256 different values. Different variable types are stored in different amounts of memory space. For example, in C++, a char type is stored in 1 byte so that there are only 256 basic (built-in) characters, known as the ASCII character set. (Unicode characters currently cover nearly all languages through wider multi-byte encodings, but these can only be accessed through a specialized C++ library.)



A more significant issue from a scientific standpoint is the representation of integers and floating point variables. Starting with integer (int) variables, the original size of an int in C++ was 2 bytes, i.e. 16 bits, which reflected the capabilities of early computer hardware. Unfortunately, such a variable can only take on 2^16 = 65,536 values. This gave rise to countless programming errors, especially since int variables are mapped to bit patterns in such a way that adding one to the largest possible integer generates the smallest possible integer. With the advent of 32 bit computing, compilers slowly adapted to a 4 byte = 32 bit int size, enabling 4,294,967,296 different int values (although errors are still relatively frequent, since squaring any integer larger than 46,340 already exceeds the largest possible int value). In 64 bit machines, the number of bits used to implement an int will double again, which will allow far greater resolution in applications (such as graphics programs) that rely on integer calculations for greater speed.
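A minimal sketch of the wrap-around behaviour described above (my example, not from the article): it checks the square of 46,340 against the largest 32 bit int by doing the arithmetic in a 64 bit type, since letting a signed int actually overflow is undefined behaviour in C++.

    #include <cstdint>
    #include <cstdio>
    #include <limits>

    int main() {
        const std::int32_t max32 = std::numeric_limits<std::int32_t>::max();  // 2,147,483,647
        std::printf("largest 32 bit int: %d\n", max32);

        // Do the squares in 64 bit arithmetic so we can compare against the 32 bit limit
        // without triggering undefined signed overflow.
        for (std::int64_t n = 46339; n <= 46341; ++n) {
            std::int64_t sq = n * n;
            std::printf("%lld^2 = %lld  (%s the 32 bit limit)\n",
                        static_cast<long long>(n), static_cast<long long>(sq),
                        sq > max32 ? "exceeds" : "fits within");
        }

        // The wrap-around itself, shown with an unsigned type where it is well defined:
        std::uint16_t u = std::numeric_limits<std::uint16_t>::max();  // 65,535
        std::printf("16 bit value %u + 1 wraps to %u\n",
                    static_cast<unsigned>(u),
                    static_cast<unsigned>(static_cast<std::uint16_t>(u + 1)));
        return 0;
    }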

Floating point variables, of which the main types in C++ are float and double, are computer representations of exponential notation. That is, of the total number of bits allocated for a variable, most of the bits are reserved for the mantissa and the remainder are allocated for the exponent (recall that in, e.g., 2.034 x 10^23, 2.034 is the mantissa while 23 is the exponent). A float variable currently occupies 4 bytes of memory space in standard compilers while a double occupies 8 (in reality most float variables are, however, internally stored as doubles). This allows values of up to about 10^±38 with approximately 7 decimal digits of precision for float variables and 10^±308 with approximately 15 decimal digits of precision for double variables, respectively. Since scientific calculations are generally performed with double variables that require 64 bits, the intrinsic advantage of 64 bit computing, which processes a double variable in a single operation, is significant.

Finally, a key advantage of 64 bit computing is the ability to address a far larger memory space. Every memory location in a computer must have a unique memory address, which implies that a 32 bit machine can address a maximum of 4,294,967,296 bytes or 4 GB of memory. In contrast, a 64 bit machine can address about 18 exabytes, where 1 exabyte is 10^18 bytes (although 64 bit operating systems will limit memory access to a far smaller value). This again provides a decisive advantage for large scale computing applications.

While there are many other features of 64 bit architectures, the above discussion should at least serve to indicate the significance of 64 bit computing. In the next issue, we consider the relevance of various programming languages to scientific applications.
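Before leaving the topic, here is a small sketch (my illustration, not the author's) that asks the compiler for the float and double limits quoted above and works out the 32 bit and 64 bit address-space sizes.

    #include <cstdio>
    #include <limits>

    int main() {
        // Precision and range reported by the compiler for the two floating point types.
        // digits10 is the guaranteed decimal digit count (6 and 15 on IEEE hardware),
        // consistent with the approximate figures quoted in the text.
        std::printf("float : %d decimal digits, max about %g\n",
                    std::numeric_limits<float>::digits10,
                    static_cast<double>(std::numeric_limits<float>::max()));
        std::printf("double: %d decimal digits, max about %g\n",
                    std::numeric_limits<double>::digits10,
                    std::numeric_limits<double>::max());

        // Address space: 2^32 bytes versus 2^64 bytes.
        double bytes32 = 4294967296.0;               // 2^32, about 4 GB
        double bytes64 = 18446744073709551616.0;     // 2^64, about 1.8 x 10^19 bytes = 18 exabytes
        std::printf("32 bit address space: %.3g bytes\n", bytes32);
        std::printf("64 bit address space: %.3g bytes\n", bytes64);
        return 0;
    }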


ROUTERS - An Introduction to Linux
by Dale Atkin, Dept. of Physics, University of Waterloo

Linux: you’ve all heard about it. For many people the word alone is enough to have them run away screaming, or perhaps cursing anti-Microsoft technocrats. The truth, however, is that in its modern incarnation, Linux appears on the surface to be not much different from its Microsoft counterpart. The main difference is that with Linux you can nearly always get the source code and, if you have the knowledge, can make repairs when things go wrong. Linux, therefore, has a high degree of customizability that makes it ideal for running computationally intensive applications. Furthermore, unlike Windows, Linux also gives you the option of stripping away everything that you don’t need or want to run and focusing all of your efforts on the task at hand. Linux and UNIX environments are the operating systems of choice for most ‘Computational Physics’ applications. The power of Linux is in its flexibility; hence knowledge of Linux will undoubtedly come in useful regardless of whether or not you intend to study physics. This introduction will guide you through some of the basic philosophies of Linux with the specific goal of setting up an Internet router.

First, what is Linux? Linux - to put it simply - is UNIX for the PC. The basic idea behind any UNIX system is a very small (and by itself pretty much useless) core program, called the ‘Kernel’. Running other applications provides whatever functionality is required. This approach has led to the development of many different bundles of Linux that include different packages of software and different configuration options set by default. The different bundles are generally called flavours. Among the more popular flavours of Linux are ‘RedHat’, ‘Mandrake’, ‘Debian’ and ‘Slackware’. My personal favourite flavour is RedHat, because it provides the most user friendly default installation, and I recommend it to anyone just starting out in the world of Linux.

Next, where do I get Linux? The easiest way is to go down to your local computer store and buy it, but wait! Why buy, when you can get it for free? Your other options are either to download it or to copy it - legally - from a friend. Downloading is actually not as simple as it may seem. Last time I checked, RedHat was up to 3 program CD’s and 2 CD’s of source code (you probably don’t need the source code CD’s). If you would like to download your copy, I suggest: www.linuxiso.org.


Now that you have your copy, for brevity we’ll skip the actual install of the operating system, since there are plenty of online references to help you out. The easiest option is to install Linux on to a blank hard drive and let it install the defaults. One notable exception is that when you get to the stage where it asks you to review the packages that you want to install, I would select the text-based Internet packages. This will install ‘Pine’, which is a common UNIX program that is not only something you will see often in your travels, but also comes with my personal favourite text editor, ‘pico’. Don’t worry if you miss this during the install; you can always add it later.

Now that you have Linux installed, a brief overview of the actual system is called for. One of the first things you will notice is that drive letters don’t exist. Instead, all of your files are simply located under ‘/’ (such a slash is pronounced ‘root’). User data directories are generally located under ‘/home/userid’. To access files on your other hard drives, or over the network, you have to mount the file system that they are on (note: a file system is just a fancy way of saying hard drive, network connection, Windows hard drive, etc.). In order to mount a hard drive, you must have full access to the system. A normal user of a UNIX type system will have access restrictions to the system. This protects you because it stops you, or some malicious code you might run inadvertently, from damaging the system. To gain full access to your system you have to log on with the user id root. Then, you can do all the damage you like.

The other major difference with UNIX type environments is that they are always case sensitive. This means that files called uNixisFuN and UNIXisFun can coexist in the same directory and can be accessed independently. Of course, this also means that if you are trying to log in as ‘root’ but your Caps Lock key is on, the ‘ROOT’ you will have typed won’t be recognized by the system.

Next on our list is to actually do something with the operating system. As an example, we will set up our Linux system to function as a ‘Router’. A router is a device that connects two or more networks in a controlled fashion. When someone talks about a router today, they generally mean a way of sharing their Internet connection between multiple computers on their home network. You can purchase a piece of hardware to do this (one that generally looks like a hub with a special port for your Internet connection). The only problem with these devices is that they cannot be as flexible as a computer can be.

Before we delve into setting up a router, we really need to know what a router does. This requires a basic level of understanding about how computers on the Internet communicate with each other. Every computer on the Internet has an IP address, which is a 4 byte number that uniquely identifies your computer on the Internet to the others inhabiting the net.


Incidentally, those of you in the know will therefore be able to figure out that, as a consequence of this, there can be no more than 4,294,967,296 computers on the Internet at any one time. IP addresses are generally specified in ‘dotted quad notation’. This means that each byte of the number is specified in decimal and is separated from the others by a dot. Back when the Internet was in its infancy, these addresses were divvied up pretty much to whoever wanted one. Some of the early participants were actually allocated address blocks of sixteen million addresses, a full 1/256 of the overall address space on the Internet. It’s obvious of course that this couldn’t continue forever. One stopgap measure that was implemented early on was to allocate various blocks of addresses for private use. This means that corporations can use the same technology that connects their computers to the Internet to also connect to each other. Three blocks of private addresses specified by the dotted quad notation are: 192.168.xxx.xxx, 10.xxx.xxx.xxx and 172.16.xxx.xxx. The addresses are listed in order of most common usage. Anyone can use these addresses on their home network without worrying about accidentally conflicting with an Internet connection. Of course, there is nothing stopping you from using a public address on your private network; it just isn’t terribly useful.

A stand alone computer that connects to the Internet is allocated an IP address by its ‘ISP’, the Internet Service Provider. A few well known providers in Ontario are Sympatico, Rogers and AOL. The ISP has a set of addresses that it will allocate to you (that were in turn allocated to it by a higher authority) and which will identify you uniquely on the Internet. Packets of information that you send out on to the Internet are always tagged with your IP address and the responses come back to you tagged in a similar way. If, however, you are using multiple computers at home and are allocated more than one address from your ISP, the chances are you will still only have one connection to your ISP. This means that you then need some way to send packets from the multiple computers back to your ISP along a single connection and, furthermore, to direct any responses back to your particular computer. This is the function of the router.

So, exactly how does a router help? By itself it doesn’t. In order for the packets you send out to come back to you, they need to be altered so that the responses can find their way back to you. If the router substitutes the public IP address for the sending computer’s IP address, the packets can find their way back to your router. Your router can then substitute the address of the sending computer for the destination address of the packet and, in this way, the packet will make its way back to your computer. How do we get Linux to do this?
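As a small illustration of the "an IP address is just a 4 byte number" point (my sketch, not part of the article), this packs a dotted quad into a single 32 bit integer and unpacks it again:

    #include <cstdint>
    #include <cstdio>

    // Pack four octets of a dotted quad into one 32 bit number, most significant byte first.
    std::uint32_t pack(unsigned a, unsigned b, unsigned c, unsigned d) {
        return (static_cast<std::uint32_t>(a) << 24) |
               (static_cast<std::uint32_t>(b) << 16) |
               (static_cast<std::uint32_t>(c) << 8)  |
                static_cast<std::uint32_t>(d);
    }

    int main() {
        std::uint32_t addr = pack(192, 168, 10, 1);   // a typical private address
        std::printf("192.168.10.1 as a 32 bit number: %u\n", static_cast<unsigned>(addr));

        // Unpack it again by shifting and masking one byte at a time.
        std::printf("back to dotted quad: %u.%u.%u.%u\n",
                    static_cast<unsigned>((addr >> 24) & 0xFF),
                    static_cast<unsigned>((addr >> 16) & 0xFF),
                    static_cast<unsigned>((addr >> 8) & 0xFF),
                    static_cast<unsigned>(addr & 0xFF));

        // With 32 bits there are 2^32 = 4,294,967,296 possible addresses in total.
        std::printf("total possible addresses: %.0f\n", 4294967296.0);
        return 0;
    }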


Currently the program used for this in RedHat Linux is called ‘iptables’. This program applies rules to incoming and outgoing packets and allows you to decide what to do with them. When a packet comes in, it starts in the INPUT chain and rules are applied to either ACCEPT or DROP the packet. Similarly, before a packet gets sent out from the computer, it goes through the OUTPUT chain. When a packet arrives at the computer with the request that it be sent on to another computer, it goes through the FORWARD chain.

The first thing you will need to do is install the module that allows packets to be modified as they go in and out of your computer:

modprobe ipt_MASQUERADE

There are some things that aren’t handled by the default installation (like a file transfer process, ‘ftp’) and to get around this you need to install further modules:

insmod ip_nat_ftp
insmod ip_conntrack_ftp

Then we flush any pre-existing rules that might mess things up:

iptables -F; iptables -t nat -F; iptables -t mangle -F

Next we tell the computer to perform a ‘nat’, i.e. a Network Address Translation. This will be done after the packet has been routed, and the public interface (the one with the public IP address) is called ‘eth0’. If these things are true, then we will masquerade the packet:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

As a security precaution your computer will not forward packets, even if you tell it to. This is done in part so that if ‘iptables’ should be compromised, the results will not be as severe. So, we need to set the variable ip_forward to 1:

echo 1 > /proc/sys/net/ipv4/ip_forward

At this point the routing capability will work but it is not yet secure. After all, you don’t want other people to be able to forward packets through your server, pretending to be you. We only want to accept incoming packets that are related to a previously established connection:

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

We also want to accept anything that comes in from anything other than the public interface:

iptables -A INPUT -m state --state NEW -i ! eth0 -j ACCEPT

Next we want to make sure that if something doesn’t match any of the rules then we drop the packet:

iptables -P INPUT DROP

Finally we create an explicit rule to reject any packets that come in and go out the public network card:

iptables -A FORWARD -i eth0 -o eth0 -j REJECT


You will now have a fully functional Linux router. However, in case you haven’t noticed, the client computers still won’t be able to connect to the Internet because they don’t yet know where to look to find the computers on the Internet. In fact, they don’t even have an IP address unless you have given them one. You can do one of two things to correct this situation: either set up a DHCP server (beyond the scope of this article) or specify the information manually on each of the client machines. In the latter case you will need to specify a unique IP address for each of the client machines on your local network (I suggest 192.168.10.xxx), a subnet mask (255.255.255.0), a gateway (this is the IP address of your Linux router) and a DNS server IP address. Note that using the router’s address as the DNS server will only work if you set up a DNS server on your Linux router; otherwise you need to check with your ISP for the IP address of their server.

This document is by no means exhaustive. There are many online sources of information on Linux and the following are very useful:

http://en.tldp.org/HOWTO/Masquerading-Simple-HOWTO/intro.html
http://linux-newbie.sunsite.dk/

Once you have some specific questions to ask, by far the best source of information is a newsgroup. There are many Linux newsgroups out there for a wide variety of problems. I encourage you to install and play with Linux regardless of whether or not you intend to use it as a router. Like it or not, Linux is the first choice for many computer users and it is only sensible to be familiar with it.


READER FEEDBACK

Dear Editor:

Your Spring issue of ‘Phys 13 news’ profiled Canadian Nobel Laureates. I read with interest and amazement that Banting’s history was somewhat incomplete. The article stated that Banting was a Liaison Officer during WWII. In actual fact, he spearheaded the research for biological warfare in Canada (1). Banting was also heavily involved with chemical warfare studies and even volunteered to have mustard agent spread on his own leg (2). Activities such as the production of anthrax on Grosse Ile in the St. Lawrence were also directly related to Banting’s research. It is documented that he wasn’t even put off by the possibility of spreading disease to humans (3). If ‘Phys 13 news’ is going to profile the honours of a scientist, the whole story should be stated and not just the good parts. Both aspects are vital parts of their character and legacy.

(1) “Deadly Allies - Canada’s Secret War”, by John Bryden, Chapter Two - Banting’s Crusade
(2) Ibid, pg. 47
(3) Ibid, pg. 49

Vince Weeks, physics teacher
Jacob Hespeler HS, Cambridge, ON

Dear Vince:

Thank you for your interesting comments. We stand corrected, but would like to add that our biographical sketch was not meant to be comprehensive. It was actually taken directly from the Nobel site and, for possibly obvious reasons, they only included positive aspects.

Dear Editor:

The ‘Dart and Falling Target Demo’ as described in the Spring issue undoubtedly works but is unnecessarily complicated. When I was teaching Physics we did it without the electronics. In our setup, when the dart exited the metal tube, it moved a little wire, held against the end of the tube by its springiness, breaking the connection. The circuit consisted of a battery, the metal tube, the springy wire which projected about 2 mm across the open end, the electromagnet holding up the target, and back to the battery. That’s all. And it worked too. Best regards, and thanks for a good publication that I still enjoy.

Alan Craig, retired teacher
Brampton, ON

Dear Allan:

Thank you for your letter and the alternative setup for the demo. As the saying goes: “there is more than one way to skin a cat”. Your more mechanical version will undoubtedly be welcome news and preferred by some of the readers.


Dear Editor:

I have noticed that from time to time you include humorous articles such as limericks, poems, and anecdotes. I would like to submit one as well. It is about a "proposed" new student organization on campus. I was a physics graduate student at Northern Illinois University last year when I wrote it.

Free the Quarks!

The truth is being held captive! Around the world, quarks are held in ridiculously small quarters, often smaller than the legally allowed space in American prisons. All they can do is move up and down. They have to share this space with one to two other quarks. What is their crime? For being a little strange or different! These charming beings are bound strongly to each other, never able to leave their confines and see the light of day! The world must get to the bottom of this and bring these strong oppressors to justice! Reach for the top!

I would also like to take the opportunity to say how much I have enjoyed 'Phys 13 news' over the last five years. The articles have always been very well written and understandable and have made wonderful references for my students. Thank you.

Hans Muehsler, physics teacher
Aurora, Illinois

Dear Dr. Anderson:

Thank you very much for the fabulous certificate and the copy of "Seven Ideas That Shook the Universe". It will be very interesting summer reading. My OAC class was very proud of me for winning the Deranged Scientists Contest!

Susan Hewitt, physics teacher


THE SIN BIN

A problem corner intended to stimulate some reader participation. The best valid solution to the problem will merit a book prize. We will always provide a book prize for the best student solution. Send your favourite problems and solutions to our BINkeeper, John Vanderkooy, [email protected].

Problem 108

We keep on the theme of energy in this issue and now want to supply Ontario's electrical power requirements by using water from Lake Erie, which is about 80 m above Lake Ontario. If we assume that Lake Erie has a surface area of about 200 km × 50 km, and is 80 m deep, how long will the water in the lake supply Ontario's power needs of about 20 gigawatts? Assume that Lake Ontario keeps its level as water drains out the St. Lawrence River, and that no other water drains into Lake Erie, so that the lake will be depleted to produce the power. I had the feeling that this would give an almost inexhaustible supply of power, but numbers do have to be respected…
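
For readers who want to check the order of magnitude of their answer, here is a rough back-of-envelope sketch in Python (my own addition, not the intended solution, and no substitute for sending one in). It assumes fresh water of density 1000 kg/m^3, a rectangular lake of exactly the stated dimensions, that every drop falls through the full 80 m head, and lossless conversion; letting the head shrink as the lake drains would roughly halve the result.

    g = 9.8                    # gravitational acceleration, m/s^2
    rho = 1000.0               # density of water, kg/m^3
    area = 200e3 * 50e3        # assumed lake surface, m^2
    depth = 80.0               # assumed uniform depth, m
    head = 80.0                # assumed drop to Lake Ontario, m
    demand = 20e9              # Ontario's electrical demand, W

    mass = rho * area * depth          # about 8 x 10^14 kg of water
    energy = mass * g * head           # stored potential energy, J
    seconds = energy / demand          # how long 20 GW can be supplied
    print(f"{energy:.2g} J lasts {seconds:.2g} s, about {seconds/3.156e7:.1f} years")

With these assumptions the lake lasts only on the order of a year, so the supply is far from inexhaustible.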

[Image courtesy of NASA]

Problem 107 from last issue: Hydro One, Ontario's troubled electrical power company, has found a way to reduce the energy crunch by using energy from the Earth's rotation. A top-secret installation well north of Sudbury has a generator that couples to spacetime. In order to remain undetected, the device slows down the Earth so that the length of a day is longer by only 1 second in each century. What average power can they produce in this way? You may assume that the density of the Earth is uniform and is 4 times that of water.

The Solution! This problem was concocted before the blackout that occurred on August 14th, 2003! Maybe I had a premonition about that event. There have been very few responses, but I thank Howard Spence for sending in a good solution. He writes: "I would say that the use of tidal energy for the production of electricity sounds like the simplest way of coupling to space-time, and will cause a slowing of the rotation – not that this production will slow the rotation more than is already happening, but it just adds a step along the way that is useful to us before the creation of heat". I'm not sure that represents a coupling to spacetime, though….

The Earth can here be considered a solid sphere of mass M with a moment of inertia I = (2/5)MR^2. Its rotational energy is (1/2)Iω^2, where ω is its angular velocity. If T represents the number of seconds in a day, 86,400, then in 100 years the length of the day would be T+1 = 86,401 seconds. Thus the angular speed of the Earth will have been reduced by a factor 86400/86401, with a concomitant reduction in rotational energy which has to be amortized over 100 years to calculate the power.

The Earth has a radius R = 6.37 × 10^6 m, and for a density of 4000 kg/m^3, being 4 times that of water, the volume (4/3)πR^3 of the Earth results in a mass M = 4.33 × 10^24 kg. This is about 72% of the Earth's actual mass. Its moment of inertia then becomes 7.03 × 10^37 kg·m^2. The angular speed ω can be calculated from its one rotation per day. We really should take the sidereal day (with respect to the stars), which is shorter by 1 part in 365, or about 4 minutes less, giving ω = 7.29 × 10^-5 radians/second. The rotational energy of the Earth then turns out to be 1.87 × 10^29 joules.

The fraction of energy lost in the reduction of the Earth's rotational speed can be calculated as 1 − (86400/86401)^2, which is 2.315 × 10^-5. You can work this out by a binomial expansion if you wish. Thus we are now poised to calculate the average power over 100 years, which is 3.156 × 10^9 seconds, as 1.87 × 10^29 × 2.315 × 10^-5 / 3.156 × 10^9 ≈ 1.37 × 10^15 watts. The actual use of electrical power in Ontario is roughly 20 GW, or 2 × 10^10 watts, so there will be plenty of power to spare!

Initially, the problem's intention was that, over a century, the accumulated clock error would amount to no more than a single one-second correction. In that case, the relative reduction in angular speed would be 1 part in 3.156 × 10^9, and that would have made the fraction of energy lost 6.34 × 10^-10. The average power would then have been about 38 GW, roughly double the present Ontario power demand. However, the actual wording of the problem can only be interpreted to mean a change in the length of a day by 1 second in a century, as analyzed earlier.
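
As a quick numerical cross-check of the figures quoted above (my own addition, using only the rounded constants from the solution), a few lines of Python reproduce both interpretations:

    import math

    R = 6.37e6              # Earth's radius, m
    rho = 4000.0            # assumed density, four times that of water, kg/m^3
    T_sid = 86164.0         # sidereal day, s
    century = 3.156e9       # 100 years, s

    M = rho * (4.0 / 3.0) * math.pi * R**3      # mass of the model Earth, ~4.3e24 kg
    I = 0.4 * M * R**2                          # uniform sphere: I = (2/5) M R^2
    omega = 2.0 * math.pi / T_sid               # angular speed, ~7.29e-5 rad/s
    E_rot = 0.5 * I * omega**2                  # rotational energy, ~1.87e29 J

    # Interpretation 1: the day lengthens from 86400 s to 86401 s over a century
    frac1 = 1.0 - (86400.0 / 86401.0) ** 2
    print("day +1 s per century:", E_rot * frac1 / century, "W")    # ~1.4e15 W

    # Interpretation 2: the accumulated clock error over a century is only 1 s,
    # i.e. the relative reduction in omega is 1 part in 3.156e9
    frac2 = 1.0 - (1.0 - 1.0 / century) ** 2
    print("1 s error per century:", E_rot * frac2 / century, "W")   # ~3.8e10 W, about 38 GW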


FIND THE LAUREATES PUZZLE #1
by Tony Anderson

B O H R N I E T S N I E
R R C R W I E N P A M H
A O O M O T T E B M R T
G B N C R D R E L N E E
G A H H K L I L O Y F B
N G A O R H A G C E R R
O U H C O A O N H F A N
T N S A L A M U D E B A
P L A N C K N A S A I M
M I C H E L S O N E U E
O S Z T N E R O L A U E
C A R I D E B Y E R U Z

Hidden in the above table of 144 letters are the surnames of 32 Nobel Prize winning physicists and chemists, who are listed alphabetically below. You will find them by reading in one of eight ways: left to right, right to left, up, down, or along four different 45° diagonals (upper left to lower right, lower right to upper left, lower left to upper right, or upper right to lower left). If you check off the names as you find them and also circle all the letters in these names (many are used more than once, so don't cross them out!), you will eventually discover some unused letters. Now read these sequentially in the usual way (left to right, top row first) and they will form the surname of another Nobel Laureate. Who is he? Submit his name to Rohan Jayasundera for a book prize.

In the list below, P indicates Physics and C indicates Chemistry, with the year of the award in brackets: Hans BETHE, P (1967); Felix BLOCH, P (1952); Niels BOHR, P (1922); Max BORN, P (1954); William and Lawrence BRAGG, P (1915); Bertram BROCKHOUSE, P (1994); Steven CHU, P (1997); Arthur COMPTON, P (1927); Peter DEBYE, C (1936); Paul DIRAC, P (1933); Albert EINSTEIN, P (1921); Enrico FERMI, P (1938); Richard FEYNMAN, P (1965); Dennis GABOR, P (1971); Otto HAHN, C (1944); Walter KOHN, C (1998); Lev LANDAU, P (1962); Max von LAUE, P (1914); Tsung-Dao LEE, P (1957); Hendrik LORENTZ, P (1902); Albert MICHELSON, P (1907); Nevill MOTT, P (1977); Louis NEEL, P (1970); Martin PERL, P (1995); Max PLANCK, P (1918); Isidor RABI, P (1944); Venkata RAMAN, P (1930); Martin RYLE, P (1974); Abdus SALAM, P (1979); Harold UREY, C (1934); Wilhelm WIEN, P (1911); Pieter ZEEMAN, P (1902).

Biographies and further information on these Nobel Laureates can be found on the website of the Nobel Foundation: http://www.nobel.se

A draw for a book prize will be made from all correct entries received before the end of December 2003. This contest is open to all readers of Phys 13 News. The solution and the winner's name will be given in a future issue of the magazine. Please include your full name, affiliation and address with your solution and mail to: R. Jayasundera, Physics, University of Waterloo, Waterloo, ON N2L 3G1, or fax: (519) 746-8115 (attention Rohan), or simply email: [email protected]
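
For readers who enjoy programming, the eight-direction search described above is easy to automate. The sketch below is my own illustration and deliberately uses a tiny made-up 4 × 4 grid and word list, so as not to give anything about the actual puzzle away; the real 12 × 12 table could be dropped in the same way.

    # Search a letter grid for words in all eight directions.
    GRID = ["BOHR",
            "XEAX",
            "XXTX",
            "WIEN"]                      # illustrative grid only, not the puzzle
    WORDS = ["BOHR", "WIEN", "BET"]      # illustrative word list only

    DIRS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

    def find(word):
        rows, cols = len(GRID), len(GRID[0])
        for r in range(rows):
            for c in range(cols):
                for dr, dc in DIRS:
                    # read len(word) letters from (r, c) in direction (dr, dc)
                    rr, cc, letters = r, c, []
                    while len(letters) < len(word) and 0 <= rr < rows and 0 <= cc < cols:
                        letters.append(GRID[rr][cc])
                        rr, cc = rr + dr, cc + dc
                    if "".join(letters) == word:
                        return (r, c, dr, dc)   # start row, start column, direction
        return None

    for w in WORDS:
        print(w, find(w))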


Phys 13 News is published four times a year by the Physics Department of the University of Waterloo. Our policy is to publish anything relevant to high school and first-year university physics, or of interest to high school physics teachers and their senior students. Letters, ideas, and articles of general interest with respect to physics are welcomed by the editor. You can reach the editor by email at: [email protected]. Alternatively, you can send all correspondence to:

Phys 13 News, Physics Department
University of Waterloo
Waterloo, Ontario N2L 3G1


Editor: Guenter Scholz

Editorial Board: Tony Anderson, Rohan Jayasundera, Jim Martin, Chris O'Donovan, Guenter Scholz, Thomas Thiemann, John Vanderkooy and David Yevick

Publisher: Judy McDonnell

Printing: UW Graphic Services Department

