Kompendium

CHAPTER 1 INTRODUCTION

THE TRACER CONCEPT

A brief historical view

Big Bang

All matter in the universe has its origin in an event called the "Big Bang", a cosmic explosion releasing an enormous amount of energy about 14 billion years ago. Particles like protons and neutrons, the building blocks of the nucleus, condensed out as free particles during the first seconds. However, it took several minutes until the decreasing temperature of the expanding universe reached a level that allowed the formation of particle combinations like deuterium (heavy hydrogen) as well as helium (see figure below).

Figure. The buildup of the light elements, shown in a chart of the number of protons versus the number of neutrons: H-fusion in gas clouds and stars (p + n → d, d + d → He), He-fusion in red giants, 12C- and O-fusion, and heavy-ion fusion in red supergiants, up to 32S.

For several hundred million years the universe was a large plasma composed of hydrogen, deuterium and helium ions and free electrons. With time the temperature decreased, and after 700 million years the electrons were able to attach to the ions, forming neutral atoms; the plasma was transformed into a large cloud of hydrogen and helium gas. In the neutral gas, gravity was able to condense the gas locally, a process that again increased the temperature, and the first stars, the red giants, were born. The temperature and the density in the stars increased the probability of other nuclear reactions, like the formation of beryllium-8 from two helium nuclides. This nucleus could then pick up another helium ion (or α particle) to form carbon-12. In steps of four mass units (2 protons and 2 neutrons), light elements were formed, oxygen-16, neon-20, magnesium-24, sulphur-32 etc., in a process called fusion. Other reactions, such as neutron capture, splitting of the target nucleus in nuclear reactions and radioactive decay, contributed to create other combinations of protons and neutrons. The fusion process worked for elements up to iron, but for heavier elements other mechanisms took over. The iron nucleus picked up neutrons, creating unstable proton-neutron combinations that decayed by emitting a negative charge in the form of a beta particle (β−). A new element (cobalt) with an extra proton in the nucleus was then created. The cobalt nucleus captured new neutrons and formed nickel nuclei via radioactive decay. In this way, step by step, all the heavy elements we know were created. When the red giants grew old, they exploded. All their content of heavy material was spread around in the universe. By gravity, other stars picked it up and formed planets. Actually, all matter around us, including ourselves, was once created in one of these red stars. Billions of years have passed since our planet Tellus was formed. Most of the unstable proton-neutron combinations once created have undergone transformations (radioactive decay) into stable combinations. However, some with very long half-lives remain: potassium-40, lead-204, thorium-232 and the naturally occurring isotopes of uranium.

The pioneers

These "glowing" pieces left over after the cosmic cookery were discovered by Henri Becquerel in 1896 and were further investigated by Marie Curie and her husband Pierre Curie. It was the scientific sensation of that time, and it still has great implications in our lives. Suddenly several new elements were discovered that emitted radiation of several types. The first was polonium, named after Poland, Marie Curie's country of birth. The second was radium, which was soon found to have properties useful in medicine (cancer treatment). However, some elements were chemically identical with already known elements, such as lead, but emitted this strange new radiation. This was puzzling for the scientists of that time. They had only vague knowledge about the structure of the atom, and the neutron was still to be discovered. Soddy, in 1913, introduced the concept of isotopes (Greek: iso = same, tope = place). Isotopes occupied the same place in the periodic system and were chemically identical, but differed from each other in atomic weight and emission of radioactivity. When Chadwick discovered the neutron in 1932, this concept was given a rational explanation.


Almost at the same time (1913), George de Hevesy demonstrated the practical implication of the isotopic theory. He and his colleagues used a radioactive isotope of lead, 210Pb, as a tracer (or indicator) when they studied the solubility of sparingly soluble lead salts. Hevesy was also the first to apply the radioactive tracer technique in biology when he investigated lead uptake in plants (1923) using 212Pb. Only one year later, Blumengarten and Weiss carried out the first clinical study. They injected 212Bi into one arm of a patient and measured the arrival time in the other arm. From this study, they concluded that the arrival time was prolonged in patients with heart disease.

Induced radioactivity

So far, nature was the supplier of the radioactive nuclides that were used. Isotopes of uranium and thorium generated a variety of radioactive heavy elements such as lead, but radioactive isotopes of light elements were not known. Marie Curie's daughter Irène, together with her husband Frédéric Joliot, took the next step. Alpha-emitting sources had long been used to bombard different elements. Rutherford, for example, studied the deflection of alpha particles in gold foils and discovered the small, charged nucleus of the atom. But the Joliot-Curies also showed that the alpha particles induced radioactivity in the bombarded foil (in their case an aluminum foil). The induced radioactivity had a half-life of about 3 minutes.


Correctly, they identified the radiation as being emitted from 30P, created in the nuclear reaction

27Al (α, n) 30P

They also concluded that “These elements and similar ones may possibly be formed in different nuclear reactions with other bombarding particles: protons, deuterons and neutrons. For example, 13N could perhaps be formed by capture of a deuteron in 12C, followed by the emission of a neutron.”

The same year this was proved to be true by Ernest Lawrence in Berkeley, California, and by Enrico Fermi in Rome. Lawrence had built a cyclotron capable of accelerating deuterons up to about 3 MeV, and it was by pure chance that his group had not yet discovered induced radioactivity. He soon reported the production of 13N with a half-life of 10 minutes. After this the cyclotron was used to produce several other biologically important radionuclides such as 11C, 32P and 22Na.

Fermi realized that the neutron was advantageous for radionuclide production. Since it has no charge, it can easily enter the nucleus and induce a nuclear reaction. He immediately made a strong neutron source by sealing radon gas together with beryllium powder in a glass vial. The alpha particles emitted in the radon decay caused a nuclear reaction in beryllium and a neutron was emitted:

9Be (α, n) 12C

Fermi started a systematic search by irradiating all available elements in the periodic system with fast and slow neutrons to study the creation of induced radioactivity. From hydrogen to oxygen, no radioactivity was observed in their targets, but in the ninth element, fluorine, their hopes were fulfilled. In the following weeks, they bombarded some 60 elements and found induced radioactivity in 40 of them. They also observed that the lighter elements were usually transmuted into radionuclides of a different chemical element, whereas heavier elements appeared to yield radioisotopes of the same element as the target.

These new discoveries excited the scientific community. From having been a rather limited technique, the radioactive tracer principle could suddenly be applied in a variety of fields, especially in the life sciences. De Hevesy immediately started to study the uptake and elimination of 32P-phosphate in various tissues of rats and demonstrated, for the first time, the kinetics of vital elements in living creatures. 128I was applied in the diagnosis of thyroid disease. This was the start of the radiotracer technology in biology and medicine, as we know it today.

One early cyclotron-produced nuclide was 11C. It is of special importance since carbon is fundamental in life science. 11C has a half-life of only 20 minutes, but by setting up a chemical laboratory close to the cyclotron, organic compounds labeled with 11C could be obtained in large amounts. Photosynthesis was studied using 11CO2, and the fixation of carbon monoxide in humans was studied by inhalation of 11CO. However, 20 minutes is short, and the use of 11C was limited to the most rapid biochemical reactions. One must remember that the radiation detectors used at that time were primitive and that the chemical synthetic and analytic tools were not adapted to such short times. A search for a more long-lived isotope of carbon resulted in 1939 in the discovery of 14C, produced in the nuclear reaction

13C (d, p) 14C

However, 14C produced this way was of limited use since the radionuclide could not be separated from the target. During the bombardments, a bottle of ammonium nitrate solution had been standing close to the target. By pure chance, it was discovered that this bottle also contained 14C, which had been produced in the reaction

14N (n, p) 14C

The deuterons used in the bombardment consist of one proton and one neutron with a binding energy of about 2 MeV. When high-energy deuterons hit a target, it is likely that the binding between the particles breaks and that a free neutron is created in what is called a "stripping reaction". The bottle with ammonium nitrate had thus unintentionally been neutron irradiated. Since no carbon was present in the bottle (except small amounts from dissolved airborne carbon dioxide), the 14C produced this way was of high specific radioactivity. It was also very easy to separate from the target. In the nuclear reaction, a "hot" carbon atom was created, which formed 14CO2 in the solution. By simply bubbling air through the bottle, the 14C was released from the target. The same year, and in the same place, tritium was discovered by deuteron irradiation of water.

One of the pioneers, Martin Kamen, stated: "Within a few months, after the scientific world had somewhat ruefully concluded that development of tracer techniques would be seriously handicapped because useful radioactive tracers for carbon, hydrogen, oxygen and nitrogen did not exist, 14C and 3H were discovered, and the situation greatly improved". This is indeed true. The role of 3H- and 14C-labeled tracers in biochemical research is a story in itself.

Before the Second World War, the cyclotron was the main producer of radionuclides, since the neutron sources at that time were very weak. But with the development of the nuclear reactor the situation changed. Suddenly, a tremendously strong neutron source was available, which could easily produce almost unlimited amounts of radioactive nuclides, including such biologically important elements as 3H, 14C, 32P and 35S and such clinically interesting radionuclides as 60Co (for external radiotherapy) and 131I (for nuclear medicine). After the war, a new industry was born which could deliver a variety of radiolabeled compounds at a reasonable price. However, accelerator-produced nuclides have a special character which makes them differ from reactor-produced nuclides, and today their popularity is increasing again. Generally, one may say that reactor-produced radionuclides are most suitable for laboratory work, whereas accelerator-produced radionuclides are more clinically useful. Some of the most used radionuclides in nuclear medicine, such as 111In and 201Tl, are accelerator produced. When using very short-lived radionuclides (11C, 13N, 15O and 18F) for positron emission tomography (PET), the need for an in-house accelerator is obvious.

Radioactive nuclides with short half-lives may play an important role in the future laboratory use of the tracer technique. We know that biological systems, such as pheromones, may work at extremely low concentrations. The radiotracer technique as we know it today is often not sensitive enough to study such systems. The drawback with 14C is its very long half-life of 5730 years. If we have one million 14C-labeled molecules, on average only one will decay during an experiment lasting a few days. Usually, the practical detection limits are in the picomole range. When using the more short-lived 11C, almost all labeled molecules contribute to the signal during an experiment lasting one hour. The sensitivity then increases considerably, to the femtomole or even attomole level.

DEFINITION OF IONIZING RADIATION

Ionizing radiation consists of high-energy particles or photons that can cause ionization, in other words, the emission of electrons from atoms or molecules. Subatomic particles with a mass (such as electrons, protons and neutrons) transfer their kinetic energy, in different processes, to the orbital electrons of the atom. Photons (electromagnetic radiation) with wavelengths of atomic dimensions or less behave as discrete "packets" of energy (quanta). These quanta, in different processes, also transfer their energy to matter, creating ionization.

In 1895, Wilhelm Conrad Roentgen discovered X-rays, the first man-made ionizing radiation. When fast electrons are slowed down in dense material, they create radiation of an electromagnetic nature similar to that of ordinary visible light. However, these quanta, roentgen or X-rays, have considerably higher energy than light. This type of continuous radiation is also called bremsstrahlung.

Figure 1 A collimated radium source in a magnetic field.

The direction of the field is perpendicular to the plane of the page. The heavy, positive α particle is deflected slightly to the left, whereas the light, negative β particle is deflected markedly to the right. The uncharged γ photons, being insensitive to the magnetic field, carry straight on.

In 1896, Henri Becquerel demonstrated that uranium mineral emits penetrating radiation with properties similar to X-rays. The couple Marie and Pierre Curie showed in 1898 that certain uranium minerals contained a previously unknown element, radium, which emits intense ionizing radiation of different types. A magnetic field can separate the radium radiation into three components (Figure 1). On closer analysis, it became apparent that α radiation was identical with helium nuclei, β radiation consisted of electrons, and γ radiation was of electromagnetic nature. Shortly afterwards, ionizing radiation was used in medicine for diagnostic radiology and the irradiation of tumors. However, at that time there was little knowledge about the effects of radiation on matter, how radiation transferred energy to the irradiated region and how to quantify radiation. Ionizing radiation is of course invisible, and humans have no sensory organ that can detect it. Consequently, a series of tragic events occurred in the early decades of the use of ionizing radiation. The need for measuring techniques (dosimetry) and protective procedures for those working with ionizing radiation was soon perceived.


Subsequent development was rapid. The technology to accelerate charged particles to high energy was developed. In the 1930s, a new ionizing particle, the neutron, was discovered. Like roentgen and gamma radiation, it has no charge. It was soon demonstrated that the neutron caused nuclear reactions, including nuclear fission in heavy elements. During the 1940s, physicists learnt to control the fission process, which in nuclear weapons liberates enormous amounts of energy instantaneously, partly in the form of ionizing radiation. In nuclear reactors, nuclear fission can be controlled and used to produce energy and large amounts of radioactive material. The explosion of nuclear weapons, especially in the biosphere, disperses enormous amounts of radioactive material, which has resulted in many people being exposed to radiation. Nuclear power plants have also contributed to an increase in radioactive nuclides in the environment, even if the increase during normal operation is small.

HISTORICAL SUMMARY

1895       Discovery of X-rays
1896       Discovery of natural radioactivity
1900-1920  Period of serious personal injuries amongst radiologists due to lack of knowledge
1920       National and international radiation protection standards begin to be instituted
1930       The neutron and induced radioactivity discovered; construction of particle accelerators begins
1939       Nuclear fission discovered
1945       Nuclear bombs used (Hiroshima and Nagasaki)
1950-      Development of nuclear reactors
1960-      Use of reactors for production of energy, development of trace element techniques, increased use of radiation in medicine, research and industry

Energy of ionizing radiation

When ionizing radiation (in the form of high-energy photons, neutrons or charged particles) collides with matter, all or part of its energy can be transferred to the matter in various types of interaction. It is by such energy transfer that radiation exerts its damaging effect on living creatures. Part of the transferred energy causes ionization or excitation in the material (Figure 2). Ionization means that electrons in the matter receive so much additional energy that they are liberated completely from the atom or molecule to which they are bound. In excitation, the additional energy is insufficient for the electrons to be liberated from the molecule.

Figure 2 Ionizing radiation transfers energy to the atom's orbital electrons. They can either be displaced to a more distant orbit (excitation) or become free electrons (ionization). In excitation the electron is displaced to an orbit with a lower binding energy; the atom or molecule does not change its charge, but the process may change its chemical properties.

To cause ionization or excitation, it is essential that the incoming radiation has a certain energy. This is expressed in electron volts (eV). One eV is the kinetic energy an electron or a proton receives after acceleration through a potential drop of 1 volt (Figure 3). Generally, charged and uncharged particles can be assigned a certain kinetic energy expressed in eV, keV (kiloelectron volt = 10^3 eV), MeV (megaelectron volt = 10^6 eV) or GeV (gigaelectron volt = 10^9 eV).

Figure 3 An electron is accelerated from the negative towards the positive electrode. At arrival, the electron has acquired a kinetic energy of 1 eV (velocity about 600 km/s). A particle with positive charge, e.g. a proton, undergoes a similar event from the positive to the negative electrode. However, since the proton mass is about 2000 times the mass of the electron, the proton velocity is considerably lower (about 14 km/s).

X-ray and gamma radiation do not have mass or kinetic energy in the usual sense. However, when their wavelength diminishes and begins to approach the dimensions of the atom and the nucleus, the electromagnetic radiation can be regarded as particles (photons or radiation quanta). Each photon has a certain energy E:

E = hν

where ν is the frequency of the electromagnetic oscillations of the radiation, h is Planck's constant, and E is the energy in eV. On average, about 30 eV are needed to cause one ionization in most materials. We define ionizing radiation as radiation with an energy higher than 100 eV. Radiation with lower energy is called non-ionizing radiation.
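To get a feeling for these numbers, the relation E = hν = hc/λ can be evaluated for a few wavelengths. The following short Python sketch is an illustration added here, not part of the original text; the example wavelengths are arbitrary.

# Photon energy in eV for a given wavelength, compared with the 100 eV limit.
H_EV_S = 4.135667e-15        # Planck's constant (eV s)
C = 2.998e8                  # speed of light (m/s)

def photon_energy_ev(wavelength_m):
    return H_EV_S * C / wavelength_m

for name, lam in [("green light, 500 nm", 500e-9),
                  ("ultraviolet, 10 nm", 10e-9),
                  ("X-ray, 0.1 nm", 0.1e-9)]:
    e = photon_energy_ev(lam)
    print(f"{name}: {e:.3g} eV, ionizing: {e > 100}")
# With about 30 eV per ionization, a ~12 keV X-ray photon can cause ~400 ionizations.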

PRODUCTION OF IONIZING RADIATION

In electromagnetic fields, charged particles can be accelerated to such energies that they can penetrate matter. Important types of accelerators are:

a) for acceleration of electrons
   1 X-ray tubes (0 - 300 keV)
   2 Linear accelerators (2 - 100 MeV)
   3 Betatrons (15 - 100 MeV)
   4 Microtrons (5 - 50 MeV)

b) for acceleration of heavy, charged particles (protons, heavier ions)
   1 Van de Graaff accelerators (1 - 25 MeV)
   2 Linear accelerators (1 - 800 MeV)
   3 Cyclotrons (1 - 1,000 MeV)
   4 Synchrotrons (1 - 300 GeV)

Secondary ionizing radiation arises when the accelerated particles meet a radiation target:

a) X-ray photons, when for example electrons are decelerated in an X-ray tube. Radiation protection when working with X-ray equipment is dealt with in Chapter 5.
b) Protons, neutrons, photons or other particles, when protons, heavier particles or photons cause nuclear reactions.

Figure 4 Cyclotron that accelerates protons to 17 MeV. Generally used for radionuclide production.

Figure 5 Modern diagnostic X-ray laboratory for tomographic imaging.

Important parameters specifying an accelerator's performance are the type of accelerated particles (electrons, protons etc.), the maximum particle energy (most often given in keV, MeV or GeV) and the beam current (the number of particles per unit time, often given in microamperes). From the radiation protection point of view, an essential feature of accelerators is that the radiation production can easily be turned off. The only remaining risk is the possible occurrence of induced radioactivity. Radioactive sources, which are the commonest source of radiation in the laboratory, do not have this convenient attribute. Other examples are neutron and gamma radiation, which arise as a result of nuclear fission processes in a fission reactor.
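The beam current mentioned above can be translated into a particle flux. The small Python sketch below is an added illustration, not part of the original text; the 1 microampere example value is arbitrary.

# Convert a beam current of singly charged particles into particles per second.
E_CHARGE = 1.602e-19                     # elementary charge (C)

def particles_per_second(current_uA, charge_state=1):
    return current_uA * 1e-6 / (charge_state * E_CHARGE)

print(f"{particles_per_second(1):.2e} protons/s in a 1 microampere proton beam")  # ~6.2e12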

CHAPTER 2 NUCLEAR STABILITY AND INSTABILITY

ORGANIZATION OF MATTER

Chemists learnt during the late 19th century to organize chemical knowledge into the periodic system. Radioactivity, when it was discovered, conflicted with that system. Suddenly various samples, apparently with the same chemical behavior, were found to have different physical qualities, such as half-life, type of radiation emitted and energy. The concept of isotopes, or elements occupying the same place in the periodic system (from the Greek: iso- = same and topos = place), was introduced by Soddy in 1913, but a complete explanation had to wait for the discovery of the neutron (Chadwick 1932).

The periodic system is organized according to the number of protons in the nucleus, which is related to the number of surrounding electrons that determine the chemistry of the element. Since the number of neutrons in the nucleus is of importance as well, the physicist's way of organizing matter, the nuclide chart, is somewhat different. Each proton-neutron combination is identified in this chart. A proton-neutron combination is identified by the notation A_Z X_N, written with the mass number A as a superscript and the proton number Z as a subscript before the element symbol X, and the neutron number N as a subscript after it, where

X = element name (e.g. C for carbon)
Z = number of protons in the nucleus
N = number of neutrons in the nucleus
A = mass number (A = Z + N)

The expression above is over-determined. If the element name X is known, so is the number of protons in the nucleus, Z. Usually, the following simplified expression is used:

A_Z X

NUCLIDE CHART

Figure 1 Chart of nuclides, with the number of neutrons (0-150) on the x-axis and the number of protons (0-100) on the y-axis. The black dots represent 279 naturally existing combinations of protons and neutrons (stable or almost stable nuclides). Around this line of stability there are about 2300 proton/neutron combinations that are radioactive. For light nuclides the stable combinations lie close to the line protons = neutrons. On the neutron-excess side of the stable line the nuclides decay by β−; on the neutron-deficient side they decay by β+ and EC.

Some relations between the numbers of protons and neutrons have special names:

Isotopes = the number of protons is constant (Z = constant)
Isotones = the number of neutrons is constant (N = constant)
Isobars = the mass number is constant (A = constant)

Of these expressions, only the isotope concept is generally used. It is important to understand that whenever we use the expression isotope, we must always relate it to a specific element or a group of elements, e.g. isotopes of carbon (11C, 12C, 13C and 14C) or isotopes of the halogens.
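The definitions above are simple bookkeeping on the (Z, N) pairs of each nuclide. The short Python sketch below is an added illustration, not part of the original text; the small set of nuclides is chosen arbitrarily from those mentioned in the text.

# Classify nuclides, stored as (Z, N), into isotopes and isobars.
nuclides = {
    "11C": (6, 5), "12C": (6, 6), "13C": (6, 7), "14C": (6, 8),
    "14N": (7, 7), "16O": (8, 8),
}

def isotopes_of(Z):
    return [name for name, (z, _) in nuclides.items() if z == Z]

def isobars_of(A):
    return [name for name, (z, n) in nuclides.items() if z + n == A]

print(isotopes_of(6))   # the carbon isotopes: ['11C', '12C', '13C', '14C']
print(isobars_of(14))   # mass number 14: ['14C', '14N']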


Figure 2 A part of the nuclide chart where the lightest elements are shown. The dark squares represent stable nuclides that are important for our existence. Nuclides to the left of the stable ones are radionuclides deficient in neutrons and those to the right, rich in neutrons.

The force that keeps the nucleons (protons and neutrons) together in the nucleus is called the "strong force" or the "nuclear force". Other forces, e.g. the Coulomb force, tend to separate the charged particles in the nucleus. In the stable and in the radioactive nucleus, there is a balance between binding and separating forces. The interplay between the forces is illustrated in Figure 4. A proton and a neutron are held together by the strong force and create a stable isotope of hydrogen, deuterium. Two protons do not create a stable system, since the Coulomb force exceeds the nuclear force.

Figure 4 Binding and separating forces in the nucleus. A proton and a neutron can form a stable nucleus whereas two protons cannot.

Figure 3 Here, three isotopes of the element hydrogen are illustrated using a simple atomic model. 1H has only one proton, deuterium 2H has a proton and a neutron in its nucleus, and tritium 3H has a proton and two neutrons. Isotopes behave in the same way chemically since it is the electron shell that determines what happens in chemical reactions. Physically they are completely different.

Between two neutrons, no Coulomb force is present and one should expect this system to be stable. However, this is not the case, since besides the strong force and the Coulomb force there is a symmetry rule saying that higher nuclear stability is achieved when there are about the same number of protons and neutrons in the nucleus. This symmetry rule prevents two neutrons from sticking together, but allows two neutrons and a proton to form a radioactive nucleus, 3H.

A useful measure is the binding energy that keeps the nucleons together. If an energy of 2.2 MeV is added to a deuterium nucleus, the two nucleons are parted as a free proton and a free neutron. Another way to express this is to measure the difference in mass. The deuterium nucleus is somewhat lighter than the sum of the masses of the free neutron and proton. This mass difference can be converted into energy using the Einstein formula E = mc^2 and is found to be 2.2 MeV.

Liquid drop model

It is far beyond the ambition of this book to give a comprehensive understanding of these processes. However, some penetration of the topic is necessary to get a general feeling for the fundamental aspects of stability and nuclear decay. We use a model describing the exchange of nuclear forces that is referred to as the "liquid drop model". This model gives an empirical formula for the binding energy B of the nucleus.

Figure 5 The range of the nuclear force is short. Only neighbors interact.

We explain the model step by step.

B ≈ a1*A

The binding energy is in the first approximation proportional to the number of nucleons in the nucleus. This is because the nuclear force has a very short range. Only neighbors are affected and can bind to each other. The nucleons in the central part of the nucleus all have the same number of neighbors. In a large nucleus the total binding energy, B, is then approximately equal to a constant times A, where A is the number of nucleons. The same is true between molecules in a liquid drop, hence the model's name.

B ≈ a1*A - a2*A^(2/3)

Nucleons at the surface have fewer neighbors and binding sites and are more loosely bound. The total binding energy is therefore decreased by a term that is proportional to the surface of the nucleus. The number of nucleons, A, is proportional to the volume of the nucleus. Thus, the nuclear radius is proportional to A^(1/3) and the surface to A^(2/3).

B ≈ a1*A - a2*A^(2/3) - a3*Z^2/A^(1/3)

When the number of protons starts to increase, the repulsive Coulomb force becomes more important. The Coulomb force is the sum over all proton pairs, F = Σ (Zi*Zj)/rij^2. If we integrate this function as a function of distance, we obtain a measure of the energy related to the force. It can then be shown that the negative contribution to the binding energy is proportional to Z^2/A^(1/3).

B ≈ a1*A - a2*A^(2/3) - a3*Z^2/A^(1/3) - a4*(A/2 - Z)^2/A

Nature seems to prefer to have about the same number of protons and neutrons in the nucleus. This is seen in Figure 1, where for light nuclides the line of stability is very close to a straight line (equal numbers of protons and neutrons). For heavier elements there is a tendency towards an excess of neutrons. This is understandable if one considers the Coulomb contribution: adding neutrons increases the distances between the protons, which lowers the Coulomb repulsion and increases the total binding energy of the nucleus. The expression a4*(A/2 - Z)^2/A is only empirically found but gives the wanted behavior. If Z = A/2, i.e. the numbers of protons and neutrons are equal, the binding energy is at a maximum. The contribution of this term is also less important for heavier nuclides, which agrees with experimental findings.

B ≈ a1*A - a2*A^(2/3) - a3*Z^2/A^(1/3) - a4*(A/2 - Z)^2/A + a5

Paired electrons in the outer electron shell give chemically stable compounds, whereas unpaired electrons give radicals. The most chemically inert atoms are the noble gases (He, Ne, Ar, Kr, Xe, Rn), which have closed electron shells. The same phenomenon is found in the nucleus as well. A nucleus with an odd number of protons or neutrons is less stable than a nucleus with an even number of both protons and neutrons. The most unstable nuclei have an odd number of both protons and neutrons, e.g. 14N (Z=7, N=7). In fact, only seven stable odd-odd nuclides exist, all of them among the light elements. The last term, a5, takes care of this effect in the following way:

a5 = +apair*A^(-3/4) for even-even nuclei
a5 = 0 for even-odd nuclei
a5 = -apair*A^(-3/4) for odd-odd nuclei

When this model is fitted to experimental data the following parameters are obtained:

a1 = 15.76 MeV
a2 = 17.81 MeV
a3 = 0.7105 MeV
a4 = 94.81 MeV
apair = 34 MeV

An interesting parameter is the mean binding energy per nucleon, B/A.
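As an added illustration (not part of the original text), the fitted formula can be evaluated numerically; the choice of 56Fe as test nuclide is arbitrary. A minimal Python sketch:

# Liquid drop (semi-empirical) binding energy with the coefficients quoted above.
def binding_energy(A, Z):
    """Approximate total binding energy B (MeV) from the liquid drop model."""
    a1, a2, a3, a4, a_pair = 15.76, 17.81, 0.7105, 94.81, 34.0
    N = A - Z
    B = a1*A - a2*A**(2/3) - a3*Z**2/A**(1/3) - a4*(A/2 - Z)**2/A
    if Z % 2 == 0 and N % 2 == 0:        # even-even: pairing term added
        B += a_pair * A**(-0.75)
    elif Z % 2 == 1 and N % 2 == 1:      # odd-odd: pairing term subtracted
        B -= a_pair * A**(-0.75)
    return B                             # even-odd: no pairing term

B = binding_energy(56, 26)               # 56Fe
print(f"B = {B:.1f} MeV, B/A = {B/56:.2f} MeV per nucleon")   # about 8.8 MeV per nucleon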

Figure 6 Mean binding energy per nucleon as a function of mass number A. The liquid drop model is fitted to the measured data.
This value, which gives the mean energy needed to separate one nucleon from the nucleus, varies from a few MeV to about 9 MeV, as illustrated in Figure 6. The mean binding energy per nucleon is a smoothly varying curve with a maximum around iron (A ≈ 60). If two light elements are fused together, energy is gained. This effect is used in the fusion principle to obtain nuclear power. If one heavy element is split in two, the total binding energy also increases. The excess energy is then released and used in the fission principle to obtain nuclear power.

As seen in Figure 6, the liquid drop model fits the experimental data quite well. However, at a closer look, especially for the light elements, there are some disagreements. This is because the nucleus can have closed proton and neutron shells, which are especially stable structures. The critical numbers are 2, 8, 20, 28, 50, 82 and 126. This should be interpreted as follows: if the nucleus has eight protons (oxygen) there is a closed proton shell, 20 neutrons give a closed neutron shell, etc. Of special interest are nuclei with both a closed proton shell and a closed neutron shell, e.g. 16O (Z=8, N=8) or 40Ca (Z=20, N=20). It has long been known that some nuclides (so-called magic nuclides) are especially stable, but the reason why was not understood. In the close-up of Figure 6, these nuclides are seen to have extra high binding energy in comparison with the surrounding nuclides.

RADIATION FROM RADIOACTIVE NUCLIDES

Since radioactive material is the source of radiation most commonly encountered in a laboratory (labeled compounds, calibration preparations), radioactive decay is reviewed fairly thoroughly in this section. The concepts of nuclide and isotope are not related to whether the nucleus is stable or unstable. There are stable (non-radioactive) nuclides and isotopes, just as there are unstable (radioactive) nuclides and isotopes. The number of neutrons and protons in the stable nucleus obeys certain laws of symmetry.

The carbon isotope 12C is stable. The element carbon has six protons (Z = 6) and, in 12C, the same number of neutrons (N = Z = 6). If one neutron is removed, the carbon isotope 11C (Z=6, N=5) is obtained. This nuclide is not stable and will sooner or later undergo restructuring. If a neutron is added, 13C (Z=6, N=7) is obtained, which is stable. However, its natural abundance (1.1 %) is low compared with 12C. Further addition of one neutron gives 14C (Z=6, N=8), which is again radioactive. It is a general rule (with many exceptions) that the further you move from the stable combination of neutrons and protons, the more unstable the nucleus becomes, which means that restructuring to a stable condition occurs more quickly (shorter half-life).

Radioactivity and decay time

The radioactive process, i.e. the decay of a nucleus, is a statistical process. This means that the decay time of an individual radioactive nucleus can vary. But for a large number of radioactive atoms (N), the same fraction (dN) always decays during the same time (dt). We can write this mathematically as

dN = -λ N dt

where λ is called the decay constant and is the probability for a nucleus to decay per unit time. We can integrate this equation to obtain

N = No e^(-λt)

where No is the number of radioactive atoms at t = 0. The number of decays per unit time (A = -dN/dt) is called the radioactivity. The relation between the radioactivity and the number of radioactive atoms is then A = λN. Inserting this in the formula above gives

A = Ao e^(-λt)

where Ao is the radioactivity at t = 0. Ao is also equal to λNo. Every radioactive decay is associated with a characteristic decay time. A common way to

express this is to use the half-life (T½), i.e. the time it takes to halve the number of radioactive atoms. If we insert this time in the decay formula, we get the following relation between the half-life and the decay constant:

Ao/2 = Ao e^(-λT½)   or   T½ = ln(2)/λ

We can also write the decay formula as

A = Ao 2^(-t/T½)

After one half-life the radioactivity is reduced to a half, after two half-lives to a quarter, after three half-lives to an eighth, and after four half-lives to a sixteenth. Since

(1/2)^10 = 1/1024 ≈ 10^-3

one may use as a rule of thumb that ten half-lives reduce the radioactivity to about a thousandth, and 20 half-lives to about a millionth, of its initial value. The half-life varies considerably between radioactive nuclides. It can be fractions of a second, or hundreds of years or more. We usually express the half-life in seconds (s), minutes (min), hours (h), days (d) and years (a). Note that a is used as the symbol for the mean year = 365.2422 days.

The unit of radioactivity in the SI system is the becquerel (Bq), named after the French physicist who in 1896 discovered natural radioactivity.

1 Bq = 1 decay / s

Decays per second are also written dps, after the initial letters of the English words. An alternative way to express radioactivity is dpm, decays per minute.

The curie (Ci) is an old unit for radioactivity that is still used. From the beginning it was defined as the radioactivity of 1 g of radium, but it was later standardized to 3.7 * 10^10 decays per second. The conversion factor from the old unit to the new unit is

1 Ci = 3.7 * 10^10 Bq = 37 GBq (G = giga)

Table 3 Conversion from curie (Ci) to becquerel (Bq) and the prefixes in the SI system.

1 MCi (M = mega)   = 10^6 Ci    = 3.7 * 10^16 Bq  = 37 PBq (P = peta)
1 kCi (k = kilo)   = 10^3 Ci    = 3.7 * 10^13 Bq  = 37 TBq (T = tera)
1 Ci               = 1 Ci       = 3.7 * 10^10 Bq  = 37 GBq (G = giga)
1 mCi (m = milli)  = 10^-3 Ci   = 3.7 * 10^7 Bq   = 37 MBq (M = mega)
1 µCi (µ = micro)  = 10^-6 Ci   = 3.7 * 10^4 Bq   = 37 kBq (k = kilo)
1 nCi (n = nano)   = 10^-9 Ci   = 3.7 * 10^1 Bq   = 37 Bq
1 pCi (p = pico)   = 10^-12 Ci  = 3.7 * 10^-2 Bq  = 37 mBq (m = milli)
1 fCi (f = femto)  = 10^-15 Ci  = 3.7 * 10^-5 Bq  = 37 µBq (µ = micro)
1 aCi (a = atto)   = 10^-18 Ci  = 3.7 * 10^-8 Bq  = 37 nBq (n = nano)
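As an added illustration of the decay formula and the unit conversion above (not part of the original text; the 37 MBq 32P source and the 30-day decay time are arbitrary example values), a minimal Python sketch:

# Remaining activity after a given time, and conversion between Bq and Ci.
def activity(a0_bq, t, half_life):
    """Remaining activity after time t (t and half_life in the same unit)."""
    return a0_bq * 0.5 ** (t / half_life)

def bq_to_ci(a_bq):
    return a_bq / 3.7e10                     # 1 Ci = 3.7e10 Bq

a0 = 37e6                                    # 37 MBq = 1 mCi of 32P
a30 = activity(a0, t=30, half_life=14.3)     # 32P (T½ = 14.3 d) after 30 days
print(f"{a30/1e6:.1f} MBq = {bq_to_ci(a30)*1e3:.2f} mCi")   # about 8.6 MBq
print(activity(a0, 10*14.3, 14.3) / a0)      # ten half-lives: ~1/1000 left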

Surface radioactivity is given in Bq/m^2, radioactivity concentration in Bq/m^3 and specific radioactivity in Bq/kg.

Radioactive decay

Unstable nuclides have a surplus of energy compared with the surrounding nuclides. By making a transformation within the nucleus, this surplus energy decreases and the remaining nucleus becomes more stable (the binding energy increases). There are many ways to rearrange the nucleus to obtain a more stable nucleus. As an example, if we remove one neutron from 14C we obtain the stable 13C. However, this is usually not a spontaneous process. As seen in Figure 6, 5-8 MeV is needed to remove one nucleon. Even if the nucleus gains energy in the end, it does not have access to the primary energy needed to drive this process. For very heavy nuclides the binding energy of the neutrons decreases, and here we can find spontaneous neutron emission. In general, however, this process is rare.

The β− decay

A process that can occur spontaneously is the beta decay. The number of nucleons is kept constant in the decay, but a neutron is changed into a proton, or vice versa. A particle of positive charge (positron, β+) or negative charge (negatron, β−) is ejected from the nucleus along with energy (the β particle's rest mass + kinetic energy). The β particle has the same mass and the same particle properties as an electron. In fact, once it has lost its kinetic energy, it cannot be distinguished from an electron. Still, we call it a beta particle to mark that it was created in a nuclear process in the nucleus. We will see below that electrons can also be emitted in radioactive decays, but these all come from the electron shells around the nucleus.

In a radioactive nucleus rich in neutrons, the decay occurs as if a neutron (n) were converted to a proton (p):

n --> p + β− + energy    (1a)

This reaction formula fulfils some criteria. On both sides
1. the mass is about equal, since the neutron mass is approximately the sum of the proton and electron masses
2. the charge is the same
3. the number of nucleons is the same

However, although the energy released per decay is a constant, it was found that the kinetic energy of the beta particles varied between zero and the full decay energy. To explain the missing energy, the physicist Wolfgang Pauli suggested in 1930 an additional particle, now known as the neutrino. This uncharged particle with little or no mass is also emitted in the decay and shares the decay energy with the beta particle. Twenty-five years later the neutrino was experimentally verified. The complete reaction formula for the beta decay can then be written as

n --> p + β− + ν̄ + energy    (1b)

where ν̄ is the neutrino. The bar above the ν indicates that the neutrino is an antiparticle (an antineutrino).

Symmetry is an important feature of nature. For most particles, there is an associated antiparticle with the same mass. Other properties, like charge, are the opposite. Charged particle-antiparticle pairs can annihilate, i.e. the masses disappear and the released energy is converted to photons. Formula (1b) also fulfils another rule in particle physics: particles cannot be created out of nothing. The number of heavy particles (the nucleons) is the same on both sides. The same is true for light particles like the electron and the neutrino; since particle and antiparticle cancel out, the number of light particles is also the same on both sides (zero).

The decay of 14C is an example of beta minus decay:

14C (Z=6, N=8) --> 14N (Z=7, N=7) + β− + ν̄ + energy    (2)

Figure 7 Decay scheme for 14C. The x-axis gives the number of protons in the nucleus and the y-axis its energy content. By converting a neutron to a proton, the number of protons in the nucleus increases and at the same time it becomes more stable. In the process, a negatively charged beta particle is emitted.

The β+ decay can be regarded as a proton that is converted to a neutron:

p --> n + β+ + ν + energy    (3)

The β+ is now the antiparticle and cancels out the "real" neutrino, so that the number of light particles on the right side again sums to zero. An example of the positron decay is:

11C (Z=6, N=5) --> 11B (Z=5, N=6) + β+ + ν + energy    (4)
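The bookkeeping rules stated for formulas (1)-(4), that the mass number and the charge must balance, can be checked mechanically. The following Python sketch is an added illustration, not part of the original text:

# Each participant is stored as (A, Z); a reaction balances if the sums agree.
PARTICLES = {
    "n": (1, 0), "p": (1, 1),
    "beta-": (0, -1), "beta+": (0, +1), "nu": (0, 0),
    "14C": (14, 6), "14N": (14, 7), "11C": (11, 6), "11B": (11, 5),
}

def balanced(lhs, rhs):
    total = lambda side: tuple(sum(x) for x in zip(*(PARTICLES[p] for p in side)))
    return total(lhs) == total(rhs)

print(balanced(["n"], ["p", "beta-", "nu"]))        # (1b): True
print(balanced(["14C"], ["14N", "beta-", "nu"]))    # (2):  True
print(balanced(["11C"], ["11B", "beta+", "nu"]))    # (4):  True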

The 14C decay releases an energy of 156 keV, and the beta particle has a maximum kinetic energy of 156 keV. The 11C decay releases 2 MeV, but the maximum energy of the β+ is only about 1 MeV. This is because two new electron masses are created in the β+ decay, which corresponds to about 1 MeV. We can understand this by looking at the mass balance in equations (1b) and (3). The mass of the neutrino can be neglected. The mass of a neutron (Mn) is approximately equal to the sum of the masses of a proton (Mp) and a beta particle (Mβ). Equation (1b) is then in mass balance:

Mn ≅ Mp + Mβ

In equation (3) we have the mass of a proton on the left side and the masses of a neutron and a beta particle on the right side. The only way to get the equation into balance is to add two beta masses on the left side:

2 Mβ + Mp = Mn + Mβ

According to the Einstein formula E = mc^2, the mass of two beta particles corresponds to 1.022 MeV. In the 11C decay, a total energy release of 2 MeV is available. Of this energy, about 1 MeV is used to account for the extra mass created in the decay, leaving about 1 MeV to be shared by the positron and the neutrino. In the decay scheme in Figure 8, this is indicated by the broken arrow from 11C to 11B. The first vertical part indicates the energy needed within the 11C nucleus to create the two electron masses. The diagonal part of the arrow indicates the part of the decay energy that is converted into kinetic energy of the emitted particles.

Figure 8 Decay scheme for 11C (Z=6, N=5; ΔE = 2 MeV). The x-axis gives the number of protons in the nucleus and the y-axis its energy content. By converting a proton to a neutron, the number of protons in the nucleus decreases (11B, Z=5, N=6) and at the same time the nucleus becomes more stable. In the process, a positively charged beta particle is emitted.

The decay energy

The kinetic energy of the emitted particles is determined by the energy released in the transformation of the nucleus. This may be illustrated in an energy diagram or a decay scheme. Figure 7 shows such a diagram for 14C and Figure 8 for 11C.

The radioactive nuclides used in laboratory work usually have an excess of neutrons and consequently emit their energy in the form of a β− particle, which is ejected from the nucleus with a specific kinetic energy and decelerates in the surrounding material. The decay of 14C always releases 156 keV. Other beta decays, e.g. those of 3H and 32P (see Table 1), are also associated with a unique energy release.

Table 1 Decay parameters of some common radionuclides.

Nuclide   Half-life T½   Residual nuclide   Decay energy Eo (keV)
3H        12.26 a        3He                18
14C       5 760 a        14N                156
32P       14.3 days      32S                1710

The β particles share this energy with the neutrino randomly. The sum of the β-particle and neutrino kinetic energies is then equal to the decay energy Eo:

Eβ + Eν = Eo

In most beta decays the relative energy distribution of the β particles has the same shape (Figure 9).

Figure 9 Energy spectrum of β particles in radioactive β decay. The energy is given on the x-axis and the relative number of particles per energy bin on the y-axis. The distribution of energy between the particles is determined by statistics.

The neutrino is an uncharged particle of unknown, but very small, mass. It seldom interacts with matter and its energy can in practice be ignored. Only the β energy, Eβ, can be transferred to the surrounding material. The maximum and mean beta energies can be written as:

(Eβ)max = Eo
Ēβ ≈ Eo/3

When calculating the maximum range of the β particles, (Eβ)max is used.

Table 2 Maximum range of β particles in air and water.

Nuclide   (Eβ)max (keV)   Range in air (mm)   Range in water (mm)
3H        18              6.5                 0.007
14C       159             300                 0.3
32P       1710            7700                7.9

Q-value

Each nuclear transition is characterized by a fixed energy release, or Q-value. If the Q-value is positive, surplus energy is emitted in the transformation. If the Q-value is negative, additional energy is needed to enable the transformation. A spontaneous radioactive decay always has a positive Q-value, whereas a nuclear reaction in, e.g., a nuclear reactor is often associated with a negative Q-value. The energy released in a nuclear decay can be used to create new mass or to give the emitted particles kinetic energy.
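As an added illustration (not from the original text), the rule of thumb Ēβ ≈ Eo/3 can be applied to the decay energies in Table 1 to estimate how the energy is, on average, shared between the beta particle and the neutrino:

# Rough split of the decay energy, using the mean-energy rule of thumb above.
TABLE_1_EO_KEV = {"3H": 18, "14C": 156, "32P": 1710}

for nuclide, eo in TABLE_1_EO_KEV.items():
    mean_beta = eo / 3
    print(f"{nuclide}: Eo = {eo} keV, mean beta energy ~ {mean_beta:.0f} keV, "
          f"mean neutrino energy ~ {eo - mean_beta:.0f} keV")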

Pure beta decays go from the ground level of the radioactive nucleus to the ground level in the daughter nuclide. Three bodies share the decay energy: the beta particle, the neutrino and the recoiling residual nucleus. However, since the mass of the recoiling nucleus is much larger than that of the other particles, the energy of the recoil nucleus is virtually zero. The total reaction energy is then statistically distributed between the beta particle and the neutrino, and the Q-value is numerically close to (Eβ)max.

More generally, the decay may produce excited states in the daughter nuclide. The nucleons, i.e. the protons and the neutrons, create quantified states of the nucleus. In the ground state, movements and energy content are at a minimum. In an excited or deformed state, the energy content is higher. The nucleus can suddenly change its state, from a more excited state to a less excited or deformed one. In this process electromagnetic photons, i.e. gamma rays, are emitted. The gamma rays carry away parts of the total decay energy. For each photon (gamma quantum) emitted, the excitation energy of the nucleus diminishes until it finally reaches the ground state.

An example is the decay of 60Co, illustrated in the decay scheme below (Figure 10). It is a beta minus decay emitting a beta particle with a maximum energy of 2.824 − 2.506 = 0.318 MeV. The decay creates an excited daughter nucleus, 60Ni. This excited level emits a gamma of 2.506 − 1.332 = 1.173 MeV. A new excited level is created, which in turn emits a gamma of 1.332 MeV. If we add up the total energy emitted, 0.318 + 1.173 + 1.332 MeV, we end up with the Q-value of the decay (2.824 MeV).

Figure 10 A simplified decay scheme for 60Co. Only the dominant radiation is included. The energy scale (MeV) shows the levels 2.824, 2.506, 1.332 and 0; gammas of 1.173 MeV and 1.332 MeV are emitted in cascade.

Electron capture

The total energy release (the Q-value) in a positron decay has to be more than 1 MeV to compensate for the increase in mass (see above). An alternative route of decay that does not need this extra energy is electron capture. Instead of converting a proton to a neutron and a positron, the nucleus can attract an electron from the surrounding electron shells and join it with a proton to create a neutron:

e− + p --> n + ν

The effect is the same as in the positron decay: a proton is converted to a neutron. However, since no beta particle is emitted, the full excess energy of the decay has to be given to the neutrino. Accordingly, in this type of decay monoenergetic neutrinos are emitted. The decay of 125I is a typical example of this process (Figures 11 and 12). Although no beta particle is emitted, electron capture is a variant of the beta decay, since the same physics is involved. There are thus three branches of the beta decay: beta minus, beta plus and electron capture.

Conversion electrons

The decay of 125I gives a new example of a decay process ending in an excited state of the daughter nuclide (Figure 11). As said above, the common way to emit the energy of the excited state is by emitting a gamma ray. This is also done in the 125I decay, but only in 7 % of the decays. The excited state, 35.5 keV, is very close to the binding energy of the K-shell electrons (33 keV). There is a probability that a K-shell electron is inside the nucleus, due to the quantum mechanical nature of the electron. There is also a probability that the excited state interacts with and transfers its energy to the K-shell electron. The transferred energy is higher than the binding energy, and we obtain a monoenergetic electron, a conversion electron, in 93 % of all decays.

The excited state interacts not only with the K-shell but also with the L- and M-shells, although the probability is smaller (these electrons are further away from the nucleus and thus the probability of being inside the nucleus is smaller). The energy of the conversion electrons varies according to the formula

E = 35.5 keV − Eshell

where Eshell is the binding energy of the shell from which the electron is emitted.

In the 60Co decay a small number of conversion electrons is found as well. In fact, an excited state in a nucleus always has two ways to de-excite: either by emitting a gamma ray or by transferring the excitation energy to a shell electron. The heavier the nuclide, the higher the percentage of conversion electrons. A heavy nucleus has many protons and exerts a strong attractive force on the shell electrons close to the nucleus. Thus, the probability that the excited state will interact with these electrons increases. Another important factor that affects the probability of emitting conversion electrons is the energy of the excited level. Energies close to the binding energy of the shell electrons give the highest probabilities.

The electron capture and the conversion electrons both create holes in the electron shell. The process of filling these holes creates a cascade of events and the emission of electrons called Auger electrons and characteristic X-rays. These processes are exemplified and explained in more detail in the decay of 125I.

Decay of 125I

125I is a neutron-deficient nucleus (the stable isotope of iodine is 127I). It decays to 100 % by electron capture, since its Q-value of 186 keV is far too low for positron decay. The decay scheme is shown in Figure 11: the decay always yields an excited level in the daughter nuclide tellurium-125. This excited level of 35.5 keV is just above the binding energy of the K-shell electrons (33 keV). The excited level is de-excited by gamma emission, but only in 7 % of the decays. More likely (93 % per decay) is that a conversion electron (energy 35.5 − 33 = 2.5 keV) is emitted.
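As a small added illustration of the relation E = 35.5 keV − Eshell (only the K-shell binding energy of 33 keV quoted in the text is used; values for other shells would have to be taken from a data table):

# Conversion electron energy for the 35.5 keV excited level in 125Te.
EXCITED_LEVEL_KEV = 35.5

def conversion_electron_energy(shell_binding_kev):
    return EXCITED_LEVEL_KEV - shell_binding_kev

print(conversion_electron_energy(33.0))   # K shell: 2.5 keV, as stated in the text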

Figure 11 Decay scheme of 125I (Z = 53, N = 72). The number of protons in the nucleus is on the x-axis and the energy content of the nucleus on the y-axis. By capturing an orbital electron (EC, 149 keV), a proton is converted to a neutron and the nucleus becomes more stable, forming 125Te (Z = 52, N = 73). In the process, a neutrino is emitted and carries away the released energy. The new nucleus is formed in an excited state (35.5 keV), but after a time it is de-excited to a more stable state. In this process either a gamma ray (7 %) or a conversion electron (93 %) is emitted. Both at the electron capture, when an electron is drawn into the nucleus, and at the formation of conversion electrons, holes appear in the electron shells around the nucleus. These holes are filled by electrons from outer shells. In this process, the difference in binding energy is emitted as X-rays or as free electrons that are dislodged from the atom (Auger electrons). This process is seen in more detail in Figure 12.

Figure 12 Details of the 125I decay, shown in six steps: electron capture, de-excitation of the excited 125Te to its ground state by internal conversion (emission of a conversion electron), and the subsequent cascade of characteristic X-rays and Auger electrons.

Hence, the main radiation from 125I is K-shell X-rays of 27 keV and Auger electrons of low energy. The range of the electrons is comparable to that of β− particles from 3H. The photon radiation, on the other hand, is practically stopped by two millimeters of brass, whereas several decimeters of water would be required.

Alpha decay

Alpha-emitting radionuclides are not commonly used in the laboratory and will only be treated briefly here. The physics involved is different from the beta decay and can be described as a tunneling effect through the potential barrier around the nucleus. The following description is only meant to give an idea of what this potential barrier is and of the main features of the decay.

Consider a free He++ ion outside a heavy nucleus (see Figure 13). If you try to move the He++ ion closer, an increasing repulsive Coulomb force arises in between. This force repels the He++ ion until it is so close to the nucleus that the strong force starts to act. The He++ ion is then trapped inside the so-called nuclear potential, which is created in a balance between the nuclear force and the Coulomb force. The barrier the He++ ion has to climb before it reaches the attracting nuclear force is called the Coulomb barrier. Its height depends on the number of protons in the nucleus, but is of the order of 5-7 MeV for heavy nuclei.

When particles are trapped in the nucleus they lose their identity. However, the He++ ion consists of two protons and two neutrons. Both the proton shell and the neutron shell are closed, and the He++ ion is a very stable combination of nucleons. One may consider that this stable combination can exist in the nuclear "soup" in the form of a virtual α particle. These virtual α particles may populate different energy levels in the nucleus. In the nucleus, the energy content is quantified and different quantified energy levels exist. In Figure 13, two such levels are indicated. In the bound level 1, the virtual α particle cannot escape the nucleus, since it is bouncing against an infinitely thick potential wall. In the bound level 2, however, the potential wall that keeps the virtual α particle trapped is not infinitely thick. The α particle may penetrate the wall by what is known in quantum mechanics as tunneling. Suddenly, the α particle appears in a materialized form outside the Coulomb barrier, with an energy corresponding to the excited level.

Figure 13 Schematic view of the alpha decay: a heavy, positively charged nucleus (Z > 80) surrounded by the repulsive Coulomb barrier, a free He++ ion outside, and two bound levels for the virtual α particle inside the nuclear potential.

The probability of decay is largely determined by the thickness of the barrier, and the half-life of useful radioactive sources varies from hours to many thousands of years. Alpha particles appear in one or more energy groups that are, for all practical purposes, monoenergetic. For each distinct transition between the initial and final nucleus, a fixed energy difference or Q-value characterizes the decay. This energy is shared between the alpha particle and the recoil nucleus. The alpha particle energy correlates strongly with the half-life of the parent nuclide; the highest energies have the shortest half-lives. Above 7 MeV, the half-life can be expected to be less than a few hours. On the other hand, if the energy drops below 4 MeV, the barrier penetration probability becomes very small and the half-life of the isotope will be very long.

Probably the most commonly used source of alpha particles is 241Am (T½ = 433 a) with an α energy of 5.5 MeV. Because alpha particles lose energy rapidly in materials, alpha particle sources must be prepared in very thin layers. To contain the radioactive material, typical sources are covered with a metallic foil or other material that must also be kept very thin if the original energy and the monoenergetic nature of the alpha emission are to be preserved.

Almost all naturally occurring alpha emitters are heavy elements with Z > 83. The principal features of alpha decay can be learned from the example of 226Ra:

226Ra (Z = 88) --> 222Rn (Z = 86) + 4He (Z = 2)

The energy released (Q-value = 4.88 MeV) in the decay arises from a net loss in the masses MRa, MRn and MHe of the radium, radon and helium nuclei:

Q = MRa − MRn − MHe

Nucleus Z>80 heavy positivly charged
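As a numerical check of the Q-value quoted above, one can compute it from tabulated atomic masses. The sketch below is illustrative only; the mass values are rounded literature values (in atomic mass units), and 931.494 MeV per u is the usual mass-energy conversion factor.

```python
# Illustrative Q-value calculation for 226Ra -> 222Rn + 4He.
# Atomic masses in u (rounded literature values, used here for illustration).
M_RA_226 = 226.025410   # 226Ra
M_RN_222 = 222.017578   # 222Rn
M_HE_4   = 4.002602     # 4He
U_TO_MEV = 931.494      # energy equivalent of 1 u in MeV

def alpha_q_value(m_parent, m_daughter, m_alpha=M_HE_4):
    """Q = (M_parent - M_daughter - M_He), converted from u to MeV."""
    return (m_parent - m_daughter - m_alpha) * U_TO_MEV

print(f"Q(226Ra) = {alpha_q_value(M_RA_226, M_RN_222):.2f} MeV")
# prints about 4.87 MeV, in agreement with the value quoted above
```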

The general equation for the energy release in the alpha decay is then


Qα = Mparent - Mdaughter - MHe

Spontaneous fission

The fission process splits the nucleus into two parts of almost equal size. In the process, free neutrons are also emitted. Fission fragments are used to calibrate and test detectors intended for general application in heavy-ion measurements. The most widely used example is 252Cf, which undergoes spontaneous fission (3 % of decays) and alpha decay (97 % of decays) with a half-life of 2.65 years. Each fission gives rise to two fission fragments which, by the conservation of momentum, are emitted in opposite directions. Because the normal physical form of a spontaneous fission source is a thin deposit on a flat backing, only one fragment per fission can escape from the surface, whereas the other is lost by absorption in the backing. The spontaneous fission of 252Cf also liberates a number of fast neutrons.

Natural radioactivity

Some heavy elements (232Th, 235U and 238U) have such long half-lives (10^9-10^10 a) that they have survived since they were created and are still found in appreciable amounts. In their decays they produce a series of shorter-lived radioactive daughters. Some of these daughters we are very much aware of, e.g. the alpha emitter radon-222, a noble gas that can penetrate into our houses.

Few lighter elements have naturally occurring radioisotopes with such long half-lives. The most important from the standpoint of human exposure is 40K, which is about 0.01 % abundant and has a half-life of 1.26 x 10^9 years (Figure 14). The nuclide can decay by β- emission (89 %), EC (11 %) or β+ emission (0.001 %). The maximum β- energy is 1.314 MeV. This isotope is an important source of internal radiation exposure in humans, because potassium is a natural constituent of plants and animals.

Figure 14 Decay scheme for 40K: β- decay (89 %, Emax 1312 keV) to 40Ca, and EC (11 %) and β+ decay (0.001 %, Emax 482 keV) to 40Ar; the EC branch is followed by a 1461 keV gamma ray.

Another important naturally occurring radioisotope is 14C, used in carbon dating. It is produced by the bombardment of nitrogen in the atmosphere by cosmic rays, causing the reaction 14N(n, p)14C. Its half-life is 5730 years. The radioisotope, existing as CO2 in the atmosphere, is taken up by e.g. trees and becomes fixed in their structure through photosynthesis. When a tree dies, the incorporation of new 14C atoms stops and their number starts to decrease due to radioactive decay. By comparing the number of 14C atoms in a sample with the number of stable 12C atoms, one can then calculate the time that has passed since the tree was cut and used for e.g. building a house. Thus, the age of such objects can be determined. The atmospheric testing of nuclear weapons has significantly added to the world's inventory of 14C.
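The age determination described above follows directly from the exponential decay law. A minimal sketch, assuming the measured 14C content of the sample is known as a fraction of that in living material:

```python
import math

T_HALF_C14 = 5730.0  # years, half-life of 14C as given above

def radiocarbon_age(ratio_sample_to_living):
    """Age in years from N/N0, using N = N0*exp(-lambda*t), lambda = ln2/T_half."""
    decay_const = math.log(2) / T_HALF_C14
    return -math.log(ratio_sample_to_living) / decay_const

# Example: a wood sample retaining 80 % of the 14C content of living wood
print(f"age = {radiocarbon_age(0.80):.0f} years")   # about 1800 years
```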

DATA ON RADIOACTIVE NUCLIDES

There are several books available with all relevant information on decay time, type and number of particles emitted per decay, energy per particle, etc. However, the fastest and cheapest way to obtain this information today is via the Internet. One good site is http://www.nndc.bnl.gov/nudat2/indx_dec.jsp



Below, an example of the 125I decay is given, with some explanations.



CHAPTER 3 INTERACTION WITH MATTER

Matter is composed of a number of different atoms in chemical combinations. A piece of iron is, in a macroscopic sense, solid and difficult to penetrate. However, at the dimensions relevant to ionizing radiation, matter is rather empty, without any major mechanical obstructions to prevent radiation from passing through.

Consider a neutron entering a piece of iron. The iron atom has a dimension of 10^-10 meters but the iron nucleus only 10^-15 meters, and the neutron itself is even smaller. The electrons have no known extension, and since the neutron has no charge it will not feel any electric forces. The neutron will probably pass through a thin iron foil without noticing the material. However, if the foil grows thicker, the likelihood increases that the neutron will hit one of the atomic nuclei, cause a nuclear reaction and transfer its energy to the matter. If we exchange the neutron for a proton, a dramatic change occurs. The proton will experience that matter is full of charge. Each orbital electron in the vicinity of the proton will feel a force of attraction, the Coulomb force (Figure 1).

Figure 1 A proton (mass M, velocity v, charge +q) that enters matter experiences an attractive force from all the orbital electrons (mass m, charge -q) in its vicinity; at distance r the force is F = q*q/r^2, and d is the closest distance of approach. We can say that we have a collision between two electrical fields in which energy is transferred to the orbital electron. Sometimes the energy exceeds the binding energy and we obtain a free electron - ionization. The same is true for an incoming electron or beta particle, which will experience a force of the same strength but repulsive.

Figure 2 An α-particle transfers so much energy that an ionization occurs. Since the α-particle and the orbital electron have opposite polarity, there is an attractive force in-between.


Figure 3 The electrical field of a beta particle collides with that of an outer orbital electron. So much energy is transferred that the orbital electron is dislodged (ionization). Since the incoming electron and the orbital electron have the same polarity, a repulsive force is created.

However, the electron and the proton differ considerably; they have different masses and different speeds. An electron with an energy of 1 MeV has a speed close to the speed of light (3*10^8 m/s). The time it will spend in the vicinity of an atom and an orbital electron is 10^-18 - 10^-19 seconds. The force will act during a very short time and the impulse ∫F*dt transferred from the incoming electron to the orbital electron will be small. A proton of the same energy (1 MeV) will have a speed about 20 times lower than the electron. The time spent in the vicinity of the orbital electrons will be correspondingly longer and the transferred impulse larger. From this simple consideration we can draw two important conclusions that are true for charged particles:

1. The heavier the particle is, the more energy is transferred per collision.
2. The lower the kinetic energy of the particle is, the more energy is transferred.

The expression "ionizing radiation" includes many types of "radiation". The most important kinds of radiation are classified according to how they transfer energy to matter in the scheme below (Figure 4).

IONIZING RADIATION
- Electromagnetic radiation, photons (X-rays and gamma)
- Particles (protons, electrons, alpha particles and neutrons)
  - Uncharged particles (neutrons)
  - Charged particles (protons, electrons and alpha particles)
    - Light charged particles (electrons)
    - Heavy charged particles (protons and alpha particles)

Figure 4 Schematic subdivision of the different types of ionizing radiation.

The direction of heavy, charged particles is scarcely affected by collisions with orbital electrons. They continue to travel mainly straight ahead to the end of their range, deviating only slightly from a straight line. Thus, heavy, charged particles have a well-defined range, and monoenergetic particles travel about the same distance in a given material. When heavy, charged particles are slowed down, their speed diminishes; consequently, they spend a longer time close to each orbital electron they pass and transfer more energy to it. The number of ion pairs formed per unit length then increases (see Figure 5).


Figure 5 Deceleration of an α-particle in air. At high energy the α-particle has high speed; the time it spends near each orbital electron is short, hence the transferred energy is small and the probability of ionization relatively low. When the energy diminishes, so does the velocity. The probability of ionization increases and a typical ionization peak at the end of the particle track (the Bragg peak) occurs.

If the incoming charged particle is an electron, its rest mass is the same as that of the orbital electron. When two particles of the same mass collide, a great deal of energy (almost all) can be transferred in a direct impact and the incoming particle can come to a complete stop. For two electrons we have a problem: they are identical. We cannot by any means know which one is the incoming particle and which one is the orbital electron. For practical reasons, therefore, we define the electron with the highest energy after the collision to be the incoming particle.

At each collision, light charged particles can change direction very markedly and by repeated large directional changes can even be scattered backwards to the direction of their origin (Figure 6). Hence, the range of electrons differs considerably. However, a maximum range can be given (Figure 7).

At these collisions, the energy that is transferred is frequently sufficient to ionize an orbital electron. The energy may even be so high that the freed electron, in turn, can cause new ionizations. Hence, part of the energy of the primary charged particle is deposited at some distance from the original track of the particle. Such secondarily formed electrons, δ-particles, can give rise to further ionization tracks.

Bremsstrahlung

When electrical charges are accelerated or decelerated, they can emit electromagnetic radiation. We call this roentgen radiation, continuous X-rays or bremsstrahlung. Synchrotron light is of the same nature.

Figure 6 Electrons that penetrate material collide with orbital electrons of the same mass as the incoming particles. They can undergo marked directional changes, including scattering in the direction opposite to the original course (back-scattered electrons), and may emit bremsstrahlung.

Figure 7 A schematic figure illustrating the range of monoenergetic electrons. Most of the electrons are absorbed within a distance shorter than the maximal range.

Figure 8a An incoming electron feels the Coulomb force from the nucleus and is deflected. The electron speed and kinetic energy decrease in the process - the electron decelerates. The difference in kinetic energy between the in-going and out-going electron is emitted as an X-ray photon.

Figure 8b Energy loss of electrons per track length, expressed in mass per unit area (MeV/(g*cm2)), as a function of electron energy (0.1-100 MeV), for water and lead. The separate curves for Coulomb collisions and bremsstrahlung show how the atomic number of the material influences the two ways by which electrons transfer energy to matter.

Heavy particles lose only little energy per collision and are only decelerated in small steps.

Thus, for heavy particles the loss by bremsstrahlung is of little importance. Electrons, on the other hand, often lose a great deal of their energy by deceleration in the strong fields of heavy atoms; hence, for them the loss by bremsstrahlung is of greater importance. The yield of bremsstrahlung increases with the electron energy and the atomic number of the absorber. For high electron energies and heavy elements (e.g. lead), the highest X-ray yield is obtained (Figure 8b).

Stopping power

Charged particles gradually lose their energy (they are decelerated) by many close "collisions" with orbital electrons in the irradiated material. The ability of different materials to decelerate charged particles varies considerably depending on the charge and energy of the charged particles, and on the density and atomic number of the irradiated material. The deceleration capacity attributed to a given material is denoted by how much energy (∆E) is expended in a given length (∆l) of the charged particle's track:

Stopping power = ∆E / ∆l

Stopping power is thus a parameter that describes the material's ability to stop charged radiation. For different combinations of particles and material, the ability to stop the radiation can be calculated and tabulated. Stopping power data for electrons in lead and water are given in Figure 8b. Stopping power in water is given for electrons, protons and α-particles in Figure 9.

Figure 9 Stopping power (MeV/cm) in water for electrons, protons and alpha particles as a function of particle energy.

Linear energy transfer, LET

The energy that charged particles lose by deceleration is transferred to the irradiated material in different ways. The energy can be deposited more or less concentrated around the primary particle's track. Direct ionization results in energy deposition in the vicinity of the particle's track. However, secondary particles with no charge, such as bremsstrahlung and neutrons (in nuclear reactions), may deposit their energy far away from the particle track. To understand the effect of radiation on living material, it is also important to have a measure of how densely the radiation's energy is deposited around the particle's track. Hence, the concept Linear Energy Transfer (LET) was introduced. It denotes the amount of energy the specified particle emits per unit track length that is deposited near the track.

Figure 10 Illustration of the ionization pattern around a 5.3 MeV α-particle track and a cylinder (diameter 2r) representing the DNA double helix. Each black spot represents one ionization. The ionization density is highest around the track.

The restricted stopping power, or linear energy transfer LETr, is given as the energy per track length deposited inside a cylinder with radius r. The radius r can vary, which gives different LET values. Often, the radius r is set to the diameter of the DNA double helix. Instead of giving the radius in nanometers, it is often given as the energy of an electron with the range r. When living tissue is irradiated, LET usually denotes the energy absorbed per unit length around the particle's track within a cylinder whose diameter is chosen to be of the same order as a biologically critical structure, e.g. the cell nucleus or the DNA molecule.

High LET and low LET

It is of interest to understand how many ionizations different types of radiation create in the DNA molecule. A rough estimate is made in the following. The stopping power for electrons, protons and alpha particles (see Figure 9) is recalculated and expressed as the number of ionizations per nm (Figure 11). It is seen that for electrons of all energies the number is below 0.1 ionizations per nm. The more correct use of LET, rather than stopping power, gives an even lower number. Since the DNA molecule has a diameter of 2 nm, this indicates that it is very unlikely that electrons of any energy will cause two ionizations so close together that a double strand break occurs. High-energy protons (E > 10 MeV) show the same pattern, but low-energy protons (< 1 MeV) yield more than 1 ionization/nm. Alpha particles over a wide range of energies also yield more than 1 ionization/nm. If the probability of causing a double strand break is low, we speak of low-LET radiation; if the probability is high, of high-LET radiation.
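The conversion from stopping power to the ionization density used in Figure 11 can be sketched as follows. The mean energy expended per ion pair (W) is an assumption here (a value of roughly 30 eV is often quoted for water-like media), and the stopping-power inputs are illustrative values read only roughly from Figure 9.

```python
# Rough conversion: ionizations per nm = stopping power / W
W_EV_PER_ION_PAIR = 30.0        # assumed mean energy per ion pair, eV

def ionizations_per_nm(stopping_power_mev_per_cm):
    ev_per_nm = stopping_power_mev_per_cm * 1e6 / 1e7   # MeV/cm -> eV/nm
    return ev_per_nm / W_EV_PER_ION_PAIR

# Illustrative stopping powers in water, read roughly from Figure 9:
print(ionizations_per_nm(2.0))     # ~1 MeV electron (~2 MeV/cm)    -> ~0.007 /nm
print(ionizations_per_nm(800.0))   # ~0.1 MeV proton (~800 MeV/cm)  -> ~3 /nm
print(ionizations_per_nm(900.0))   # ~5 MeV alpha (~900 MeV/cm)     -> ~3 /nm
```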

Figure 11 Stopping power in water, expressed as the number of ionizations per nanometer, for electrons, protons and alpha particles as a function of particle energy.

Photon interactions

Non-charged particles such as photons and neutrons do not feel the Coulomb force and cannot transfer their energy to matter in this way. However, in occasional interactions they may transfer all or much of their energy to charged particles in the irradiated material. These secondary charged particles (photons give rise to secondary electrons, neutrons chiefly to secondary protons) can then continue to ionize through the charge interaction. Since the ionization in the primary interactions is small compared with the ionization caused by the secondary radiation, we sometimes refer to photons and neutrons as indirectly ionizing radiation. Only the interaction of photons is described here.

Regarding the interaction of photons with matter, three processes can be distinguished. The photoelectric effect implies that the photon is absorbed by a bound electron, mainly a K-shell electron, in one of the innermost atomic shells. This electron, the photoelectron, obtains a kinetic energy that corresponds to the photon energy minus the electron's binding energy in the shell. An electron from one of the outer shells then fills the "hole" in the electron shell, and characteristic X-rays (or Auger electrons) are emitted. The energy balance in the transfer is


Figure 12 The photoelectric effect. An incoming photon (energy h*ν) interacts with a K-shell electron. All the photon's energy is used to dislodge the electron (the photoelectron) and give it kinetic energy. When the "hole" left by the electron is filled with an electron from an outer shell, energy is liberated as a photon (characteristic X-radiation) or is transferred to another orbital electron (an Auger electron) that leaves the atom.

hν = Eel + Be

where hν is the photon energy, Eel is the electron's kinetic energy and Be is the binding energy. The photoelectric effect is not significant in air and soft tissue (which includes material of low atomic number) for photon energies over 200 keV. In the Compton effect, the electron binding energy is always insignificant in relation to the photon energy (Figure 13). The Compton effect can be interpreted as an impact process in which the photons collide with a free electron.
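The collision picture just described can be written out explicitly. Applying conservation of energy and momentum to the photon-electron collision gives the standard Compton formula for the scattered photon energy; the sketch below simply evaluates it (electron rest energy 511 keV), and the 1000 keV photon is an arbitrary example value.

```python
import math

ELECTRON_REST_KEV = 511.0

def compton_scattered_energy(e_photon_kev, angle_deg):
    """Energy (keV) of the Compton-scattered photon at a given scattering angle."""
    angle = math.radians(angle_deg)
    return e_photon_kev / (1.0 + (e_photon_kev / ELECTRON_REST_KEV) * (1.0 - math.cos(angle)))

# A 1000 keV photon scattered at 90 and 180 degrees:
for angle in (90, 180):
    e_scattered = compton_scattered_energy(1000.0, angle)
    print(f"{angle:3d} deg: scattered photon {e_scattered:.0f} keV, "
          f"electron {1000.0 - e_scattered:.0f} keV")
```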

Figure 13 The Compton effect. A photon (energy h*ν) interacts with an electron in an outer orbit. Only part of the photon's energy is used to dislodge the electron (e-) and give it kinetic energy. The remaining energy is carried away by a new photon that has a different direction and lower energy than the incoming photon.

Hence, in this process the main electrons involved are the outer, loosely bound electrons. The physical laws of impact apply, since the system's total energy and momentum are the same before and after the impact. This results in a photon with lower energy and hence greater wavelength. The energy and the angle of the scattered photon and electron can be calculated by using the laws of conservation of energy and momentum. The probability of Compton scattering per electron or per unit mass is about the same for all materials, but it decreases somewhat with increasing photon energy. The Compton effect is significant for photon energies between 100 keV and 10 MeV. The part of the incoming photon energy which is transmitted to the Compton electron is absorbed within a tiny volume around the region where the process occurs. The energy transmitted to the scattered photon, however, is usually not absorbed in the immediate vicinity; the photon can cover a considerable distance before being absorbed.

The pair production effect (Figure 14) signifies that the photon interacts with the electrical field in the neighborhood of an atom's nucleus. The photon disappears and its energy is converted into an electron and a positron. The electron mass is equivalent to 0.511 MeV. Since two electron masses are created, the incoming photon must have an energy of at least 1.02 MeV. Surplus energy is given as kinetic energy to the two particles.

Figure 14 Pair production. The incoming photon energy is so great that it can create two new particles, an electron (e-) and a positron (e+), while the nucleus recoils. The entire energy of the incoming photon is required to create the mass of these particles (2*511 keV) and to give them their kinetic energy.

A positron can be said to be a positive electron. It has the same mass as an electron but a positive, not a negative, unit charge. On decelerating in matter, the positron behaves in the same way as the electron, except that the positron at the end of its track attracts an electron and is "annihilated" by it. This means that the energy equivalent to two electron rest masses (2*511 keV) is converted into two 511 keV photons, which are emitted in two directly opposite directions. This radiation is called annihilation radiation.

Pair production and annihilation are thus two diametrically opposite processes. In pair production, a photon is lost and two light particles are formed, a negatron (an ordinary electron) and a positron. Both of these particles are decelerated, whereupon the positron is annihilated by an orbital electron under the emission of two 511 keV photons.

Figure 15 The probability of the different photon interactions depends on the photon energy and the atomic number of the absorber. The figure shows the combinations of photon energy (0.01-100 MeV) and atomic number (0-100) for which the probability of the Compton effect equals that of the photoelectric effect (the curve to the left) and for which the probability of the Compton effect equals that of pair production (the curve to the right).

The relative significance of the different processes can thus be given as a function of photon energy and absorbing material. For water and soft tissue, with "effective atomic numbers" lower than 10, the photoelectric effect dominates for photon energies < 100 keV, the Compton effect for energies > 100 keV and < 10 MeV, and pair production for photon energies > 10 MeV. For low photon energies and for high-Z materials, where the photoelectric effect dominates, absorption discontinuities occur (Figure 16) at photon energies equal to the binding energies of the inner shell electrons. Just below such an energy, the photon energy is too low to interact with the electrons in that shell, and they do not contribute at all to the attenuation; just above it, the probability of photon absorption is very high.


Figure 16 The probability of the photoelectric effect in lead as a function of photon energy (0-400 keV). Discrete jumps in the curve are seen at energies corresponding to the binding energies of the orbital electrons.

Heavy elements, where the binding energy of the K-shell electrons is high, are therefore likely to interact also with high-energy photons. For example, in lead (atomic number 82) the photoelectric effect dominates for photons with energies < 0.5 MeV, the Compton effect at photon energies > 0.5 MeV and < 5 MeV, and pair production for photons with energies > 5 MeV.

Attenuation of photons

For photons, as for neutrons, matter looks empty. Photons can pass a long way through a piece of material before they interact; they may even penetrate a piece of matter without any interaction. However, when photons do interact they lose all or most of their energy, which is transferred to charged particles. The formula for the attenuation (damping) of photon radiation in a material is

N = No e^(-µx)

where

N is the number of non-scattered photons that leave the absorber,
No is the number of photons incident on the absorber,
µ is the linear attenuation coefficient (m-1), a measure of the probability per unit length for a photon to interact, and
x is the thickness of the absorber.

Figure 17 A narrow beam of photons, No, hits a piece of material with thickness x. Some photons are absorbed or scattered out of the beam. The number of photons hitting the detector is then N.

The linear attenuation coefficient (µ) is the sum of the contributions from the photoelectric effect (τ), the Compton effect (σ) and the pair production effect (κ):

µ = τ + σ + κ

The photoelectric effect dominates at low photon energies, the Compton effect at moderate energies and the pair production effect at high photon energies. The contributions of the different attenuation coefficients in aluminum are shown in Figure 18.

Figure 18 The probability in aluminum for the three main ways in which photons interact, as a function of photon energy: τ = the photo effect, σ = the Compton effect, κ = the pair production effect. µ, the sum of the three separate effects, gives the total attenuation coefficient.

The thickness of absorber that decreases the number of incoming photons by a factor of two is of special interest and has a special name, the half-value layer (HVL). It is defined by

1/2 No = No e^(-µ*HVL)

and is related to the attenuation coefficient in the following way:

HVL = ln 2/µ = 0.693/µ

Unlike charged particles, photon radiation has no fixed range or maximum range. The interaction of photon radiation leads to an exponential attenuation, and there is always a certain finite probability that some photons will penetrate even thick material. What we can calculate is the fraction of photons that will interact in a certain thickness of irradiated material.

So far, only the linear attenuation coefficient has been mentioned. It is often more convenient to deal with the attenuation coefficient per unit mass, which is given by µ divided by ρ, the density of the material. This gives the mass attenuation coefficient, denoted µ/ρ.
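A small sketch of the attenuation relations above. The numerical value of the mass attenuation coefficient is only an assumed, illustrative value of the kind read from the figures in this chapter; it is not meant as reference data.

```python
import math

def transmitted_fraction(mu_rho_m2_per_kg, density_kg_per_m3, thickness_m):
    """N/No = exp(-(mu/rho)*rho*x), the narrow-beam attenuation law."""
    return math.exp(-mu_rho_m2_per_kg * density_kg_per_m3 * thickness_m)

def half_value_layer_m(mu_rho_m2_per_kg, density_kg_per_m3):
    """HVL = ln2 / mu, with mu = (mu/rho)*rho."""
    return math.log(2) / (mu_rho_m2_per_kg * density_kg_per_m3)

# Assumed example: mu/rho ~ 0.006 m2/kg for ~1 MeV photons in lead (illustrative)
mu_rho, rho = 0.006, 11300.0                              # m2/kg, kg/m3
print(half_value_layer_m(mu_rho, rho) * 100, "cm")        # HVL ~ 1 cm
print(transmitted_fraction(mu_rho, rho, 0.05))            # 5 cm of lead -> ~3 %
```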

When using the mass attenuation coefficient, it is necessary to multiply by both the thickness of the material and its density to obtain a dimensionless exponent in the expression:

N = No e^(-(µ/ρ)*ρ*x)

where µ has the dimension m-1, ρ kg/m3, (µ/ρ) m2/kg and x m; hence the exponent has no dimension. Variations of

the mass attenuation coefficient for lead, copper, water and air are illustrated in Figure 19.


Figure 19 Mass attenuation coefficients (m2/kg) for lead, copper, water and air as a function of photon energy.

There are other types of attenuation coefficients besides the mass and linear coefficients. An example is the mass energy absorption coefficient, µen/ρ. Scattered photons (X-ray, Compton and annihilation photons) can travel some distance from their production site before depositing their energy in the form of secondary electrons; they may even escape the absorbing material completely. Secondary electrons (photo-, Auger-, Compton- and pair-electrons), however, expend their energy in the immediate vicinity of the primary interaction and contribute to the dose absorbed in this region (absorbed dose is defined in the next section). To calculate the dose absorbed via these charged particles, the mass energy absorption coefficient µen/ρ can be used. It is the product of the mass attenuation coefficient µ/ρ and the fraction f of the photons' energy that is converted to secondary electrons which lose their energy locally:

µen/ρ = (µ/ρ) f

It is evident from the definition that µen/ρ is less than µ/ρ (µen/ρ < µ/ρ).

29

CHAPTER 4 DOSIMETRY

Absorbed dose

Ionizing radiation carries energy. In different processes this energy is transferred to matter. Some of this energy is deposited locally; some travels away and may even escape the irradiated mass volume. Since some ionizing radiation can participate in nuclear reactions, energy may also be created (or lost) inside the irradiated volume.

Figure 1 Absorbed energy ∆E in a mass element ∆m, with incoming energy Ein and outgoing energy Eout.

Generally we may write

∆Ei = Ei,in - Ei,out + Qi    (1)

where
Ei,in = energy of an incoming ionizing particle (excluding rest mass),
Ei,out = sum of the energies of all ionizing particles, created by the incoming particle, that leave the place of interaction (excluding rest mass), and
Qi = accounts for changes in the rest mass energy of the nucleus and of all elementary particles involved in the interaction.

The energy ∆E imparted to a volume of matter with mass ∆m (this volume is regarded as the place of interaction) is then

∆E = Σ ∆Ei    (2)

and the specific energy imparted, which is also called the absorbed dose, is

D = ∆E/∆m    (3)

The absorbed dose D has the unit gray (Gy), equal to 1 J per kg. An old unit is

1 rad = 0.01 Gy

One may reflect on the size of the unit Gy. A whole-body absorbed dose of 4 Gy to man is a lethal dose, but raises the body temperature by only about 0.001 °C. Obviously, 1 Gy is a rather large unit. Local absorbed doses of several Gy are given in radiation therapy, whereas in laboratory work and in radiation protection, mGy and µGy are more commonly used.

Relative Biological Effect, RBE

The absorbed dose defined in equation 3 is a macroscopic quantity representing the mean energy imparted to a rather large mass volume. It does not consider the stochastic ionization pattern of the energy delivery. However, low-LET and high-LET ionizing radiation can, for the same absorbed dose, cause quite different biological effects due to their different ability to cause single and double strand breaks. Another concept, the Relative Biological Effectiveness (RBE), is introduced to describe this situation.

Figure 2 A comparison of two different radiation qualities, α and γ. A chosen biological system is irradiated and a biological effect is measured (in Figure 2 the number of chromosome aberrations, as a function of absorbed dose in Gy). The two radiation qualities are used to irradiate the system so that the same biological effect is obtained, with the corresponding doses Dα and Dγ indicated. Photon radiation is always used as the reference, since all the electrons it produces are low-LET radiation. The Relative Biological Effectiveness (RBE) is then defined as the ratio between the absorbed doses delivered:

RBE = Dγ/Dα    (4)

Photon irradiation (X-rays > 2-3 MeV or 60Co gamma rays) is usually used as a reference. This is because photons produce a wide range of electrons in a reproducible way, and electrons of all energies can be considered as low-LET radiation. Hence, one should expect RBE values close to one for all photon irradiation. This is also found to be true, except perhaps for very low-energy X-ray irradiation, since low-energy electrons (< 100 keV) have a somewhat higher stopping power than electrons of higher energies (> 100 keV).

Quality factor, WR

The RBE values vary with radiation quality and the chosen biological system. In radiation therapy, where acute radiation effects are expected, the RBE value has to be carefully measured for each situation. The biological effects of neutrons, protons and heavy ions have to be compared with established photon and electron therapy not just in one biological system, but in many. In radiation protection, where radiation doses are lower and mainly late effects (such as the risk of induced cancer) are expected, we may generalize somewhat more. The measured RBE values vary from 1 to 20 as a function of the stopping power of the radiation. A weighted curve of measured RBE values (WR) that can be used for radiation protection purposes is shown in Figure 3.

Figure 3 The quality factor WR as a function of stopping power in water (keV/µm); WR rises from 1 at low LET towards about 20 at high LET.

Dose equivalent

If we multiply the absorbed dose, D, with WR we obtain a new dose quantity, the dose equivalent, which includes both the energy concentration of the radiation and its capability to cause late biological effects:

H = WR * D    (5)

The unit for the dose equivalent is sievert (Sv) if the absorbed dose is expressed in Gy.

Effective dose

The radiation sensitivity of different organs in the body varies. Tissues containing proliferating cells, such as the bone marrow and the intestine, are generally more radiation-sensitive than e.g. the liver. If the radiation dose is unevenly distributed in the body, we need to consider this when the risks of biological damage are estimated. For radiation protection purposes, relative values for different organs are given below (Table 1). These weighting factors mainly relate to the ability of ionizing radiation to create late effects such as radiation-induced cancer.

Table 1 Weighting factors recommended by ICRP 30 for the calculation of effective dose.

Organ          Weighting factor, WT
Gonads         0.25
Breast         0.15
Red marrow     0.12
Lung           0.12
Thyroid        0.03
Bone surface   0.03
Remaining      0.30

We can then define the effective dose as

He = Σ WT * WR * Dorg    (6)

where we sum over the different organs. The unit for the effective dose is also sievert (Sv) if the absorbed dose, D, is expressed in Gy. The calculation of the effective dose is illustrated in the following example.
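A minimal sketch of equation (6). The organ contributions and numbers below simply mirror the situation in Example 1 that follows; they are not additional data.

```python
def effective_dose(contributions):
    """He = sum over organs of WT * WR * D_organ (Sv), equation (6).
    contributions: list of (absorbed dose in Gy, WR, WT)."""
    return sum(w_t * w_r * dose_gy for dose_gy, w_r, w_t in contributions)

# As in Example 1 below: whole-body gamma (WT summing to 1) vs. alpha dose to the lung
gamma_whole_body = [(1.0, 1, 1.0)]
alpha_to_lung    = [(1.0, 20, 0.12)]
print(effective_dose(gamma_whole_body))   # 1.0 Sv
print(effective_dose(alpha_to_lung))      # 2.4 Sv
```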

Example 1. We want to compare the risk of late effects for two persons exposed to radiation of different quality and distribution. One is exposed to 1 Gy of gamma rays to the whole body, the other to 1 Gy of alpha particles to the lung by inhaling radon gas.

Type of irradiation    Absorbed dose (Gy)   WR   WT     Effective dose (Sv)
γ (whole body)         1                    1    1      1
α (radon to the lung)  1                    20   0.12   2.4

Comments: WR for α is 20, and the radon gas gives a local irradiation of the lung (WT = 0.12). The radon-exposed person thus has a 2.4 times higher probability of expressing late radiation effects.

The effective dose is a theoretical concept used to calculate the risks of various types of irradiation situations. In practical radiation protection work one needs to define other, measurable quantities. Radiation protection instruments are calibrated against a sphere of tissue-equivalent material with a diameter of 30 cm. The dose equivalents at depths of 10 mm and 0.07 mm below the surface give

Hp(10) = individual deep dose equivalent
Hs(0.07) = individual surface dose equivalent

Relation between dose and radioactivity

The energy emitted in radioactive decay can, of course, be expressed in units of Gy and Sv. However, since the radioactivity is usually known in Bq and the energy of the emitted particles in MeV, we need to relate the absorbed dose from radionuclides to these units. The relation between the two energy units joule and eV is

1 eV = 1.6*10^-19 J

From this relation we can derive a simple formula for particle irradiation, such as that from pure beta-emitting radionuclides:

Dint (Gy) = 1.6*10^-13 * n * Eβ / m    (7)

where Eβ is the mean energy in MeV of the beta particles emitted per decay, n is the total number of decays in the mass m, and m is expressed in kg.

For radioactive sources, the absorbed dose rate, rather than the absorbed dose, is of interest. Eq. 7 can easily be rearranged to

dH/dt = 1.6*10^-13 * WR * n/h * Eβ / m    (8)

where
dH/dt is the dose equivalent rate in Sv/h,
WR is the weight factor for radiation quality,
n/h is the number of absorbed particles per hour,
Eβ is the mean energy in MeV per particle, and
m is the mass in kg.

The weight factor WR is 1 for radioactive decays emitting only beta particles, electrons and photons.

A more general treatment of the radioactive decay is given in the following. Consider a radioactivity concentration A in an organ. The radioactive decay emits particles of different types i = 1, 2, 3 … The mean energy of particle type i emitted per decay is then

∆i = k ni Ei

where
k is a unit-dependent constant,
ni is the mean number of particles of type i per decay, and
Ei is the mean energy of particle i per decay.

∆i has the dimension of energy and can, by the proper choice of k, be expressed in joules (J). However, usually the dimension is given as (kg * Gy)/(Bq * h) or (g * rad)/(µCi * h).

The time integral of the radioactivity concentration in the organ, ∫A(t)dt, has the dimension Bq/kg * h. If we multiply the time integral by ∆i we get the absorbed dose to that organ, provided that the absorption of the emitted particles is 100 % localized in the organ.

Figure 4 Distributed radioactivity in the body ("sources" in several organs) gives a rather complicated irradiation pattern. The radioactivity gives radiation that is locally absorbed (low-energy beta, electrons and X-rays) and radiation that is only partly locally absorbed (gamma, high-energy beta and electrons).

If the emitted particle is not locally absorbed, we can give a factor fi (the fraction locally absorbed) by which we multiply the ∆i-value. In the body, the situation becomes rather complicated. The radiation may irradiate other organs (Figure 4) or may completely escape the body. A complete absorbed dose calculation after the administration of radioactivity in vivo also has to consider the kinetics of the labeled compound and its metabolism and excretion. To facilitate dosimetry calculations in nuclear medicine and in radiation protection, a dosimetry concept (MIRD) has been developed. MIRD stands for Medical Internal Radiation Dose. An expert committee has agreed on certain standard ways to express and calculate data. Measured physiological and kinetic data from human studies are used as input into theoretical kinetic models, which allows a detailed calculation of integrated radioactivity concentrations in different organs. Mathematical standard phantoms have been derived which allow the use of Monte Carlo calculation techniques.


Figure 5 A MIRD phantom with standardized organ sizes and positions. Each organ is given a mathematical definition that makes it easy to represent in a computer.

The ∆i-values can be calculated for each radionuclide using nuclear spectroscopic data. The fraction of the emitted energy absorbed in the source organ, and the fractions absorbed in other organs, can be calculated as functions of particle type and energy using Monte Carlo techniques. From physiological models, the time-integrated radioactivity curves can be calculated for each organ. The dose equivalent can then be calculated for each organ, and the effective dose obtained by summing over all organs. Using such techniques, one can rather precisely estimate the effective dose corresponding to a given radioactivity of a certain radionuclide. However, one has to remember that the MIRD concept still uses a number of approximations, such as standardized organ geometry and a homogeneous distribution of the radioactivity in each organ.
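The essence of the calculation described above can be condensed into a small sketch: the absorbed dose to an organ is the time-integrated activity multiplied by the energy emitted per decay and the fraction of that energy absorbed in the organ, divided by the organ mass. All numbers below are placeholders for illustration, not MIRD reference values.

```python
MEV_TO_J = 1.602e-13   # joules per MeV

def organ_dose_gy(cumulated_activity_bq_s, mean_energy_mev_per_decay,
                  absorbed_fraction, organ_mass_kg):
    """D = (cumulated activity * energy per decay * absorbed fraction) / organ mass."""
    energy_j = cumulated_activity_bq_s * mean_energy_mev_per_decay * MEV_TO_J
    return energy_j * absorbed_fraction / organ_mass_kg

# Placeholder numbers, for illustration only:
print(organ_dose_gy(cumulated_activity_bq_s=1e12,   # time-integrated decays in the organ
                    mean_energy_mev_per_decay=0.3,
                    absorbed_fraction=0.9,
                    organ_mass_kg=1.5))             # ~0.03 Gy
```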

The effective dose has been calculated for many radionuclides using the MIRD concept. The International Commission on Radiological Protection (ICRP) has introduced a radiation protection quantity, ALI (Annual Limit on Intake), which has the unit Bq. One ALI is the amount of radioactivity that corresponds to an effective dose of 50 mSv.

Table 2 ALI values (Bq) for some common radionuclides. The ALI values correspond to an effective dose of 50 mSv.

Nuclide  ALI       Nuclide  ALI       Nuclide  ALI
3H       3*10^9    65Zn     1*10^7    129I     2*10^5
14C      9*10^7    69mZn    2*10^8    130I     1*10^7
18F      2*10^9    67Ga     3*10^8    131I     1*10^6
22Na     2*10^7    68Ga     6*10^8    132I     1*10^8
24Na     1*10^8    73As     6*10^7    129Cs    9*10^6
32P      1*10^7    74As     3*10^7    130Cs    2*10^9
33P      1*10^8    75Se     2*10^7    131Cs    8*10^8
35S      8*10^7    76Br     1*10^8    134Cs    3*10^6
36Cl     9*10^6    77Br     6*10^8    134mCs   4*10^9
38Cl     6*10^8    82Br     1*10^8    137Cs    4*10^6
42K      2*10^8    81mRb    9*10^9    131Ba    1*10^8
43K      2*10^8    81Rb     1*10^9    133mBa   9*10^7
45Ca     3*10^7    86Rb     2*10^7    135mBa   1*10^8
47Ca     3*10^7    88Rb     7*10^8    140La    2*10^7
51Cr     7*10^8    89Rb     1*10^9    169Yb    3*10^7
52Mn     3*10^7    85mSr    8*10^9    192Ir    8*10^6
52mMn    1*10^9    85Sr     6*10^7    198Au    4*10^7
54Mn     3*10^7    87mSr    1*10^9    197Hg    2*10^8
56Mn     2*10^8    89Sr     5*10^6    203Hg    2*10^7
52Fe     3*10^7    90Sr     1*10^5    201Tl    6*10^8
55Fe     7*10^7    90Y      2*10^7    204Tl    6*10^7
59Fe     1*10^7    99Mo     4*10^7    210Pb    9*10^3
56Co     7*10^6    99mTc    3*10^9    212Pb    1*10^6
57Co     2*10^7    109Cd    1*10^6    210Po    2*10^4
58Co     3*10^7    115Cd    3*10^7    226Ra    2*10^4
60Co     1*10^6    111In    2*10^8    232Th    4*10^1
63Ni     6*10^7    113mIn   2*10^9    238U     2*10^3
64Cu     4*10^8    124Sb    9*10^6    241Am    2*10^2
67Cu     2*10^8    123I     1*10^8    244Cm    4*10^2
62Zn     5*10^7    125I     1*10^6    252Cf    1*10^3

Approximate dose calculations

In practical laboratory work, simpler calculation methods are needed to obtain


approximate answers to radiation protection problems and to plan experiments. A number of simple relations have been derived from equation 8 for different, typical irradiation situations.

Dose rate on the surface of a beta source

The irradiation situation is defined in Figure 6.

Figure 6 A thin-walled tube contains a solution with a radioactivity concentration of C Bq/kg. What is the dose rate Hβ/h on the surface?

We use equation 8,

dH/dt = 1.6*10^-13 * WR * n/h * Eβ / m

where WR equals one for beta particles, Eβ is about Emax/3 and n/h = C*m*fi*3600. We then obtain

dH/dt = 1.92*10^-10 * C * fi * Emax    (9)

This is the dose rate if all the emitted particles are absorbed. However, on the surface of the source about half of the particles are emitted inwards and do not contribute to the dose equivalent. Therefore, we obtain

dH/dt = 10^-10 * C * fi * Emax    (10)

where
dH/dt is the dose equivalent rate in Sv/h,
C is the radioactivity concentration in Bq/kg,
Emax is the maximum beta energy in MeV, and
fi is the number of emitted betas per decay that have the maximum energy Emax.
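Equation (10) is easy to evaluate. The sketch below uses 32P (Emax = 1.71 MeV, one beta per decay) as an illustration; the activity concentration is an assumed example value, not data from this text.

```python
def beta_surface_dose_rate_sv_per_h(conc_bq_per_kg, f_i, e_max_mev):
    """dH/dt = 1e-10 * C * fi * Emax, equation (10)."""
    return 1e-10 * conc_bq_per_kg * f_i * e_max_mev

# Assumed example: a 32P solution of 1 MBq/g = 1e9 Bq/kg
print(beta_surface_dose_rate_sv_per_h(1e9, 1.0, 1.71))   # ~0.17 Sv/h at the surface
```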

Point source

A common situation is that you work with a concentrated radioactive source that can be regarded as a point source. At some distance, this source gives a dose rate that is related to the amount of radioactivity and to the type and energy of the emitted particles.

A beta point source

Figure 7 A schematic figure of the radiation geometry around a point source P, which could be either a beta source or a gamma source. The radioactivity is A Bq. The distance from the point source to the body is d meters. The emitted particles are not absorbed until they hit the body surface, where they deposit energy ∆E in a thin layer ∆d.

Generally, the emitted particles are distributed over a sphere with the surface S = 4πd^2. They lose some of their energy, ∆E, in a thin layer ∆d with density ρ; this thin layer has the mass 4π*d^2*∆d*ρ. The number of particles emitted during one hour is f*A*3600, WR = 1 for beta and gamma, and the density of tissue is 1000 kg/m3. If we put these values into equation 8 we obtain

dH/dt = 1.6*10^-13 * f*A*3600*∆E/(4π*d^2*∆d*ρ)

Rearranging this equation gives

dH/dt = 0.458*10^-13 * (f*A/d^2) * ∆E/∆d    (11)

∆E/∆d is the stopping power of the electrons. As seen from Figure 8, the stopping power for high-energy electrons is roughly constant, about 2 MeV/cm (200 MeV/m). Only at low energies (< 300 keV) does the stopping power increase. However, these energies are not interesting from the external radiation point of view, since their depth of penetration is low (< 0.6 mm). Only the high-energy electrons can penetrate skin and protective gloves and represent a biological hazard.

Figure 8 Stopping power (MeV/cm) for electrons as a function of particle energy (MeV). Electrons with energies > 300 keV have about the same stopping power, about 2 MeV/cm; at lower energies the stopping power increases.

If we put the value 200 MeV/m of the electron stopping power into equation (11) we obtain the approximate relation

dH/dt = 10^-11 * A*f/d^2    (12)

where the dose rate is given in Sv/h, A in Bq and d in meters.
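A sketch of equation (12) with assumed example numbers (a 100 MBq pure beta emitter, one high-energy beta per decay, handled at 30 cm):

```python
def beta_point_source_dose_rate_sv_per_h(activity_bq, betas_per_decay, distance_m):
    """dH/dt = 1e-11 * A * f / d^2, equation (12), valid for high-energy betas."""
    return 1e-11 * activity_bq * betas_per_decay / distance_m**2

# Assumed example: 100 MBq, one beta per decay, at 0.3 m
print(beta_point_source_dose_rate_sv_per_h(1e8, 1.0, 0.3))   # ~0.011 Sv/h
```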

A gamma point source

If we, in Figure 7, change the external irradiation from beta to gamma, we have to replace the stopping power for charged particles with something relevant for photons. A radioactive decay can also emit gamma rays with different energies (Ei) and numbers per decay (fi), which changes equation (11) accordingly. A gamma ray γi with energy Ei has some probability of losing its energy when passing through the thin layer ∆d. We denote the total energy entering the surface S with Ein,i

and the energy coming out from the surface with Eout,i. We can then derive the following relation:

Eout,i = Ein,i e^(-µen,i*∆d)

(13)

The energy absorption coefficient, µen,i, used in the equation is related to the transfer of energy from photons to charged particles that are locally absorbed. Since µen,i*∆d is a small value, we can approximate equation 13 to

Eout,i = Ein,i - Ein,i*µen,i*∆d    (14)

The energy released in the tissue volume is then

∆Ei = Ein,i - Eout,i = Ein,i*µen,i*∆d    (15)

and

∆Ei/∆d = Ein,i*µen,i    (16)

From equation (11) we then obtain

dHi/dt = 4.58*10^-14 * A*fi*µen,i*Ei/d^2    (17)

where
Ei is the energy of gamma i in MeV,
fi is the number of gamma i emitted per decay, and
µen,i is the energy absorption coefficient in m-1.

We can gather the decay-dependent parameters into a constant

Γ = Σ Γi = 4.58*10^-14 * Σ Ei*fi*µen,i    (18)

and rewrite equation (17) as

dH/dt = Γ * A / d^2    (19)

where the equivalent dose rate dH/dt is in Sv/h, the radioactivity A is in Bq, and the distance d is in meters. The Γ-constant has been calculated for a number of radionuclides, as listed in Table 3.

Table 3 Gamma constants Γ for some important radioactive nuclides, in units of 10^-12 Sv*h^-1*Bq^-1*m^2.

Nuclide   Γ        Nuclide   Γ
22Na      0.32     99mTc     0.016
24Na      0.47     103Ru     0.090
28Al      0.23     110mAg    0.38
38Cl      0.19     122Sb     0.064
42K       0.035    124Sb     0.026
46Sc      0.29     125I      0.037
51Cr      0.0043   131I      0.056
54Mn      0.13     134Cs     0.23
56Mn      0.22     137Cs     0.086
59Fe      0.16     140Ba     0.33
58Co      0.14     140La     0.30
60Co      0.34     141Ce     0.013
65Ni      0.083    160Tb     0.14
64Cu      0.032    177Lu     0.0027
65Zn      0.073    182Ta     0.18
69m-69Zn  0.064    187W      0.080
75Se      0.051    192Ir     0.12
76As      0.064    194Ir     0.040
82Br      0.039    198Au     0.062
86Rb      0.013    201Tl     0.012
95Zr      0.11     203Hg     0.035
99Mo      0.038    226Ra     0.22

The energy absorption coefficient used in the calculation of the Γ-constant gives the transfer of energy from photons to charged particles that are locally absorbed (Figure 9). The following example demonstrates how the Γ-constant is calculated.

Example 2. Calculate the Γ-constant for 60Co, which has two gamma rays, 1.173 and 1.332 MeV. Both gamma rays are emitted in close to 100 % of the decays. The mass energy absorption coefficient in water obtained from Figure 9 is 0.03 cm2/g. The density of water is 1 g/cm3, which gives an energy absorption coefficient of 0.03 cm-1 or 3 m-1. We use formula (18) to obtain


Γ = 4.58*10^-14 * 2.505 * 3 = 0.34*10^-12 Sv*h^-1*Bq^-1*m^2
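Equation (18) can be evaluated directly once the gamma energies, emission probabilities and energy absorption coefficients are known. The sketch below only reproduces the 60Co numbers from Example 2; for any other nuclide the input list would have to be taken from decay data and from Figure 9.

```python
def gamma_constant_sv_h_bq_m2(gammas):
    """Gamma = 4.58e-14 * sum(E_i * f_i * mu_en_i), equation (18).
    gammas: list of (energy in MeV, photons per decay, mu_en in 1/m)."""
    return 4.58e-14 * sum(e * f * mu_en for e, f, mu_en in gammas)

# 60Co as in Example 2: two gammas, ~100 % each, mu_en ~ 3 1/m in water
co60 = [(1.173, 1.0, 3.0), (1.332, 1.0, 3.0)]
print(gamma_constant_sv_h_bq_m2(co60))   # ~0.34e-12 Sv*h-1*Bq-1*m2
```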

Some considerations about the attenuation coefficient

The theoretical background for the attenuation coefficient is given in Chapter 3. The diagrams in Figure 9 and Figure 10 give somewhat simplified information, with two curves that each represent the sum of a number of contributing processes. Here we give some examples of how these curves are used in practical calculations. For each element we need a special set of data; here we only deal with two materials, water and lead.

Figure 9 Mass attenuation coefficients (cm2/g) for photons in water as a function of photon energy (MeV). Two different coefficients are given: the upper curve is the total mass attenuation coefficient and the lower one the mass energy absorption coefficient.

Figure 10 Mass attenuation coefficients (cm2/g) for photons in lead as a function of photon energy (MeV). Two different coefficients are given: the upper curve is the total mass attenuation coefficient and the lower one the mass energy absorption coefficient.

In the calculation of the Γ-constant for 60Co we used the lower curve in Figure 9. Water is usually a good approximation for tissue, and since we wanted to know the amount of locally deposited energy, the mass energy absorption coefficient was used in that case.

If we would like to calculate the thickness of a lead container needed to decrease the radiation from a strong 60Co source by a factor of 100 or 1000, we have another situation.

Figure 11 The degradation of photon energy in a thick radiation shield.

The thick shield can be divided into a number of thin layers. In each layer we can apply the mass energy absorption coefficient to calculate how much photon energy is transferred to the next layer. However, this energy now consists of the original gamma energy and of lower-energy Compton photons. The Compton photons, due to their lower energy, are more easily absorbed in the next layer than the original, higher-energy photons. What finally comes out of the thick shield is therefore mainly the original gamma photons that have undergone no interaction in the shield, plus some Compton photons produced mainly in the last layers. The energy coming through a thick shield is then the number of undisturbed high-energy gamma photons (which can be calculated using the total mass attenuation coefficient) times the gamma energy.

Example 3. A strong 60Co source (100 GBq) needs to be protected by a lead shield. An acceptable dose rate from the shielded source would be 2 µSv/h at 1 meter distance. How thick should the lead wall be?

The dose rate of the unshielded source at 1 meter, obtained using the Γ-constant found in Table 3, is 0.034 Sv/h. This is 34000/2 = 17000 times larger than the wanted dose rate. We therefore need to reduce the number of 60Co gamma photons by this factor.

The two gamma photons from 60Co are rather similar in energy, so we use a value in-between, 1.2 MeV. From the diagram (Figure 10) we get, using the total mass attenuation coefficient, the value 0.06 cm2/g. We multiply by the density of lead, 11.3 g/cm3, to get the total linear attenuation coefficient 0.68 cm-1. We can then use the exponential relation between incoming and outgoing photons:

N/No = 1/17000 = e^(-0.68*X)

where X is the wanted lead thickness.

X = ln(17000)/0.68 = 14.3 cm


We have made some approximations to obtain this answer. One is the use of a single energy instead of doing separate calculations for the two gamma photons. The other is the use of the total attenuation coefficient, which gives a somewhat too small value. The answer should therefore rather be around 15 cm.
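The arithmetic in Example 3 can be packaged as a small sketch. It carries the same approximations discussed above (a single mean gamma energy, no build-up correction), so it should be read as an estimate rather than a shielding design tool.

```python
import math

def required_shield_thickness_cm(gamma_const, activity_bq, distance_m,
                                 target_sv_per_h, mu_rho_cm2_per_g, density_g_per_cm3):
    """Estimate the absorber thickness that reduces a point-source dose rate
    to the target value, using dH/dt = Gamma*A/d^2 and N/No = exp(-mu*x)."""
    unshielded = gamma_const * activity_bq / distance_m**2        # Sv/h
    reduction = unshielded / target_sv_per_h
    mu = mu_rho_cm2_per_g * density_g_per_cm3                     # 1/cm
    return math.log(reduction) / mu

# Example 3: 100 GBq 60Co, 2 uSv/h wanted at 1 m, lead (0.06 cm2/g, 11.3 g/cm3)
print(required_shield_thickness_cm(0.34e-12, 100e9, 1.0, 2e-6, 0.06, 11.3))  # ~14 cm
```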

CHAPTER 5 THE EFFECT OF RADIATION ON HUMANS

Biological System

Higher organisms are composed of numerous cells, which co-operate and share the various functions that they must fulfil. A human being is formed of some 10^14 cells. Some of them, the nerve cells, are responsible for signal communication within the organism, while blood cells and other cells of the circulatory system transport oxygen and nutrients, and some cells, such as kidney cells, deal with excretion. Cells are the smallest units concerned with this division of labor, and for an organism to function it is essential that particular cells are correctly located and perform their function.

Control of the cell is managed by the nucleus, which contains deoxyribonucleic acid (DNA), the hereditary material or "memory store" of the cell. DNA functions as a template for the formation of ribonucleic acid (RNA), which in turn acts as the template for the formation of various protein molecules, the workers within the cell that perform specific functions.

Figure 1 Schematic drawing of a cell and its most important organelles.


In the double spiral of DNA, A is combined with T, and G with C. Thus, the information is found in two arrays, one being complementary to the other. This can be regarded as "back up" of the information – something which nature practiced long before humankind began using it in our computers. In RNA, nucleotides are combined in a similar way but with the difference that uracil (U) is used instead of T. Three consecutive nucleotides in RNA correspond to a specific amino acid for forming protein molecules. Some examples of different functions of the cell are described below. Mitochondria are cell


organelles that specialize in converting energy. They break down energy-rich sugar molecules and convert the energy into small packets of adenosine triphosphate (ATP). The different functions of the body, like muscle work and other cellular processes, then use ATP for their energy supply. Protein synthesis in the cell occurs in the ribosomes with RNA as the template. Storage of proteins for export occurs in the Golgi complex. Detoxification of harmful chemicals takes place in the endoplasmic reticulum, and various substances are broken down in the lysosomes.

Many other functions are fulfilled within this small unit with a diameter of 10-20 µm. Different environments in the organelles can be maintained by membrane separation. Hence, appropriate conditions exist for simultaneous energy turnover in the mitochondria, duplication of DNA in the cell nucleus and breakdown of proteins in the lysosomes. Of the cell content, 60-70 % is water and about 15 % protein. The RNA content is a few percent, and less than 1 % is DNA. Large molecules – macromolecules – which form structures in a biological system include DNA, RNA and proteins. From the radiation point of view, DNA is the most important biological molecule. Each cell contains some 5·10^-12 grams, or 1.5 meters, of DNA, whose diameter is only 2 nm.

Figure 2 The flow of information in a cell, and a comparison of the ionization density of sparsely ionizing (e-) and densely ionizing (α) radiation.

Energy transfer – primary damage

The effect of radiation on living cells is usually caused by charged particles like electrons (beta, Compton or photoelectrons) or alpha particles. The transfer of energy occurs very quickly – within picoseconds. Two types of primary biological effects can be distinguished: direct and indirect. In the direct type, the radiation interacts with biomolecules to cause an initial modification of their DNA, RNA or proteins. In the indirect type, which is the more usual in living cells, the primary changes occur in water. Transfer of energy to the cell's molecules occurs almost randomly. Hence, it can be expected that in 60-70 % of all primary events, water receives the energy. The amount of energy transferred to the water molecule is small but significant, since the molecule undergoes ionization and loses an electron. By restructuring, the water radicals ·OH and ·H, and hydrated electrons, are formed.

Figure 3 Transfer of radiation energy to material in a cell, by direct action (e- on a biomolecule) and indirect action (via water radicals).

Radicals are very reactive and react with each other or with biomolecules within a short time. If they react with each other they form water, hydrogen peroxide or hydrogen gas. Water radicals can react with all biomolecules, and after irradiation a spectrum of various changed biomolecules is obtained. The reactions with the water radicals occur in less than milliseconds. The energy deposited in a cell is not distributed uniformly but often in "clusters" (clumps) of numerous ionization events, accompanied by primary damage within a small volume. When an electron passes a cell, the dose to the cell is about 3 mGy and the number of primary injuries to biomolecules is about 100 in the cell nucleus (the critical organelle of the cell where the DNA is located). In Sweden, where the background radiation is about 1 mGy (1 mSv) per year, only every third cell is hit yearly by an electron and the other cells are unaffected. At very low doses, a step-by-step increase of radiation dose occurs at the cellular level, i.e., each cell is hit zero, one, two or three times etc. The radiation dose to a larger part of the body, such as an organ, is then the mean dose for all included cells.

High-LET ionizing radiation, such as alpha particles, creates several ionization events per nm and therefore very probably creates double strand breaks. In practice, high-LET ionizing radiation increases the biological response and creates more biological damage per physical dose unit. Permanent double strand breaks can occur in DNA (fragmentation) or the information on both strands can be changed in a small segment of DNA, a mutation. 10

-17

10

seconds

-15

– 10

-14

-3

10 to minutes

100 ionizations in the nucleus

Energy absorption with subsequent ionization or excitations Changes in biomolecules caused by direct or indirect effects Biological mechanisms of repair splicing affect the molecular damage The time scale for following processes is considerably longer

Figure 4 The distribution of energy transferred (ionization events) along an electron track and when an electron passes through a cell.

Information changes in the cell

Submicroscopic injuries

Mutation

Microscopically visible injuries

Hours

Genetic damage

Somatic damage Cell death

Days Death of organisms Weeks

Years

Generations

Figure 5 Repair of DNA in a cell and the ability to repair after different types of irradiation. As DNA is the critical molecule, the damaging effect of different types of radiation varies according to how the energy is distributed in the molecule. Low-LET radiation creates mainly single strand damage, which most cells can repair effectively and accurately. At a low dose, it is unlikely that such low-LET radiation creates, simultaneously, two ionization events within a few nanometers to create a double strand break.



Figure 6 Time sequence of processes induced in a cell by radiation.

A repair system exists which functions more or less effectively for most injuries to DNA when the injury is confined to one strand. The damaged strand is cut, a small portion is broken down, and the gap is then resynthesized using the complementary strand as template, whereupon the break in the DNA is sealed and the DNA repaired. Besides this simple repair system, several more or less sophisticated systems for repairing damage to the DNA exist. Since DNA is particularly important for the cell, many endogenous

systems have been developed during evolution to prevent changes or loss of information (see also Chapter 6).

Relative Biological Effect RBE

The same dose, irrespective of the radiation type, produces different effects depending on the density of ionization. The relative biological effectiveness (RBE), defined in Chapter 4, is a way to express this. The RBE can be determined in many ways and often gives different values depending on which effect is studied. Thus, the RBE for a given type of radiation is not constant and can, for alpha radiation, vary from 2 up to 10 - 20. From the different RBE values that are reported, the weighting factors for different types of radiation are derived. Often these are based on some of the highest RBE values that have been observed. This weighted RBE is called the radiation weighting factor and is usually denoted wR (the quality factor Q in older literature).

Acute effects on organisms Acute effects appear relatively soon after an irradiation. They can arise only after high doses of radiation, and in the case of human beings generally, within two months. This occurred in the Chernobyl disaster in which 29 people died because of acute radiation damage. In the 1940s and 1950s, some people died from radiation after receiving relatively high radiation doses in connection with experiments with fissionable material. A few have died because they were exposed to radiation from sealed radioactive sources that had gone astray. In Hiroshima and Nagasaki, many died from irradiation although most died from burns or mechanical injury. Because of the chaotic experiences and numerous events following the nuclear bomb explosions, people’s acute response to irradiation could not be confirmed. People involved in radiological work and obeying safety rules are exposed to doses well below those giving any acute effects. Persons exposed to radiation doses below 0.5 Gy will hardly notice it if they do not carry a dosimeter.


If a whole body radiation dose exceeds 1 Gy and is given during a relatively short time (less than 1 hour) some people will feel sick and dizzy and have bouts of vomiting. The duration of these symptoms depends on the radiation dose. Early symptoms may give some idea of its magnitude. Acute radiation damage in multi-cellular organisms is due to more or less widespread cell inactivation. Some cell types in the human body are more prone to cell inactivation than others, which results in some organs being unable to function. The law of Bergonie and Tribondeau often applies: "Cells undergoing division (proliferating cells) are more radiosensitive than non-proliferating cells". The organs in which the formation of new cells is important include the red bone marrow where blood cells are formed, the intestine – particularly the small intestine – the spleen and lymph nodes, the skin and the testicles. These organs are also the most radiosensitive in our bodies. In the red bone marrow, some 10¹¹ cells are produced daily and about the same number are produced in the intestine. After intense irradiation with a few Gy, new formation of cells in the bone marrow will be reduced drastically and, furthermore, many bone marrow cells will die through apoptosis. The same is true for the small intestine, although at higher doses. The cause of death in people exposed to whole body irradiation for a short time with doses between 3 and 10 Gy is found in the bone marrow. These doses inhibit the development of cells and cause a marked reduction in the number of cells that have a short life span.

Figure 7 Blood cells develop in the red bone marrow. The number of granulocytes and thrombocytes in the blood is markedly influenced by high doses of radiation.

The life span of red blood cells is about 120 days, which means that their reduction in number is small. White blood cells such as granulocytes and lymphocytes have a much shorter life span (some days). Hence, any arrest of their formation very quickly results in a marked reduction of their number in circulating blood. Lymphocytes are highly radiosensitive and become markedly reduced after irradiation, partly due to reduction of stem cells in the red bone marrow but also due to extensive apoptosis. These cells are important for the immune defence of the body, and irradiated individuals therefore become very prone to infections.

Figure 8 Reduction in the number of granulocytes and the consequences in humans after high doses of radiation.

The small intestine is the critical organ at doses of 10-15 Gy or more. Cells covering the inner surface of the intestine continuously migrate upwards on the intestinal villi. If the stem cells in the crypts that produce these protecting cells are irradiated, their development stops. The cells already formed continue to migrate but are not replaced, and growing ulcers develop in the small intestine. This leads to massive loss of water and salts, and to invasion of the intestinal contents into the body.

Thrombocytes are important for the coagulation of blood. Small ulcers and considerable hemorrhages arise in the mouth, intestine and skin. The number of thrombocytes is also markedly and quickly reduced after irradiation. People exposed to whole body radiation doses of 3 to 5 Gy may die within 3 - 4 weeks. About 50 % of those irradiated with 4 Gy survive. Bone marrow transplantation may improve the prognosis after exposure to doses of between 3 and 10 Gy and was carried out in a number of people irradiated in the Chernobyl disaster, but with limited success. Bone marrow transplantation requires that an immunologically compatible bone marrow donor is available. In some hospitals, bone marrow transplants are used routinely after radiation therapy for the treatment of certain leukemias. Purified stem cells from the patient are then often used.

Figure 9 Transverse section of the small intestine and details of the villi and crypts.

Other acute effects are also generally related to effects on proliferating cells, but there are some exceptions, such as the reddening and feeling of warmth of the skin, which depend on increased blood flow.

Effects on the fetus

The dose of radiation required to cause acute effects in humans is apparently high. Since proliferating cells are radiosensitive, the embryo and fetus are particularly sensitive to radiation. Fetal development is divided into three phases. The pre-implantation phase extends over some 10 days after fertilization. The growing embryo is particularly radiosensitive during the early cell divisions, where the "all or nothing" law applies. This means that if the embryo is damaged by radiation in this phase, spontaneous abortion occurs at an early stage, but the risk of malformations is relatively small.


The organogenesis phase embraces the period from 10 to 60 days of fetal development. During this phase the foundation of all the organs occurs except certain parts of the central nervous system. Ionizing radiation can cause fetal injury mainly during a short period when the organ begins to form and there is active cell proliferation. The period of high radiosensitivity for most organs is short. Skeletal malformations are relatively common after radiation (compared with malformations in other organs). A slight increase in the frequency of defects in the skeleton has been detected after very low doses (in mice irradiated with 0.05 Gy at day 7). During the growth period, radiosensitivity is lower than in earlier stages with one important exception that concerns the dangerous influence radiation has on the development of the central nervous system. This occurs late in fetal life. Development of the cerebrum is a very protracted and radiosensitive period compared with the development of other organs. Cerebral effect has been confirmed amongst children born after the explosion of the atom bombs in Hiroshima and Nagasaki. In an investigation of 1,599 children irradiated during fetal development, excess in the frequency of mental retardation was observed.

The individuals had difficulty in doing simple calculations, making simple conversation or taking care of themselves. Most of these individuals were referred to institutional care. The risk of serious damage was found to be greatest if the radiation occurred during weeks 8 - 15 of fetal development. The result does not exclude the absence of a threshold value; that is to say, there might be a linear dose-effect relation. However, it is important to realize that in the group with low radiation doses the number of mentally retarded was only three, and the statistical errors are therefore large. The risk of mental retardation when the fetus was irradiated in weeks 8 - 15 of the pregnancy was calculated to be 0.04 per Gy. This should be interpreted as follows: if 1000 fetuses at this developmental stage are exposed to 1 Gy each, then 40 of the children will be mentally retarded. If irradiated after week 16, the risk is four times lower. With low doses of radiation, the effect on the central nervous system seems to be the greatest risk.

Figure 10 Frequency of microcephaly, reduced size of head, in children born after the explosion of the atom bombs in Hiroshima and Nagasaki. Irradiation occurred during weeks 6 to 11 of fetal development.

Figure 11 Frequency of mental retardation in children irradiated during weeks 8 to 15 of fetal development.


In Hiroshima and Nagasaki, a smaller head has been observed to occur with the same increased frequency as mental disorders. The critical induction period for the head size occurs earlier in fetal development and there does not appear to be a clear association between these two effects.

Figure 12 The development of cells in the human cerebrum.

Effects on the development of the central nervous system after low radiation doses can be seen at low frequency in experimental animals. The effects studied included the size of the cerebrum, the incidence of cell death and disorientation of nerve cells. Most people working in this experimental field think that there is a threshold value somewhere between 0.05 and 0.1 Gy.

Late effects on organisms

Fairly soon after the discovery of X-rays by Roentgen in 1895 and natural radioactivity by Becquerel in 1896, it was established that radiation had harmful effects. Pioneers in the field of radiation worked without protection and received very high doses, particularly to the hands and arms. After reddening of the skin and slow-healing ulcers in the acute stages of the injury, forms of cancer appeared sufficiently often for the risk to be noticed. Genetic effects became evident somewhat later. In 1927 Muller demonstrated that X-rays could produce mutations in the banana fly. Other late effects include opacity of the optic lens and fibrosis, e.g. collections of fibrous tissue.

Induction of cancer

Quite soon it became apparent that ionizing radiation could both cause and cure cancer. Early findings indicated a threshold under which radiation was not harmful and would not cause cancer. However, with time people became more and more concerned about the risks involved. The increasing use of ionizing radiation in society (X-ray diagnostics and therapy) also promoted the interest to quantify these risks.

It is still not strictly proved that the induction of cancer is without a threshold value. However, it must be stated that our knowledge of the risks with ionizing radiation is far more accurate than for any other type of biological hazard. The risk estimates in radiation protection are conservative and assume that all doses can cause biological damage but that the risk is proportional to the dose – if the dose is reduced ten times, the risk is also reduced ten times. The risk assessment is based on experience of groups of people who have been irradiated.

1. Survivors of the nuclear bomb explosions in Hiroshima and Nagasaki in Japan. Knowing where people were located at the time of the explosion, the radiation dose has subsequently been calculated for the individuals. Of course, individual dose estimations are associated with gross errors but are still precise enough to permit risk assessment.
2. People who had radiation therapy for relatively benign conditions such as back and joint complaints. The doses were correctly determined and entered in their clinical records along with the region of the body which was irradiated (the radiation field).
3. Miners who were exposed to radon-222 and its radioactive daughter products. The lungs are the critical organs in these cases.
4. Children who during fetal development were exposed to X-rays in connection with diagnostic X-ray examination of the maternal pelvic region.

Figure 13 Excess frequency of cancer in Hiroshima and Nagasaki after the nuclear bomb explosions.

In addition, several epidemiological investigations in this field exist. Some of these comprise a great number of people and therefore provide important information. Without doubt, the most important group are the survivors of the Hiroshima and Nagasaki nuclear bomb explosions. In investigations of small groups, results are often obtained that sometimes suggest increased risk and sometimes lesser risk from low doses, depending on the random frequency of cancer cases. It is in such instances important not to draw prejudiced conclusions from isolated investigations. Instead, evaluations must be made from all the epidemiological investigations taken together. In Hiroshima and Nagasaki, at the end of the 40s and beginning of the 50s, an increasing frequency of leukemia was confirmed. Later, a number of other types of cancer showed increased frequency. The total number of cancer cases related to irradiation was "only" about 500. A problem, which is discussed in many contexts, is the shape of the dose-effect curve at low radiation doses. Different interpretations are given, but the truth is that it cannot be shown how the curve looks at low doses. ICRP (International Commission on Radiological Protection) states in its risk assessment (low radiation doses and dose rates) a 5 % risk per Sv of getting radiation-related cancer if a population of normal age distribution is irradiated. This implies that if an individual is exposed to 1 mSv, there is a risk of 1 in 20,000 of dying of radiation-induced cancer. For adults the risk is 4 % per Sv. During fetal development the risk appears to be still somewhat higher. This assessment is based on examination of children who during fetal life were exposed to radiation during X-ray investigation of the maternal pelvic region.


Figure 14 Relation between radiation dose and excess frequency of leukemia.

Genetic effects Changes in the genetic information (mutations) are first apparent in the future generations. The cause is a change in the information material (DNA) in the sex cells (spermatozoa and egg cells). If the change is small, it is called point mutation and if large, it is usually called chromosome change. Quantification of genetic risks is more difficult than for cancer. For example, an increase in hereditary damage has not been discerned in the offspring of irradiated people in Hiroshima and Nagasaki. This does not mean that the irradiation did not induce any genetic changes amongst these people but that no statistical increase was demonstrated. It is thought that the risk of genetic injury is less than the risk of cancer.

Internal irradiation The risk of external radiation is low when working with low energy beta-emitting radionuclides like 3H, 14C and 35S. In fact, these beta particles can not even penetrate the skin. However, there is always a risk of internal contamination by inhaling, through the mouth

or by absorption of lipophilic labeled compounds. Generally, it is difficult to make assessments of the risks because possible uptake and effects depend on which compound is labeled. The biological half-life of e.g. 3H depends not only on its decay but also on the metabolism and biological half-life of the labeled compound. Thymidine, labeled either with 3H or 14C, is incorporated in the DNA of proliferating cells. Naturally, if 3H is a part of DNA, the critical biomolecule, the dose effect should be greater than if it is incorporated into e.g. a protein. Beta particles from 3H decay have a maximum range of 7 µm, which is about the diameter of the nuclei in our cells. One might guess that in this situation the biological half-life should be long and thus the radiation risk large. However, thymidine is rapidly metabolized in the body and only a few labeled thymidine molecules are actually incorporated into DNA. Rapidly proliferating cells incorporating thymidine also renew and replace their DNA fast. Cells in the small intestine, for example, are renewed within a few days. The time the 3H will stay in the DNA is therefore short. The risks are greater if the radionuclides are incorporated into structures that are stable. This can apply to the incorporation of 3H-thymidine in the cells of growing individuals in whom the biological half-life can be very long. The same is true for radionuclides such as the isotopes of Ca, Sr, Ba and Ra that are incorporated into bone tissue. Often, these elements are reused and are not metabolized in the same way as organic compounds. Iodine isotopes are mainly taken up by the thyroid gland and are incorporated into the thyroid hormone, which is stored in the gland for a long period and can give high doses locally. As a rule only marginal measures can be taken to reduce the effect of radionuclide intake. The uptake of radioactive iodine in the thyroid gland can be reduced by the intake of non-radioactive iodine but this should, preferably, occur before that of radioactive iodine. Similarly, the uptake of Sr-isotopes is reduced if stable Ca is supplied. In principle, the same thing also


happens if inactive thymidine is given together with 3H-thymidine. But this way to protect against internal contamination is usually not very effective and not very practical.
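The overall (effective) half-life of an internally deposited radionuclide combines physical decay and biological elimination, 1/T½eff = 1/T½phys + 1/T½biol (cf. Chapter 6). A minimal sketch; the biological half-life below is an assumed, purely illustrative value, not a measured one:

    def effective_half_life(t_phys, t_biol):
        # Effective half-life from physical and biological half-lives (same time unit)
        return 1.0 / (1.0 / t_phys + 1.0 / t_biol)

    t_phys_h3 = 12.3 * 365.0    # physical half-life of 3H in days (about 12.3 years)
    t_biol = 10.0               # assumed biological half-life of the labeled compound, days
    print(effective_half_life(t_phys_h3, t_biol))   # about 10 days: elimination dominates

For a compound that is rapidly turned over, the biological term dominates completely; for incorporation into stable structures such as bone, the effective half-life instead approaches the physical one.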

Comparison: Radiation – Chemicals In certain situations, it is desirable to be able to compare the risk assessments of radiation and chemicals that affect us. With the establishment of limited concentrations of chemicals in our environment, it is taken for granted that there is a threshold value. No one can affirm with certainty whether this threshold value really exists or whether in most cases, it is tenable. In general, there is an attempt to stop the use of carcinogenic substances. Regarding radiation, since a limiting value is established it is equivalent to accepting the risk of cancer but at a very low level. Note that the natural background radiation gives the same risk as a similar amount of added radiation dose. Knowledge of the effects of radiation is as a rule more widespread than that of chemicals even if it is usually easier to understand the effects of chemicals. Radiation hits very nonspecifically, whereas a chemical substance often has a specific effect. Radiation, however, has from the beginning been considered dangerous in that some of the early pioneers developed cancer and this risk has been documented further after the Hiroshima and Nagasaki nuclear bomb explosions. Besides, there are only a few types of radiation to consider, which makes epidemiological research easier. On the question of chemicals, people as a rule have had a basically different approach – it is only in the last 10 years that a greater awareness of their dangers has been revealed. In addition, there are numerous chemicals and it is impossible for anyone to have a detailed knowledge of the dangers of all of them. This is valid especially when a long-term view is taken. In most cases, from a risk point of view, direct comparisons of radiation and chemicals are very difficult to make. Naturally, there are gaps in our knowledge of radiation but the gaps are even greater for chemicals. If a comparison is

made it must proceed from a number of premises that furnish the final comparison with large errors.

Radionuclides in nature

The cosmic radiation from outer space consists of particularly high-energy particles, especially protons. When cosmic radiation hits the earth's atmosphere, large numbers of secondary particles are formed. This radiation can also create radionuclides such as 14C in the atmosphere. A number of radionuclides have such long half-lives that they have survived since the "creation of elements". Some of these decay to other radioactive elements. In certain cases, long chains of decays occur by which a radioactive substance - by successive radioactive decays - is converted into one daughter nuclide after another, until a stable (non-radioactive) end product is formed. Most radioactive material occurring in nature is included in one of four such chains, all of whose links have atomic numbers over 81. In Sweden, the largest contribution to natural background radiation comes from radon-222 and its decay products ("radon daughters"). They form part of a chain of decay which begins with uranium-238 and ends in lead-206, which is stable. Besides these chains, some other radionuclides occur in nature. One example is potassium-40, which forms 0.01 % of all natural potassium. Another is carbon-14, known for its use in determining the age of organic archaeological findings.

The Chernobyl accident

The Chernobyl disaster in 1986 resulted in the deposition of radiocesium, mainly 137Cs, in the ground, vegetation and in the lakes in large areas of Europe. In the worst affected regions problems arose concerning the production of food of acceptable quality. Apart from this, the fallout caused only minor disturbances. SSI (Swedish Radiation Protection Institute) issued instructions on how people in Sweden should work with wood ashes and sludge, which in certain regions were contaminated by high concentrations of radiocesium.

During the spring and summer of 1986, comprehensive restrictions and measures were adopted concerning the agricultural production of food. The reason for introducing restrictions was that radiocesium was deposited directly on grasslands. From 1987 onwards, contamination of the crops occurred through root uptake of cesium, which normally resulted in low cesium concentrations in agricultural products. In the future, the problem will instead be with foodstuffs produced in the natural ecosystem. The radiocesium content in these products is now much greater than in products of the agricultural system and, besides, diminution of the content occurs slowly or not at all. The latter applies particularly to the radiocesium content in elk and roe deer, which was the same in 1993 as in 1986. Fish in the forest lakes have shown a slow reduction but in reindeer the reduction has occurred more quickly. The recommendation that is usually given is that the radiation dose arising from the intake of radiocesium from Chernobyl should fall below 1 mSv per year, which corresponds to the intake of 80,000 Bq of 137Cs. The fallout from Chernobyl also contained 134Cs, and the intake of 60,000 Bq of this cesium isotope corresponds to 1 mSv. 134Cs can be disregarded since it "only" has a half-life of 2 years compared with 30 years for 137Cs. As mentioned earlier, the risk of 1 mSv is equivalent to 1 in 20,000 dying sometime in the future from radiation-related cancer. This is considered to be an acceptable risk. Note that of these 20,000 persons, more than 6000 will develop cancer for other reasons.
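The correspondence between intake and dose quoted above implies a dose per unit intake of about 1 mSv per 80,000 Bq for 137Cs and about 1 mSv per 60,000 Bq for 134Cs. A minimal sketch of how a yearly intake could be checked against the 1 mSv recommendation; the intake figure is an arbitrary example, not a measured value:

    # Dose per unit intake implied by the text above
    MSV_PER_BQ_CS137 = 1.0 / 80000.0
    MSV_PER_BQ_CS134 = 1.0 / 60000.0

    def yearly_dose_msv(bq_cs137, bq_cs134=0.0):
        # Approximate dose (mSv) from one year's intake of radiocesium
        return bq_cs137 * MSV_PER_BQ_CS137 + bq_cs134 * MSV_PER_BQ_CS134

    print(yearly_dose_msv(20000.0))   # 0.25 mSv, well below the recommended 1 mSv per year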

Dose limits for radiological workers

In different countries legal rules for radiological workers may vary somewhat. In Sweden the responsible authority, SSI (Swedish Radiation Protection Institute), has given the following limits.

Quantity (limits in mSv per year)                  Workers in general   Students and trainees aged 16 - 18 years
Annual effective dose                              50                   6
Annual dose equivalent to the lens of the eye      150                  50
Annual dose equivalent to the skin                 500                  150
Annual dose equivalent to hands, forearms,
feet and ankles                                    500                  150
Effective dose over 5 consecutive years            100                  -

Table 1 Dose limits for people working with ionizing radiation from external sources
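A minimal sketch of how a worker's recorded effective doses could be checked against both the annual limit and the 5-year limit in Table 1; the dose history below is an invented example:

    ANNUAL_LIMIT_MSV = 50.0        # effective dose per year, workers in general
    FIVE_YEAR_LIMIT_MSV = 100.0    # effective dose over 5 consecutive years

    def within_limits(yearly_doses_msv):
        # yearly_doses_msv: effective doses (mSv) for the last five years, newest last
        latest_ok = yearly_doses_msv[-1] <= ANNUAL_LIMIT_MSV
        five_year_ok = sum(yearly_doses_msv[-5:]) <= FIVE_YEAR_LIMIT_MSV
        return latest_ok and five_year_ok

    print(within_limits([30.0, 25.0, 20.0, 15.0, 12.0]))
    # False: each individual year is below 50 mSv, but the 5-year sum (102 mSv) exceeds 100 mSv

The example illustrates that the 5-year limit, corresponding to an average of 20 mSv per year, is often the binding one.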


CHAPTER 6 Radionuclide targeting

INTRODUCTION

Tumor cell-specific targeting for delivery of toxic agents is an attractive approach for the killing of spread cells whose positions cannot be determined through available diagnostic methods. The targeting process might not in itself give a satisfactory treatment of large tumors because the radiation dose may not be sufficiently high or the targeting substance may not penetrate well. However, large tumors with rather well defined margins can often be treated by surgery, or radiotherapy with external or internal radiation sources. In the latter case, encapsulated radiation sources are positioned in the tissue interstitium or in body cavities. Therapies based on targeting, such as radioimmunotherapy and boron neutron capture therapy, can be complementary, with the major aim to kill spread cells. The radiation biology considered in connection with the use of targeted radionuclides is mainly concerned with the effects of the radiation from the radionuclides on the targeted cells or their close neighbors. This means that the targeting process is mainly based on the desire to optimize the biological action of electrons from beta decays (beta minus and beta plus) and of alpha particles from the decay of heavy nuclei. Photons (gamma- and X-rays), which are used in medical diagnosis and which most often also accompany alpha and beta decays of radionuclides used for therapy, are not of direct interest. Photons do not deliver their energy in the local environment around the targeted cells. Instead, they deliver their energy to large volumes of the body and give an increased background dose, which is undesirable.

ABSORBED DOSE

Mainly high-energy beta particles (e.g. from 131I or 90Y) and alpha particles (e.g. from 211At) are considered to contribute to the local dose in targeted radiotherapy. The tumor dose for beta particles can be calculated approximately by

Dose (Gy) = 1.6·10⁻¹³ · N · Emean / m


where Emean is the mean energy (MeV) of the beta particle (Emean = Emax/3), m is the mass of the tumor (kg) and N is the number of radioactive decays, given by the relation N = A · T½eff / ln 2, where A is the radioactive uptake (Bq) and the effective half-life (in seconds) is given by 1/T½eff = 1/T½phys + 1/T½biol. The formula predicts, for example, that if a total of 10 MBq of the radionuclide 90Y is taken up in a tumor with the mass 20 g (20 cm3) then the tumor dose will be about 10 Gy. The formula has to be corrected if a certain fraction of the electrons escape from the tumor mass. The same formula applies for alpha particles if Emax is used instead of Emean and can, in principle, be used also to calculate the dose to different organs due to background irradiation by photons. Calculations of photon-mediated doses require that a reasonable assumption can be made about the radioactivity uptake in a defined mass of tissue and the average absorption of the photons. It is actually possible from modern scintigraphy to make reasonable assumptions about the relative photon release from different parts of the body and, by knowing the total amount of injected radioactivity, it might be possible to make good approximations of the number of Bq in different body regions.
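A minimal sketch of the dose estimate above. The physical half-life and mean beta energy of 90Y are taken from the text and Table 2; the biological half-life is an assumption made only for illustration, and the resulting dose scales directly with the effective half-life (with no biological clearance at all, the same formula gives roughly twice the dose):

    import math

    def effective_half_life(t_phys_s, t_biol_s):
        return 1.0 / (1.0 / t_phys_s + 1.0 / t_biol_s)

    def beta_tumor_dose_gy(uptake_bq, t_eff_s, e_mean_mev, mass_kg):
        # Dose (Gy) = 1.6e-13 * N * Emean / m, with N = A * T1/2eff / ln 2
        n_decays = uptake_bq * t_eff_s / math.log(2.0)
        return 1.6e-13 * n_decays * e_mean_mev / mass_kg

    t_phys = 64.0 * 3600.0      # physical half-life of 90Y, seconds (about 64 hours)
    t_biol = 64.0 * 3600.0      # assumed biological half-life, illustration only
    t_eff = effective_half_life(t_phys, t_biol)   # 32 hours

    e_mean = 2.3 / 3.0          # mean beta energy of 90Y, MeV (Emax/3)
    print(round(beta_tumor_dose_gy(10e6, t_eff, e_mean, 0.020), 1))   # about 10 Gy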

PRIMARY INTERACTIONS

Most of the radiation damage is mediated via ionization of cellular water, which gives free radicals with the capacity to damage critical biomolecules. Only a small part of the radiation-induced damage is obtained through direct ionization of the biomolecules (Fig. 1). DNA is the most important biomolecule when the action of radiation is considered. The formation of radicals is increased in the presence of oxygen, and oxygen can also react with primary lesions in DNA causing fixation of damage. Oxygen is in fact a powerful radiation sensitizer. Low molecular weight thiols (e.g. SH-containing amines or amino acids) are protecting substances because of their normally high proton-donating capacity. The main intracellular protector of this type is the tripeptide glutathione.

Figure 1 Schematic drawing of the primary interactions taking place after irradiation. The sensitizer O2 and the protectors of SH-type act both at the primary radical formation level and at the fixation of DNA damage level.

Figure 2 Schematic descriptions of proposed DNA repair mechanisms. a. Single strand breaks. Taken care of by excision repair. Exo- and endonucleases remove sugar-phosphate and bases from the damaged region. Polymerases synthesize a new strand using the intact strand as a template. Finally, ligase closes the opened ends. b. Base damage. Glycosylases remove the damaged bases, which introduces a single strand break. This is repaired by the mechanism described above (see a). c. Double strand breaks. In one proposed fast mechanism, enzymes trim the two near ends so that they fit together. Thereafter, the ends are brought together, similar to how it is done with a plasmid in DNA technology.


Since parts of DNA are lost, this repair type is most likely when introns are damaged. In a slower type of repair, a near homologous DNA sequence is "lined up". Strand exchanges convert the double strand break into two single strand breaks (one on each strand) that are repaired by excision repair. Gene rearrangements or translocations can occur if the damaged DNA sequence exchanges genetic material in the lining-up process. This process is probably necessary when exons are damaged. d. Cross-links. The two DNA strands are covalently linked. This is serious damage since it prevents strand separation at transcription and DNA synthesis. This repair mechanism is not well understood, but probably involves mechanisms similar to those described for double strand breaks.

DNA DAMAGE AND REPAIR

DNA is the critical target for radiation damage, as has been shown in numerous experiments. One Gy of low-LET radiation gives about 1000 single strand breaks, 1000 base damages, 30-50 double strand breaks and a few cross-links in DNA. The single strand breaks and the base damage can effectively be repaired by the excision repair system as indicated in Fig. 2. Double strand breaks can also be repaired although the mechanism is not clear and the fidelity (correctness) of the repair is not necessarily high. Errors in the repair of double strand breaks in DNA might give rise to rearrangements (translocations) of genes. It is assumed that, several hours after irradiation with a dose of 1 Gy, there are on average one or two unrepaired or seriously mis-repaired double strand breaks or cross-links in DNA. This remaining damage probably gives rise to inactivation of cell proliferation. If such damage is randomly distributed among the cells, and each such lesion with high probability inactivates cell proliferation, then the shape of the cell survival curves in Figures 4-8 can be qualitatively understood. The only assumption that has to be made is that the lethal damage is Poisson distributed among the cells. This gives a certain probability of survival for each cell independent of how high the dose is.
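The Poisson argument can be made concrete in a few lines. If, as stated above, a dose of 1 Gy leaves on average one or two unrepaired lethal lesions per cell and these are Poisson distributed, the surviving fraction is simply the probability of a cell having zero such lesions (a sketch of the reasoning, not a fitted model):

    import math

    def surviving_fraction(dose_gy, lethal_lesions_per_gy):
        # Probability of zero Poisson-distributed lethal lesions after 'dose_gy'
        return math.exp(-lethal_lesions_per_gy * dose_gy)

    for rate in (1.0, 2.0):     # one or two unrepaired lesions per Gy, as in the text
        print(rate, round(surviving_fraction(1.0, rate), 2), round(surviving_fraction(3.0, rate), 3))
    # Roughly 14-37 % of the cells survive 1 Gy and 0.2-5 % survive 3 Gy under this crude assumption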

INTRINSIC RADIOSENSITIVITY

Knowledge of the factors that determine intrinsic radiosensitivity is limited. However, it is known that cells react strongly after radiation damage. More than 40 genes have been reported to be activated by ionizing radiation. The molecular genetics of the regulation and triggering of DNA repair is not known in detail, although some insight has been gained in recent years. Figure 3 is highly speculative but includes some of the most recent theories of the relations between the genes involved. The DNA is checked for structural damage by as yet unknown mechanisms (a DNA check mechanism, DCM). When such severe damage is found, certain genes code for transcript factors. These activate genes that code for inhibitors of cyclins and the corresponding cyclin kinases. This gives a block in the cell cycle, and the cell will then have a reasonable time for enzymatic repair of critical DNA damage before it is forced into mitosis. The DNA-histone complex is, if necessary, opened in preparation for the repair enzymes to reach the damaged sites.

(Figure 3 flow-chart labels: radiation, DNA damage, DNA check mechanism, gene activation, transcript factors, cell cycle inhibitors, DNA-histone opening mechanisms, DNA repair, growth delay, cell division.)

Figure 3 A schematic overview of assumed mechanisms involved in the cellular and molecular reactions to radiation-induced DNA damage.

Some diseases are known to be due to defects in the DNA repair system or closely related systems. Ataxia telangiectasia is a disease where


it is assumed that transcript factors are mutated or defective, which prevents the slowing down of the cell cycle. The cells therefore do not have enough time to repair DNA damage before they are forced into mitosis. In Bloom's syndrome there is a defect in the ligase activity, and in Fanconi's anemia it seems to be difficult to repair certain DNA cross-links. The most dramatically increased radiosensitivity is seen in Ataxia telangiectasia. This indicates that cell cycle block and cell cycle check points are critical, not only for genetic stability as has previously been proposed, but also for the determination of radiosensitivity. However, more basic research is needed to obtain better knowledge about the factors that determine intrinsic radiosensitivity.

SURVIVAL CURVES

The initial events at the radiochemical level give rise to different types of biological effects that can be measured at the cellular level. The most important effect, from the therapeutic point of view, is of course the inactivation of tumor cell proliferation, and this will mainly be discussed below. The arrest of cell proliferation is mediated through severe damage in DNA. The shape of cell survival curves is well known from experiments with external radiation. Cell survival is defined as the relative number of cells that, after irradiation, have the capacity to form colonies containing at least 50 cells. This is analyzed through the sparse seeding of the analyzed cells as monolayers or as suspension cultures in agarose gel. One or two weeks after irradiation, the exact time being dependent on the growth rate of the cells, the number of colonies containing at least 50 cells is scored. Cell survival is "fractional" in that a certain fraction of cells always seems to have survived, as shown in Fig. 4. This is due to heterogeneity in local energy deposition and heterogeneity in structural DNA damage and repair. Thus, with an increasing dose the probability of lethal hits increases for each cell, giving rise to the type of "fractional" curves seen in Fig. 4.

Figure 5 Schematic drawings of cell survival curves with the functional parameters Do, n, α, β and S2Gy indicated in the curves.

Figure 4 Typical clonogenic cell survival curves after irradiation with low- or high-LET radiation.

MODELS

Cell survival can be mathematically modelled. Commonly used models are the single-hit, multi-target and linear-quadratic models. The former is described by

S = 1 - (1 - exp{-D/Do})^n

where D is dose (Gy) and Do is a constant (Gy) giving the slope of the curve in the high dose region. The extrapolation number n gives the radiosensitivity in the low dose "shoulder" region (Fig. 5a). The linear-quadratic model is described by the simple relation

S = exp{-(αD + βD²)}

where α is the slope of the curve near dose zero and β is the constant describing the shape of the curve at high doses (Fig. 5b). Both formulas are used to give a simple two-parameter description of the radiosensitivity of normal and tumor cells. It has been claimed that the survival of cultured tumor cells at 2 Gy correlates, to some degree, with the clinically observed radiosensitivity of the corresponding types of tumors. Typical values of Do, n, α and β are given in Table 1 along with typical survival levels at 2 Gy, S2Gy.

Table 1. The table shows crude estimates of the functional parameters Do, n, α, β and S2Gy. Typically, glioma, melanoma and osteosarcoma cells are radioresistant (Low), adenocarcinoma cells are intermediately sensitive (Medium) whereas lymphoma, myeloma, leukemia and small-cell lung cancer cells can be regarded as radiosensitive cells (High).

Degree of radiosensitivity    Low            Medium         High
Do (Gy)                       1.0 - 3.0      1.0 - 2.0      0.5 - 1.2
n                             2 - 10         2 - 3          < 2
α (1/Gy)                      0.01 - 0.5     0.03 - 1.0     0.05 - 2.0
β (1/Gy²)                     0.001 - 0.05   0.001 - 0.1    0.005 - 0.2
S2Gy                          > 0.5          0.3 - 0.5      < 0.3
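A minimal sketch that evaluates both models at 2 Gy, with parameter values picked from within the "Medium" columns of Table 1 (the specific numbers are chosen only for illustration):

    import math

    def survival_lq(dose_gy, alpha, beta):
        # Linear-quadratic model: S = exp(-(alpha*D + beta*D^2))
        return math.exp(-(alpha * dose_gy + beta * dose_gy**2))

    def survival_multitarget(dose_gy, d0, n):
        # Single-hit, multi-target model: S = 1 - (1 - exp(-D/D0))^n
        return 1.0 - (1.0 - math.exp(-dose_gy / d0)) ** n

    print(round(survival_lq(2.0, alpha=0.35, beta=0.05), 2))        # about 0.4
    print(round(survival_multitarget(2.0, d0=1.2, n=2.5), 2))       # about 0.4

Both parameter sets give an S2Gy of roughly 0.4, i.e. within the "Medium" range of Table 1.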


DOSE RATE

One main difference between external irradiation and irradiation by targeted radionuclides is that the latter most often gives a much lower dose rate. A low dose rate means that critical radiation damage in DNA can be repaired during the irradiation, which gives a less efficient inactivation of cell proliferation, as shown in Fig. 6. The shifts in Figure 6 illustrate the changes in survival seen when the dose rate is shifted from about 1 Gy/min to about 1 Gy/hour and further to 1 Gy/day. One Gy/hour allows DNA repair to take place during the irradiation, which means that more dose is needed to inactivate the cells.

Figure 6 Examples of cell survival curves when the dose rate is changed.

One Gy/day allows both DNA repair and cell proliferation to take place during the irradiation, and the latter means that the number of cells increases during the irradiation, making it necessary to kill more cells. It is likely that the dose rate in targeted radiotherapy is so low that DNA repair, and sometimes cell re-population, during irradiation have to be taken into account. The low dose rate makes targeted radiotherapy inefficient when cell inactivation per dose unit is considered. However, this does not at all disqualify targeting of radionuclides for therapy. The tumor cell specificity is the main critical aspect, and if the tumor cells obtain a low dose rate, in spite of a good tumor specificity of the targeting agent, then the normal tissues must have an even lower dose rate. Thus, normal tissues will in such cases always have better possibilities to repair unwanted damage. It is the differential uptake between normal and tumor tissue that is important. If the tumor specificity is too low then it will be difficult to obtain high tumor doses without also giving too high doses to normal tissues, independent of the dose rate.

HIGH-LET

Conventional radiation with low ionization density, low-LET (LET = Linear Energy Transfer), delivers an ionization density in the range of 1 eV per nm, which seldom gives dense clusters of ionization. The corresponding values for high-LET radiation (e.g. alpha particles) are 100-200 eV per nm. A delivered energy of about 30 eV is needed for each ionization, since about 10-15 eV is needed for the release of a bound electron and the rest of the energy is converted to thermal energy.


Figure 7 A crude comparison between the ionization densities and the size of DNA when low- and high-LET radiation types are applied.

Considering that the diameter of the double stranded DNA molecule is 2 nm and that high-LET radiation delivers in the range 200-400 eV over that distance, it is easy to realize that densely ionizing radiation can give severe and clustered damage, which is difficult to repair. High-LET radiation can be considered to give few particle tracks, but everything that is traversed is destroyed. Low-LET radiation gives more randomly scattered ionizations separated by rather large distances along the DNA molecule (Fig. 7), and the cell has a good chance to repair such damage. There are nearly no dose rate effects when high-LET radiation (e.g. alpha particles) is applied in targeted radiotherapy. It does not matter if two severe DNA lesions are made within a few hours or within only a few minutes. The damage can probably not be effectively repaired and each lesion might be severe enough to inhibit further cell divisions. Thus, provided the uptake of the radionuclides is tumor specific, it might be an advantage to use high-LET radiation because the probably low dose rate is not an obvious disadvantage. It might in fact be enough to administer the radionuclides over a very long time.
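Using the figures above (about 30 eV per ionization, roughly 1 eV/nm for low-LET and 100-200 eV/nm for high-LET radiation), the expected number of ionizations within the 2 nm diameter of the DNA double helix can be estimated in a few lines (a back-of-the-envelope sketch, nothing more):

    DNA_DIAMETER_NM = 2.0
    EV_PER_IONIZATION = 30.0

    def ionizations_across_dna(let_ev_per_nm):
        # Average number of ionizations produced while crossing one DNA diameter
        return let_ev_per_nm * DNA_DIAMETER_NM / EV_PER_IONIZATION

    print(ionizations_across_dna(1.0))     # low-LET: about 0.07, i.e. most traversals give none
    print(ionizations_across_dna(150.0))   # high-LET: about 10, a dense cluster of damage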

OXYGEN

Acute hypoxia or anoxia in a tumor gives radioresistance due to the decreased number of formed free radicals and the lack of oxygen-mediated fixation of damage. When oxygen is depleted there are better chances for proton donation from glutathione, which also means increased protection. The quantitative determination of the oxygen effect is given as a dose quotient as shown in Fig. 8. Chronic hypoxia can under certain conditions also give radioresistance. The problems with hypoxia and anoxia are the same when targeting of radionuclides and external radiation are applied. However, one specific problem relating to targeting is that hypoxia and anoxia are an indication of poor vascularization, which might in itself prevent the targeting agent from reaching all tumor cells. Thus, it is possible that hypoxia and poor uptake of radionuclides might correlate. This has to be analyzed further. Something that is well known is that high-LET radiation is equally effective on hypoxic cells and normoxic cells, which means that it could, in some cases, be an advantage to use such radiation.
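The dose quotient indicated in Fig. 8 can be illustrated with the linear-quadratic model from the survival-curve section: the OER is the ratio between the hypoxic and the well-oxygenated dose giving the same survival level. The parameter values below are assumptions chosen only to produce two separated curves, not measured data:

    import math

    def dose_for_survival(surviving_fraction, alpha, beta):
        # Invert S = exp(-(alpha*D + beta*D^2)) to get the dose D
        effect = -math.log(surviving_fraction)
        return (-alpha + math.sqrt(alpha**2 + 4.0 * beta * effect)) / (2.0 * beta)

    oxic = dict(alpha=0.30, beta=0.030)        # assumed well-oxygenated cells
    hypoxic = dict(alpha=0.12, beta=0.0048)    # assumed hypoxic (radioresistant) cells

    d_oxic = dose_for_survival(0.01, **oxic)
    d_hypoxic = dose_for_survival(0.01, **hypoxic)
    print(round(d_hypoxic / d_oxic, 1))        # OER of about 2.5 at the 1 % survival level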

Figure 8 The influence of hypoxia on cell survival. The method to calculate the oxygen enhancement ratio, OER, is indicated.

However, it is far from clear to what extent human tumors suffer from such severe hypoxia that the cells really are radioprotected. The partial oxygen tension has to be much below 10 mm Hg for the cells to be fully radioprotected. Another unsolved question is whether severely hypoxic cells still have the capacity to proliferate. Furthermore, some scientists have claimed that even if there are hypoxic areas with clonogenic cells in certain regions of a tumor, these cells might not constitute a problem because of reoxygenation during the course of fractionated radiotherapy. It can be concluded that in spite of several years of research there are still uncertainties regarding the importance of hypoxia for the curability of tumors, and the uncertainties regarding this problem are not smaller when targeted radiotherapy is considered.

CYCLING CELLS

Figure 9 Examples of reported variations in the radiosensitivity for resting cells and cells in the cycle when exposed to low- and high-LET radiation.


Cells in the cycle are, on average, more sensitive to low-LET radiation than cells in resting phase and this applies equally to both normal and tumor cells. Furthermore, the radiosensitivity has been reported to vary over the cell cycle as shown in Fig. 9 when conventional low-LET radiation is applied and there is no difference whether the radiation comes from an external or an internal radiation source.

High-LET radiation prevents the cells in the late S and early G1 phases from being protected. In fact, it is assumed that cells out of the cycle are as sensitive as cells in the cycle when high-LET is applied, and it might therefore be an advantage to use high-LET radiation in those cases where the targeted cancer cells are not in the cycle.

RADIONUCLIDES

Different radionuclides are of interest and some considered for therapy are listed in Table 2. The interesting radionuclides can be divided into at least two groups: those that give long range effects so that neighbors to the targeted cells can also be damaged, and those that give short range effects so that mainly the targeted cells suffer. Among the long-range agents are high-energy beta emitters such as 90Y (max. range 12 mm) and 131I (max. range 2.4 mm), suitable when the uptake is high but heterogeneous. Another interesting nuclide is 32P (max. range 8 mm). However, so far it is mainly available in the form of phosphate and has therefore been considered too dangerous to use, since dephosphorylation and phosphorylation processes might give unwanted incorporation of the radioactivity in normal tissue such as bone. At the other extreme are the Auger electron emitters, such as 125I, which have to be inside the cell nucleus to give a significant radiation dose to DNA. The range of most Auger electrons is only about 1-2 µm. Radioactive alpha emitters, such as 211At, give alpha particles with a range of about 50-70 µm and therefore belong to an intermediate group. The therapeutical efficiency of high-energy beta-emitting radionuclides varies with the tumor size and is highest at diameters comparable to the range of the beta particles. This has clearly been shown for 131I and 90Y by Wheldon. It was shown that these nuclides are not efficient for small cell clusters or single isolated cells because the emitted electrons have such high energy and range that they deliver large fractions of their energy outside the small cell clusters or single targeted cells. This is shown in the recurrence probability curves shown in Figure 10.


Table 2. Examples of radionuclides of therapeutical interest. The last column indicates the labeling mode. Ch stands for chelator mediated whereas H stands for halogen labeling.

Nuclide   Type of decay   Energy (MeV)   Max range (mm)   Mean range (mm)   Labeling
90Y       β               2.3            12               4                 Ch
131I      β               0.6            2.4              0.8               H
211At     α               5.9            0.05             0.05              H
125I      Auger           <0.03          <0.005           <0.001            H

Figure 10 Tumor recurrence probabilities versus tumor cell number when 131I (left) and 90Y (right) radionuclide therapy is applied. The influences of type 1 heterogeneity are indicated with dashed lines.

For single cells or small cell clusters, it is better to target the tumor cells with short-range radiation such as alpha emitters or agents other than radioactive nuclides, such as toxins or stable nuclides for neutron capture therapy. In the latter case, 10-B can be applied because the induced high-LET fission fragments have a

range of 6-9 µm, which is ideal to kill a single targeted cell. The auger electron emitters can only be used when there are reasons to believe that the radionuclides are taken up in the cell nucleus. TARGETING PRINCIPLES Several targeting principles for tumor selective delivery of radionuclides to tumor cells have been described in the literature. The most popular so far is to tag the nuclides to monoclonal antibodies either with more or less specific methods or by attaching the radioactivity to certain parts of the antibodies such as the carbohydrate moiety. Antigenic structures have been identified as potential targets such as the membrane-associated form of the carcinoembryonic glycoprotein antigen, CEA, in colon carcinomas. A new group of interesting targets are the sometimes overexpressed growth factor receptors, such as the EGF-receptor which is overexpressed in several malignant gliomas, adenocarcinomas and various squamous carcinomas, and the PDGF-alpha receptor which is overexpressed in certain gliomas. In these cases, monoclonal antibodies with specificity for the receptors and the corresponding ligands loaded with radioactivity could be applied. The latter is presently being tried with EGF-dextran conjugates loaded with radioactivity. Overexpressed signal pathways and other related uptake mechanisms, such as the mIBG (catecholamine precursor analogue) uptake mechanism, can be used in certain cases, e.g., for neuroblastomas.

HETEROGENEITY

The therapeutical response of any type of radiotherapy is more or less heterogeneous and the targeting processes give extra heterogeneity. The heterogeneity problems in targeted radiotherapy are of at least four different types:

Type 1. There might, in targeted therapy, be heterogeneity in the expression of targets, or in the uptake of the targeting substances, or both. Antigenic expression and the uptake of monoclonal antibodies can be heterogeneous, which might also apply to other targets and targeting substances. The heterogeneity develops as a result of the karyotypic and phenotypic instability which characterizes most malignant tumors and which is most accentuated in large tumors.

Type 2. If there is homogeneous uptake of the targeting substance, there is heterogeneity in the specific energy deposition. This applies both to radionuclide therapy and external radiotherapy and is most pronounced for high-LET treatments where the dose is delivered through few particle tracks.

Type 3. If a sub-population of cells is selected on the basis of similar specific energy deposition, then there is heterogeneity because not all particle tracks give similar damage. Some tracks might give no, or only a few, breaks in DNA whereas other tracks might pass longitudinally through a DNA molecule causing several breaks and severe fragmentation. Thus, all cells with a certain energy deposition do not suffer from identical damage.

Type 4. If we select a subgroup of cells that all have about the same specific energy deposition and similar types of DNA damage, then there might be differences in the effects on cell proliferation due to differences in repair. Some cells might be near mitosis, and there will then be only a few hours before the chromatin is condensed and chromosome or chromatid damage is expressed, whereas other cells might be in an early cell cycle phase or in a quiescent state and therefore have a long time available for repair.

All four types of heterogeneity must be considered in targeted therapy whereas types 2-4 only apply in external radiotherapy. Types 2-4 also account for the "fractional survival" always seen in radiation cell survival curves in vitro.

RECURRENCE


Considering typical cell survival values for radiotherapy, recurrence probability curves can be constructed for radionuclide therapy as

described by Wheldon. Such curves are shown above in Fig. 10. These curves show that the recurrence probability is low at certain tumor sizes and increases for both smaller and larger tumors. Heterogeneity does not appreciably change the optimal tumor size for treatment, but decreases the total dose and thereby increases the risk of recurrence and the curves are then shifted upwards along the y-axis in Fig. 10. WHOLE BODY IRRADIATION Photons, which most often accompany alpha and beta decays, are not of immediate interest for targeting but they give an undesired background dose to large areas of the body. The most critical tissue is the bone marrow, which, if the whole marrow is irradiated, sets the LD50 dose for humans to about 4 Gy. The bone marrow syndrome is, after doses higher than 4 Gy, fully expressed after one month in humans, which corresponds to the time it takes for most blood cells and lymphocytes to die. The persons then die because new blood cells and lymphocytes have, due to the irradiation, not been formed from the stem cells in the bone marrow. A life can only be saved if bone marrow transplantation can successfully be carried out. When whole body doses above 10 Gy are applied, the gastrointestinal system shortens the lifetime to about one week corresponding to the time it takes for the gastrointestinal epithelium to be degraded. The probability of saving a human life after a total body dose higher than 10 Gy is zero. In targeted radiotherapy, it is important to keep the whole body dose, and especially the bone marrow dose, well below 2-3 Gy. If it is not possible to avoid this, preparations must be made for bone marrow transplantation. In the latter case, it might be possible to increase the whole body dose up to about 7-8 Gy. The dose values given in this section apply to high dose rate situations (about 1 Gy/min), commonly used in external radiation therapy. At low dose rate (< 1 Gy/hour), which is the common situation for systemic radionuclide


therapy, somewhat higher values (about 1.5 times higher) can be applied.

CHAPTER 7 DETECTORS

Our five senses are limited, and to understand what really occurs around us we must sometimes use devices – detectors. Our unassisted senses have limited means of detecting ionizing radiation. Heavy ions passing through the vitreous body of the eye may produce light that is perceived, and ions in the air give only a blurred perception of "something". Roentgen discovered that his rays gave rise to "phosphorescent" light when they met certain materials. Becquerel discovered natural radioactivity by demonstrating that certain minerals blackened photographic plates. The Curie couple learned to make use of the electroscope to measure the charges that radium produced in air. So we have gradually learnt to construct and use instruments that augment our senses.

Using heat as a detector

Ionizing radiation transmits energy to the irradiated material. Like all other energy deposition, it gives rise to an increase in temperature that can be used to monitor the irradiation. An absorbed dose of 1 Gy (1 J/kg) will increase the temperature by about 2.5·10⁻⁴ °C in water, which is a fairly good equivalent to tissue. Thus, irradiation with 4 Gy, which is a fatal whole body dose, should increase the body temperature by only about 0.001 °C. In radiation protection, where the doses are in the range µGy – mGy, the warming effect is accordingly quite negligible. Hence, measuring instruments in the laboratory must be founded on other principles. They must be sufficiently sensitive to give warning at dose rates > 10 µGy/h. But they must also be able to measure low radioactivity concentrations in laboratory work or to measure radioactivity distributions in Nuclear Medicine. Varying measurement problems make different demands on the instruments' performance. A single instrument is seldom ideal from all viewpoints. Therefore, many different types of instruments and equipment are developed. To be able to interpret the measurements correctly, we need to know the principles of the instrument, the type of radiation it can indicate, how it is calibrated and – not least – how to handle the instrument. Common types of laboratory and radiation protection instruments and detectors will be described in this chapter, as well as their characteristics and range of applications.
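The temperature figures quoted above follow directly from the heat capacity of water, about 4200 J per kg and °C; a minimal check:

    HEAT_CAPACITY_WATER = 4186.0   # J/(kg * degC)

    def temperature_rise_degc(dose_gy):
        # Temperature rise in water from an absorbed dose (1 Gy = 1 J/kg)
        return dose_gy / HEAT_CAPACITY_WATER

    print(temperature_rise_degc(1.0))   # about 2.4e-4 degC per Gy
    print(temperature_rise_degc(4.0))   # about 1e-3 degC for a lethal whole body dose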

Detection Principles

Most detectors for ionizing radiation use its ability to cause ionization and excitation (see Chapter 1). This chapter will just deal with instruments based on this principle. Other detectors may be based on the ability of certain types of radiation to cause nuclear reactions (e.g. in the detection of neutrons).

"Counts" of electrons

When matter is ionized a large number of electrons are set free. Binding energies of outer shell electrons are in the order of 3-6 eV, but since the energy transfer is a statistical process an excess of energy is needed. On average, in most materials about 30 eV are required to create one ionization. Energy not used for ionization causes excitations, vibrations and other atomic processes. A 1-MeV particle, which deposits all its energy in a detector, creates about 30,000 primary free electrons. If all electrons are collected and counted in a proper way, we should be able to register not just that the detector was hit by the particle but also the energy, position and direction of the particle. How much information we can get depends not only on the detector principle but also on the sophistication level of the detector system. Materials that are insulators, such as gases and certain solid materials, allow a direct count of the ionized electrons. If a high tension is applied, no current will flow in an insulator. If the material is irradiated, free electrons are produced that can move under the influence of the electric field (Figure 1a), and a measurable electric current is produced (Figure 2). If many particles hit the detector they may contribute to a constant current at the level of a

few pA, which can be measured by an electrical instrument. This current is related only to the total energy deposited in the detector. No information about the number or type of particles is obtained. If we instead measure the current from each individual particle as a short current pulse, more information is obtained. We can count the number of particles per second hitting the detector, and we can integrate the pulsed current and obtain information about how much energy each particle deposits in the detector, etc. However, measuring current pulses consisting of only a few thousand electrons is technically more difficult and puts other demands on the detector material.
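The orders of magnitude quoted here (about 30 eV per ion pair and currents of a few pA) can be connected by a short estimate; the particle energy and rate below are arbitrary example values:

    ELEMENTARY_CHARGE_C = 1.602e-19
    EV_PER_ION_PAIR = 30.0

    def detector_current_a(particle_energy_ev, particles_per_second):
        # DC current when every particle deposits all its energy in the detector
        ion_pairs_per_particle = particle_energy_ev / EV_PER_ION_PAIR
        return ion_pairs_per_particle * ELEMENTARY_CHARGE_C * particles_per_second

    print(detector_current_a(1.0e6, 1000))   # about 5e-12 A, i.e. a few pA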

Figure 1 Different effects of ionizing radiation. a. Free electrons are produced by ionization (an excited or ionized atom and a free electron with kinetic energy). In an electric field, an electric current or pulse is produced which can be measured. b. On ionization or excitation, a hole appears in one of the electron shells. An electron from an outer orbit fills this hole and, at the de-excitation, a photon is emitted with an energy equal to the difference in binding energies. In certain materials these photons are emitted as detectable light photons. c. The ionization changes the chemical state of the material permanently. The number of such changes can be measured chemically.

"Counts" of light photons
On ionization and excitation, a hole appears in one of the electron shells around the atom. Sooner or later, an electron from an outer level will fill this hole and release energy that can be emitted as a light photon (Figure 1b). Such photons can be "counted" by light-sensitive detectors (photo-multipliers or photo-diodes). The number of photons is proportional to the number of ionizations and excitations in the material, which in turn is related to the total energy released in the detector by the ionizing radiation. Materials of this type are called scintillators and may be gases, liquids or solids.

"Counts" of changed chemical states
A change in the chemical state occurs when an atom or molecule undergoes ionization or excitation (Figure 1c). For example, if the irradiated substance initially contains divalent ions (e.g. Fe++) and an electron is knocked out, trivalent ions are formed (Fe+++). These ions can be distinguished chemically and the number of trivalent ions formed can be determined. The number of ions formed gives a measure of the energy that the radiation has deposited. Liquid and solid materials can be used in these types of detectors.

Figure 2 Principles of a gas detector. An ionizing particle creates a number of ion pairs in the gas between two electrodes. The electrons move towards the positive electrode and the positive ions towards the negative electrode. A current flows through the detector and can be measured either as pulses or in dc mode.

Most radiation protection instruments are constructed on the basis of these three principles: counting the number of electrons, photons or changed chemical states. The construction and use of the most common types will be described below.

Gas detectors
The principles of a gas detector are illustrated in Figure 2. The factors that determine the detector function are:

1. The voltage between the electrodes.
2. The design of the electrodes.
3. The composition and pressure of the gas.
4. The wall construction of the detector.

Influence of high tension
If no voltage is applied, the ion pairs which have been formed recombine (Figure 3a). This means that the positive ions and the electrons attract one another and reform neutral molecules or atoms. No current is measured and hence no ionizing radiation is detected. If a small voltage is applied, the produced ion pairs can either recombine or be separated by the electric field. The number of separated ion pairs increases with increasing voltage, but is still less than the number produced by the ionizing radiation (Figure 3b). Increasing the voltage further, a stage is reached where all the produced ion pairs are separated and picked up by the electrodes (Figure 3c). The electrons and the positive ions are accelerated in the electric field but occasionally collide with neutral gas particles and lose their kinetic energy.

If the voltage is increased further (Figure 3d and e), the electrons will, between collisions, gain so much energy that they, in turn, can give rise to secondary ionization. This creates ion pairs that were not produced directly by the ionizing radiation. Figures 3d and e also show how the collected charge varies depending on where the primary ionization took place. By designing the electrodes suitably, detectors can be constructed so that the collected charge is practically independent of where the primary ionization occurred.

Figure 3 The influence of the voltage between the electrodes of a gas detector. a. If the voltage is zero, the ion pairs formed recombine. No current flows through the detector. b. If the voltage is small, the ion pairs formed can either recombine or travel to their respective electrodes. The current through the detector corresponds to fewer ion pairs than were primarily formed. c. If the voltage is increased a little, the primarily formed ion pairs move towards their respective electrodes. The current measured is equal to the number of primarily formed ion pairs. d. If the voltage is further increased, the electrons formed acquire such a high energy that secondary ions are produced. The current through the detector becomes greater than that corresponding to the primarily formed ion pairs. e. If the voltage is the same as in d, but the primary events occur nearer the positive electrode, fewer secondarily formed ion pairs are created. The measured charge then depends on the geometry of the detector.

Figure 4 The number of collected ion pairs in a gas detector as a function of the electrode voltage (0-900 V), compared with the number of primary ion pairs from a β-particle (N1) and from an α-particle (N2). The regions are: I recombination region, II ion chamber region, III proportional counter region, IV Geiger-Müller region and V spark region. A detailed explanation of the diagram is given in the text.

The influence of the voltage is summarized in Figure 4, which gives the number of collected ion pairs as a function of the applied voltage. For comparison, the diagram also shows the number of primary ion pairs produced by a beta particle (N1) and by an alpha particle (N2) and how they relate to the number of collected electrons.

I Recombination region. The number of collected ion pairs is smaller than N1 and N2.
II Ion chamber region. The number of collected ion pairs is equal to the number of primary ion pairs N1 and N2.
III Proportional region. The number of collected ion pairs is proportional to the number of primary ion pairs formed. The proportionality factor, also called the gas multiplication factor, can be as large as 100,000.
IV Geiger-Müller region. The number of collected ion pairs does not depend on the number of primary ion pairs formed. Even if the deposited energy varies, the detector gives the same pulse height.
V Discharge region. A spark occurs between the electrodes for each ionizing particle which gives rise to ion pairs in the detector.

Ionization chamber
A detector that works within the saturation region, in which all the formed ion pairs are collected, is called an ionization chamber (Figure 2) and is generally a portable instrument. The ion current produced in the instrument can be measured directly.

Figure 5 A portable ionization chamber for radiation protection use. The current measured corresponds to the primarily formed ion pairs.

An ionization chamber with a volume of 1 liter exposed to an irradiation of 1 µGy/h produces an ion current of about 10^-14 A. In a portable instrument, it is difficult to amplify and measure smaller currents. Hence, 1 µGy/h is usually the lower measurement limit of such instruments. By refining the electronics or increasing the ionization volume, instruments with higher sensitivity can be made, but they become bulky and cumbersome.
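The 10^-14 A figure can be checked with a rough order-of-magnitude estimate. The sketch below assumes an air-filled chamber (about 1.2 g of air per liter) and about 34 eV spent per ion pair in air; these numbers are assumptions, not taken from the text.

# Rough order-of-magnitude check of the ionization-chamber current quoted above.
E_CHARGE = 1.602e-19          # C per elementary charge
W_AIR = 34.0 * E_CHARGE       # J spent per ion pair created in air (assumed)
AIR_MASS = 1.2e-3             # kg of air in a 1-liter chamber (assumed)

def chamber_current(dose_rate_uGy_per_h):
    """Saturation current (A) of an air-filled ionization chamber."""
    energy_per_s = dose_rate_uGy_per_h * 1e-6 / 3600.0 * AIR_MASS  # J/s absorbed
    ion_pairs_per_s = energy_per_s / W_AIR
    return ion_pairs_per_s * E_CHARGE

print(chamber_current(1.0))   # about 1e-14 A, in line with the value in the text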

Proportional Counters
By increasing the voltage, the primarily produced electrons can be accelerated to such a high velocity that they in turn cause further ionizations.

Figure 6 A proportional counter. The measured current is proportional to the number of primarily formed ion pairs. The ratio between the measured and the primarily formed ion pairs is called the gas multiplication constant.


As is illustrated in Figure 3 d and e, the number of collected electrons depends on where the primary ion pairs are formed. By shaping the positive electrode suitably, this dependence on position can be eliminated. If a thin straight filament or loop is used as the positive electrode, the field intensity close to it is so great that it causes secondary ionization (Figure 6). In the large volume of the detector, the primarily formed electrons migrate towards the positive electrode without causing secondary ionization. Only the electrons within the nearest millimeters of the thin filament are exposed to such a high field intensity that they can cause secondary ionization. Within this region, each primarily formed electron causes repeated ionization, which results in an intensification of the primary current pulse. This ionization avalanche can intensify the primary yield by a factor of 100 to 100,000. The phenomenon is called gas multiplication. It is mainly the composition and the pressure of the gas that determine the sensitivity and the working voltage of the proportional counter. The size of the current pulse obtained from the counting tube is proportional to the number of primary ionizations created by the penetrating particle. This can be used in a mixed radiation field to distinguish between particles, such as alpha and beta particles, that give rise to different specific ionization. Such pulse discrimination demands a precisely stabilized high tension and pulse height discriminators. By operating the proportional counter at atmospheric pressure, the chamber can be designed so that a weak radioactive sample can be introduced into it. Thus a favorable measuring geometry is achieved, and at the same time the method makes it possible to measure particles whose energy is so low that they cannot penetrate the chamber walls. This type of detector is called a "flow counter". After the sample has been introduced, a suitable gas composition in the detector is established by a slow flow of counter gas (Figure 9c).


Geiger-Müller counter (GM-tube)
If the high tension is increased above the proportional region, the proportionality between pulse size and primary ionization is lost. In region IV of Figure 4, the so-called Geiger-Müller tube plateau, the detector pulses become equally large for α- and β-particles. The gas amplification can amount to 10^8, but the pulse size does not vary very much. If the voltage is increased still further, there is an electrical breakdown creating a spark. We then enter the spark chamber region, where light or sound from the sparks is counted. In the GM-tube, the electron avalanche close to the filament advances along the whole filament (Figure 7) with a speed of typically 10 cm/µs. In the gas cylinder around the filament, all electrons are picked up by the electrode, leaving a positively charged ion plasma. This decreases the electrical field gradient in the immediate environment of the filament, which is essential for the genesis of an electron avalanche. When the positive ions have diffused towards the casing of the tube, the high electrical field intensity around the filament is restored and the tube is again ready to register a new event.


Figure 7 Geiger-Müller detector. The discharge around the filament produces so much positive charge that the region of high electrical field strength disappears. When the positive ions have diffused towards the surface, the high field strength returns and the detector is again sensitive. When the positive ions reach the wall of the tube, they can knock out electrons from it and thereby create new pulses unintentionally. This can be prevented either by electronic

"quenching-circuits" that reduce the tube voltage briefly (between 10-3 and 10-4 seconds) or by making a gas mixture so that the tube becomes "self-extinguishing".

Figure 9 Different constructions of gas detectors: a. straight, cylindrical tube; b. end-window tube; c. gas-flow tube with sample holder and gas inlet.

As a rule, the gas mixture consists of an inert gas such as argon at a pressure of 10 cm mercury and a quenching gas, commonly ethanol (10 %) or halogen (0.1 %). Tubes containing an alcohol admixture have a limited lifetime (about 10^10 pulses) since the alcohol molecules are consumed, whereas halogen tubes have the practical advantage of unlimited lifetime.

Figure 8 This instrument contains a small GM-tube. A larger GM-tube can be connected if a more sensitive measurement is required. This tube may be of the end-window type, equipped with a very thin foil that allows low-energy β-particles to penetrate.

Gas detectors can be made in many different ways for different uses. An instrument for photon radiation with energies higher than 50 keV requires a thick wall around the gas (Figure 9a). Photons of high energy have a very low probability of interacting with the gas itself. The thick wall increases the probability of interactions in which high-energy photo- or Compton electrons are formed. These can then enter and ionize the detector gas.

However, low-energy photons and electrons cannot readily penetrate thick detector walls. For these types of radiation, detectors with thin walls must be chosen so that the radiation can enter the detector gas. For low-energy electrons (e.g. beta particles from 14C), extremely thin windows (1-10 µm) are essential. Often a combination is used, in which a thick-walled detector is provided with a thin foil at one side. Such a construction is usually called an end-window counter (Figure 9b). To detect 3H betas (energy < 18 keV), a windowless detector (gas flow counter, Figure 9c) has to be used. To achieve stable conditions, the detector gas is allowed to flow at uniform speed through the detector and out through an opening where the investigated object is introduced. Radiation protection instruments that work according to this principle have been constructed to detect 3H but are generally expensive and difficult to use.

The ionization chamber is in principle the best instrument (Figure 5) to measure the radiation energy deposited or the dose equivalent. The flow of ions through the chamber is directly proportional to the absorbed energy. Disadvantages are that the instrument is relatively insensitive and can be heavy and awkward to use. The proportional counter gives a pulse that is proportional to the energy, but it is not especially suitable for measuring the absorbed dose. When the proportional counter is used as a radiation protection instrument, it is usually to differentiate between sparsely and densely ionizing radiation (e.g. α and β).



The Geiger-Müller counter gives no energy information. However, by a suitable design of the detector walls, the instrument can be calibrated to show the dose rate equivalent (µSv/h) for photons within a wide energy range (50 keV - 3 MeV). It must be stressed that this is only an approximate calibration, valid only for photon irradiation within this energy range. The great advantages of GM-detectors are that they are robust, easy to handle, relatively sensitive, and cheap.

Scintillation Detectors
Many solid and liquid materials can convert the energy of absorbed radiation into light. This phenomenon, luminescence, has long been used in nuclear physics to detect particles and to make direct observations on fluorescent screens, e.g. of patients undergoing radioscopy. As early as 1910, Rutherford used a scintillation detector (zinc sulfide) in his scattering experiments with α-particles. He then observed the light flashes from the screen, the scintillations, with his naked eye.

Figure 10 Principles of a scintillation detector. Ionizing radiation deposits its energy as ionizations and excitations in the detector material (crystals, organic liquids or gases). A part of the energy is transformed into light emitted in flashes, or scintillations. The scintillations are registered by a light-sensitive detector (photomultiplier tube) which transforms the light into electric signals, and the electric pulses are analyzed and recorded by a special type of electronics, pulse electronics.

Today, sensitive light detectors (photo-multiplier tubes and photo-diodes) can convert the emitted light into an electric signal, which is then recorded. The principles of a scintillation detector are given in Figure 10. The requirements on a scintillating material to be used for detector purposes can be summarized as follows:
1. High efficiency in converting energy in the form of ionization and excitation to light. If a β-particle deposits 100 keV in a scintillator with a 10 % conversion factor, only 10 keV is emitted as light energy (a photon of blue light corresponds to ~3 eV). Accordingly, each scintillation yields about 10,000/3 ~ 3,000 light photons.
2. A scintillator must be transparent to the light it produces.
3. The light photons must be emitted during a short period (< 10^-5 s).
4. The wavelength of the emitted light must match the light detector's response function. A photo-multiplier tube typically has a sensitivity of about 20 % for detecting a blue photon. For the example above, this means that if 3,000 blue photons hit the PM-tube only 3,000 × 0.2 = 600 photons are detected.


Both organic and inorganic materials can work as scintillators. Organic scintillators can be regarded as organic scintillating molecules in solution, either in liquid form or in a solid phase. They usually have a density and an elemental composition corresponding to body tissue. They are also characterized by great speed, that is to say the light flashes are of short duration. An advantage is that they can be made in large volumes where several photomultipliers collect the light. Inorganic crystals have greater density (2,000-8,000 kg/m3) and often a greater light yield, but with a longer after-glow time, than organic scintillators. The most common crystal in the inorganic group, sodium iodide activated by thallium, NaI(Tl), is available commercially in sizes up to several dm3 and in thin slices with diameters of 1 m or more. Due to its favorable characteristics - absorption, light emission, speed and cost - the NaI(Tl) detector is of great importance in γ-radiation detection. These crystals are very sensitive to humidity and are therefore encased in an internally white, reflecting wrapper with a window that permits light to reach the photo-multiplier tube.

NaI(Tl)-detectors
The principle of a scintillation detector is shown in Figure 11. For gamma detection it is an advantage if the scintillator contains high atomic number materials, such as iodine, since these have a high probability of interacting with the gamma rays. A high probability of absorbing the total gamma energy is also an advantage, since the pulse then measures the incoming gamma energy.

Figure 11 The principle of a NaI(Tl) detector. A crystal of sodium iodide, usually in the form of a cylinder, is attached to a photo-multiplier tube with good optical contact. The other surfaces are coated with reflecting material to force light photons to hit the photo-multiplier tube no matter where in the crystal they are formed.

The produced light photons in the detector hit the photocathode of the photo-multiplier tube. Generally, this consists of a bialkali compound, e.g. SbKCs, which has many loosely bound orbital electrons. Light photons can expel these electrons in a photo absorption process. The freed electrons are accelerated in vacuum by an electric field towards an electrode. The kinetic energy transferred creates more electrons, which in turn are accelerated towards a new electrode and so on. The electrodes are called dynodes and give rise to secondary electrons (compare gas multiplication).

The entire detector must be encased in a light- and airtight covering to protect the photo-multiplier from outside light and the crystal from humidity.


Typically, each dynode hit gives rise to 2-5 electrons. A photo-multiplier tube generally has 6-10 dynodes and may yield an electron amplification of about 10^6 times. Consequently, for each light photon that produces a photoelectron, 10^6 electrons are obtained from the photo-multiplier tube. A 100 keV energy deposition in the crystal may produce about 3,000 light photons. These will typically produce 600 photoelectrons (20 % conversion efficiency) and, in the end, a current pulse containing 600 · 10^6 electrons will emerge from the photo-multiplier tube.
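The signal chain just described can be put together as a small numerical sketch. All figures below are the representative values quoted in the text; real detectors differ, so treat them as assumptions.

# Illustrative sketch of the NaI(Tl)/PM-tube signal chain.
CONVERSION = 0.10        # fraction of deposited energy converted to light
E_PHOTON_EV = 3.0        # energy of a blue light photon (eV)
CATHODE_EFF = 0.20       # probability that a light photon yields a photoelectron
PM_GAIN = 1e6            # electron amplification over 6-10 dynodes

def pm_output_electrons(deposited_keV):
    """Approximate number of electrons in the anode pulse for a given deposit."""
    light_photons = deposited_keV * 1000.0 * CONVERSION / E_PHOTON_EV
    photoelectrons = light_photons * CATHODE_EFF
    return photoelectrons * PM_GAIN

print(pm_output_electrons(100))   # roughly 600-700 x 10^6 electrons, as in the text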

Figure 12 Gamma energy spectrum of 137Cs measured with a NaI(Tl) detector (number of pulses per keV versus pulse height, γ-energy in keV). The spectrum shows the photopeak, the Compton edge, the backscatter peak and the X-ray peak from the K-shell.

Generally, the sodium iodide crystal is "doped" with a small amount of thallium (0.2 %) to increase the light yield. Because of its high density and the high atomic number of iodine, the sodium iodide (thallium) detector is best suited for the detection of gamma- and X-radiation. Since the pulse coming from the detector is proportional to the deposited energy, and the radiation frequently deposits its entire energy in the crystal, the detector can be used for energy analysis. A typical gamma energy spectrum is shown in Figure 12.

Plastic Crystal Detector
If the sodium iodide crystal is exchanged for a plastic scintillator, a detector is obtained that is suitable for both beta and alpha detection. The light-proof wall of the crystal can be made extremely thin to let low-energy particles penetrate. This type of detector is also sensitive to photon radiation but, due to its low density, not to such a high degree as the sodium iodide detector. The plastic crystal detector is not suitable for energy analysis of photon radiation. The low atomic number of the plastic material means that incoming photons usually deposit only part of their energy in the detector (Compton interaction); the probability of photo absorption is, of course, low.

Liquid Scintillation Detectors
A scintillator in the form of a liquid makes it possible to mix the sample to be measured with the actual detector material (Figure 14).

Figure 13 Scintillator detector for radiation protection purposes. There is either a thin NaI(Tl)-crystal adapted for photon energies from the disintegration of 125I or a thick NaI(Tl)-crystal designed for detection of higher gamma energies.

Thin crystals make suitable detectors for photons of low energy. The total crystal volume is small and hence the efficiency for other radiation, e.g. cosmic radiation, is small. Hand instruments intended for the detection of 125I are available and can be made sufficiently sensitive to detect small amounts in the thyroid gland (a few hundred Bq). At high photon energies, one needs to increase the crystal volume to increase the efficiency. Thereby the background contribution from cosmic radiation increases, and these detectors become less suitable for detecting radiation of low energy.



Figure 14 Principles of Liquid Scintillation Counting. The dissolved radioactive sample and the liquid scintillator are mixed, a photomultiplier tube "views" the scintillations, and the electronics count the pulses and analyse the pulse height.

The advantage of this method is the intimate contact obtained between sample and detector. No walls prevent a direct transfer of β-energy to the scintillator. This technique maximizes the energy transfer from sample to detector and hence is well suited for measuring low-energy radiation from, for

example 3H, for which suitable radiation protection instruments are lacking.

Photographic Emulsions
Ionizing radiation can, like visible light, affect the silver halide granules of photographic emulsions so that they are reduced to silver in the development process. Blackening of photographic films thus occurs, as in X-ray photographic examinations. Photographic films can also be used for dosimetry.

Figure 15 Liquid Scintillation Detector. The instrument is very useful for radiation protection work, e.g. in contamination tests.

One common way to use the liquid scintillation detector for radiation protection is to make a sweep test. A bench surface or a part of the body suspected of being contaminated is wiped with absorbent material moistened in a suitable solvent. The absorbent material is then put in a scintillation vial (Figure 15), the scintillator is added, and the sample is measured in the detector. By analyzing the pulse heights, information can be obtained as to whether the material emits radiation energy.

Semiconductor Detectors
As an alternative to letting the radiation interact with a gas and collecting the ions formed in it, a solid material may be used. Suitable semiconductor materials are silicon and germanium. Sometimes such detectors must be cooled to a low temperature. Semiconductor detectors, which often have very good energy resolution, have frequently been used for α- and β-spectrometry.

Figure 16 Two types of personal dosimeters. On the left is a track-film type: in films, e.g. cellulose nitrate films, highly ionizing particles cause tracks which can be developed and counted. On the right is a dosimeter based on photographic film: when the film is irradiated it is blackened, and the density can be measured to give an idea of the dose of ionizing radiation received.

Electrons or γ-radiation cause a general blackening of film emulsions, whereas heavy charged particles, which have a proportionately short range, give rise to trajectory tracks that can be studied under the microscope. Such track detectors may also be used for uncharged radiation, e.g. neutrons. In the plastic film backing, fast neutrons are scattered by protons. In this process, low-energy protons with high LET produce tracks that can be measured and used to quantify the neutron dose. In radiation protection work, film dosimetry is of great importance for routine individual dose measurements. Two emulsions of different sensitivities are often used to ensure detection over a wide dose range. The density of the blackening is obtained by using densitometers, instruments that record the


amount of light that passes through the film emulsion. The densitometer scale is graded to give a direct reading of the density and dose. By protecting different parts of the film with different types of filters, one may distinguish between charged particles and photons. Low- and high-energy gamma radiation can also, to a certain degree, be separated. This is sometimes essential for estimating the risks associated with the irradiation.

Thermoluminescence Dosimeters
In certain materials, such as LiF, CaSO4, CaF2 and BeO, ionizing radiation can create excited states that persist for a long time. By heating the material, de-excitation can be achieved, whereby light photons are emitted and can be recorded. The amount of light emitted is proportional to the amount of radiation to which the dosimeter has been exposed. The radiation transfers energy to the bound electrons in the material, and some of these are raised up into the conduction band (Figure 17a). When these electrons fall back towards their bound state, they may be trapped in an excited level between the bound and conducting energy levels (Figure 17b). These traps are caused by impurities in the material. The electrons cannot fall directly down into the bound energy levels since such transitions are "forbidden". Depending on the material, the electrons can remain in these "traps" for months. The energy added in the heating process during read-out lifts the trapped electrons up into the conduction band (Figure 17c), from which they can fall directly to the ground levels, emitting light photons that are recorded. The thermoluminescent dosimeter can to a certain extent replace film as a personal dosimeter and is also suitable for special applications such as finger dose measurements. It is light, available in different forms, and easy to tape securely to the fingers.


Figure 17 Principle of operation of the thermoluminescence dosimeter. a. Ionizing radiation hits the thermoluminescent material and bound electrons are lifted up into the conduction band. b. A certain proportion of the free electrons immediately fall back to the bound state, emitting the excess energy as light photons in an allowed transition; others are trapped in intermediate levels (electron traps) with long lifetimes, from which the direct transition to the bound state is forbidden. c. The trapped electrons are lifted up to the conduction band by heating the material; they may then fall back to the bound state, emitting a light photon which is detected.

SUMMARY
The following table lists different detector principles for ionizing radiation.

Type of signal        Type of instrument       Detector material
Electrical pulses     Ionization chamber       Gas
                      Proportional counter     Gas
                      Geiger-Müller tube       Gas
                      Semi-conductor           Solid state
Chemical changes      Film                     Photographic emulsions
                      Chemical dosimeters      Solids/fluids
Prompt photons        Scintillation counter    Crystals or liquids
                      Čerenkov counter         Crystals or liquids
Delayed photons       Thermoluminescence       Crystals
                      Phosphorous imaging      Crystals
Raising of temp.      Calorimeter              Solid or fluid

CHAPTER 8 Scintillators in the laboratory

DEFINITION OF THE PROBLEM
Most radionuclides of biological importance, such as 3H and 14C, are pure, low-energy beta emitters. The maximum beta energy of 3H (18 keV) gives a penetration of about 10 µm in water. The mean energy of 6 keV yields only about 200 ionization events in total (30 eV per ionization). If we want to measure 3H with some efficiency we can therefore conclude that
• the sample has to have negligible mass or thickness to avoid self-absorption
• the detector should not have an absorbing window
If a gamma- or X-ray-emitting nuclide, such as 125I, is used, the self-absorption in the sample is of minor concern. The HVL of 125I photons in water is about 2 cm, which means that even rather thick samples emit measurable amounts of radiation. However, the detector material has to be heavy and dense to detect high-energy photons with some efficiency. The detector should also surround the sample to increase the detection efficiency.
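As a rough illustration of why self-absorption matters so little for 125I, the sketch below applies the half-value-layer relation to the 2 cm HVL quoted above; the exponential form is just the HVL definition and the sample thicknesses are arbitrary examples.

# Fraction of 125I photons transmitted through water, assuming the 2 cm HVL
# quoted in the text and simple exponential attenuation (HVL definition).
HVL_CM = 2.0

def transmitted_fraction(thickness_cm):
    return 0.5 ** (thickness_cm / HVL_CM)

for t in (0.5, 1.0, 2.0, 4.0):
    print(t, "cm of water ->", round(transmitted_fraction(t), 2), "transmitted")
# Even a 4 cm thick sample still lets about 25 % of the photons out.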

From such considerations, two main detection techniques have been developed to measure radioactive biochemical or biological samples:
• the liquid scintillation counter and
• the gamma well counter based on NaI(Tl)
Since such instruments are commonly used, we will here consider the techniques in some detail. The measuring results depend on different experimental conditions such as the type of radionuclide used, the volume and the chemical conditions of the samples, etc. Although modern instruments tend to correct automatically for a number of variables, untrained and inexperienced personnel may introduce severe measurement errors.

LIQUID SCINTILLATION COUNTING
The main idea in this technique is that both the sample and the detector are liquids, which are mixed to achieve close contact. The detector material converts the beta energy into light photons.

Figure 1 Low-energy beta particles will not penetrate detector walls: dissolve the sample, add liquid scintillator, shake and measure. The radioactive sample is mixed with the liquid scintillator detector in a plastic or glass vial (usually 2-20 ml in volume). The vial with the sample and the scintillator is placed inside a measuring chamber (Figure 2), where all the emitted light is reflected into 2 or 3 photo-multiplier tubes.

Figure 2 A liquid scintillation sample is placed inside the measuring chamber. The light photons from the scintillation are forced to hit one of two photo-multiplier tubes. A coincidence circuit sorts out background pulses, which arise stochastically in each PM-tube, from true events, which create simultaneous pulses in the two PM-tubes. The summed output pulse is proportional to the total energy delivered in the vial by the beta particle.

The detector
The energy transfer process from the radioactive decay to the emission of light is described below. Chemicals that convert absorbed energy to light are called "fluors". Usually 5-10 grams of fluor are dissolved in one liter of aromatic solvent. The aromatic structure of the solvent is essential since it provides a first excited electron level about 3-4 eV above the ground state. The number of excited states is proportional to the energy delivered by the beta particle. This excited level in the solvent has a rather long lifetime, but its energy is sooner or later transferred to the first excited level in the fluor (of somewhat lower energy). This excited level in the fluor has a short half-life (on the order of a few nanoseconds), and in its decay a blue light photon is emitted.

Figure 3 A schematic figure of the energy transfer from the beta energy to the emission of blue light photons (radioactive decay → energy transfer to an excited solvent molecule → energy transfer to an excited fluor molecule → emission of light).

Typically, about one blue photon is emitted per 200 eV deposited by the beta particle. The detection efficiency of the PM-tube is about 20 %. On average, it therefore takes about one keV to create one photoelectron, as seen in Table 1. Thus, a beta particle of 1 keV or less will probably not give rise to a signal at all.

Table 1 Typical radionuclide photon and photoelectron yields for a single decay event

Radionuclide   Max decay energy (keV)   Mean decay energy (keV)   Typical photon yield   Photoelectrons in the PM tubes
3H             18                       5.6                       30                     6
14C            159                      50                        250                    50
32P            1700                     650                       3300                   660
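Using the rules of thumb just given (about one blue photon per 200 eV of beta energy and a 20 % photocathode efficiency), the yields in Table 1 can be reproduced approximately; the sketch below is only illustrative.

# Approximate reproduction of Table 1 from the rules of thumb in the text.
EV_PER_PHOTON = 200.0
CATHODE_EFF = 0.20

def yields(mean_beta_energy_keV):
    photons = mean_beta_energy_keV * 1000.0 / EV_PER_PHOTON
    photoelectrons = photons * CATHODE_EFF
    return round(photons), round(photoelectrons)

for nuclide, mean_keV in (("3H", 5.6), ("14C", 50.0), ("32P", 650.0)):
    print(nuclide, yields(mean_keV))   # close to 30/6, 250/50 and 3300/660 in Table 1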

There is also a background to overcome. Now and then, a spontaneous emission of an electron from the cathode surface occurs (called the dark current of the PM-tube). To distinguish the true signal from this background, modern instruments use two or three PM-tubes coupled in coincidence (see Figure 2).

The solvent
Pure benzene is a highly flammable liquid with a high vapor pressure, which should be avoided for environmental reasons. Therefore, early


scintillators were usually based on toluene or xylene. Modern scintillators are, from this aspect, based on even better solvents (Table 2).

Table 2 Some data on commonly used solvents in liquid scintillation detection

Solvent                                  Flash point (°C)   Equilibrium vapor concentration, 25 °C (ppm)
Toluene                                  4                  36 000
Xylene                                   9                  11 000
Phenyl-ortho-xylyl ethane, PXE           149                1 400
Di-isopropylnaphthalene, DIN             148                1 600
(DIN is used e.g. in 'HiSafe' scintillators)

The high flash point solvents have several advantages. They may be stored under normal room conditions. The low vapor pressure lowers the ventilation demands. These solvents also have a much lower penetration through standard polyethylene vials, which enables long-term counting. The health hazard is considered to be lower than for toluene, but skin contamination should still be avoided. A slight disadvantage is that the counting efficiency is somewhat lower than that for a toluene-based scintillator. Water-based samples cannot be mixed directly with organic solvents. Usually, a detergent such as Triton X100 is introduced into the scintillator liquid to cope with this problem. Small amounts of water (up to 1.5 ml sample per 10 ml scintillator) can then be dissolved in the scintillator. If more water is added (5-10 ml per 10 ml scintillator) a stable gel is formed that can also be used for counting. The intermediate volume region (1.5-5 ml sample per 10 ml scintillator) creates an unstable two-phase system, which is not suitable for counting. The borders between the different phases vary with temperature and with the chemical composition of the sample, and details have to be taken from each individual producer of the scintillation cocktail.

The scintillator
The solvents themselves are rather poor scintillators, since the decay time of the excited states is rather long and the PM-tubes are not sensitive to the wavelength emitted. A more efficient scintillator, a fluor, with a short half-life and a suitable wavelength of the emitted light is therefore added. The most widely used primary scintillator is 2,5-diphenyloxazole, better known as PPO (see Figure 4). It emits light with a fluorescence maximum at 365 nm.

Figure 4 Commonly used primary scintillator, PPO (2,5-diphenyloxazole), and secondary scintillator, Bis-MSB (1,4-di-(2-methylstyryl)-benzene).

A problem may arise if the sample absorbs light in the wavelength region of the PPO emission. This may be solved if a wavelength shifter (secondary scintillator) is introduced, which converts the 365 nm light to a somewhat longer wavelength. A common secondary scintillator is 1,4-di-(2-methylstyryl)-benzene (Bis-MSB).

Detection efficiency
As seen in Table 1, the number of photoelectrons, i.e. the number of electrons emitted from the cathode surface of the PM-tube, is small. The coincidence condition requires that at least one photoelectron is created in each PM-tube, which corresponds to a deposited energy of about 2 keV. In a continuous beta spectrum, there will always be beta particles present with such low energy that they do not trigger the coincidence circuit. The counting efficiency is thus always below 100 %. It is easy to realize that this creates most problems for low-energy beta emitters, such as 3H, where the relative amount of such particles is high. Typical detection efficiencies are about 65 % for 3H, 95 % for 14C and better than 99 % for 32P.
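In practice these efficiencies are used to convert registered counts into decays. The sketch below anticipates the CPM/DPM relation given in the following paragraphs; the efficiency values are simply the typical ones quoted above.

# Converting counts per minute (CPM) to disintegrations per minute (DPM)
# using CPM = efficiency x DPM; the efficiencies are the typical values above.
TYPICAL_EFFICIENCY = {"3H": 0.65, "14C": 0.95, "32P": 0.99}

def dpm_from_cpm(cpm, nuclide):
    return cpm / TYPICAL_EFFICIENCY[nuclide]

print(dpm_from_cpm(650.0, "3H"))    # 650 CPM of 3H corresponds to about 1000 DPM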

Figure 5 Energy spectra (CPM per energy interval) of three typical beta-emitting nuclides, 3H, 14C and 32P, with maximum energies of 18, 159 and 1700 keV. The energy scale is logarithmic to enable a simultaneous display of the widely varying beta energy distributions.

The number of counts is often referred to as counts per minute (CPM) and is related to the number of disintegrations per minute (DPM) by the detection efficiency ε as

CPM = ε · DPM

Another important parameter is the pulse height, since it is related to the total deposited energy. 14C counting gives, on average, 8 times higher pulses than 3H counting. This makes it possible to separate these two radionuclides in dual-labeling experiments. The common way to display this is as an energy spectrum, as seen in Figure 5. Since there is a large difference in the beta energies between different nuclides (a factor of 100 between 3H and 32P), the energy scale is usually logarithmic. The area of the spectrum gives the total number of counts. A common way to handle this is to amplify the summed pulse (see Figure 2) with a logarithmic amplifier. This pulse is analyzed by an analogue-to-digital converter (ADC) and is presented by a multi-channel analyzer (usually 1024 channels or more). It is then easy to sum different parts of the spectrum.

Quenching
In an ideal world, the energy transfer in the scintillation system would be both perfectly efficient and undisturbed by environmental factors. In the real world, however, energy losses are substantial and depend on several factors. Quenching is a term used to describe energy losses in the energy transfer process (Figure 3). There are three main types of quenching processes that occur in the system:

• Physical quenching
• Chemical quenching
• Color quenching

Physical quenching
In gel counting, the scintillation system forms small micro-droplets. The hydrophilic part of the detergent is directed towards the water droplets whereas the lipophilic part is in contact with the organic solvent. The diameter of the droplets varies with the system parameters and the amount of water in the scintillator, but is in the range of 20-200 nm. If the radioactivity is in the aqueous phase, the beta particles have to penetrate a layer of water before they can start to interact with the detector part. During this water passage, energy is lost that is not available for light production. Another example of physical quenching is counting radioactivity absorbed in a solid phase, e.g. a filter disc.

Chemical quenching
Any compound in the system that does not have an aromatic structure similar to the solvent interacts with the excited states. Such molecules may steal the excited state and de-excite it in small steps without creating a light photon. The most severe quenching agents are halogenated compounds (CCl4 > CHCl3 > CH2Cl2), ketones and aldehydes. Less severe are salt solutions, bases, acids, alcohols and water.

Color quenching
Color quenching occurs after the fluorescence stage, when light-absorbing compounds are present. The number of photons that leave the scintillation vial and hit the PM-tubes decreases. The fluorescence emission takes place in the blue region of the spectrum. A sample colored yellow preferentially absorbs blue light, and therefore the order of severity of color quenching is:

Red > orange > yellow > green > blue

In all types of quenching, energy is lost. The output pulses are lower and the energy spectra in Figure 5 are shifted to the left. This means that differently quenched samples are measured with different counting efficiencies. However, if the quenching in an experiment can be shown to be constant, then the CPM value may be used as a substitute for the DPM value.

Quench Correction
A methodology is therefore required to evaluate the level of quench, determine the detection efficiency, and correct the CPM result to give DPM.

Quench correction by adding an internal standard
One method available is to add a known amount of a radioactive isotope to the scintillator-sample mixture. This is called "spiking". Ideally, the isotope added should be in the same phase as the sample. Using the following equation, the efficiency of the energy transfer process can be determined:

Efficiency = CPM increase / DPM added

The advantage of this method is that only simple arithmetic is required to obtain a DPM result. However, the disadvantages are many and the method is only used in special situations. The disadvantages are:

• Costly in terms of the amount of radioactive isotope needed.
• Costly in extra handling time.
• Costly in extra counting time (it is necessary to count the sample before and after spiking).
• Additional potential health hazard in extra sample handling.
• Physical quenching can lead to errors in the efficiency calculation.
• Increased errors due to multiple pipetting.

Sample Channels Ratio
To calculate the DPM of an unknown sample, the counting efficiency of the sample must be known. Spectral shifts are directly related to energy losses and thus to counting efficiency. A correction method which uses the radioactivity in the sample itself can be designed as follows:

• A number of calibration samples with known amounts of radioactivity are quenched differently. The samples are measured (CPM) and the counting efficiency (ε = CPM/DPM) of each sample is determined.
• The spectral shift due to the quenching is also quantified. The classical method is to split the spectrum into two parts, or two "counting windows", and to form a ratio:

Sample Channel Ratio (SCR) = (counts in window 1) / (counts in window 2)

• SCR is measured in all the calibration samples and will vary with the degree of quenching. We can now make a plot with the efficiency (ε) on the y-axis and the quench measure (SCR) on the x-axis.
• For an unknown sample we determine its SCR and, from the plot, read off the measurement efficiency of the unknown sample, as sketched in the example below.

Multi-channel analyzer (MCA) technology, where one can use a thousand windows instead of two to quantify the spectral shift of the sample, has improved this technique somewhat. However, the main drawback is that fairly high levels of radioactivity are needed in the samples to obtain good statistics in the quench parameter.
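A minimal sketch of the channels-ratio correction is given below. The calibration points are invented for illustration only; a real quench curve must of course be measured on the instrument at hand.

# Minimal sketch of quench correction with a sample-channels-ratio (SCR) curve.
# The calibration points below are purely illustrative, not instrument data.
CAL_SCR = [0.30, 0.45, 0.60, 0.75]          # quench measure of calibration samples
CAL_EFF = [0.25, 0.40, 0.52, 0.62]          # efficiency (CPM/DPM) of those samples

def efficiency_from_scr(scr):
    """Linear interpolation on the calibration curve (flat outside its range)."""
    pts = sorted(zip(CAL_SCR, CAL_EFF))
    if scr <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if scr <= x1:
            return y0 + (y1 - y0) * (scr - x0) / (x1 - x0)
    return pts[-1][1]

def dpm(cpm, scr):
    return cpm / efficiency_from_scr(scr)

print(dpm(1200.0, 0.50))   # CPM corrected to DPM using the interpolated efficiency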

External Standard Channels Ratio
Samples with a low radioactivity content are thus still a problem. The common way to overcome this is to irradiate the samples for a short moment with a strong, gamma-emitting source. This irradiation produces Compton electrons in the sample that interact with the scintillation solution just as the beta particles do. We can then measure a Compton electron spectrum, which is shifted in a quenched sample like a beta spectrum, and from it create a quench measure, the External Standard Channels Ratio (ESCR), which can be used to monitor sample efficiency in the same way as above. It is independent of the sample radioactivity, and high statistical confidence is achieved rapidly. Modern equipment, which analyses the data with an MCA, can create special quench parameters based on the external standard and can from these measurements automatically correct for various types of quenching.

CRYSTAL SCINTILLATION COUNTING
In liquid scintillation we measure the energy deposition of charged particles. Due to the low Z-number, gamma radiation is detected with low efficiency. However, all gamma-emitting nuclides also emit charged particles such as beta particles, conversion electrons, Auger electrons etc. and can thus be effectively detected in a liquid scintillation detector. So why use a gamma detector if you already have a liquid one? The answer is that gamma counting is simpler and more reliable. The gamma rays penetrate even large samples, so there is no need to dissolve the sample. The sample composition does not influence the measurements and no quenching corrections are needed. However, there is still a need to understand the system in order to avoid pitfalls. A gamma detector needs to have a high density and a high Z-number to be efficient. There are a number of crystal materials that fulfill the following criteria:

1. High atomic number and density
2. High light output
3. High transparency
4. Fast decay time
5. Stable in time
6. Inexpensive


However, the most commonly used gamma detector in a biological laboratory is by far a well counter based on NaI(Tl) (see also Chapter 7). We will below discuss different aspects of NaI-systems in some detail.

Sodium iodide spectroscopy
Because NaI(Tl) detectors have energy resolution, they can discriminate between gamma rays of different energies and hence can map the energy spectra of radionuclides. Before going into a detailed discussion of the features of NaI(Tl) spectroscopy, it is useful to briefly review the physics of gamma-ray interactions (see also Chapter 3). Over the range of energies typically encountered in nuclear medicine and biology procedures (30 keV to 1 MeV) there are two gamma-ray interaction modalities: photoelectric absorption and Compton scattering. In photoelectric absorption, the gamma ray's energy is completely absorbed by an inner-shell atomic electron. The electron is ejected from the atom with an energy equal to the gamma-ray energy minus its atomic binding energy. The vacancy left by this event is quickly filled by outer-shell electrons with the emission of characteristic X-rays and Auger electrons. The cross-section (that is, the probability that an interaction occurs) for photoelectric absorption depends on the cube of the atomic number (Z^3) of the absorbing material. Because of the high effective Z of NaI(Tl), photoelectric absorption is an important interaction up to 1 MeV and is the dominant interaction up to about 300 keV. In low-Z materials such as soft tissue, photoelectric absorption accounts for a small fraction of photon interactions. In Compton scattering, the incoming gamma ray dissipates only a portion of its energy as it scatters off a loosely bound electron. The energy lost by the gamma ray is imparted to the electron. The energy transferred to the scattered electron depends on the scattering angle and is at a maximum when the gamma ray is scattered back in the direction it originally came from (back-scatter). The cross section for Compton scattering is only slowly varying with Z and

energy. As a result, Compton scattering is the dominant interaction for low-Z materials (such as soft tissue) and becomes increasingly important in all materials as energy increases.
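To get a feeling for what the Z^3 dependence means for a NaI(Tl) crystal compared with soft tissue, a crude comparison is sketched below. The atomic numbers used (53 for iodine, an effective value of roughly 7.5 for soft tissue) are common textbook figures and are assumptions here, and the scaling ignores the energy dependence.

# Crude illustration of the Z^3 scaling of photoelectric absorption.
Z_IODINE = 53.0       # dominant high-Z element in NaI(Tl)
Z_TISSUE = 7.5        # approximate effective Z of soft tissue (assumed)

relative_probability = (Z_IODINE / Z_TISSUE) ** 3
print(round(relative_probability))   # photoelectric absorption is several hundred
                                     # times more likely per atom in iodine than in tissue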

When the gamma-ray energy exceeds 1.02 MeV another interaction, known as pair production, becomes possible. The gamma ray disappears and is replaced by an electron and a positron. The electron dissipates its energy and remains stable. The positron dissipates its energy and then captures an electron, resulting in mutual annihilation and the emission of two 511 keV photons (annihilation radiation). The cross-section for pair production depends on the square of the atomic number (Z^2) of the absorbing material and increases as the gamma-ray energy increases above the 1.02 MeV threshold.

Photopeak
The most evident feature of the gamma-ray spectrum is the photopeak, which results from gamma rays whose energy is totally absorbed in the crystal (Figure 6). Because this is generally the result of a photoelectric absorption, it is referred to as the photopeak, but any combination of interactions that results in the complete absorption of the gamma ray contributes to the peak. The fraction of counts in the energy spectrum that falls within the photopeak is referred to as the photo-fraction and depends on the detector thickness and the gamma-ray energy.

Figure 6 Gamma spectrum of a NaI(Tl)-detector. The shaded part corresponds to the photopeak, or full-energy peak, from total absorption in the detector.

Compton features
It is possible for a gamma ray to be Compton scattered in the detector and for the scattered photon to escape without further interaction. The energy deposited in the detector then varies with the scattering angle. This results in the so-called Compton plateau, which represents the range of possible energy depositions from Compton scattering events (Figure 7). On the high-energy side there is a sharp drop-off, the Compton edge, which corresponds to a gamma ray that is scattered at 180 degrees in the detector. The energy of the Compton edge is calculated from Eγ - Eγ/(1 + Eγ/255), where Eγ is the energy of the gamma ray (in keV).

Figure 7 Gamma spectrum of a NaI(Tl)-detector. The shaded part shows the distribution of Compton electrons deposited in the detector, ending at the Compton edge.
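The Compton-edge expression above (and the corresponding backscatter-peak energy used later in this chapter) can be evaluated directly; the short sketch below simply codes those formulas, with the 662 keV gamma ray of 137Cs as an example.

# The Compton edge and backscatter-peak formulas quoted in the text (energies in keV).
def backscatter_peak(e_gamma_keV):
    return e_gamma_keV / (1.0 + e_gamma_keV / 255.0)

def compton_edge(e_gamma_keV):
    return e_gamma_keV - backscatter_peak(e_gamma_keV)

print(round(compton_edge(662)), round(backscatter_peak(662)))
# For 137Cs: Compton edge near 478 keV, backscatter peak near 184 keV.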

Escape peaks
When a gamma ray interacts with the sodium iodide crystal, it is possible that only a portion of the energy is absorbed. This is how the Compton edge and plateau are formed (as just described). Energy from interactions can also escape the crystal in other ways that yield discrete peaks in the spectrum, so-called escape peaks.

Figure 8 Gamma spectrum of a NaI(Tl)-detector. The shaded part corresponds to a loss of energy from the full-energy peak due to an X-ray escape; the iodine escape peak appears at Eγ - 27 keV.

For example, when photoelectric absorption occurs, the characteristic X-rays from iodine are emitted isotropically. Normally, these X-rays are absorbed and contribute to the total energy peak. However, if the interaction takes place near the edge of the detector, the X-rays may leave the crystal. This creates an additional peak, called the iodine escape peak, with a pulse height 27 keV less than the total gamma-ray energy (Figure 8). Iodine escape peaks are usually only evident for low-energy photons (< 100 keV), for which a substantial fraction of the gamma rays are absorbed near the crystal surface. Another type of escape peak may be observed when high-energy gamma rays are counted and pair production is possible. If the annihilation takes place in the crystal, one of the 511 keV annihilation photons, or both, may escape the crystal. Thus two additional peaks can be seen, one at 511 keV below the true photopeak and the other at 1.02 MeV below. It should be noted that if pair production occurs in the vicinity of the detector (for example, in the lead shield), a peak at 511 keV also shows up in the spectrum.

Lead X-rays
Because most detectors are shielded with lead, it is common to see a peak corresponding to the lead X-ray energy (approximately 80 keV). This is emitted when the gamma ray undergoes a photoelectric absorption in the lead shielding and the lead characteristic X-ray is absorbed by the detector (Figure 9).

Figure 9 Gamma spectrum of a NaI(Tl)-detector. The shaded part corresponds to an incoming lead X-ray (about 80 keV), created when radiation interacts in the detector shield.

Coincidence sum peaks

Figure 10 Gamma spectrum of a NaI(Tl)-detector. The shaded part corresponds to the summation of two simultaneous gamma-ray energies (coincidence sum peak at E = Eγ1 + Eγ2).

If two gamma rays or X-rays are absorbed in the crystal within a short time, the magnitude of the scintillation corresponds to the sum of the photon energies. This produces a peak in the spectrum whose apparent energy is equal to the sum of the two absorbed photons and is referred to as the sum peak (Figure 10). Coincidence sum peaks can occur in a variety of situations: multiple gamma-ray emission in a cascade (111In), emission of a gamma ray and a characteristic X-ray in electron capture (125I), or counting a high-activity source. Coincidence sum peaks are much more likely in well-counter geometry, where the gamma rays and X-rays have a much higher probability of being simultaneously detected.

Characteristic X-rays
Radionuclides that decay by electron capture necessarily emit characteristic X-rays and Auger electrons because of the inner-shell vacancy that results from the capture. As a result, there is a peak in the spectrum corresponding to the absorption of these X-rays (Figure 11). This peak is often very prominent, especially when the rate of gamma emission is low. An example of this is 201Tl, in which the 68- to 80-keV X-ray peak is the most obvious feature in the spectrum.

Figure 11 Gamma spectrum of a NaI(Tl)-detector for a nuclide that emits both gamma rays and characteristic X-rays in its decay. The shaded part corresponds to the absorption of a characteristic X-ray emitted in the decay.


Backscatter peak
When a radioactive source is located some distance from a detector and has material behind it, a distinct peak is evident in the spectrum that is due to gamma rays that have been Compton scattered at 180 degrees and then totally absorbed in the detector. These gamma rays are initially directed away from the detector but are detected after being backscattered (Figure 12). The reason for a distinct peak is that gamma rays scattered at other angles cannot reach the detector. Backscatter peaks are most intense when the material behind the source is dense and has a high Compton cross-section. The energy of the backscatter peak is Eγ/(1 + Eγ/255), where Eγ is the energy of the gamma ray (in keV).

Media scatter
If the gamma rays travel through any material on their way to the detector, Compton scattering may occur and the scattered gamma rays may be detected (Figure 12). These scattered gamma rays span the range from the backscatter peak to the primary gamma-ray energy for single interactions. If the medium is thick enough, multiple interactions are likely to occur, and the scattered photons will cover an even wider range. Compton photons produced outside the detector cannot be separated from those produced in the detector. As a result, when it is important to discriminate against scattered radiation, only photopeak events are considered.

energy peaks are broadened as shown in Figure 13.

NaI-detector

Peak maximum counts

NaI-detector Counts per channel

Counts per channel

OBJECT or SHIELD

Forward Backward Compton γ spread Compton γ

Half maximum counts

∆E E

Compton electrons from the detector

Photon energy

Photon energy Figure 12 Gamma spectrum of a NaI(Tl)detector. Scatter from the detector shield or from an object is scattered into the detector. Most scatter have energies from zero to the energy of the gamma. However, backscatter (photons scattered backwards)l have a more defined energy which appears as an extra peak in the spectrum.

GENERAL PROPERTIES OF NaI(Tl) SYSTEMS

Figure 13 Gamma spectrum of a NaI(Tl)detector. The figure illustrates the way to quantify the energy resolution of a detector. Energy resolution is quantified by determining the amount of spread with respect to the gamma-ray energy. The spread of the peak is measured in kiloelectron volts at the halfmaximum level (full-width-at-half-maximum, FWHM). Thus, energy resolution is defined as Energy resolution = 100% x ∆E/E = =100% *FWHM (keV)/Eγ(keV)

Although NaI(Tl) counters can be configured and used for a number of different purposes, they have some common characteristics, including energy resolution, energy calibration, energy linearity, detection efficiency and count-rate performance. These will be detailed in the following sections.

Energy resolution
Gamma rays are monoenergetic. A detector with perfect energy resolution would generate pulses with the same pulse height for each gamma ray that was totally absorbed, and the energy spectrum of these pulses would be a single line. Sodium iodide detectors do not have perfect energy resolution because of statistical fluctuations in the number of electrons liberated at the photocathode of the PMT. As a result, the energy peaks are broadened as shown in Figure 13.


Energy resolution improves (gets smaller) as the photon energy increases. The 662-keV gamma ray from 137Cs is often used to measure the energy resolution of NaI(Tl) detectors, and 7% is a typical result.

Calibrating a spectrometer
Because of the proportionality between the pulse height of the detected signal and the energy absorbed in the gamma-ray interaction, it is possible to calibrate the equipment in terms of energy. Typically, the pulse height analyzer (PHA) has a multi-turn dial for adjusting the baseline (lower level) and another dial for adjusting the energy window. In many cases, it is useful to calibrate the PHA (MCA) to a true gain of 1, that is, where 1 unit on the baseline dial corresponds to exactly 1 keV of energy. This can be done as follows:

1. Take two radionuclides with gamma rays of different energies, such as 131I and 99mTc. The gamma-ray energies should be neither too close in magnitude nor too far apart.
2. Adjust the baseline and window settings for a 10% energy window centered on the higher-energy gamma ray, assuming a true gain of 1. For the 364-keV peak of 131I, this corresponds to a window setting of 36 and a baseline setting of 346.
3. Put the high-energy source in front of the detector. Adjust the gain of the amplifier or the high voltage to maximize the detector count rate.
4. Readjust the PHA (MCA) for a 10% window on the lower gamma-ray energy, again assuming a true gain of 1. For the 140-keV peak of 99mTc this corresponds to a window setting of 14 and a baseline setting of 133.
5. Put the low-energy source in front of the detector and verify that the count rate is maximized. If it is not, the amplifier gain and the high voltage must be readjusted.

Steps 3, 4 and 5 should be repeated until maximum count rates are achieved for both radionuclides at their respective settings. With the detector set at a true gain of 1, the appropriate settings can easily be calculated if it is necessary to change the coarse gain of the amplifier. For example, if there is a true gain of 1 and the coarse gain is increased by a factor of 2, the baseline settings are all multiplied by 2. In this instance, a 20% window centered on 99mTc has a window width of 56 and a baseline of 252.
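The window arithmetic used in the calibration steps above can be written out as a small Python sketch (the helper below is ours, not part of any instrument software): at a true gain of 1, an x% window centered on energy E has a width of x·E/100 dial units and a baseline of E minus half the width, and a change of coarse gain simply scales both numbers.

def pha_settings(energy_keV, window_percent=10, gain=1.0):
    """Window width and baseline (lower level) in dial units for a symmetric
    window centered on energy_keV, assuming 1 dial unit = 1 keV at gain = 1."""
    width = window_percent / 100 * energy_keV * gain
    baseline = energy_keV * gain - width / 2
    return width, baseline

print(pha_settings(364))           # (36.4, 345.8): window 36, baseline 346 for 131I
print(pha_settings(140))           # (14.0, 133.0) for 99mTc
print(pha_settings(140, 20, 2.0))  # (56.0, 252.0): the doubled-coarse-gain example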


Energy linearity
The pulse height of the scintillation signal is proportional to the energy deposited in the gamma-ray interaction. This statement is, however, not exactly true over a wide range of energies, because of non-linearities in the spectrometer. For example, if the spectrometer is set to a true gain of 1, it is unlikely that both low-energy photons (such as the 27-keV X-rays from 125I) and high-energy gamma rays (such as the 662-keV gamma rays from 137Cs) will fall exactly at their expected settings. Although this is not a problem when setting up the spectrometer for gamma rays that are relatively close in energy, it must be considered when using radionuclides whose emissions fall at the extremes of the energy range.

Detector efficiency
Not all gamma rays emitted from a source are detected. Some of the gamma rays never reach the detector, and of those that do, not all interact. The detector efficiency quantifies the fraction of gamma rays that are detected. This can be considered from two points of view: the fraction of photons hitting the detector, or the fraction of photons emitted from the source. The first depends on the intrinsic efficiency of the detector, whereas the latter depends on both the intrinsic efficiency and the source-detector geometry.

Geometric efficiency
The intensity of gamma rays emitted from a point source obeys the inverse square law, I(r) ∝ 1/r². This is a natural consequence of the fact that gamma rays travel in straight lines. Imagine a point radioactive source at the center of a sphere. The density of gamma rays (per square centimeter) at the surface of the sphere is I(0)/4πr², where I(0) is the intensity of the gamma rays emitted from the source and 4πr² is the surface area of the sphere. Suppose we have a detector located on the surface of the sphere at a distance r from the source. The fraction of gamma rays that hit the detector is directly proportional to the area that the detector covers on the surface of the sphere. The geometric efficiency is defined as

gp = (area of detector) × cos(φ)/(4πr²)    Eq(1)

where gp is the geometric efficiency for a point source and φ is the angle between the normal of the detector surface and the direction to the source.

If the detector is pointed directly at the source, φ = 0 and cos(φ) = 1. The geometric efficiency can be improved either by increasing the area of the detector or by decreasing the distance between the source and the detector. For scintillation detectors, the geometric efficiency is maximized with a well design, in which a channel is bored into the surface of the NaI(Tl) crystal. The source is placed inside the well and is surrounded in nearly all directions by the detector. It should be noted that Eq(1) is not appropriate for very small source-to-detector distances. When the diameter of the detector is large compared with the source-to-detector distance, the appropriate equation is

gp = ½(1 - cos(φ))

where φ is the angle between the source and the edge of the detector. For example, if the source is sitting right on the detector, then φ = 90 degrees and gp = 0.5. If the source is positioned in a well chamber in which φ = 160 degrees, then gp = ½(1 - (-0.94)) = 0.97.

Intrinsic efficiency
The intrinsic efficiency (ε) is the fraction of the photons hitting a detector that are actually detected. It is determined by the thickness (t) and the linear attenuation coefficient (µ) of the detector material:

ε = 1 - exp(-µt)

The intrinsic efficiency is high (that is, close to 1) when the product µt is large; thus we want either µ, or t, or both to be large. Because the attenuation coefficient of NaI(Tl) decreases rapidly with gamma-ray energy, the thickness of the detector must increase to maintain the intrinsic efficiency. In Table 3, the thickness of NaI(Tl) required for an intrinsic efficiency of 0.9 is calculated at selected energies. The table also includes the thickness required for a photopeak efficiency of 0.9.

Table 3 Sodium iodide thickness required for 90% efficiency at some selected gamma energies. Column A gives the thickness needed for 90% probability of any energy deposition in the crystal; column B gives the thickness needed for 90% probability of total gamma-ray energy absorption in the crystal.

Gamma energy (keV)   Thickness of NaI(Tl) (mm), A   Thickness of NaI(Tl) (mm), B
30                   0.5                            0.5
140                  5.5                            9
172                  8                              14
247                  20                             40
364                  31                             50
511                  52                             69
662                  52                             84

WELL COUNTER
A well counter is a NaI(Tl) detector that has been specially designed to maximize efficiency. As shown in Figure 14, a channel (or well) is made in the center of the crystal. With this design, the source can be placed inside the well to obtain a high measuring efficiency. The intrinsic efficiency can be high since large crystals (5 × 5 or 7.5 × 7.5 cm, diameter × height) can be used.
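As a rough numerical sketch of how the efficiency expressions above are used (Python; the function names and the attenuation-coefficient value are ours, chosen only for illustration):

import math

def geometric_efficiency_point(area_cm2, distance_cm, phi_deg=0.0):
    """Eq(1): point source far from a detector of the given area."""
    return area_cm2 * math.cos(math.radians(phi_deg)) / (4 * math.pi * distance_cm**2)

def geometric_efficiency_close(phi_deg):
    """Close or well geometry: gp = 1/2 (1 - cos(phi))."""
    return 0.5 * (1 - math.cos(math.radians(phi_deg)))

def intrinsic_efficiency(mu_per_mm, thickness_mm):
    """epsilon = 1 - exp(-mu * t)."""
    return 1 - math.exp(-mu_per_mm * thickness_mm)

print(geometric_efficiency_close(90))   # 0.5, source resting on the detector face
print(geometric_efficiency_close(160))  # ~0.97, the well-chamber example in the text
mu = 0.3                                # assumed attenuation coefficient in mm^-1, illustration only
print(math.log(10) / mu)                # thickness giving 90% intrinsic efficiency for that mu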

Figure 14 A schematic view of a well counter for gamma counting. The well, i.e. the hole in the crystal, may look different in commercial instruments depending on how the samples are changed automatically. Automatic sample changing is easier to arrange with a through-hole detector, but the counting efficiency is then somewhat lower because of the larger hole.

Well counters that are used routinely to assay large numbers of samples are usually automated. In these devices, the samples to be assayed are loaded into counting tubes that sit in special racks. A transport mechanism allows each sample to be counted sequentially for a preselected time interval, and the results are stored in a computer file. It is possible to accurately assay the absolute activity of samples in a well counter. This, of course, requires a calibration source to relate the detected counts to radioactivity units. It also requires attention to count-rate losses, background correction and the sample geometry. These problems are briefly discussed in the following sections.

Dead time losses
The finite temporal resolution of the NaI(Tl) detector results in count losses. These become increasingly important as the total count rate seen by the detector rises. Count-rate losses are a particular concern when assaying samples that span a wide range of activities. For radionuclides with short half-lives, the best alternative is to wait until the samples have decayed to the point where the losses can be ignored. It should be noted that in many well counters the count rate at which dead-time losses become manifest can be relatively low, especially for high-energy radionuclides that have a small photofraction. For example, if the pulse width at the amplifier is 5 µs, there are approximately 10% losses when the total count rate is 20,000 cps. If a high-energy sample is being counted (for example 18F), the photopeak count rate at which these losses appear may be below 10,000 cps.

Figure 15 A commercial gamma counter.

Sample volume
The volume and geometry of the samples being assayed should be kept constant for two reasons. First, the detection efficiency depends on the source distribution in the well. Second, the amount of self-absorption within the sample also depends on the source distribution. It should also be noted that the absorption of low-energy gamma rays is highly dependent on the material and wall thickness of the source containers. Therefore, if a large number of samples are assayed, care should be taken to ensure the uniformity of the source containers as well as of the source volumes.

Figure 16 Illustration of sample volume and detector geometry for optimal and reproducible counting.
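The 5 µs / 20,000 cps example above is consistent with the simplest fractional-loss estimate, losses ≈ count rate × pulse width. This approximation is not given in the text and only holds for small losses; the sketch below is ours.

def fractional_dead_time_loss(count_rate_cps, pulse_width_s):
    """Approximate fraction of counts lost when every pulse blocks the
    system for pulse_width_s seconds (valid only when the product is small)."""
    return count_rate_cps * pulse_width_s

print(fractional_dead_time_loss(20_000, 5e-6))  # 0.10, i.e. about 10% losses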


Background
Background refers to detected counts that do not originate from the desired sample. Because of the high sensitivity of NaI(Tl) detectors for gamma rays, they must be heavily shielded to reduce the counts from sources outside the well. Usually, several centimeters of lead surround the detector. If many samples are to be counted over a long interval, several background checks should be made in case there is a time-dependent factor associated with the background.

Dynamic range
The activities that can be accurately measured with a well counter range from approximately 20 Bq to 20 MBq. At the lower end the limitation is the uncertainty in the background count rate, and the upper end is limited by dead-time losses.

COUNTING STATISTICS
Radioactive decay can, from a statistical point of view, be regarded as tossing a coin. There are two outcomes, since during a given time interval a nucleus either decays or it does not. Events of this type follow a binomial distribution. The probability to decay can be set to p, and the probability not to decay is then 1 - p. If N tries are made, e.g. N radioactive atoms are observed during the time interval, r atoms may decay. The relation between the number of tries and the probability gives a distribution of the number of outcomes, which can be described by the relation below:

P(r) = N!/(r!(N - r)!) · p^r · (1 - p)^(N-r)


Figure 17 The binomial distribution illustrated for N = 20 tries and probabilities p = 0.05, 0.15 and 0.5 for one of the outcomes (horizontal axis: r, the number of outcomes in N tries).

The distribution P(r) is illustrated in Figure 17 for some values of N and p. In a statistical distribution, the sum over all possible numbers of outcomes should be one.

Important parameters of a statistical distribution are the mean and the variance; for the binomial distribution they can be expressed as

mean = N·p and variance = N·p·(1 - p)

If we consider radioactive decay, the probability for decay, p, is small. During one second a nucleus changes its internal state 10^10 times or more, yet may still not decay. However, the number of tries, i.e. the number of nuclei, is also large, 10^10 or more. We know that each nuclide has a specific decay constant, which is in some sense a measure of that nuclide's probability to decay. If we in the binomial distribution let N → ∞ and p → 0 while keeping N·p = constant = µ, we obtain the following relation:

P(r) = (µ^r/r!) · e^(-µ)
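The limit described above can be checked numerically: keeping µ = N·p fixed while N grows, the binomial probabilities approach the Poisson probabilities (a small Python sketch using the standard probability mass functions; it is not part of the text):

from math import comb, exp, factorial

def binomial_pmf(r, n, p):
    """P(r) = N!/(r!(N-r)!) p^r (1-p)^(N-r)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

def poisson_pmf(r, mu):
    """P(r) = mu^r e^(-mu) / r!."""
    return mu**r * exp(-mu) / factorial(r)

mu = 3.0
for n in (10, 100, 10_000):  # N grows while p = mu/N shrinks
    print(n, binomial_pmf(2, n, mu / n), poisson_pmf(2, mu))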

This relation is called the Poisson distribution and is said to be the statistical distribution for rare events, such as radioactive decay. The distribution has the mean µ, and the variance of the distribution is also equal to µ. In fact, the only parameter of the distribution is the mean value µ. If we vary this value, the distribution varies as shown in Figure 18.

Figure 18 The Poisson distribution shown for some values of the distribution parameter µ (here µ = λ·t with λ = 1 and t = 1, 3 and 10; horizontal axis: r, the number of outcomes).

For low values of µ the distribution is rather skewed, but with increasing µ it becomes more symmetric. In fact, it is possible to show that for large values of µ the distribution converges to a Gaussian distribution. However, this Gaussian distribution is special since the variance is equal to the mean of the distribution. This can be used in nuclear counting to estimate the counting statistics from one single measurement. Consider the following example. We measure one sample with a nuclear detector and obtain the number of counts N. The sample and detector are of little interest, and so is the measuring time; only the number of obtained counts is important. We know that the measurement is governed by Poisson statistics. The Poisson distribution can be approximated by a Gaussian distribution (if N > 25) with mean = variance. A large value of N gives a rather narrow distribution, which means that N is a good approximation of both the mean and the variance, i.e. µ ≈ N and σ² ≈ N. The error in the measurement can then be written as N ± √N.

Table 4 Counting statistics

Number of counts, N    One standard deviation, √N    Relative error, √N/N as %
100                    10                            10
1000                   32                            3.2
10000                  100                           1
100000                 320                           0.32
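The entries of Table 4 follow directly from the N ± √N rule; written out for completeness (the table rounds √100000 = 316 to 320):

from math import sqrt

for n in (100, 1_000, 10_000, 100_000):
    # counts, one standard deviation, relative error in %
    print(n, round(sqrt(n)), round(100 * sqrt(n) / n, 2))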

One must remember that it is an approximation. However, it enables us to estimate the error of the measurement from one single value.


Since we are working with a Gaussian distribution, we can apply all the usual rules for error propagation in this type of distribution. The only thing to remember is that the variance is equal to the mean, which simplifies the calculations. If we have two measurements, N1 and N2, obtained during the same time, we obtain the following relations for the error propagation in addition and subtraction:

N1 + N2 ± √(N1 + N2)
N1 - N2 ± √(N1 + N2)

The relative errors are 1/√N1 and 1/√N2. If we divide or multiply the data, we use the relative errors in the following way:

M = N1 × N2 or M = N1/N2
ΔM/M = √(1/N1 + 1/N2)

A common situation is that we would like to subtract the background from the sample count and to calculate the error. If the count rate is 300 CPM for the background and 600 CPM for the sample, and we measure both for one minute, we obtain Ns = 600 and Nb = 300. We then obtain

M = Ns - Nb = 300 ± √(600 + 300) = 300 ± 30

If we instead measure the background for 100 minutes, we obtain Nb = 30000 ± 173 and

M = Ns - Nb/100 = 300 ± √(600 + 1.73²) = 300 ± 24.6

Another example illustrates the error propagation when a variable does not follow Poisson statistics but ordinary Gaussian statistics. We measure a sample and get N = 6000 counts during one minute (6000 CPM). We would like to know the absolute value of the radioactivity and the uncertainty of the determination. For the detector we need two values: the efficiency, which is 50%, and the relative error (standard deviation) of this determination, which is 5%. We then obtain

A = (N/60)/0.5 = (6000/60)/0.5 = 200 Bq

and

ΔA/A = √(1/6000 + 0.05²) = 0.052

A = 200 ± 10.3 Bq
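The worked examples above can be reproduced in a few lines (Python; the variable names are ours):

from math import sqrt

# Background subtraction, sample and background each measured for 1 minute:
Ns, Nb = 600, 300
print(Ns - Nb, sqrt(Ns + Nb))                     # 300 +- 30

# Background measured for 100 minutes instead:
Nb_long = 30_000
sigma_b = sqrt(Nb_long) / 100                     # ~1.73 on the scaled-down background
print(Ns - Nb_long / 100, sqrt(Ns + sigma_b**2))  # 300 +- ~24.6

# Absolute activity: 6000 counts in 1 minute, efficiency 50% with 5% relative error:
A = (6000 / 60) / 0.5                             # 200 Bq
rel_err = sqrt(1 / 6000 + 0.05**2)                # ~0.052
print(A, A * rel_err)                             # 200 +- ~10.3 Bq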


CHAPTER 9 NUCLEAR BIOLOGY AND MEDICINE: NUCLIDE PRODUCTION

GENERAL ASPECTS
Nuclides are divided into two groups: stable and unstable, i.e. radioactive. The number of known nuclides is more than 2000, of which 280 are stable and the remainder radioactive. Half-lives vary from a millionth of a second to millions of years; most radionuclides are very short-lived. Radionuclides undergo transformations, or decay, emitting α, β−, β+, γ, X-rays, neutrinos, conversion electrons, Auger electrons, protons, neutrons and heavy ions. The use of nuclides, both stable and radioactive, has steadily grown in importance in science, technology and medicine during the last 70 years. If we restrict ourselves to the area of biology and medicine, the dominant use is tracer technology, i.e. labeling biomolecules in order to follow their chemistry, kinetics, metabolism and catabolism in vitro and in vivo. The applications range from fundamental science, such as investigating new biochemical pathways, to diagnostics and therapy. Other areas of application are external and intra-cavity therapy.

The use of stable nuclides has advantages, especially in vivo, since no radioactivity and associated biological hazards are involved. However, the sensitivity is limited due to the natural background. The detection of stable isotopes is also somewhat more technically complicated and expensive, whereas radioactive nuclides can be detected more easily and at lower concentrations. Examples are autoradiographic techniques, which have provided regional information down to the sub-cellular level and at picomolar concentrations.

Non-isotopic labeling
In nuclear medicine diagnostic procedures there was, however, a problem. Very few radionuclides of biogenic elements emit suitable γ rays that can easily be detected outside the body. Instead one had to rely on radioisotopes of non-biogenic elements, such as 67Ga, 99mTc, 111In and 131I. They had the right physical properties, were easily available and could easily be used to label biomolecules.

Isotopic labeling

Stable nuclides of biogenic elements such as 2H, 13C and 15N, and even more so radioactive nuclides such as 3H, 14C, 32P and 35S, have played an important role in biology. The reason for this is simple. Important biomolecules can be labeled by exchanging a common stable nuclide, e.g. 12C, for a less common stable isotope, such as 13C, or for a radioactive isotope of the same element, e.g. 11C or 14C. This is referred to as isotopic labeling. The small difference in weight between the two isotopes is usually negligible, and from a chemical and biomedical point of view they are identical. The labeled molecule can be administered as a tracer in amounts, and in such a way, that it does not disturb the normal metabolic pathways.

However, some care must be taken. If one replaces H with 2H or 3H, the relative change in weight is dramatic. Usually this does not matter, but if the labeling position is critical, e.g. close to a chemical bond cleaved by an enzyme, the enzymatic function may be slowed down considerably.


However, small molecules, such as amino acids, were found to lose their specific biological activity when labeled with such nuclides. 125I-tyrosine could be transported across the cell membrane but could not be incorporated into proteins, i.e. the membrane transport system was fooled but not the more specific tRNA system. Other molecules, such as 125I-IUdR, could mimic the biogenic molecule thymidine and were incorporated into the DNA molecule, although at a lower rate. In this case the iodine replaces a methyl group, a quite dramatic change. However, the iodine atom has about the same van der Waals radius as the methyl group, which is enough to trick the biology.

Labeling with analogs
Elements of the same group in the periodic system may to some extent replace each other in biological systems. Strontium can exchange for calcium in bone metabolism, and rubidium can replace potassium. Even heavier

elements were found to work, such as 201Tl+, which also replaces potassium and is a commonly used radionuclide in heart studies. Thallium is a well-known poison, but small amounts, well below toxic levels, are given in the clinical investigations. In fact, the amounts given are even below the levels contained in foodstuffs on the market. The approach of using analogs to label biomolecules was also tried. It was found, for example, that selenium could replace sulfur in biomolecules, e.g. 75Se-methionine had about the same metabolism as biogenic methionine.

Radionuclide therapy
Isotopes of iodine, 125I and 131I, were suitable for non-isotopic labeling of a variety of biomolecules. However, iodine is also a biogenic element, but only in one very special organ, the thyroid. The naturally high uptake of iodine in the thyroid (30-50% of administered iodine) was early used not only for diagnostic purposes but also for therapy of malignant thyroid tissue as well as of metastases. A suitable amount of 131I (about 1 GBq) gives a sufficiently high radiation dose to kill the tissues taking up iodine, but with minor effects on other tissues. One may consider this the first use of "targeting" therapy. The same situation was found in bone metastases, where the calcium analog 89Sr could be used for therapeutic treatment of severely spread malignant bone disease. When the first tumor-selective molecules, the monoclonal antibodies (MAbs), were developed during the 1970s, there were great hopes of using their specific uptake in tumors for tumor treatment with radionuclides. The technique has found some applications but is still under development. Smaller molecules, such as tumor-receptor-specific peptides and hormones, are today used for the same goal.

Different applications set different demands on the radionuclide used. A radionuclide used for diagnosis should emit photons in the range 100-200 keV in order to be optimal for today's most common detector, the Anger gamma camera. Other detection modalities have been developed


which use higher photon energies (511 keV) detected in coincidence (positron emission tomography, PET). The labeling of the biomolecule should keep the biological activity and the kinetics of the labeled molecule intact. The label should be stable in vivo for the duration of the investigation, and the radiation dose to the patient should be small. In therapy, about the same conditions should be fulfilled, but the radiation dose should be high in the tumor and small in normal tissues. The different aspects of radioactively labeled compounds are illustrated in Figure 1. The nuclides have to be produced in an optimal, efficient and economical way. Different production routes give products that may be useful in some circumstances, but not all. In order to understand these problems, it is necessary to have an understanding of all the underlying techniques, their advantages and shortcomings.

(Figure 1 summarizes, around the tumor-antibody system: Physical aspects: radionuclide (type of decay, half-life, production, distribution after MAb catabolism) and imaging device (radionuclide, resolution, sensitivity, availability). Clinical aspects: diagnostic value (tumor-to-tissue contrast, cost/benefit) and side effects (dosimetry, HAMA). Biochemical aspects: antigen (tumor specificity, tumor density, binding availability) and antibody (specificity, affinity, size, pharmacokinetics, catabolism). Radiolabeling method: efficiency, effect on antibody, in vivo stability.)

Figure 1 Different important aspects to consider when choosing the optimal conditions for a nuclear medicine application.

RADIONUCLIDE PRODUCTION
There are two major ways to produce radionuclides: using reactors or particle accelerators. We can say that we irradiate either with neutrons (thermal, slow or fast) or with charged particles such as protons, deuterons, alpha particles or heavy ions. Since the target is a stable nuclide, we generally obtain either a neutron-rich

radionuclide (reactor produced) or a neutron-deficient radionuclide (accelerator produced).

Figure 2 The radioisotopes of iodine to the right of the stable 127I are said to be neutron rich and are mainly produced by reactor irradiation. The isotopes to the left are said to be neutron deficient and are mainly produced by accelerators. However, there are many exceptions: an accelerator may produce 131I, and 125I is in fact routinely produced in reactors.

Reactors for radionuclide production
A nuclear reactor is a facility in which fuel such as natural uranium, or uranium enriched in 235U, 233U or 239Pu, undergoes fission. In the process, neutrons are produced with energies from about 10 MeV and downwards. The neutrons are moderated, and in different parts of the reactor one may find different qualities of neutrons. The neutron has no charge and does not feel the Coulomb barrier. Even neutrons with very low energy (thermal neutrons, 0.025 eV) can enter the target nucleus and cause a nuclear reaction. On the other hand, reactors do not produce very fast neutrons (En < 12 MeV), and neutrons are not able to knock out many nucleons from the nucleus, since the mean binding energy of nucleons is in the range of 8 MeV (see Chapter 2). A reactor produces a neutron cloud in which we place our target. Usually, we irradiate the target isotropically. One has to consider that heat is evolved in the reactor core, and the temperature at the irradiation positions may easily reach 200 °C. The reactor is characterized by the energy spectrum, the flux (neutrons/cm²/s) and the temperature at the irradiation position.


Figure 3 A top view of a nuclear reactor core used for radionuclide production (TRIGA II). Some of the tubes seen are part of a pneumatic rabbit system for placing samples in different positions of the core for irradiation. The targets are encapsulated in closed tubes adapted to the transport system.

Nuclear reactors especially designed for radionuclide production and for neutron activation are available. Some may even be found in research hospitals for the clinical use of short-lived radionuclides. Reactors used for radionuclide production usually have a rabbit system (mechanical or pneumatic) to move the target into the irradiation position and out again. In Table 1, possible nuclear reactions for radionuclide production in a reactor are reviewed. The most typical neutron reaction is the (n, γ) capture, i.e. a thermal neutron is captured by the target nucleus and, since the neutron brings in essentially no kinetic energy, the binding energy released is emitted as a prompt gamma ray. The reaction in Table 1

59Co(nth, γ)60Co

is an important application for producing strong γ-emitting sources for external therapy, but it is more or less of no value for labeling biomolecules. Since the produced radionuclide is of the same element as the target, the specific

radioactivity, i.e. the radioactivity per mass of sample, is very low in this type of nuclear reaction. The ideal production route is when the produced radionuclide is of a different element than the target. This gives what is called a non-carrier-added production route.

Table 1 Typical nuclear reactions in a reactor used for radionuclide production

Type of neutrons   Reaction              T½       σ (mb)
Thermal            59Co(nth, γ)60Co      5.3 a    2000
                   14N(nth, p)14C        5730 a   1.75
                   33S(nth, p)33P        25 d     0.015
                   35Cl(nth, p)35S       87 d     0.19
                   35Cl(nth, α)32P       14 d     0.05
                   fission → 131I        8 d
Fast               24Mg(n, p)24Na        15 h     1.2
                   35Cl(n, α)32P         14 d     6.1

The reaction 6Li(n, α)3H has a large cross-section and produces 3H+ ions of some energy. These ions can in turn be used for production reactions such as 16O(3H, n)18F. The target used may be 6LiOH. In such a way, a reactor can produce some typically neutron-deficient radionuclides. A drawback with such a production is that the crude reaction solution is heavily contaminated with 3H-water, which can be difficult to remove. Another reactor-produced neutron-deficient radionuclide is exemplified by the production of 125I:

124Xe(nth, γ)125Xe (T½ = 17 h) → 125I (T½ = 60 d)

A drawback with this production is that 124Xe has a natural abundance of only 0.1%. To increase the production yield, one needs to work with expensive enriched targets, which, however, can be reused several times.

A general feature of reactor-produced radionuclides is that most of them emit high-energy β-particles, which give relatively high absorbed doses to patients. However, it is possible to produce neutron-rich radionuclides that do not emit beta particles. An example is given below:

98Mo(n, γ)99Mo (T½ = 66 h) → 99mTc (T½ = 6 h)

Figure 4 A cross-section of a 99Mo/99mTc generator (Courtesy of Mallinckrodt Medical Inc.)

The 99Mo can also be produced by fission. The system 99Mo/99mTc forms a generator system. The mother nuclide 99Mo has a reasonably long half-life, suitable for centralized production and distribution to hospitals. The generator consists of a column to which the 99Mo is attached. By eluting the column with saline, the daughter nuclide 99mTc is removed with negligible losses of 99Mo. The generator can be used for about one week before it has to be renewed. By "milking" the generator once a day, the hospital obtains a supply of a short-lived radionuclide with an ideal γ-energy (140 keV) for the detectors and a low radiation dose to the patient (the decay of a meta-stable nucleus gives mainly γ rays and few conversion electrons).

Figure 5 A cross-section through the generator column. (Labels: inlet needle for eluant, aluminium crimp cap, rubber stopper, glass casing, glass wool, 99Mo absorbed onto alumina, glass filter, outlet needle for 99mTcO4-.)

Figure 6 Elution profile of a 99Mo/99mTc generator. (Relative activities of 99Mo and 99mTc shown on a logarithmic scale, 0.01-1.0, over 0-48 hours.)
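The regrowth of 99mTc between elutions (Figure 6) follows the standard parent-daughter decay relation. That relation is not derived in this text, but a minimal Python sketch using the half-lives quoted above (66 h and 6 h), and ignoring the branching fraction of the 99Mo decay, looks as follows:

from math import exp, log

LAMBDA_MO = log(2) / 66.0  # 99Mo decay constant, per hour
LAMBDA_TC = log(2) / 6.0   # 99mTc decay constant, per hour

def tc99m_activity(a_mo_at_elution, t_h):
    """99mTc activity t_h hours after an elution that removed all 99mTc,
    relative to the 99Mo activity at the time of elution (branching ignored)."""
    growth = exp(-LAMBDA_MO * t_h) - exp(-LAMBDA_TC * t_h)
    return a_mo_at_elution * LAMBDA_TC / (LAMBDA_TC - LAMBDA_MO) * growth

for t in (6, 12, 24):
    print(t, round(tc99m_activity(1.0, t), 2))  # approaches transient equilibrium within a day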

Accelerators for radionuclide production
An accelerator can be huge, as at CERN (with a diameter of more than 1 km), and used for particle physics. However, when using accelerators to produce medically useful radionuclides, much lower energies are needed. One can do very well with a proton accelerator of < 80 MeV, or even less (< 40 MeV protons) if one also has the possibility to accelerate deuterons and alpha particles. During the last decade, small accelerators yielding 16 MeV protons and 8 MeV deuterons have become fairly common at larger hospitals for positron emission tomography (PET) applications. Although such accelerators are much more compact than a reactor, the particle energy they provide is 2-8 times higher than that of the reactor neutrons. The energy transferred to the nucleus is thus much higher and more particles can be kicked out of the nucleus. We can produce nuclides farther from the line of stable nuclides, and we have better opportunities to choose a suitable radionuclide for our specific application. Since we have more bombarding particles to choose between, we also have a better opportunity to choose a nuclear reaction that is also practical and economical.


Figure 7 A cyclotron that can accelerate protons up to 17 MeV, generally used for radionuclide production.

Another advantage of accelerator production is that, since we irradiate with protons or heavier ions, we produce a compound nucleus that is of a different element than the target. It is therefore often much easier to produce non-carrier-added products with an accelerator than with a reactor. A technical difference between reactor and accelerator irradiation is that in the reactor the particles come from all directions, whereas in the accelerator the particles have a well-defined direction. The number of charged particles is often smaller and is usually measured as an electric current in µA (1 µA corresponds to 6×10^12 particles/s for protons but 3×10^12 for alpha particles, which carry two charges per particle). A drawback of accelerator production is that the charged particles are stopped much more efficiently than neutrons. Protons of 16 MeV are stopped in 0.6 mm Cu. If we have a typical production beam current of 100 µA and a beam area of 2 cm², we stop 1.6 kW in a volume of about 0.1 cm³, which will evaporate any material if it is not efficiently cooled. The acceleration of the beam is done in vacuum, but the target irradiation is made outside, at atmospheric pressure or in gas targets at 10-20 times overpressure.
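The heat-load numbers quoted above follow from simple arithmetic (a sketch; 1 µA of singly charged particles is about 6.25 × 10^12 particles/s, which the text rounds to 6 × 10^12):

ENERGY_MEV = 16     # proton energy
CURRENT_UA = 100    # beam current in microampere
AREA_CM2 = 2        # beam area
RANGE_CM = 0.06     # 16 MeV protons are stopped in ~0.6 mm Cu

beam_power_W = CURRENT_UA * 1e-6 * ENERGY_MEV * 1e6  # current (A) x energy per charge (V) = W
protons_per_s = CURRENT_UA * 6.25e12
stopping_volume_cm3 = AREA_CM2 * RANGE_CM

print(beam_power_W)         # 1600 W = 1.6 kW deposited in the target
print(protons_per_s)        # 6.25e14 protons per second
print(stopping_volume_cm3)  # 0.12 cm^3, i.e. roughly the 0.1 cm^3 quoted in the text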

Table 3 Cyclotron-produced non-positron-emitting radionuclides. Examples from the current research program in Uppsala.

Production reaction    T½      Decay    Application area
114Cd(p, n)114mIn      50 d    EC       Therapy
197Au(p, 2p5n)191Pt    2.8 d   EC       Therapy
209Bi(p, 2n)208Po      2.9 a   α        α-source
209Bi(α, 2n)211At      7.2 h   α, EC    Therapy

Figure 8 Schematic targets for radionuclide production. The upper part of the figure illustrates a solid target and the lower part a gas target. (Labels, upper: particles from the accelerator in vacuum, He gas between two 25 µm foils, solid target, water cooling of the target. Lower: particles from the accelerator in vacuum, He gas between two 25 µm foils, gas target under pressure, water cooling of the target walls.)

Table 2 Cyclotron-produced positron-emitting radionuclides. Examples from the current research program in Uppsala.

Production reaction    T½ (h)   β+ (%)   Application area
14N(p, α)11C           0.34     100      Org. synthesis
18O(p, n)18F           1.16     97       Halogenation
45Sc(p, n)45Ti         3.9      85       Chelators
52Cr(p, n)52Mn         134      29       Mn kinetics
58Ni(p, 3p4n)52Fe      8.3      56       Fe kinetics
58Ni(p, α)55Co         17.5     76       Chelators
60Ni(d, n)61Cu         3.4      61       Cu kinetics
76Se(p, n)76Br         16.2     54       Halogenation
85Rb(p, 3n)83Sr        32.4     24       Sr kinetics
88Sr(p, 3n)86Y         14.7     33       Chelators
110Cd(p, n)110In       1.15     62       Chelators
124Te(p, n)124I        100      23       Halogenation

Usually, one uses thin foils (about 25 µm thick) to separate the vacuum from the target. Two such foils are used, with a stream of He gas in between to cool them. He gas is preferable since it has an excellent cooling capacity and little radioactivity is produced in the gas.


Accelerator production offers great flexibility: one radionuclide can be produced in several different ways, as seen in the example below, which gives some of the available routes to produce 123I (T½ = 13 h):

127I(p, 5n)123Xe → 123I
124Xe(p, np)123Xe → 123I
123Te(p, n)123I
122Te(d, n)123I
124Te(p, 2n)123I
121Sb(α, 2n)123I
121Sb(3He, n)123I
123Sb(3He, 3n)123I

Factors to consider are economy, radionuclide yield, the amounts of radionuclidic impurities, separation, and radiochemical aspects.

Nuclear physical considerations
If we want to observe an object, we usually illuminate it with light. The way the light is reflected, diffracted and absorbed informs us about the object and its properties. We may learn whether it is transparent or opaque. If it is transparent we can measure the refractive index, and if it is opaque we can find its absorption capacity. If monochromatic light is used, we may find how these properties vary with wavelength. The object may emit radiation of a different kind (e.g. electrons via the photoelectric effect). If the light beam is shut off, the object may continue to re-emit light (phosphorescence).

If we want to "see" an object as small as the atomic nucleus, we need to use "light" of a much higher frequency, which also means much higher energy. However, the same type of experimental approach can be used, although we need to design special particle detectors to replace the eye. It is not by chance that one of the most powerful nuclear models is called "the optical model". We can regard particle beams of electrons, protons, neutrons and mesons as "matter beams" with wavelengths suitable for "seeing" the atomic nucleus. These beams may be reflected, refracted and absorbed too.

(Figure 9 sketch: an incident beam I0 hits a target containing N nuclei; particles are scattered at an angle Φ into a solid angle dΩ at a rate I0·N·(dσ/dΩ)·dΩ.)

Figure 9 A principal experimental setup. A nuclear physicist is usually interested in the particles coming out, their energy and angular distribution, whereas the radiochemist is mainly interested in the transformed nuclides left in the target.

A large difference between neutrons and charged particles is their ability to penetrate into the nucleus (Figure 10).


Figure 10 General cross-section behavior of nuclear reactions as a function of the particle energy. Since the proton has to overcome the Coulomb barrier, there is a threshold that is not present for the neutron. Even very low-energy neutrons can penetrate into the nucleus and cause a nuclear reaction.


We may write a nuclear reaction in the following way:

a + A → b + B + Q

where a is the incoming particle and A is the target nucleus in their ground states (the entrance channel). Depending on the energy and the particles involved, different outgoing channels may open, where b is the outgoing particle and B is the residual nucleus, possibly in an excited state. Q is the reaction energy and can be both negative and positive. If the incoming particle is absorbed we have a capture process of type (n, γ), and in a reaction of type (p, n) we obtain charge exchange. If several particles are expelled, we refer to the reaction according to the process, such as (p, 3n). Each such reaction, further distinguished by the excitation states involved, is called a reaction channel. Different reaction mechanisms can feed the same reaction channel. Here we differentiate between two mechanisms:

• The formation of a compound nucleus
• Direct reactions

The compound nucleus has a large probability of being formed in a central hit of the nucleus. The incoming particle is absorbed and an energy-rich compound nucleus is formed. It usually has a lifetime of about 10^-14 seconds. This time is long enough for the nucleus to forget the direction of the incoming particle. However, the nucleus needs to get rid of its excess energy, and sooner or later one or several particles "evaporate" from the nucleus. The residual nucleus is usually left in an excited state and cools off further by emitting gamma radiation. The outgoing particles are emitted isotropically, since the compound nucleus has forgotten the direction of the incoming particle. The energy of the emitted particles has a typical distribution (evaporation spectrum): most emitted particles have a low energy, but some can have up to 5-6 MeV.

(Figure 11 labels: incident proton; elastic (p, p); inelastic (p, p′) and (n, n′); pickup (p, d); charge exchange (p, n); etc., grouped into direct reactions and compound-nucleus formation.)

Figure 11 A schematic figure showing some reaction channels in proton irradiation.

The direct reactions preferably occur at the edge of the nucleus (see Figure 12). The incoming energy is directly transferred to a nucleon (knock-on reaction), giving two outgoing particles, or one may have a stripping reaction of type (d, n). The outgoing particles usually have high energy and are emitted in roughly the same direction as the incoming particle.

(Figure 12 sketch: compound-nucleus formation, a + A → C → B + b, with a delay between absorption and emission; direct stripping and direct knock-out, a + A → B + b, without delay.)

Figure 12 An illustration of the difference between direct nuclear reactions (lower part of the figure) and the formation of a compound nucleus (upper part of the figure).

Most reactions are probably a mixture of these two types. The probabilities of the two mechanisms vary with energy in different ways. The direct reactions are closely associated with the geometrical size of the nucleus. The compound nucleus, however, is formed just across the reaction threshold, and the probability is high when the particle energy is just above it (see Figure 13).


Figure 13 A schematic view of how the cross-sections for direct nuclear reactions and for the formation of a compound nucleus vary with particle energy (here illustrated for 56Fe: cross-section in barn versus energy, 0-8 MeV, with the total, compound and direct contributions and the reaction threshold marked).

When producing radionuclides, detailed knowledge of the different reaction probabilities is a necessity. The reaction probability is expressed as a cross-section, i.e. an area. The unit is barn (b), which is an area of 10^-28 m²; this is a rather large unit, so mb is more commonly used. The cross-section is related, but not identical, to the projected area of the nucleus. The probability of a hit is a combination of the areas of both the nucleus and the incoming particle. Particles of low energy act as matter waves of long wavelength and can cover a large area in space; one may then obtain cross-section values orders of magnitude larger than the area of the nucleus. After hitting the nucleus, one also has to consider the physical and statistical laws of distributing the energy so that a specific reaction channel is opened. Some cross-sections may therefore be very small.

Figure 14 illustrates a well-known rule of thumb in radionuclide production. The maximum cross-sections are found at about 10, 30 and 40 MeV for the (p, n)-, (p, 3n)- and (p, 4n)-reactions, respectively. Thus, it takes about 10 MeV to kick out a nucleon, i.e. a proton of 50 MeV can cover radionuclide productions that involve the emission of about 5 nucleons. Figure 14 also shows an example of using the cross-section information for an optimal production of 73Se with protons. At low energy there is a disturbing production of 75Se, and if one uses an excessively

high proton energy, another unwanted radionuclide impurity, 72Se, is obtained. The latter impurity can be avoided completely by restricting the proton energy to below the threshold of the (p, 4n)-reaction. The impurity from the (p, n)-reaction cannot be avoided, but it can be minimized by using a target thickness that avoids the lowest proton energies (which have the highest (p, n) cross-sections).

Figure 14 Excitation functions of the 75As(p, xn)72,73,75Se reactions. The optimal energy range for the production of 73Se is Ep = 40 → 30 MeV. (The figure shows the cross-sections of 75As(p, n)75Se, 75As(p, 3n)73Se and 75As(p, 4n)72Se as functions of proton energy, 0-50 MeV.)

As seen in Figure 14, the chosen production parameters are a compromise. A proton energy range from 40 → 30 MeV uses the (p, 3n) cross-section well. Some 72Se contamination is accepted in order to increase the yield of 73Se. An important factor is the half-life: 75Se (T½ = 120 d), 73Se (T½ = 7.1 h) and 72Se (T½ = 8.5 d). Sometimes it is possible to wait for the decay of the radioactive contaminants. That is not the case here; instead, the longer half-lives of the radioactive contaminants help to keep their radioactivity low. The cross-sections give the number of produced nuclei of each isotope. If the half-life is long, the decay is spread out over a longer time, and hence the radioactivity is lower than for a short-lived radionuclide. The relation between the number of nuclei produced and the radioactivity is set by the specific radioactivity.

The practical setup of a radionuclide production looks like this. A suitable As target is made and irradiated with 40 MeV protons. The thickness of the target is such that it decreases the proton energy down to 30 MeV. This then gives a radioactivity yield of the desired radionuclide at the end of bombardment (EOB) that depends mainly on the beam current and the irradiation time. The yield is usually expressed in GBq/µAh, i.e. the produced radioactivity per time-integrated beam current. If possible, one tries to keep the radioactivity of the contaminants small (< 1%). However, after EOB the relative amount of long-lived radio-contaminants increases as the product decays.

Specific radioactivity
In a target irradiation, we produce a number, or a mass, of radioactive atoms. The most common way to express this number is to give a measure of the radioactivity in Bq. However, this is an indirect measure, since it only tells us how many nuclei decay per second. To tell how many radioactive nuclei we had from the start, we need to add up all decays from the start to infinity.


Radioactive decay is a statistical process. If we have a number of radioactive nuclei N, they all have the same probability of decaying. This probability does not change in time, nor is it affected by chemistry, environment etc. During a small time dt, the number of decaying atoms is dN. The radioactivity of the sample is expressed as A = dN/dt. The relative number that decays, dN/N = λ·dt, is always the same; λ is called the decay constant and has the unit s⁻¹. We can write this as a differential equation (the minus sign reflects the decrease of N in the decay process).

dN/dt = - λ * N This equation has the unique solution

N = No · e^(-λ·t)

where No is the initial number of radioactive nuclei. Since the radioactivity is A = dN/dt, we get a relation between the radioactivity and the number of radioactive nuclei, A = λ·N. This means

that the radioactivity will also decrease in the same way as the number of radioactive atoms:

A = Ao · e^(-λ·t)

where Ao is the initial radioactivity at time 0. In the decay process, the half-life (T½) of the radionuclide is an important parameter. It is defined through

A = Ao · 2^(-t/T½)

The relation between λ and T½ is given by

λ = ln(2)/T½

By integrating the radioactivity decay curve (Figure 15, left) from t = 0 → ∞, we add together all the decayed nuclei, No = ∫A dt (Figure 15, right).

Figure 15 By integration of the exponential decay curve (left panel: radioactivity fraction versus time in units of T½) one obtains the number of totally decayed atoms (right panel, decay integral: decayed atom fraction versus time in units of T½).

The relation between the radioactivity and the number of atoms producing this radioactivity is called the specific radioactivity:

Spec. act. = Ao/No = ln(2)/T½

The number of radioactive atoms can be expressed in mass units such as grams or moles, and the specific radioactivity can then be expressed in units such as MBq/g or GBq/µmole. If we only consider the radioactive atoms, we can always give a unique value of the specific radioactivity. If we include the stable isotopes, we have to modify the concept somewhat. Since the labeling chemistry does not differ between e.g. the radioactive 131I and the stable 127I, we have to include the number of stable iodine atoms that might have been in the target or added during the separation process:

Spec. radioact. = Ao/(No + Nstable)

Since in many cases the stable atoms are more numerous than the radioactive ones, the mass does not change appreciably with time but the radioactivity does; the specific radioactivity therefore decreases with time.

In radionuclide production and radiochemical separations the following expressions are used:

Carrier free: Only the radioactive atoms are present in the preparation, i.e. no stable isotopes of the same element are present.

No carrier added: In the separation process, no stable isotope of the same element has deliberately been added. However, one cannot exclude that some stable isotope may have been added through uncontrolled processes, such as stable carbon from glassware etc.

Carrier added: In the separation process one deliberately adds some amount of the same element to facilitate or stabilize the separation process. This will decrease the specific radioactivity of the product.

For radioactively labeled compounds one usually calculates the specific radioactivity as the radioactivity per number of labeled and unlabeled molecules. This means that one might here obtain values higher than the theoretical values for radioactive atoms, since there may be more than one radioactive label per molecule (e.g. 14C-labeled benzene where all six carbon atoms are 14C). To illustrate how the theoretical specific radioactivity is calculated, the following example is given.

Example. How to calculate the maximal specific radioactivity of 125I.

The physical half-life of 125I is T½ = 59.4 days. The number of radioactive atoms per Bq is then 59.4 × 24 × 3600/ln(2) = 7.41 × 10^6 atoms/Bq. We consider the radioactivity 1 GBq = 10^9 disintegrations/s, which corresponds to 7.41 × 10^15 atoms. Using Avogadro's number, 1 µmol of 125I atoms corresponds to 6.023 × 10^17 atoms. We can then calculate the specific radioactivity (SA) of 125I:

SA(125I) = (6.023 × 10^17)/(7.41 × 10^15) = 81 GBq/µmol

In the literature, specific radioactivity is often given in the old unit for radioactivity, the Curie (1 Ci = 3.7 × 10^10 Bq). 1 Ci corresponds to 3.7 × 10^10 × 59.4 × 24 × 3600/ln(2) = 2.74 × 10^17 atoms, so

SA(125I) = (6.023 × 10^17)/(2.74 × 10^17) = 2.2 Ci/µmol

Time is essential. If we work with a short-lived radionuclide such as 110In (T½ = 69 minutes) and the separation procedure takes 4-5 hours, the radionuclide is of no practical interest. The separation procedure should take at most about one half-life.

Figure 16 A hot-lab equipped with a hot-cell for high-radioactivity work. Behind 5-10 cm of lead one can work with radioactivities of 100 GBq. Automated systems or manipulators are used to protect the personnel.
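The 125I specific-radioactivity example above can be reproduced numerically (Python; the constants are those used in the text):

from math import log

AVOGADRO = 6.023e23           # atoms per mole, as used in the text
T_HALF_S = 59.4 * 24 * 3600   # 125I half-life in seconds

atoms_per_Bq = T_HALF_S / log(2)          # ~7.4e6 atoms per Bq
atoms_per_GBq = atoms_per_Bq * 1e9        # ~7.4e15 atoms per GBq
print((AVOGADRO * 1e-6) / atoms_per_GBq)  # ~81 GBq/umol

atoms_per_Ci = 3.7e10 * atoms_per_Bq      # ~2.74e17 atoms per Ci
print((AVOGADRO * 1e-6) / atoms_per_Ci)   # ~2.2 Ci/umol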

Radiochemistry
During target bombardment, heavy shielding is necessary around the reactor or accelerator to protect against the prompt radiation. Even after shutdown, radiation shielding is advisable. Besides the wanted radioactivity, the target may contain the products of several other reactions, either of the same element (radio-contaminants) or of other elements. Some of these may be short-lived, and it may be advantageous to let them decay before processing. In the subsequent processing, the target is usually transported in a lead shield to a hot laboratory and placed in a lead-shielded hot-box.

Figure 17 Target system for the production of 15O (T½ = 2 minutes) and on-line production of a number of labeled ligands. (Deuterons irradiate a N2 + 1% O2 gas target via d + 14N → 15O + n; fast on-line chemistry converts the 15O to labeled ligands such as C15O, C15O2 and H215O.) Gaseous or liquid targets are disadvantageous in the irradiation but advantageous in the separation, since the separation system can easily be automated.


In the radiochemical process we want to separate the small amount of desired radioactivity (≈ nanomoles) from the bulk of the target (0.1-1 g). We want the separation efficiency to be as high as possible and the separation time to be as short as possible. We also want to separate all radionuclides of other elements from our preparation.



Figure 18 A schematic description of the 76Br separation equipment: (1) furnace, (2) auxiliary furnace, (3) irradiated target, (4) deposition area of selenium, (5) deposition area of 76Br, (6) gas trap. Temperatures of about 1100 °C and 200 °C are used, with argon as carrier gas.

Solid targets usually have to be dissolved (often in boiling acids) before wet chemical separation methods are applied. In general, two principles are used: liquid extraction and ion exchange. Occasionally, thermal separation techniques may be applied; these have the advantage that they do not destroy the target (important when expensive enriched targets are used) and that they lend themselves to automation. As an example of such dry methods, the thermal separation of 76Br (T½ = 16 h) is described. The target is Cu2Se enriched in 76Se, a selenium compound that can withstand some heat. The nuclear reaction used is 76Se(p, n)76Br.

1. The target is placed in a tube and heated under a stream of argon gas to evaporate the 76Br activity by dry distillation (Figure 18).
2. A temperature gradient is applied to separate the deposition areas of 76Br and of traces of co-evaporated selenide in the tube by thermal chromatography.
3. The 76Br activity deposited on the tube wall is dissolved in a small amount of buffer or water.

Separation yields of 60-70% are achieved and the separation time is about one hour. Since dry distillation permits the extraction of radiobromine without destroying the target, the Cu2Se targets are reusable. Considering the rather expensive 76Se-enriched target material, this is practically a prerequisite for this type of production. The chemical form of the 76Br activity after separation, analyzed by ion-exchange high-performance liquid chromatography (HPLC) and thin-layer chromatography (TLC), was found to be almost exclusively bromide.


Table 4 Specific radioactivity of some commonly used radionuclides in biology and nuclear medicine

Radionuclide   T½        Max specific radioactivity (GBq/µmol)   Common values for compounds (GBq/µmol)
3H             12 a      1.08            0.01-5
14C            5730 a    0.0023          0.0001-0.01
35S            87 d      55.5            0.0001-0.01
32P            14 d      340             0.001-10
33P            25 d      197             0.001-1
131I           8 d       599             0.01-1
125I           60 d      81              0.01-1
57Co           270 d     18              0.1-10
58Co           71 d      68              0.1-10
75Se           120 d     40              0.001-0.1
197Hg          64 h      1776            0.01-1
203Hg          47 d      104             0.001-1
11C            20 min    3.55 × 10^5     1-100
18F            2 h       6.59 × 10^4     1-100

CHAPTER 10 NUCLEAR BIOLOGY AND MEDICINE: IMAGING

Imaging techniques have always been important in the application of radionuclides and tracer techniques. Henri Becquerel not only discovered natural radioactivity but also, by chance, invented the autoradiographic technique when he discovered the impact of uranium salts on a photographic film. This technology has developed considerably and can today give quantitative information on tracer distributions from the organ level down to structures of the cell nucleus. External detection of gamma-emitting nuclides has developed from scanning with a collimated gamma-sensitive probe to today's sophisticated systems for Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET).

Autoradiography
Autoradiography in biomedical applications is a technique for the visualization of radioactive materials in cells, cell cultures, tissues and organs. A thin sample (e.g. a cell culture) or a thin section obtained by microtome sectioning is placed in close contact with a detector that can register the positions of the decays.

Several factors, such as the thickness of the sample and the detector as well as the energy and penetration ability of the emitted radiation, determine the information one can obtain from such a system (Figure 1). Another important factor is how firmly the radioactivity is bound in the sample. For example, a labeled ligand that diffuses in the sample during preparation gives a low-resolution image. Several sample preparation techniques for different applications can be used, such as:

• Macro-autoradiography (whole body). The animal is frozen in a liquid that prevents freezing artifacts. The frozen block is cut with a saw or sectioned in a whole-body microtome. Sections with a thickness of 20 µm or more are mounted directly on a plastic tape.

• Embedded sample, microtome section. Normal histological embedding and mounting techniques on glass are used. Microtome sectioning gives samples with a thickness of 3 µm or less.

• Frozen sample, microtome section, freeze-dried and mounted on a glass backing. This technique can be used when the labeled ligands are expected to diffuse. Samples can be obtained with a thickness of about 3-5 µm.

• Ultramicrotome techniques as used in electron microscopy. Sample thickness of about 100 nm.

Figure 1 General principles of autoradiography. The sample and the detector in close contact give the best resolution and sensitivity. If the desired spatial resolution is in the same range as e.g. the beta-particle range, the thickness of sample and detector is of less importance. If the desired resolution is better than the range of the emitted particles, the thicknesses of sample and detector are the dominating factors. An important factor is of course also the intrinsic resolution of the detector. (The figure illustrates three cases: high Eβ and thick sample gives poor resolution whether the film is thick or thin; high Eβ and thin sample gives good resolution if the film is thin; low Eβ gives good resolution independent of sample and film thickness.)


The thinner the sample, the smaller the radioactivity content of the sample. One has to adapt the radioactivity concentration in the sample to the desired resolution. When using ultra-thin sections, one also has to consider the specific radioactivity of the ligand and whether it is theoretically possible to obtain detectable amounts of radioactivity in the sample. Generally, any radionuclide can be used for autoradiography. However, if high resolution is important, one prefers radionuclides that emit radiation with low penetrating ability,

such as 3H and 125I. These radionuclides can be used to label a variety of biomolecules at high specific radioactivity.

Photographic film
Photographic emulsion is still the dominant detector. It is also the only choice when a spatial resolution of a few µm or less is desired. There is a large choice of different types, from standard X-ray films to liquid film emulsions for high resolution. Measuring the film density can be used to quantify the autoradiograph, e.g. with a computerized image analyzer system. A calibration curve relating (radioactivity concentration) × (exposure time) to density can then be used. It is important to obtain a correct exposure of the film because of the limited linear range of the system.

Figure 2 Whole-body autoradiography showing the distribution of 14C-methionine. Some structures are marked, such as brain (B), lungs (Lu), liver (Li), intestine (I) and tumor (T). The density can be measured and related to the radioactivity concentration by comparison with a calibration scale.

(about 100 nm). Ideally a monolayer of silver bromide crystals is used and exposed. After the development procedure the silver grains are hardly visible even in light microscopy with the best resolution. However, in electron microscopy the high atom number of silver gives a good contrast and images of a high resolution may be obtained.

B

T

I

Figure 3 Electron microscope autoradiogram showing labeled acetylcholine on receptors in a neuromuscular junction. More information at www.nbb.cornell.edu/neurobio/salpeter/salpeter.html

Phosphorous imaging techniques The photographic film has some disadvantages such as

If closer contact is needed the film emulsion without backing can be applied directly onto the sample. The two techniques used here are the strip film technique and the dipping technique. Another way to quantify is to measure the number of silver grains in the film emulsion. It is often necessary to do manually using a microscope and it is a tedious work. An alternative is to use an electron microscope. A diluted fine-grain film emulsion is used. Using the surface tension in a loop, a thin film is produced that is applied onto a thin sample 99

• •



Low sensitivity, which means long exposure times. Low linearity. The exposure has to be right. If the film is over- or underexposed, one needs to redo the process. Sometimes it is difficult to cover the dynamics in the sample by one single exposure. Time consuming handling. The samples have to be dry before applying the film emulsion. The developing process takes time. Digitalization or grain counting is necessary in order to quantify.

Lately, new types of detectors have been developed to overcome such drawbacks. They are based on different physical principles, such as multi-wire chambers (gas detectors), multichannel detectors (scintillation detectors), phosphor imagers (trapped excited electrons, similar to thermoluminescence detectors), and strip and pixel detectors (solid-state detectors). Their resolutions are limited to about 25 µm and upwards. Here, we will describe just one of these detectors, the phosphor imaging plate, in some detail. An image plate of this type contains BaFBr:Eu2+ crystals. When a radioactive sample is placed on the plate, the emitted ionizing radiation excites electrons in the crystal. In the de-excitation process, a small fraction of the electrons are trapped in an excited state and a latent image of the radionuclide distribution is formed.

Figure 4 Principle of the phosphor imaging technique. [Diagram: a trapped electron is lifted by the scanning red laser to an unstable state and falls back to the ground state, emitting light that is registered by a PM-tube.]

Figure 5 The read-out as a function of the number of disintegrations for a phosphor imaging system and for photographic film.

Figure 6 The read-out system of one of the commercial systems for phosphor imaging. A red laser spot is scanned over an exposed image plate by an optical system.

A scanning red laser beam is used to "develop" the imaging plate. The size of the laser beam spot determines the resolution (25-200 µm). The energy of the red light photons is enough to lift the trapped electrons to the unstable state, from which they have a high probability of falling down to the ground state, emitting a blue light photon that is counted by a photomultiplier tube (PM-tube). The number of blue photons registered in each position is proportional to the radioactivity concentration in the sample. Exposing the image plate to intense white light anneals the remaining trapped electrons, and the image plate can then be used again.


The technique is more sensitive than film since a denser material is used as the detector; for the same reason it is also more useful for high-energy particles. In addition, it is linear over a larger range of exposures than film.

Gamma camera

The Anger gamma camera is used to detect single photons. A main component is the lead collimator (Figure 7), which selects the photons that are emitted perpendicular to a large NaI(Tl) crystal. Photons travelling in other directions are ideally absorbed by the lead in the collimator.

Figure 7 A schematic view of the gamma camera.

A photon hitting the crystal may give rise to a scintillation (a number of light photons). This light flash is viewed by a number of PM-tubes, which, depending on their positions, "see" different numbers of light photons. Knowing the positions of the PM-tubes, we can calculate the x, y-coordinates of the scintillation and the energy deposited in the crystal. Each photon detected by the crystal can thus be registered, and an image of the radionuclide distribution can be constructed from a number of such events. The technique lends itself to dynamic investigations.
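The position calculation described above is essentially a signal-weighted centroid ("Anger logic") over the PM-tube signals. Below is a minimal sketch of that idea; the tube positions and signal values are made-up illustrations, and a real camera also applies energy windowing and linearity corrections.

# Minimal "Anger logic" sketch: estimate the scintillation position as the
# signal-weighted centroid of the PM-tube positions. Illustrative values only.

def anger_position(tube_positions, signals):
    """tube_positions: list of (x, y) in mm; signals: PM-tube pulse heights."""
    total = sum(signals)                      # proportional to deposited energy
    x = sum(px * s for (px, _), s in zip(tube_positions, signals)) / total
    y = sum(py * s for (_, py), s in zip(tube_positions, signals)) / total
    return x, y, total

# Four hypothetical PM-tubes on a 60 mm grid and one detected scintillation.
tubes = [(-30, -30), (30, -30), (-30, 30), (30, 30)]
signals = [120, 300, 80, 200]                 # arbitrary pulse heights

x, y, energy_signal = anger_position(tubes, signals)
print(f"Estimated position: ({x:.1f}, {y:.1f}) mm, energy signal: {energy_signal}")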

The weak point of the gamma camera is its collimator. It is made of a slab of lead (3-10 cm thick, depending on the desired performance) with thousands of narrow holes. From an isotropic source the collimator picks out only a very small fraction of the photons (the exact fraction depends on the thickness and the hole dimensions), which means that the counting efficiency of the system is low. The intrinsic resolution of the gamma camera, i.e. how well the system can determine the position of the scintillation, is about 4 mm. However, a collimator with the same resolution would have an unacceptably low sensitivity. Typical values of the system resolution are in the order of 15 mm at a distance of 10 cm from the collimator, and the resolution varies with depth.

Figure 8 Diagnosis of disseminated malignant pheochromocytoma using 123I-meta-iodobenzyl-guanidine (MIBG), shown as front and back projections. Uptake is seen in, among other structures, the salivary glands, liver and bladder.

The collimator also restricts the gamma energies that can be used. High-energy photons may penetrate the thin lead walls between the holes (the septa) of a high-resolution collimator. For high gamma energies the septa have to be thicker, and the resolution of the collimator then decreases. High-energy photons may also penetrate the crystal without giving a scintillation, which decreases the sensitivity. Optimal gamma-ray energies for typical gamma camera systems are between 80 and 200 keV. One of the most used radionuclides is 99mTc (140 keV). The gamma camera image is a depth-integrated radionuclide distribution. Gamma rays from different depths are detected with different sensitivity and resolution, and they are also absorbed and scattered in different ways. Hence, the gamma camera gives information that is difficult to quantify in absolute terms.

SPECT

Each gamma camera exposure gives a depth-integrated measurement of the radionuclide distribution. In Figure 8, the patient is viewed from two sides: the frontal projection and the back projection.

Figure 10 Perfusion measured in a normal female volunteer using SPECT and 99mTc-HMPAO. Data are presented in 20 tomographic slices through the brain.

Figure 9 The gamma camera is rotated stepwise around the patient (typically 3.6° per step). For each step a projection is obtained, i.e. a measurement of the integrated radionuclide distribution from that angle.

If we have a time-stable radionuclide distribution, i.e. we have reached an equilibrium, the two measurements should give about the same radioactivity distribution, although mirrored (Figure 9). The difference is due to somewhat different attenuation of the photons. By a proper mathematical treatment of all these measurements (about 50 projections) it is possible to describe the radionuclide distribution in 3D. We obtain what is called tomographic information, since we slice the patient, not with a knife but with the help of the computer. For each slice we can present the radionuclide distribution as an image. The most common reconstruction method is called filtered back-projection.
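A minimal sketch of filtered back-projection is given below: each projection is ramp-filtered along the detector axis and then smeared back across the image along its viewing angle. This is a bare-bones illustration (no attenuation or scatter correction, no interpolation refinements); the array shapes and the angle list are assumptions for the example, not a description of any particular commercial system.

import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """Minimal filtered back-projection.

    sinogram: (n_angles, n_bins) array, one row per projection.
    angles_deg: projection angles in degrees (e.g. 0, 3.6, 7.2, ...).
    Returns a (n_bins, n_bins) reconstructed slice.
    """
    n_angles, n_bins = sinogram.shape

    # Ramp filter applied along the detector axis in Fourier space.
    freqs = np.fft.fftfreq(n_bins)
    ramp = np.abs(freqs)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Back-project: smear each filtered projection across the image.
    coords = np.arange(n_bins) - n_bins / 2
    x, y = np.meshgrid(coords, coords)
    image = np.zeros((n_bins, n_bins))
    for proj, angle in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate of every pixel for this viewing angle.
        t = x * np.cos(angle) + y * np.sin(angle)
        image += np.interp(t, coords, proj)
    return image * np.pi / (2 * n_angles)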


Figure 11 A dedicated SPECT system. Three gamma camera detectors rotate together to speed up the data collection.

A typical SPECT investigation takes about one hour: the time per projection is about one minute, and 50 projections or more are needed. The long exposure time also requires that the radionuclide distribution is "frozen", i.e. does not change substantially during the investigation time. To speed up the data collection, two or more gamma camera heads can be mounted together as in Figure 11.

PET

In positron emission tomography (PET), positron-emitting nuclides such as 11C are used. The positron travels 2-3 mm in tissue before it is stopped. When stopped, it attracts an electron

and forms a short-lived exotic atom, a positronium, which exists for a short while before the positron and the electron annihilate. The energy is emitted as two anti-parallel 511 keV photons, as indicated in Figure 12.

Figure 12 The emission of two annihilation photons in positron decay. [Diagram: 15O decays to 15N by emitting a positron (β+) and a neutrino (ν); the positron annihilates with an electron, and the two 511 keV photons are detected in coincidence by opposite PM-tube detectors within a time window of 10 ns (BGO detectors).]

The photon energy of 511 keV is high. It comes out of the patient efficiently, but it also penetrates lead and detector materials almost equally well. It is difficult to collimate these photons as is done in the gamma camera, and NaI(Tl) is not efficient enough. In PET, one or several rings of small detectors are therefore placed around the patient (Figure 13). Each of these detectors is coupled in coincidence with all opposite detectors. If two such detectors are hit simultaneously (within 10 ns), the PET camera assumes that it has been hit by two photons originating from the same decay. A straight line is defined by the known positions of the two detectors, and the PET camera assumes that the decay has occurred somewhere close to that line.

Figure 13 A schematic view of a positron camera system. [Diagram: detector rings surround the bed; the two annihilation photons are registered by opposite detectors.]

In PET, data are collected in all directions at random. There is no need to move the detectors; all projections are collected simultaneously. Each event is sorted into a set of parallel coincidence channels defining one projection. This means that a full set of data for tomographic information may be collected in only 2-10 seconds. There is no need for a radionuclide distribution in steady state, and very fast kinetic investigations can be performed.
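As an illustration of the coincidence principle, the sketch below pairs detected single photons whose arrival times fall within a 10 ns window and records the detector pair (the line of response). The event list and the detector numbering are invented for the example; a real scanner does this in dedicated electronics and must also handle random and scattered coincidences.

# Toy coincidence sorter: pair single events that arrive within the time window.
# The event list (time in ns, detector id) is invented for illustration.

COINCIDENCE_WINDOW_NS = 10.0

singles = [                      # assumed to be sorted by arrival time
    (100.0, 12), (104.0, 57),    # -> coincidence, line of response (12, 57)
    (250.0, 3),                  # unpaired single, discarded
    (400.0, 33), (409.0, 78),    # -> coincidence, line of response (33, 78)
    (600.0, 21), (650.0, 64),    # too far apart in time, both discarded
]

def find_coincidences(events, window_ns):
    lines_of_response = []
    i = 0
    while i < len(events) - 1:
        t1, d1 = events[i]
        t2, d2 = events[i + 1]
        if t2 - t1 <= window_ns:
            lines_of_response.append((d1, d2))   # decay assumed on this line
            i += 2                               # both photons consumed
        else:
            i += 1                               # lone single, move on
    return lines_of_response

print(find_coincidences(singles, COINCIDENCE_WINDOW_NS))
# [(12, 57), (33, 78)]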


At present, the most common detector material is bismuth germanate (BGO), a scintillating crystal with a high atomic number (Z = 83 for bismuth) and a density of about 7 g/cm3. These properties give a detector with a fairly high efficiency for 511 keV photons. A crystal with a length of 25-30 mm has a detection efficiency of about 70 %, which in the coincidence channel corresponds to about 50 % efficiency for detecting coincidences (0.70 × 0.70 ≈ 0.5). There is no need for a lead collimator. Instead, we can say that we use an "electronic collimator", since the two oppositely directed photons are measured in coincidence. This makes the system rather sensitive. Commercial systems with a spatial resolution of about 4 mm in all directions are available today. The resolution is mainly set by the crystal dimensions, but there are factors, such as the range of the positrons, that set the theoretical limit of the resolution to 2-3 mm.

An advantage of PET is that the radionuclide distribution can be quantified in absolute values (Bq/cm3). In an investigation, two measurements are made: a transmission scan and an emission scan. The transmission scan is performed as described in Figure 14.

Figure 14 A transmission scan (TRANSMISSION). A line source of 68Ge/68Ga is circulated around the patient. The density of a patient slice is seen, showing lungs, heart, spine and muscle. The bed upon which the patient lies is made of a light, low-density material and is hardly seen.

A line source containing a long-lived positron emitter is rotated around the patient. We still use the coincidence technique: one of the photons hits a detector directly, whereas the other has to pass through the patient before it hits an opposite detector. We call this a transmission scan since we have an external source and one of the photons is transmitted through the patient. Since we view the patient from all angles, we have the same situation as in an X-ray investigation using computerized tomography (CT): we measure the density of the patient, but with the same photon energy as is used later in the emission scan.

The line source is then removed and the ligand labeled with a positron emitter is injected. Now both photons have to pass through parts of the patient, and we call this an emission scan (EMISSION).

Figure 15 An emission scan of a human heart. A ligand (acetate) labeled with a positron-emitting nuclide (11C) has been given.

By combining the transmission scan and the emission scan we are able to correct for attenuation in the body, and a correct quantification within a few per cent can then be obtained.
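The attenuation correction itself can be summarized in a few lines: for each line of response, the ratio between a blank scan (the line source rotating with nothing in the field of view) and the transmission scan gives an attenuation correction factor that multiplies the emission data. The blank scan is not described explicitly in the text above but is part of the standard procedure this sketch assumes; all numbers are illustrative.

import numpy as np

# Attenuation correction per line of response (LOR), as a minimal sketch.
# blank: counts with the rotating line source and an empty field of view.
# transmission: counts with the patient in place (attenuated).
# emission: counts from the injected tracer (also attenuated by the patient).
# All values are invented for illustration.

blank        = np.array([10000.0, 10000.0, 10000.0, 10000.0])
transmission = np.array([ 6000.0,  3500.0,  4200.0,  8000.0])
emission     = np.array([  120.0,    90.0,   150.0,    60.0])

acf = blank / transmission            # attenuation correction factor per LOR
corrected_emission = emission * acf   # what would have been measured without
                                      # attenuation; input to the reconstruction

print(acf)
print(corrected_emission)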

CHAPTER 11 Labeling methods

A correct labeling of the biomolecule is important for the outcome of all experimental applications of the nuclide technique. The measuring technique only gives information about the radioactivity; it is the labeling technique that couples this information to the information we want: the fate of the biomolecule. Important general aspects of labeling have already been mentioned in Chapter 9. Here we will go into particulars about the labeling of biomolecules.

Protein labeling through tyrosine

The most common labeling method is to label tyrosine with halogen isotopes. The tyrosine can either be labeled as part of a protein (direct labeling) or as part of a precursor, which is then attached to the protein in a suitable reaction (indirect labeling). Halogens such as iodine and bromine are introduced into the ring structure of tyrosine by oxidative processes. Iodine might also attach to tryptophan or to sulphydryl groups, and at high pH (>8) histidine might also be labeled. The most commonly used agent for the oxidation is Chloramine-T, either in solution or immobilized on polystyrene beads (Iodogen). This method is used for labeling with radioactive isotopes of iodine, such as 125I for basic biology and 131I for medical applications such as imaging and nuclide therapy. In recent years, 123I for planar gamma camera/SPECT imaging as well as 124I for PET have also been applied using the tyrosine labeling strategy.

Chloramine-T is the sodium salt of N-monochloro-p-toluene-sulfonamide (see Figure 2). It is a mild oxidizing agent. In aqueous solution it releases hypochlorous acid, which oxidizes the iodide, probably to an active iodonium ion [H2OI+]. This ion is incorporated into the tyrosine residues of the protein. The exact mechanism of this reaction with carrier-free radioactive iodine isotopes is not known in detail.

Enzymatic labeling

Enzymes such as lacto-, chloro- or bromoperoxidase can be used to label proteins. One example is the labeling of antibodies with

radioactive isotopes of bromine through the use of bromoperoxidase. The reaction is initiated by the addition of H2O2. After a suitable reaction time, the reaction is ended by the addition of sodium metabisulphite. A drawback is that it is often difficult to purify a large protein product from the enzyme. This purification is usually easy in the tyrosine labeling method described in Figure 2, since only low-molecular-weight reagents are used.

Figure 2 Labeling of tyrosine with iodine by the Chloramine-T method. a) The structural formula of Chloramine-T. b) A schematic view of the labeling procedure: oxidation of 125I-iodide by Chloramine-T gives H2O125I+, which is incorporated into the ring of a tyrosine residue in the protein. The oxidative reaction is, after a suitable time, stopped by adding sodium metabisulphite.

Indirect protein labeling

Amino group labeling

The so-called Bolton-Hunter reagent provides a general method to label amino groups. This reagent labels proteins at the epsilon amino groups (i.e. lysine) and can, although more slowly, also react with the alpha amino group at the amino terminus of a protein or polypeptide. The Bolton-Hunter reagent, already labeled, is commercially available. The reagent contains a tyrosine ring that carries the iodine and a succinimidyl group that can react with amino groups (Figure 3).

Figure 3 The Bolton-Hunter reagent is seen in the upper right part of the figure. It is pre-labeled in the ring with radioactive iodine. The succinimidyl group reacts with the amino group on a lysine, and thereby an indirect labeling is achieved.

Several Bolton-Hunter-"like" reagents have been described. All contain the succinimidyl group, while the tyrosine ring is modified to allow for different properties. Thereby one can counteract, at least partly, degradation or dehalogenation in vivo. It is also possible to modify the ring so that not only iodine but also the halogens bromine and astatine can be coupled. Furthermore, there are similar reagents where the ring is exchanged for other groups, allowing for 35S or 3H labeling. It is still a succinimidyl group that accounts for the binding to the amino group of the protein (Figure 4).

Figure 4 The reagent is pre-labeled with 3H through a carbon chain. The succinimidyl group reacts with the amino group on lysine; the protein is "tritiated".

Thiol labeling

If the succinimidyl group in the above-mentioned reagents is exchanged for a maleimide, the reagents can be used for reaction with thiol groups. In principle, this means that reactions can be made with free SH groups in the protein (i.e. cysteine). Such reagents can contain, for example, 3H or 14C and are commercially available.

Conjugations

There are numerous biochemical conjugation methods described in the literature. Conjugation, in this context, generally means to bring molecular components together. Thus, any method by which two components (e.g. carbohydrate to protein, protein to protein, protein to nucleic acid etc.) are coupled together can be used as a labeling method if one of the components is pre-labeled. One example of conjugation-mediated labeling is that radiolabeled carbohydrates are connected to amino groups of proteins via activation of the carbohydrate (using activation agents such as periodate or CDAP). Several methods using COOH and NH2 groups in conjugation chemistry have been described, and all such methods are of potential interest for radiolabeling.

Chelators


The coupling of chelators to antibodies or other proteins, with subsequent addition of radioactive metal ions that are trapped by the chelator, is one possibility for protein labeling. The requirement is that suitable radioactive metals are available. Many types of chelators are described in the literature; the most often used is DTPA (diethylene triamine penta-acetic acid), which binds metals such as indium (110In for PET, 111In for SPECT) and yttrium (90Y for nuclide therapy). DTPA is also, for some applications, coupled to ligands such as the eight-amino-acid somatostatin analogue octreotide, giving the commercially available product OctreoScan, suitable for the characterization of tumors with somatostatin receptors.

Figure 5 Labeling of a protein with 111In by the use of the chelator DTPA. [Diagram: the chelator DTPA is coupled to the protein by conjugation chemistry; 111In is then added and trapped by chelation.]

There are many different types of chelators (DTPA-"like" and others) with different substituted side groups to allow for coupling via different biochemical methods. The advantage of using chelators is that all the chemistry involved can be done long in advance, whereas the addition of the radioactive metal can be made directly before the use of the whole complex (Figure 5). A simple "desalting" is then enough to get rid of the excess radioactive nuclide.

Biosynthetic protein labeling

Most labeling methods modify the molecule to some extent, which may change the molecule's behavior regarding e.g. distribution and binding properties. A way to avoid this, and to label exact copies of biomolecules (e.g. amino acids, antibodies, structural proteins and enzymes) with radioactive nuclides, is to feed a cell culture with amino acids containing radioactive nuclides. The requirement is of course that the cultured cells synthesize the proteins of interest. The radioactive precursors are added to the culture medium with the cells. If the cells excrete the protein, the culture medium can be harvested (Figure 1) and the product purified; the cell culture can then be used repeatedly. If the protein is not excreted, the cells have to be harvested and homogenized, and the protein purified from the homogenate. The advantage of biosynthetic labeling is that no extra chemistry has to be applied and the product protein does not suffer from conjugation or other labeling-induced modifications. The disadvantage is that it is often difficult to obtain a high specific radioactivity. Radioactive amino acids can be purchased commercially containing, for example, radioactive carbon (14C) or hydrogen (3H).

Figure 1 Biosynthetic labeling. Radioactive amino acids are added (top), the protein synthesis produces labeled proteins (middle) and the product is harvested for purification (bottom).

Labeling of nucleic acids

Nucleic acids can be labeled with several different methods. The rapid development in the area of DNA techniques has speeded up the development of both "radioactivity"- and "fluorescence"-based methods for labeling and, in some cases, interesting combinations of these approaches. The main radiolabeling methods used are nick translation labeling, 5´ end labeling, 3´ end labeling, hybridization labeling and biosynthetic labeling. Furthermore, special

methods to label small primers or oligonucleotides (e.g. random primer labeling) exist. The field is still developing and new methods can be expected in the near future.

Nick labeling

Nick labeling means that intact DNA, in a cell-free system, is exposed to enzymes that open up the double-stranded DNA. Short segments of single-stranded DNA (only 5-8 bases) are formed. These single-strand segments are restored to double-strand DNA by enzymatic "repair synthesis", using e.g. radiolabeled thymidine in the process. Thus, the DNA is labeled with radioactivity but is not chemically modified, since only enzymes from the normal repair processes are used.

5´ end labeling

5´ end labeling means cell-free labeling of the 5´ end of DNA or an oligonucleotide, often using polynucleotide kinase to add phosphate at the end. The radioactive nuclide in the added phosphate can be 32P or 33P (high- and low-energy beta emitters, respectively).

3´ end labeling

3´ end labeling is a method that uses e.g. deoxynucleotidyl transferase to add radioactive dATP to the end of DNA or an oligonucleotide. dATP is, in these applications, labeled with 32P. There are also special reagents (e.g. ddATP) that ensure that only one labeled nucleotide is added to the end, and there are special types of dATP that are labeled with 35S instead of 32P.

Hybridization-based labeling

As described above, oligonucleotides can be labeled by nick translation or end labeling. If such oligonucleotides are made single stranded, they can be used for hybridization to RNA, or to partially or totally denatured DNA, if there are homologous sequences. This is a powerful technique to find and label certain genes (base sequences). However, the procedure leaves the hybridized nucleic acid blocked in certain sequences by the oligonucleotide probe and this

probably disturbs the function if the DNA is used for further tests. Thus, this type of labeling is convenient for analytical purposes when certain base sequences are to be found, but it might not be a good labeling method if the nucleic acid must be functional afterwards.

Biosynthetic labeling

A direct way to label nucleic acids, DNA or RNA, is to feed a cell culture with radioactive thymidine or uridine, respectively. The radioactive precursors are added to the cell culture medium and the cells are allowed to grow under physiological conditions. After a suitable time (often 1-24 hours) the cells are harvested and homogenized, and the nucleic acids are purified from the homogenate. The advantage of such labeling is that no extra chemistry has to be applied and the product does not suffer from any chemical modifications. The disadvantage is that it is difficult to obtain a high specific radioactivity. The radioactive precursor is usually 14C- or 3H-labeled. The nucleic acids can also be labeled with 32P-phosphate using a biosynthetic route, but in this case many other cellular components are also labeled by general phosphorylation processes.

Special labeling methods

Labeling of arginine

Arginine residues in proteins have a general role in positively charged recognition sites for anionic ligands. This is especially the case when the ligands contain important carboxyl or phosphate groups. Arginine also occurs in the active site of many enzymes, especially those that catalyze phosphorylation. Arginine can be labeled with radioactive phenylglyoxal, which reacts with the guanidino group of arginine. It is claimed that proteins are more resistant to trypsin-mediated cleavage after such treatment.

Carbohydrate labeling (OH)

Carbohydrates with free OH groups can be labeled via enzymatic or oxidizing

transformation of the OH groups into aldehyde groups, which can then react with radiolabeled sodium, potassium or cyano borohydride. Another alternative is to allow the aldehyde groups to react with radiolabeled aniline, forming a Schiff base, which is then stabilized with unlabelled borohydride. Radioactive reagents for such reactions are commercially available.

Fast labeling with 11C, 15O and 18F for PET applications

Very fast methods have been developed for labeling of low-molecular-weight substances with 11C (T1/2 = 20 min) (e.g. methionine), 15O (T1/2 = 2 min) (e.g. O2) and 18F (T1/2 = 110 min) (e.g. fluorodeoxyglucose). The short physical half-lives of these nuclides make it necessary to apply fast organic chemistry processes, and several such methods have been developed by organic chemists working with PET applications. The nuclides mentioned here are mainly used for diagnostic purposes in PET-based applications.
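The need for speed follows directly from the decay law A(t) = A0 · 2^(-t/T½). The sketch below shows, for the half-lives quoted above, how much of the starting activity is left after an assumed 40-minute synthesis and purification; the 40-minute figure is only an illustrative assumption.

# Fraction of activity remaining after a synthesis time, A(t)/A0 = 2**(-t/T_half).
# The 40-minute synthesis time is an illustrative assumption.

half_lives_min = {"C-11": 20.0, "O-15": 2.0, "F-18": 110.0}
synthesis_time_min = 40.0

for nuclide, t_half in half_lives_min.items():
    remaining = 2.0 ** (-synthesis_time_min / t_half)
    print(f"{nuclide}: {remaining:.1%} of the activity remains after "
          f"{synthesis_time_min:.0f} min")
# C-11: 25.0 %, O-15: essentially nothing (about one millionth), F-18: ~77.7 %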

Labeling of dextran

Dextran and other polysaccharides can be labeled via activation with periodate or CDAP (to activate the OH groups) so that any amine-containing radiolabeled substance attaches. This has, for example, been applied in the attachment of radiolabeled glycine or tyrosine to dextran. Another labeling method is based on the fact that dextran is in nature synthesized by the enzymatic addition of glucose from the disaccharide sucrose. The sucrose is cleaved by the enzyme sucrase, which then adds the glucose part onto the 6´ end (the growing end) of dextran; the fructose part of sucrose is left in monomer form. Thereby dextran slowly grows to form large chains. By introducing radiolabeled sucrose, which must be labeled on the glucose part, radioactive glucose attaches to the dextran. This gives a "normal", not chemically modified, dextran that is radiolabeled. These methods are of interest since dextran will in some cases probably be used as part of conjugates in tumor therapy for targeted treatments, thereby serving as a carrier for drugs and nuclides (Figure 6).

Labeling with "long lived" positron emitters. There are several so-called "long lived" positron emitters, such as 55Co (T1/2=18 hours), 76Br (T1/2=16 hours) and 124I (T1/2=4 days). They are suitable for labeling of macromolecules with long biological turnover rate like antibodies. Using such positron emitting labels and PET will give a detailed pharmaco-kinetic information of such molecules, which is of importance in diagnostics, therapy and in the development of new drugs. The physical halflives of these nuclides are sufficiently long that standard protein labeling methods can be applied. Such methods are chelator-based labeling for the metal ion 55Co and for the halogens, 76Br and 124I, direct or indirect or enzymatic protein labeling can be applied. The use of such nuclides has so far been limited but with improved possibilities for macromolecularbased gene-, immuno- and nuclide-therapy, the requirements to follow macromolecular distributions with PET cameras are being met.


[Figure 6 diagram: EGF-dextran can be labeled with various nuclides (10B, 131I, 111In, 76Br, 211At) and binds to EGF receptors on a tumour cell.]

Figure 6 Schematic picture of radiolabeled EGF-dextran binding to the EGF receptor (EGF = epidermal growth factor).

Purification

After radiolabeling, the product must be purified. Usually this is a simple step, but sometimes it might be necessary to consider various more sophisticated purification methods.

The purification is in most cases only a question of removing excess radionuclides and the reagents applied. In many cases the product is a macromolecule whereas the nuclides and reagents are of low molecular weight. In that case, it is often enough to use a small, simple "desalting" column to purify the product and at the same time, if necessary, change the buffer system. In other cases it might be more complicated, such as after enzymatic labeling or when the product is of low molecular weight. It might also be complicated to purify if the labeling technique induces heterogeneity; e.g. some product proteins are labeled whereas others are not or when other forms of the product are produced due to the labeling chemistry. When a simple desalting technique is not adequate, one has to find more sophisticated methods. A number of preparative chromatographical methods are available such as large-sized exclusion columns with good molecular weight separation properties, ion exchanger columns, affinity or hydrophobic interaction columns or other methods such as preparative electrophoresis or precipitation techniques. Quality control After purification or storage of a radiolabeled product, it is often necessary to perform quality control to ensure that the product has the same properties as a standard. All available analytical methods in chemistry and biochemistry can then be considered as well as functional biological test. Storage conditions Radiolabeled products might be destroyed due to oxidation or radiolysis. Oxidation damage is most common for sulfur-containing substances such as methionine and cystein. Storing the compound in nitrogen atmosphere (e.g. in liquid nitrogen) can minimize oxidation. Radiolysis means that the substance irradiates itself and thereby causes its own decomposition. There are several ways to reduce radiolysis. The compound may be stored diluted. A common


way is to store the compound in a refrigerator or, if it can stand it, frozen. Some macromolecules are in fact very sensitive to freezing and a large fraction might be biologically inactivated. Long-term storage can also be made in the frozen state using cryoprotective agents (DMSO, sucrose and glucose), which minimize the risk of ice crystal formation. However, the general rule is to use fresh preparations and to avoid long-term storage.

Stability during incubations

Another problem to consider is that radioactive substances might be modified during biological experiments. The substance might, in the experimental situation, be exposed to reduction, oxidation or enzymatic degradation. Such processes might be of interest to study in the experiment, but they might also be undesired and destroy the planned study or make it difficult to interpret. The stability of radioactive substances can, in principle, be controlled through the extraction of biological material during the experiment. The samples are then analyzed with respect to the chemical form of the radioactivity to allow for analysis of degradation products.

Labeling services

If possible, one should buy radiolabelled chemicals commercially instead of doing the labeling oneself. Several commercial companies offer a large collection of especially low-molecular-weight compounds like amino acids and nucleotides. When working with macromolecules it may be more cost-effective to buy labeling kits and then do the labeling in one's own laboratory. In-house labeling is also necessary, e.g. when the applied nuclides have a very short physical half-life (as in many PET applications). Another case is when a new substance is discovered and synthesized in your own laboratory. The pioneering laboratory must then analyze and characterize the labeled molecule, e.g. to investigate how the labeling procedure affects the biological function and behavior. This is a research topic in itself.

CHAPTER 12 Applications

Accuracy and sensitivity

Radioactive nuclides make it possible to study biological and medical processes with high accuracy and sensitivity. The accuracy can be very high since the measurements are, in most cases, not only quantitative but can also be calibrated in detail against standard preparations, and this can be done for very low concentrations of substances. The sensitivity is very high since single decays can be detected. This allows the presence of only a few labeled molecules (and even single molecules) to be detected. The applications given below are divided into two groups: biological applications (basic biology and preclinical investigations, to a large extent using pure beta emitters) and diagnostic nuclear medicine (applying gamma and positron emitters).
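The sensitivity claim can be made concrete with the decay law: a sample of N labeled molecules has an activity A = λN = ln 2 · N / T½, so the expected number of counts during a measurement of length t is roughly A · t multiplied by the detector efficiency (for t much shorter than the half-life). The numbers below (half-life, counting time, efficiency) are illustrative assumptions, not values from the text.

import math

# How many counts do N labeled molecules give during a measurement?
# Illustrative assumptions: a nuclide with a 60-day half-life (similar to 125I),
# a 10-minute counting time and 50 % detection efficiency.

half_life_s = 60 * 24 * 3600.0          # 60 days in seconds
decay_constant = math.log(2) / half_life_s

n_molecules = 1e6                        # one million labeled molecules (~2 attomol)
counting_time_s = 10 * 60.0
efficiency = 0.5

activity_Bq = decay_constant * n_molecules
expected_counts = activity_Bq * counting_time_s * efficiency

print(f"Activity: {activity_Bq:.3e} Bq")
print(f"Expected counts in 10 min: {expected_counts:.0f}")  # roughly 40 counts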

BIOLOGICAL APPLICATIONS

DNA synthesis

DNA synthesis in cells or tissues can be quantitatively studied through the use of radiolabeled thymidine (Figure 1).

Figure 1 Radiolabeled thymidine and its iodinated and brominated analogs. [Structure: the methyl group of the thymine ring can carry 3H or 14C, or be replaced by 125I, 76Br or other halogens; dR = deoxyribose.]

Thymidine easily passes the cell membrane. In the cell it is used only for the synthesis of DNA and for no other macromolecular structure. Thus, it provides a very specific way to detect which cells or cell populations have an ongoing DNA synthesis. Scientific history is full of exciting discoveries obtained with the help of radioactive thymidine (characterization of the cell cycle, analysis of the action of growth factors and oncogenes, etc.). Thymidine is available as a commercial product labeled with 3H or 14C, and these products are among the most sold radiochemicals.

Genes

The presence of certain gene sequences in DNA can be accurately analyzed by hybridization experiments in which radiolabeled oligonucleotides, containing the gene sequence to be analyzed, are allowed to hybridize with denatured DNA. The presence of the gene of interest can then be analyzed quantitatively by measurement of the radioactivity. Such an analysis is called a "Southern blot" when the reaction is made on a blotting paper to which the DNA has been transferred, and in situ hybridization when the DNA is still in the cell (or at least in a chromosomal conformation).

Oligonucleotides for such analysis are most often synthesized in a non-radioactive form in a "DNA synthesizer" and are then labeled with radioactive nucleotides by a process called nick or end labeling. In cell analysis, it is necessary to first degrade RNA with RNase to eliminate the otherwise dominating binding to mRNA.

RNA synthesis

RNA synthesis can be studied by applying radioactive uridine, which is specifically used by cells for the synthesis of all forms of RNA. Uridine easily passes the cell membrane, and the incorporation process is in principle parallel to the use of thymidine for studies of DNA synthesis. Radioactive forms of uridine are available commercially. Studies using uridine give an overall measure of the total RNA synthesis in the cell. For gene-specific analysis of mRNA synthesis, see below.

Gene activity and transcription

The activity of a gene in terms of "transcription activity" (synthesis of mRNA) can be measured quantitatively through hybridization between a radiolabeled oligonucleotide, containing the gene sequence of interest, and the formed

mRNA with that gene sequence. Such analysis is called "northern blot" when the reaction is made on a blotting paper to which the mRNA has been transferred and in situ hybridization when the mRNA is still in the cell. Undesired binding to DNA is normally not a problem as long as the DNA is in double stranded form. If necessary, DNA can be degraded with DNAse. Oligonucleotides for such analysis are made the same way as described above for gene analysis. Specific proteins The presence of specific proteins or other forms of biological material in cells can be analyzed using radiolabeled antibodies with specificity for the protein to be analyzed (see also below regarding sandwich techniques with labeled secondary antibodies). The biological sample is exposed to the radioactive antibody and when the protein is present the quantity can be determined through analysis of the radioactivity. Such analysis is called "western blot" when the reaction is made on a blotting paper to which the biological sample has been transferred. It is called radioimmuno detection when the analysis is made directly on a more or less intact biological sample. Many forms of radioimmuno detection are described in the chapter on antibodies. Protein specific antibodies can often be purchased from companies specialized on immunology products. In most cases, it is necessary to do the radioactive labeling at one’s own laboratory using methods as described in Chapter 9. In a few cases, it is possible to buy already labeled antibodies. Protein synthesis The overall protein synthesis in cells or tissues can be studied accurately through the use of radioactive amino acids. The amino acids are incorporated in the proteins and thereby give an intracellular accumulation of radioactivity. All most common types of amino acids are commercially available in radioactive forms. It should be noted that different amino acids are taken up with different efficiency over cell membranes but this is normally not a big problem in basic biology studies. Cocktails of


different amino acids are sometimes used for the measurement of total protein synthesis. If the synthesis of only specific types of proteins is studied, then the use of radioactive amino acids must be combined with other methods, such as gel electrophoresis or immunoreactions.

Phosphorylation and basic metabolism

The regulation of energy flow in biological systems involves phosphorylation and dephosphorylation (AMP ⇌ ADP ⇌ ATP). The energy flow is to a large extent related to energy generation in the glucolysis and the citrate cycle, and the energy consumption to the synthesis of different molecules and the active transport of molecules. In all cases, it is possible to study the metabolic steps by using 32P- or 33P-labeled phosphate, PO4. The radioactive phosphate is given to cells or tissues. After different incubation times the compounds of interest are isolated with biochemical methods (column chromatography, thin layer chromatography, electrophoresis, etc.) and their content of radioactivity is measured. The appearance and disappearance of radioactivity reflect phosphorylation and dephosphorylation, respectively. The analysis is easily quantified, so that the number of molecules involved in the processes can be estimated. Radioactive phosphate is commercially available at reasonably low prices and is one of the most used radioactive chemicals.

Phosphorylation and signaling

One of the most interesting processes studied in recent years is phosphorylation related to intracellular signaling. Many intracellular signal systems seem to be mediated via phosphorylations and dephosphorylations. One example is receptor-ligand interactions (e.g. growth factor and receptor), which initiate primary phosphorylation of the receptor itself and of related proteins, leading to cascades of phosphorylation that exert signal transduction, often all the way into the cell nucleus. In the last steps, the phosphorylation initiates DNA-mRNA transcription (activation of genes). Such phosphorylation can easily be analyzed by using 32P-labeled phosphate. The radioactive

phosphate is given to cells that exert the studied reaction. Thereafter, the studied molecules are isolated (most often using electrophoresis) and the amounts of 32P associated to the molecules are a measure of the phosphorylation. By incubating the cells for longer times with radiolabeled phosphate, dephosphorylation processes can also be studied. Phosphorylation in cell free systems Radioactive phosphate can be used for end- or nick-labeling of nucleic acids in cell-free systems. This allows detection of the DNA when processed with different biochemical methods for purification, size determination and sequencing. Radioactive phosphate is, as mentioned above, commercially available and at reasonably low prices so there are no severe economic limitations for the use of the described methods. Glucolysis and energy metabolism The amount of glucolysis can be measured through the use of radioactive glucose. However, "natural" glucose is quickly metabolized and does not give an accumulation in cells and tissues with high glucolysis. Instead, the synthetic analog deoxyglucose is used since it is to a large extent phosphorylated in cells and might bind irreversibly to key enzymes in the glycolytic pathway and thereby accumulate and allow accurate detection. Radioactive deoxyglucose without phosphorylation easily passes cell membranes and is commercially available. This is the most often used precursor when energy metabolism is analyzed. The steps in the "citrate cycle" (or Krebs cycle) can be analyzed through the use of other radioactive tracers. Glycosylation The glycosylation of any macromolecule in a biological system can be analyzed by adding radioactive carbohydrates of the suitable type and then analyzing how much radioactivity is associated with the studied macromolecule. One limitation is the accuracy by which the studied


molecule can be isolated through available purification methods. Other metabolic processes Carbohydrate and lipid synthesis can be studied by using more or less specific radioactive precursors. Methylation processes can be studied by applying relevant radiolabeled precursors and melanin synthesis can be studied through the use of radiolabeled thioureas. Preformed melanin can be analyzed with several substances since melanin is used by the body as a scavenger for toxic agents. Radiolabeled analogues of such agents might then be used for the analysis. The list of possibilities for metabolic studies using radioactive substances can be made very long and it seems that most metabolic processes can be studied if action is taken to radiolabel relevant precursors. In principle, at least one precursor must be known and radiolabeled and this precursor then serves as a starting point for the studies under consideration. The limitations thereafter are the limitations of the biochemical methods to analyze metabolites of the radioactive test substances. An economic limitation might be the high costs to develop a radiolabeling technique for a new substance. Uptake and processing of new compounds In compound development (e.g. development of new chemotherapy agents, metabolic inhibitors or targeting agents for radionuclide therapy) it is necessary to characterize the biological properties of the compounds in some detail. This means analysis of the cellular binding, internalization, retention and degradation as well as the in vivo properties such as half-life in the systemic circulation, uptake and metabolism in the liver, uptake and release from the kidneys, and uptake in many other organs. By labeling the compound with radioactive nuclides, it is possible to follow such processes. If the compound is of low molecular weight, the most attractive approach is to exchange stable nuclides, most often carbon or hydrogen, for the same type of nuclides in the radioactive form, in this case e.g. 14C or 3H. With this procedure the chemical and biological properties of the

compounds are not disturbed. If the compound is macromolecular (a protein, glycoprotein, carbohydrate etc.), it is often possible to label it with, for example, radioactive iodine without disturbing the biological properties of the molecule too much. The availability of radiochemical reagents for the radiolabeling varies from case to case.

Receptor-ligand interactions

If the binding of a ligand (e.g. a growth factor) to a receptor is to be characterized in some detail, it is convenient to use a radioactive ligand. The binding of the radioactive ligand can be quantified in detail and related to the number of cells or to the mass of the biological sample under physiological conditions. The specificity of the radiolabeled ligand can first be analyzed using native, non-radioactive ligand in excess (at least tenfold excess) during the time of analysis. If the radiolabeled ligand binds specifically, the binding should under such conditions be displaced more or less completely. If there are unspecific interactions between the radioactive ligand and the biological test material, the radioactivity signal cannot be efficiently displaced. In cases where the radioactive ligand binds specifically, one can, at 4 °C, analyze binding versus concentration of the added ligand (to obtain saturation) and also gradually displace the radioactive ligand with increasing amounts of non-radioactive ligand. Such analysis gives input data for a so-called "Scatchard analysis", in which the affinity and the number of binding sites can be quantitatively determined for the ligand-receptor interaction. Such applications are of importance in the development of targeting agents for imaging and/or therapy of spread tumor cells and metastatic growth.
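The Scatchard analysis mentioned above can be summarized in a few lines of code: for each concentration point one computes bound/free and plots it against bound; the slope of the fitted straight line is -1/Kd, and the intercept with the bound axis gives the number of binding sites (Bmax). The data points below are invented for illustration, and real data would of course require correction for unspecific binding.

import numpy as np

# Scatchard analysis sketch: bound/free versus bound should be a straight line
# with slope -1/Kd and x-intercept Bmax. The data are invented for illustration.

bound = np.array([0.8, 1.5, 2.6, 3.4, 4.0])   # bound ligand (nM)
free  = np.array([0.5, 1.1, 2.5, 4.5, 8.0])   # free ligand concentration (nM)

ratio = bound / free                          # y-axis of the Scatchard plot
slope, intercept = np.polyfit(bound, ratio, 1)

kd = -1.0 / slope                             # dissociation constant (nM)
bmax = -intercept / slope                     # number of binding sites (nM)

print(f"Kd   = {kd:.2f} nM")
print(f"Bmax = {bmax:.2f} nM")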


Antibody-antigen interactions

If the binding of an antibody to an antigen is to be characterized in some detail, it is, in the same way as described above for ligand-receptor interactions, convenient to use a radioactive antibody. The specificity of the radiolabeled antibody can be analyzed using native, non-radioactive antibody in excess. If the radiolabeled antibody binds specifically, the binding should under such conditions be displaced more or less completely. As for ligand-receptor interactions, one can also in this case analyze binding versus concentration, at 4 °C, and gradually displace the radioactive antibody with increasing amounts of non-radioactive antibody to obtain input data for a "Scatchard analysis". An application is, also in this case, the development of targeting agents for imaging and/or therapy of spread tumor cells and metastatic growth.

DIAGNOSTIC NUCLEAR MEDICINE

Development

From a historical point of view, the initial and most common strategy for the diagnostic use of radionuclides has been to give e.g. 99mTc (a pure gamma emitter) in salt form (or associated with some biomolecule) and then to study the uptake pattern in normal healthy persons. The spatial distribution of the radioactivity was studied in pictures obtained using different types of gamma cameras. Different diseases could then be studied by analyzing aberrations in relation to the so-called "normal" pictures. One striking example is the studies of the breakdown of the blood-brain barrier. It was found that this barrier was locally disrupted in patients with malignant gliomas, and analysis of this is today an established type of investigation.

Today, the use of radioactive nuclides for diagnosis of different diseases is very advanced. Radioactive precursors can be synthesized for specific studies (as described above in the section on biological applications) and the spread of these precursors can be followed more or less quantitatively over the body by using advanced scintigraphy (e.g. SPECT and PET). The spatial resolution is so good (in the order of 1 cm) that one can fairly well calculate the uptake of radioactivity in different tissues and organs. The metabolic processing of the precursors can sometimes be followed by chromatographic analysis of blood and urine, and by counting the amount of radioactivity in the different molecular preparations.

Basic biology versus clinical tests In principle, the same radioactive precursors as described above for basic biology studies can also be applied in clinical patient studies to try to analyze aberrant processing in the studied metabolic steps. The main difference is due to the fact that in the clinical work, gamma radiation has to be used to allow for external detection with high efficiency and precision. Thus, the pure beta emitters (3H, 14C, 32P and 35S) often used in basic biology are of no or limited interest since it is in most cases not possible to obtain tissue samples for analysis. Instead, high-energy gamma or positron (giving rise to gamma radiation after annihilation) emitters have to be applied. Examples of such nuclides are 11C, 15O, 18F, 76Br, 99mTc, 110In, 111In, 123I, 124I and 131I. The fact that some of these nuclides have a very short physical half-life puts special requirements on the radiolabeling and the handling of the nuclides. It also gives certain advantages since few radioactive molecules need to be attached to the test compound to have a certain number of radioactive decays during the time of analysis. The discussion on clinical applications below follows, due to pedagogical reasons, the same order as for the biological applications. This is a somewhat unusual order in comparison to what is used in clinical textbooks on nuclear medicine where simpler methods such as the use of free iodine for studies of uptake in the thyroid are mentioned first and more complicated investigations employing macromolecules are mentioned later. However, to start with discussions on genes and DNA-mRNA transcription (as in most biological textbooks) it is clearly seen how much more biologically advanced all laboratory techniques are in relation to the available clinical methods. This is hoped to serve, for the reader, as an inspiration for further efforts to develop the clinical methods. DNA synthesis can probably be fairly well studied in different tissues in patients through the use of 76Br labeled thymidine (11C labeled thymidine is probably less suited because of the


short half-life of 11C). The technique is fairly new, and tests are in progress to analyze DNA synthesis in growing tumors. Brominated thymidine (Figure 1) has to be synthesized at special laboratories designed for fast synthesis of radioactive nuclides. Genes and transcription The presence of certain gene sequences in DNA and the corresponding DNA-mRNA transcription activity may, in the future, be analyzed in the intact body via hybridization techniques. Oligonucleotides containing the gene sequence to be analyzed should then be labeled with radioactivity and injected into the patient. The analysis at the clinic should in most cases be non-invasive and made by external monitoring of the localization of the applied radiolabeled oligonucleotides. Of course, there are still many unsolved problems relating to this problem like: How can the oligonucleotides find their way to the target organs? How can they find their way over the cell membranes to finally reach the nucleic acid of the targeted cell? How to prevent degradation of the oligonucleotides before they reach the target? The expected developments in gene therapy will hopefully help to obtain suitable transport means for such oligonucleotides. Thus, the technique is not available today but will hopefully be available in the future. The only realistic possibility available today is the analysis of operation material and biopsies with the same techniques as applied in basic biology (in vitro diagnosis). RNA synthesis RNA synthesis can in principle be studied using radioactive uridine in parallel to the use of radioactive thymidine for studies of DNA synthesis. However, the use of gamma emitters for studies of uridine is not so developed, and we therefore have to wait for further progress from the involved research laboratories. Specific proteins Radiolabeled antibodies with specificity for the protein to be analyzed can in some cases be used to externally visualize the presence of the

protein in different parts of the body. Whether the analysis can be made in a quantitative manner depends on the availability of the protein in the tissue, the antibody penetration properties and on the scintigraphy technique applied. This technique, often called immunoscintigraphy, radioimmunolocalization (RIL) or radioimmunodiagnosis (RID), is presently under intense development and we foresee interesting applications in the near future. Protein synthesis The overall protein synthesis in tissues and organs can, at the clinic, be studied rather accurately through the use of radioactive amino acids. One amino acid used for PET-based analysis is 11C labeled methionine which has, for example, been used to monitor the increased amino acid uptake (and possibly also increased protein synthesis) in different types of tumors. In principle, all types of amino acids can be labeled with gamma or positron emitters and applied in such studies. An interesting aspect is to try to distinguish between amino acid accumulation due to changed transport in certain diseases and the changed protein synthesis. The parallel use of amino acid analogs, one that can only be taken up intracellularly and one that can also be incorporated in proteins, might give future possibilities to study this. Phosphorylation Phosphorylation processes in patients will hopefully be studied in the future applying phosphate analogs. This is at present the subject for research and development. Glucolysis and energy metabolism The amount of glucolysis in different tissues can be measured quite accurately applying 18Flabeled deoxyglucose, often called FDG. The uptake in different tissues is assumed to reflect the amount of glucolysis, and it has in many cases been found that tumors have an increased uptake. Suitable precursors for studies of the citrate cycle will hopefully be developed in the future.


Glycosylation and other metabolic processes The glycosylation of any macromolecule can, in principle, be analyzed by adding relevant radioactive carbohydrate and then analyzing how much radioactivity is associated with the studied macromolecule. The limitation is, if blood or urine samples are not enough for the analysis, that tissue samples have to be taken from the patient and the studied molecule isolated through available purification methods (chromatography, gel electrophoresis). Carbohydrate, lipid and melanin synthesis and methylation processes can be studied by using more or less specific radioactive precursors and following similar strategies as indicated for the studies of glycolysation. Some progress has been made along these lines but there is a lot of developmental work before even a minor part of all potential possibilities can be exploited. Therapeutical compounds In the development of new chemotherapy agents, metabolic inhibitors and targeting agents for radionuclide therapy, it is necessary to characterize the biological properties of the compounds. This is partly made in the laboratory but it is also important to study parameters like uptake in different tissues to allow for estimation of the half-life in the systemic circulation, uptake and metabolism in the liver, uptake and release from the kidneys, and uptake in many other organs. By labeling of the compound with gamma- or positronemitting nuclides it is, in principle, possible to follow such processes in the patient or in volunteers. The investigations should be complemented by chromatographic analysis of blood and urine to give knowledge about the presence of degradation products. Molecular interactions If the binding of a ligand or an antibody to a receptor or antigen is to be characterized directly in a patient, the difficulties are much larger than when analyzing such phenomena in the laboratory. The difficulties partly relate to the large volume of the patient requiring

enormous amounts of compounds for studies of saturation and displacement. It is also very difficult to assure that the transport systems in the body allow for interactions to take place in a representative and reproducible way. Furthermore, there are difficulties due to metabolism of the labeled ligands and antibodies (a correct "Scatchard analysis" normally requires inhibition of degradation at 4oC). It is of course not possible to distinguish degradation products from the original substance through the necessarily non-invasive analysis systems (gamma cameras). In spite of these difficulties, scientists are seriously attempting to develop models through which scintigraphic dynamic data can be treated in terms of molecular flow and interactions. Analysis of operation material and biopsies If, at the clinic, there are possibilities to obtain fresh living tissue samples that either can be cultured or prepared otherwise as any biological sample in experimental biology, then all the possibilities for analysis listed above (see biological applications) are available. This is often called in vitro diagnostics and is a growing research area. However, we foresee an interesting development regarding in vitro diagnostics in the future.

