Computers Application in Radiology

By: Dr. Awadh Ali Alqubati

Outline

Introduction
Section I: History of computerized radiography
Section II: Computer basics
Section III: Pixels and Voxels
Section IV: Hardware used in digital radiography
Section V: The digital imaging processor
Section VI: Fundamentals of Computed Radiography (CR)
Section VII: Overview on using the CR System
Section VIII: Concepts of Direct Digital Radiography (ddR)
Summary and References


Course Objectives

Upon completion, the reader will be able to:

Define basic terms of the binary computer system: byte, kilobyte, megabyte, gigabyte, and terabyte.

Define the terms pixel and voxel and their relationship to the digital imaging gray scale.

Discuss the formation of a digital image and digitization.

State the ionizing radiation range for photostimulable phosphors using electromagnetic radiation and alpha particles.

State other medical or scientific applications for photostimulable phosphor imaging besides CR and DR imaging.

Discuss how the gray scale seen with digital imaging is produced electronically.

Discuss the historical aspects of the development of computerized radiography.

Define the term image matrix and what is meant by gray-scale dynamic range.

Calculate the number of pixels an image may have, given the matrix size.

State two components of a computer's CPU and discuss their functions.

List two components of the digital image processor and discuss the application of each to imaging.

List the basic hardware components of a CR imaging system.

Define what is meant by the term photostimulable phosphor.

Discuss the process of photostimulated luminescence.


State the wavelength of light needed to cause photostimulated luminescence and the wavelength of light emitted from the phosphors during laser scanning of the plate.

Discuss why the photostimulable plate must be erased following each exposure and reader scan.

Define the term exposure index and discuss its role in technique selection by the technologist.

Discuss text information data entry into the CR unit using the RIS/HIS broker.

Discuss the process of cassette labeling for reader algorithm selection during processing of the image plate.

State the role of CR image capture into PACS for image display and storage.

Discuss what direct digital radiography (ddR) is and how it improves on analog radiography and computed radiography.


Outline

Introduction

I. History of computerized radiography

II. Computer imaging basics:
   a. Binary system; bits and bytes.
   b. Digital imaging shades of grey.
   c. Formation of the digital image and digitization.

III. Pixels and voxels
   a. Gray-scale range and dynamic range.
   b. Image matrix and pixels.
   c. Spatial resolution and pixel size.
   d. From pixel to voxel.
   e. 3D reconstruction.

IV. Hardware used in digital radiography
   a. The CPU and its component features.
   b. Computer memory: primary and secondary memory.
   c. Applications, programs, routing software.

V. The digital image processor
   a. Analog-to-digital converter.
   b. Look-up tables (LUT) and their functions.
   c. ALU and array processor.

VI. Fundamentals of computed radiography (CR)
   a. Photostimulable phosphor plates and cassette holders.
   b. Mechanisms of image storage in phosphors.
   c. Exposure characteristics of photostimulable phosphors.
   d. Photostimulation using the laser scanner and image amplification.
   e. PMT and image signal focusing.
   f. Image viewing on the CRT.
   g. Exposure index.

VII. Overview on using the CR system


Introduction

Radiography has evolved from screen-film imaging into a highly integrated, high-quality system for image and information acquisition, display, archival, and retrieval. The characteristics of images produced and processed under analog standards are the result of many consultations with radiologists over decades, which led to improved discrimination of image detail; the same is now true of digital imaging. As with conventional x-ray film-screen imaging, radiographic image quality for digital imaging remains driven by radiologist preference and tolerance for image noise. Through much consultation with radiologists and the American College of Radiology, digital standards that display fine image detail and yield high sensitivity and specificity are now in place. These standards are continuously being evaluated and are part of an ever-evolving Digital Imaging and Communications in Medicine (DICOM) language. Notwithstanding, the radiographer still controls certain factors that determine the quality of a digital image, including the use of ionizing radiation, the handling of raw image data sent to Picture Archiving and Communication Systems (PACS) or to film printing, and patient positioning. In this module we will discuss some of the principles of digital radiographic imaging that, when practiced by the technologist, may enable the radiologist to resolve diagnostic issues.

Time has proven that the generic performance of x-ray equipment, radiographic technique selection (mAs and kVp), and film processing within a given institution and between institutions is variable enough to make optimal imaging for all viewers under screen-film standards impossible. The need for optimization of radiographic images has spawned a new way in which radiographs are acquired: digitally. The use of computers to capture and process radiographs has given the viewer new tools that allow dynamic manipulation of digital images through processes like changing algorithms and windowing. Windowing allows the viewer to change the contrast and density of an image to one's liking but does not permanently change the stored raw data. With digital imaging each viewer has the flexibility to control subject and radiographic density while viewing a radiograph.


A fundamental difference between PACS and computed radiography (CR)/direct digital radiography (ddR) is that CR/ddR allows the technologist to change the raw data prior to saving it. If the technologist changes the raw data prior to sending it to PACS, the original data is permanently lost to PACS and therefore to diagnostic and clinical workstations. Radiography professionals must understand when and how we may manipulate raw digital image data and its impact on others who may make algorithm and windowing changes when viewing stored images from PACS.

In addition to achieving high quality digital images with CR/ddR imaging, implementing ALARA (as low as is reasonably achievable) has been very difficult. The difficulty lies in trying to use dose reduction techniques commonly practiced with analog film imaging. Principles that apply to film-screen imaging, mainly selecting mAs, kVp, and source-to-image distance (SID), decreasing object-to-image receptor distance (OID), or trying to achieve wide-latitude techniques with automatic exposure control, have transferred nicely to digital imaging. But maintaining high diagnostic imaging standards within the noise tolerance most radiologists will accept while practicing ALARA has been very difficult with digital imaging.

Analog film production has reached its full potential for achieving wide exposure latitude and minimal patient dose; however, the communication, display, duplication, and archiving of film images are fixed in antiquity. Fixed images on a film can only be viewed by one set of observers and require shuttling between physicians to be viewed. Furthermore, the incidence of lost films and the archiving pitfalls of analog imaging have reached the limits of radiographers' tolerance. Digital computerized radiographic imaging (CR) has achieved technological improvement over analog film imaging by optimizing each function of radiographic imaging, from production through its subsequent communication layers of image display, archiving, and image retrieval, as independent developments that enhance the total diagnostic process. The basic advantages of CR and direct digital radiography over analog imaging are the optimization of image acquisition, image display, image transmission, and image storage as independent but closely networked functions.


The key word here is optimization. The management of digital images through PACS has many functions within each specific layer; for example, digital images can be stored on multiple servers, on optical disk, and on digital linear tape for back-up files. The advantage is that these images are never lost, are easy to retrieve, easy to purge, and easy to distribute, and privacy is protected by passcode and user authorization. These optimizations are not possible or cost effective with analog films. PACS should be an integral part of any CR/ddR system, and existing radiographic equipment can be used with CR and PACS with minimal modifications.

Computerized x-ray-like imaging is not unique to radiography; it is used throughout the scientific community in areas like molecular biology and chemistry for autoradiography and pulsed-field gel electrophoresis. Its widespread use is due to the sensitivity of photostimulable phosphors and improvements in light detector technology. Modern detectors can differentiate light emission by photostimulation for electromagnetic radiation exposures up to slightly greater than 100 milliroentgen (mR), and as low as the equivalent of 0.195 alpha particles per square millimeter for particulate radiation. This makes photostimulable phosphor technology a very useful and powerful tool for resolving radiation patterns traditionally captured on radiographic film in X-ray diffraction, protein crystallography, and electron microscopy techniques.

Computerized radiography is a digital imaging science that uses photostimulable phosphors rather than photographic screens and film to create images. In this module we will discuss the characteristics of these phosphors and how CR images are formed, as well as the various components of the computed radiography system.


Section I: History of computerized radiography

As early as 1975 the Eastman Kodak company patented a device that used infrared-stimulable thermoluminescent phosphors to release a stored image. Unfortunately, its intended application was toward improving a nearly antiquated microfilm storage system. The FUJI Photo Film Company recognized the far-reaching possibilities of this new discovery and in 1980 patented the first process that made use of photostimulable phosphors to record a reproducible radiographic image. The common finding of both applications was that some phosphors (called storage phosphors, a.k.a. photostimulable phosphors) could capture an image from absorbed electromagnetic or particulate radiation. Part of the energy stored in the phosphor was afterwards released when stimulated by a helium-neon laser. By detecting the phosphor's luminescence with a photomultiplier tube (PMT), an electrical signal could be generated and ultimately reconstructed into a digital radiographic image; computerized imaging was born.


Section II: Computer basics

Computers are used ubiquitously throughout radiology; however, the focus of this module is their use in medical imaging modalities such as nuclear medicine (NM), ultrasound (U/S), magnetic resonance imaging (MRI), computed tomography (CT), direct digital radiography (ddR), computed radiography (CR), digital subtraction angiography (DSA), bone densitometry (DEXA), and others. Because there are so many different vendors, each with their own special equipment features, it is impossible to cover all the particulars of any given manufacturer. What we can do is give an overview of how digital imaging processors, computer hardware, and software are designed to function together to produce electronic patient image files. Before we can indulge in the smorgasbord of information on the subject, however, we must discuss some of the underpinnings of computer technology in order to place our discussion in its proper context.

Computers manipulate data using what are called binary numbers, meaning numbers built from two digits. A binary system requires that any binary digit can have only one of two possible values. In computer technology the two digits used are zero and one ("0" and "1"), and they are referred to as binary digits, or "bits". Using these digits, many combinations of numbers are spread out on a grid of rows and columns called a matrix. The matrix can have thousands, even millions, of tiny "bits" of information in the form of varying densities that make up a digital image.

Digital medical imaging is now mainstream radiology, validated monthly in every medical journal nationally and internationally. Almost all dialog on radiology imaging issues, case studies, new procedures, and the like is referenced to digital imaging. Therefore, it is vitally important for the radiographer to understand how digital imaging works, as this "not so new" technology is now an integral part of our armamentarium of imaging skills. No doubt you have heard or seen computer advertisements that use the words bits and bytes, such as an ad for a 16-bit Pentium processor with 256 megabytes of RAM.


These words have meaning for our profession and the practice of modern radiography. How digital information is acquired and displayed is partly within the control of the radiographer. In this section, we will look at some concepts in a way that can edify our understanding of digital imaging, particularly its application in computed radiography and direct digital radiography.

Bits and Bytes

Mathematics uses symbols called digits, with magnitudes ranging from 0 to 9, to represent quantity. They can be combined in a variety of ways to create larger or smaller values and fractions thereof. Digits, and therefore bits, have weighted (place) value. For example, the number 4,325 is understood to mean that the 4 fills the thousands place, 3 the hundreds place, 2 the tens place, and 5 the ones place. Mathematically, numbers can be expressed in a variety of equivalent ways. Using the number 4,325 we can illustrate this point:

(4 x 1000) + (3 x 100) + (2 x 10) + (5 x 1) = 4000 + 300 + 20 + 5 = 4325

Or, as powers of 10:

(4 x 10^3) + (3 x 10^2) + (2 x 10^1) + (5 x 10^0) = 4000 + 300 + 20 + 5 = 4325

The number system we all learned in elementary and secondary school taught us the basic functions of the base-10 system: addition, subtraction, multiplication, division, algebraic and geometric expression, and the like. This system uses ten different digits with values from zero to nine. Nevertheless, any base can be adopted, such as a base-8 system, or a base-14 system that would require us to invent new digits. So long as we all agree on the terms of numerical use and their meanings, the number of digits can vary. In computing, we use what is called a binary number system, or base-2 system, because it is simple and limitless data combinations are possible without redundant lettering.
As we have stated, in the binary number system there are only two digits, zero and one ("0" and "1"). In all fairness to our base-10 system, we could have computers operate on ten-digit technology, but the expense of doing so would be prohibitive. Computer binary codes use only the two digits "0" and "1" to make numbers of any mathematical magnitude. Consider the use of binary coding to count from 0 to 20:

0 = 0        7 = 111     14 = 1110
1 = 1        8 = 1000    15 = 1111
2 = 10       9 = 1001    16 = 10000
3 = 11      10 = 1010    17 = 10001
4 = 100     11 = 1011    18 = 10010
5 = 101     12 = 1100    19 = 10011
6 = 110     13 = 1101    20 = 10100

To our advantage, bits are not referred to or used singly in assembling computed data; instead, they are bundled together as a collection of 8 bits, which is called a byte. Therefore, eight bits equal one byte. This terminology is more than just an arbitrary arrangement for our numbers to have a shared meaning, analogous to a dozen being equal to twelve. What is gained by grouping bits into bytes is more mathematical combinations of our two digits, permitting more discretely identifiable values. For every one byte of numerical formatting, 256 values or details can be represented. Each value can be a letter of the alphabet, a character like those on a typewriter, a symbol or part of a language, a representation of light brightness, a unique radiographic density, or any of many other possibilities. Byte groupings give our numbering a slightly different representation. Consider the byte representations of the values 0 through 255 below:


0   = 00000000
1   = 00000001
2   = 00000010
...
254 = 11111110
255 = 11111111

With 1 byte it is possible to differentiate 256 shades of grey in a matrix. With each added bit, the number of potential details is doubled. For example, 9 bits will discriminate 512 density differences, 10 bits make 1,024 shades of grey possible, 11 bits correspond to 2,048 densities, 12 bits can show 4,096 grey shades, and so forth. A computer device that uses 16 bits will give each sample a density range of 0 to 65,535:

0     = 0000000000000000
1     = 0000000000000001
2     = 0000000000000010
...
65534 = 1111111111111110
65535 = 1111111111111111
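To make the pattern concrete, here is a minimal Python sketch (illustrative only, not part of any imaging system) that prints the number of grey shades available at several bit depths:

# Each added bit doubles the number of distinguishable grey shades (2 ** bits).
for bits in (1, 8, 9, 10, 11, 12, 16):
    shades = 2 ** bits
    print(f"{bits:2d} bits -> {shades:6d} shades (values 0 to {shades - 1})")

# The last line printed reads: 16 bits ->  65536 shades (values 0 to 65535)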


Consider that each bit represents a yes/no, or on/off, switch for a specific detail or density. By design, "on" (or "1") is typically represented by a low voltage of about 5 volts or lower, and "off" (or "0") by a voltage near zero. Modern electronics manage these changes in low voltage using microchip technology. These highly complex circuitries are compressed to form small plastic circuit boards and are given the name "integrated circuit." Such circuits are made of silicon or other semiconductor materials, which have the ability to move electrons and thereby perform compound electrical processes. These circuits are best known as "silicon chips." Voltage within the chip (generally 5 volts) represents the binary digit "1," and the binary digit "0" is represented by zero voltage. Another way of looking at the binary code is that each digit represents either an event or the absence of an event. Voltage or the absence of voltage can represent a yes/no switch, a point on an optical disc that is marked or unmarked, or a magnetized region of a streamer tape or card stripe carrying information such as one's ATM bank card number. What is amazing about the electrical component of digital information is the very high speed at which the two voltage levels can be changed within a circuit, resulting in rapid manipulation of digital binary information.

We should all applaud the scientists who developed low-voltage microcircuitry, not only because of the applications of low voltage, but because it gives computers a low heat output and greatly reduced the size of all components. As a result, computers no longer require special air-cooled rooms to help with the distribution of heat. Low-voltage microchip technology helped bring computers into the mainstream for all occupations, business, and personal use.

Getting back to our discussion of the byte, we can see why it is the most common base unit of binary-coded information. In computer language a byte is also called a character, often abbreviated char. Bytes are used to hold individual coded characters in a text document. An example of how the code is applied to a character set is the ASCII character set. ASCII is a character code that makes use of binary numbers to store text documents both on disk and in memory. You may be using this code when you type a document on your computer, such as the one you are now reading. The binary coding is used to create numbers and characters, as well as the spaces between words, punctuation, etc.
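As a small illustration of the ASCII idea above, the following Python sketch (purely illustrative) shows the byte value and bit pattern behind two text characters:

# Each ASCII character is stored as one byte; ord() gives its numeric code
# and the 08b format shows the corresponding 8-bit pattern.
for ch in "CR":
    code = ord(ch)
    print(f"'{ch}' -> decimal {code} -> binary {code:08b}")

# 'C' -> decimal 67 -> binary 01000011
# 'R' -> decimal 82 -> binary 01010010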


Basic coding requires a lot of bytes; therefore, plenty of memory is required for the ongoing operations of computers used in radiographic imaging. Prefixes such as kilo (kilobyte), mega (megabyte), etc. are common terms; however, they do not correspond exactly to their conventional S.I. values because computer memory is counted in powers of two (a kilobyte is 2^10 = 1,024 bytes rather than 1,000). A chart such as the one below should be referenced for exact size conversions. When we consider the enormous size of medical image documents that will make up the patient's electronic film file, most institutions will need storage on the order of terabytes to accommodate growth.
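The power-of-two relationship behind those prefixes can be sketched in a few lines of Python (illustrative only; a vendor conversion chart should still be consulted for exact figures):

# Binary storage prefixes are powers of 1,024 (2 ** 10), so they differ
# slightly from the decimal SI prefixes of the same names.
units = [("kilobyte (KB)", 1), ("megabyte (MB)", 2), ("gigabyte (GB)", 3), ("terabyte (TB)", 4)]
for name, power in units:
    print(f"1 {name} = {1024 ** power:,} bytes")

# 1 kilobyte (KB) = 1,024 bytes ... 1 terabyte (TB) = 1,099,511,627,776 bytes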


Section III: Pixels and Voxels

A digital radiographic image is formed as an electronic image that is displayed on a grid of rows and columns called an image matrix. An image can be made of thousands, preferably millions, of these small cells. Each cell in the image matrix is called a picture element, or pixel. With digital imaging, each pixel has a numerical value that determines the brightness (density) or other details of the cell. Each cell has its own dynamic range of values according to the number of bits of processing; this is called a gray-scale range. Remember that for one byte there are 256 possible values for the density of each pixel, and with 16-bit processing there are 65,536 possible values (0 to 65,535) any cell can have. These densities can be correlated with the energy of the photons that strike the phosphors in the recording medium from which the image will be reconstructed. So if we use, for example, 16-bit processing and millions of cells in our matrix, we can have tremendous latitude for exposure and image detail. Using our binary code of "0" and "1", a different density is assigned to each of the 65,536 numbers in our gray-scale range, and the brightness of the phosphor corresponding to the area covered by each pixel can be assigned.


Our example above of a knee radiograph shows a 10 x 10 matrix which contains 100 pixels. A digital computerized radiography image matrix is at least 512 x 512 which contains 262,144 pixels. This pixel size is comparable to analog screen-film imaging. Advanced CR systems can produce images using a 1024 x 1024 matrix or greater which will contain more information than a comparable analog image. To determine the number of pixels in an image matrix, simply multiply the column length by its width. How many pixels are there in a 1024 x 1024 matrix?

Answer: 1024 x 1024 = 1,048,576 pixels.

Spatial resolution of a digital image is related to pixel size: the smaller the pixel size, the greater the spatial resolution. Therefore, for the same field of view, a 1024 x 1024 matrix will provide better resolution than a 512 x 512 matrix. The picture to the left demonstrates the dynamic range of gray that can be achieved with each pixel to form the digital image. Pixel size alone does not determine the detail of an image; the range of values each pixel may have is also very important, as well as the number of pixels. We have already stated that the range of values each pixel may have in a matrix is called the dynamic range. The dynamic range is a function of both the hardware and the software that convert the image into digital form. For example, the dynamic range of an 8-bit processor is 256 densities or details (values 0 to 255). With all other factors equal, 8-bit processing will have less gray-scale resolution than an image produced by 9-bit or 10-bit processing. The dynamic range is expressed in bits, meaning an 8-bit image will have less clarity and gray scale than one from a 10-bit or 12-bit processor. Many of today's computed radiography and direct digital radiography images use 16-bit processing or higher.
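The pixel-count calculation above is simple enough to express in a couple of lines of Python; the 512 and 1024 matrices are those discussed in the text, and the larger one is added only for comparison:

# Total pixels in an image matrix = rows x columns.
def matrix_pixels(rows: int, cols: int) -> int:
    return rows * cols

print(matrix_pixels(512, 512))      # 262144
print(matrix_pixels(1024, 1024))    # 1048576
print(matrix_pixels(2048, 2048))    # 4194304 (a larger matrix, for comparison)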


Voxel

We are all familiar with 2D imaging commonly used in some radiography modalities. For example, computed tomography (CT) uses thin-slice axial images to reconstruct coronal and sagittal 2D images. In some cases, like a displaced acetabular fracture or pelvic ring fracture, 3D images may be requested. Computer scientists have made great improvements in 3D imagery, proven by its reliability for diagnostic information. If we consider the CT axial image as our starting point, successive pixels are strung in depth order to form a three-dimensional representation of the scanned part. The process includes converting geometric representations into volume sets called voxels. These voxels approximate a continuous object using a process called voxelization. Each data point is a geometric cube called a voxel, and a volume of voxels is called a voxel space. It is sometimes easier to think of a voxel as a volume pixel element.
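A minimal sketch of the pixel-to-voxel idea, assuming NumPy and a hypothetical stack of 200 axial slices of 512 x 512 pixels (the numbers are illustrative, not from any particular scanner):

import numpy as np

# Stacking 2D pixel matrices (e.g. CT axial slices) in depth order produces
# a 3D voxel space; each element of the 3D array is one voxel.
slices = [np.zeros((512, 512), dtype=np.uint16) for _ in range(200)]
voxel_space = np.stack(slices, axis=0)   # shape: (200, 512, 512)

print(voxel_space.shape)                 # (200, 512, 512)
print(voxel_space[100, 256, 256])        # value of a single voxel (slice, row, column)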

The formation of a 3D radiographic image is a very complex process and will not be discussed further here; however, the viewing window of a 3D image is worth mentioning. The viewing window defines the orientation of the voxel space and what part of that
space is presented on the monitor. This is important because 3D images are not transparent like a radiograph. There are now new algorithms that demonstrate a transparent 3D view, but this requires expensive software upgrades to the workstation. In our picture of the hip (above) the posterior aspect of the pelvis and hip are demonstrated. It would have been just as easy to demonstrate the anterior view or any number of views in cine format from 0 through 360 degrees of rotation.


Section IV: Hardware used in digital radiography Computers are an integral part of radiographic imaging whether it is a CT scanner, a MR scanner, an ultrasound machine, or a CR reader; they are all designed for compatibility on a PACS network. When considering the purchase of a new piece of equipment the administrator must make sure it is compatible with their institutional network strategy. For example, new equipment must be compatible with radiology and hospital information systems. And the input/output speed of the computer must not slow down an existing PACS network. The hardware in these devices include one or more central processing units (CPU), a main memory capacity, a secondary memory device, input/output data transfer devices, and network connectivity interfacings. For the most part hardware should only be purchased if it meets DICOM connectivity standards. For instance, one would not purchase a CD-ROM burner thinking it will reduce the need to print films without checking to make sure it meets current DICOM connectivity standards. Perhaps the most important hardware component of a computer is the central processing unit (CPU), which is the brain of the computer. It uses an integrated circuit called a ‘microprocessor’ to interpret and execute functions and to manipulate data. The CPU has two main components: the Control Unit (CU), and the Arithmetic/Logic Unit (ALU). The control unit interprets instructions contained in the computers programs as well as executes those instructions. For example, the CPU often sends commands to other components of the computer to control internal as well as external operations. Another component of importance to our study is the ALU. Its functions include manipulations of data that require a mathematical application. Remember, bytes are essentially numbers that have a functional component and can be used for all mathematical applications. Just think of all the computing applications of a high quality calculator (addition, subtraction,
multiplication, algebraic expressions, geometry, etc.), the same is true of the computer's ALU.

A computer's main memory consists of a large number of integrated circuits that store information the user requires for immediate performance. This circuitry is generally called Random Access Memory (RAM) because it is a volatile form of memory whose contents can be lost if power to the computer is lost (e.g. an electrical glitch). The contents of RAM are rapidly erased and refilled as new information is added to a document. For instance, as an image is acquired by a CT scanner it may be sent to PACS from the computer's RAM store. RAM is more volatile with a home computer than with the computers used in radiographic imaging equipment, because when power is lost its contents are usually erased. Manufacturers of digital radiographic equipment do not rely on RAM; instead, images are immediately stored in secondary memory within the base unit. Another popular option is to provide battery back-up that maintains the electrical supply for a few minutes in case of a power glitch. Notwithstanding, a radiographic image must be permanently saved by converting it to Read Only Memory. This is the type of memory that is on a magnetic disk like the hard drive, or on an external medium like an optical disk, a CD-ROM, et cetera.

Secondary memory is used to store information in permanent, erasable, rewritable form for long-term purposes. In part I of this module we talked about the optical disk and the optical disk jukebox used to store PACS images. We also talked about PACS having quick access to large image files through a network-attached server. In any case, memory is the component that allows computers to store and retrieve data. Magnetic memory is based on the principle of micro-magnetism over many localized domains. A magnetic disk is made of aluminum or a glass plate onto which a magnetic material is applied. These materials can store "bits" as local magnetism at different points on the storage material, a process called writing. Data is recovered by detecting these magnetized points and assembling them into bytes, a process called reading. When we speak of the hard drive we are referencing a hardware component that contains fixed memory on multiple plates. This memory is divided into sectors that can
identify the location of named files. Magnetic labeling of files allows the computer’s CPU to access its operating systems files, and its installed program software files. The PACS server contains multiple hard drives on which radiographic images and text data are stored. Memory is a way of storing radiographic images on to hard drives. Having a storage component as a node on the PACS network is a huge advantage because most digital radiographic equipment does not contain enough memory for vast long-term image storage. Besides most CR operator panels can retrieve images from PACS just like a workstation and display them. This is because memory on the PACS network is an open software program that allows a user computer to access image files. The user is also permitted to manipulate image quality and perform various software functions on it without permanently changing it in PACS. Another method of data storage is digital linear tape used to archive data for disaster recovery. The difference between tape and an optical disk memory system is the linear nature of DLT storage makes routine recovery from a tape a lengthy process. With disk technology the read/write arm can access any data point on the disk effortlessly. Because data is not sequential on a disk, writing and retrieval of data is faster than with linear tape. DLT is therefore only good for back-up disaster recovery of stored images and is not accessed by the PACS archive server for retrieval of image documents to workstations. An optical disk, which is similar to a DVD, contains more memory and takes less recovery time than tape media. In order to communicate with the computer’s CPU to give it instructions, the user will need certain peripheral devices. Some of these we are all familiar with such as the mouse, bitpad, joystick, keyboard, and so forth. These devices are quite handy especially since computers today are windows driven and the keyboard is almost always used to enter text data into specific data fields. As we will discover later in our discussion on digital radiography most manufacturers of digital equipment are now providing a touch screen keypad called a remote operator processor (ROP) which is a peripheral input device. Output devices like a laser printer or a CD-ROM burner are quite common in radiology practice as well.


Communication pathways for a digital imaging system can be compared to the central nervous system. Image data is communicated along specific routes controlled by instructions that direct it to various network components. Communication pathways route data to memory and retrieve it, and transmit in DICOM subclass protocols for display, printing, and the like. This is why with PACS networking, the bus topology works better than other architectural schemes. The topology of a network plays a role in the speed at which communications are handled. To have smooth flow of information between computers on a PACS network, which consist of all imaging computers and radiology information systems, they must have compatible send/receive rates. Network cables must be equal to or preferably greater than the input/output speeds of all computers on the network. Each device’s CPU including the archiving server should have compatible data transfer rates to prevent “network failure” discussed earlier in part II. The hardware alone does not determine the functionality of a computer system; special software is required to orchestrate how its components will operate. For the CPU to perform its duties precisely, instructions from the computer’s operating system (DOS, UNIX, or MacOS) are required. In addition to the computer’s operating system applications, software is required to manage specific functions such as database access, graphics, and in our case digital imaging processing for computed and direct digital radiography. Other software needs include programs software, data Editor, Library of subroutines, a Linker to link the user written programs to the subroutine library, a Compiler for translating user written programs into binary computer code and the like. All of these functions are controlled by specific software. For direct digital and computed radiography imaging, the software is just as important as is the hardware. Software upgrades are routinely needed with digital imaging and are relatively expensive.


Section V: The digital imaging processor

Some computers are used to process radiographic images. They are greatly different from general-purpose computers: they are specialized to handle large volumes of information quickly. These computers must capture, store, and retrieve information, as well as perform manipulations on it at the user's command. Digital image processing is a complex process of data analysis and image analysis which is a function of the software the computer uses. In traditional film-screen radiography many of these features are a result of the quality of the exit radiation used to form the image.

The image processor is a component of the base device's computer. It is concerned with specific tasks like image acquisition, image display, image archiving, image arithmetic functions, and transfer speed capabilities. In other words, an image acquired by a base device is acted upon by its software. If the device is connected to a PACS network, the software communicates with the CPUs of the PACS servers (archive server, workflow server, RIS/HIS server, etc.). A base device is one that produces primary image information. These devices include digital fluoroscopy, digital ultrasound, MRI scanners, gamma camera acquisition, positron emission tomography (PET), CT scanning, CR and ddR radiographic equipment, and so forth. In most scenarios the base unit sends image data to the acquisition circuitry of the digital image processor and then to the PACS server if networked.

Most base devices produce images as an analog picture that must be converted to a digital image. The image acquisition component of the digital processor is responsible for converting the analog information produced by a base unit into digital binary-coded numbers. The device that performs this function is called an Analog-to-Digital Converter (ADC). In addition to converting image data to digital data, the converter may manipulate the data and correct any deviations in it using an Input Look-Up Table.
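The idea behind analog-to-digital conversion can be sketched in a few lines of Python with NumPy; this is a toy illustration of sampling and quantization, not any vendor's ADC:

import numpy as np

# Toy ADC: normalize a sampled analog signal and quantize each sample to one
# of 2 ** bits discrete levels (8 bits gives values 0 to 255).
def digitize(samples: np.ndarray, bits: int = 8) -> np.ndarray:
    levels = 2 ** bits
    lo, hi = samples.min(), samples.max()
    scaled = (samples - lo) / (hi - lo)                     # normalize to 0..1
    return np.clip((scaled * (levels - 1)).round(), 0, levels - 1).astype(np.uint16)

analog = np.sin(np.linspace(0.0, np.pi, 10))                # stand-in for an analog waveform
print(digitize(analog, bits=8))                             # integers in the range 0..255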


Look-up tables contain registers of data points the computer uses when interpolating a connection between disjointed data bits. It stands to reason that an ultrasound image is not initially acquired as a digital image; therefore, ultrasound image data must be processed into digital information. The acquisition of a sound image into a digital signal will require some data interpolation, which is found in look-up tables etched into the computer's operating system memory. Likewise, logarithmic transformation of fluoroscopy data must be accomplished in order to have digital fluoroscopy. The digital acquisition circuitry will manipulate any data it receives that have "gaps" in them and interpolate data points using look-up tables.

Digital images must be displayed on a high-resolution monitor or printed for viewing. Binary language is used only for transferring and storing data; radiographic images are displayed in analog form. Whether viewed on a monitor or printed, both require that digital images be converted to analog form. A Digital-to-Analog Converter is the component used for this purpose. Devices such as a high-resolution monitor and most printers used in radiology today require analog-formatted data for displaying images rather than binary-formatted data. The digital-to-analog converter contains complex circuitry for this purpose. In addition to converting signals, it is responsible for some of the image manipulations we call post-processing functions. Special functions such as windowing, magnification, multiple image display, measurement functions, annotation of images, and so forth are all processed by these circuits.

Two other components that handle image data are the image ALU and the Array Processor. The image arithmetic/logic unit (ALU) is a component dedicated to managing image data. It performs complex calculations on image data, such as subtraction of binary digits to produce an image subtraction mask during digital subtraction angiography (DSA). In analog radiography, a subtraction mask must be made of an image and overlaid on the film; in digital imaging, however, our picture data can simply be subtracted, a function performed by the image ALU. The ALU is also responsible for reducing image graininess, also called noise, through a process called image averaging. The acceptable graininess of a digital image is based on the radiologist's tolerance for image noise.
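Several of the functions above (input correction, windowing, and other post-processing) are driven by look-up tables. The following Python/NumPy sketch shows the general idea with a hypothetical windowing LUT; the window values are illustrative, not from any vendor:

import numpy as np

# Windowing implemented as a look-up table: the raw pixel data is only used
# as an index, so the stored values themselves are never modified.
def window_lut(bits: int, center: float, width: float) -> np.ndarray:
    values = np.arange(2 ** bits, dtype=np.float64)
    lo, hi = center - width / 2, center + width / 2
    display = (values - lo) / (hi - lo) * 255.0             # map the chosen window onto 0..255
    return np.clip(display, 0, 255).astype(np.uint8)

raw = np.array([[1200, 1800], [2100, 3000]], dtype=np.uint16)   # hypothetical 12-bit pixels
lut = window_lut(bits=12, center=2000, width=1000)
print(lut[raw])                                             # windowed display image; raw is unchanged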


In digital imaging, much more complex manipulation of image data is required than is performed by a standard home computer's ALU. This is because the speed at which these manipulations must be performed is a critical component of the computer's workflow management. For radiographic imaging, an additional component is needed to handle digital imaging data at workflow pace. The hardware component that assists with fast data manipulation is the array processor. Essentially, the array processor is a separate CPU designed for computational speed in parallel mode rather than sequential mode; it trades operational flexibility for a gain in computational speed. Consider that many of the calculations used in digital imaging need to be done simultaneously rather than in sequence, and the amount of data flowing in a networked system can be enormous. Having a separate fast computing brain dedicated to calculation is a must. Examples of array processor functions include the reconstruction of axial CT images into coronal and sagittal planes, which is useful to imaging modalities like MRI, CT, and SPECT nuclear medicine imaging.
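The speed advantage of operating on whole arrays at once, rather than pixel by pixel, can be illustrated with NumPy; this is only an analogy for what a dedicated array processor does in hardware:

import numpy as np

# Whole-image arithmetic (array style) versus a pixel-by-pixel loop.
frame_a = np.random.randint(0, 4096, size=(1024, 1024), dtype=np.uint16)
frame_b = np.random.randint(0, 4096, size=(1024, 1024), dtype=np.uint16)

# Sequential style (slow): visit every pixel in turn with nested loops.
# for r in range(1024):
#     for c in range(1024):
#         diff[r, c] = int(frame_a[r, c]) - int(frame_b[r, c])

# Array style (fast): one operation over the whole image, e.g. a DSA-like subtraction.
diff = frame_a.astype(np.int32) - frame_b.astype(np.int32)
print(diff.shape, diff.dtype)            # (1024, 1024) int32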


Section VI: Fundamentals of Computed Radiography (CR)

The fundamental difference between computed radiography and analog imaging is the replacement of film-screen systems with photostimulable phosphor plates, and the successive innovations that followed. Digital plates require a plate reader, a link to patient text data (i.e. the RIS or HIS), and a connection to an output device such as a printer or a PACS network. The technologist needs a CR imaging system that includes storage phosphor cassettes, storage phosphor reader(s), a bar code scanner, a remote operator panel for entering patient data, and a clinical workstation for reviewing and printing from PACS.


Currently CR is a more popular purchase than ddR because existing radiographic equipment (x-ray tube systems, x-ray tables, portable machines, etc.) does not have to be modified. These existing pieces of equipment alone, however, do not constitute the full requirement for operating a CR system. It should be remembered that a major reason for investing in CR/ddR imaging is that it is the entry point for general diagnostic imaging into PACS. The advantages of CR and DR imaging over conventional analog imaging are huge and well worth the upgrade.

Photostimulable plate and cassette

Radiographers have needed to understand the mechanism of image production using screen-film technology in order to maximize image quality; the same is true of the photostimulable phosphor plate technology used in CR imaging. Furthermore, it is imperative that the radiographer understand the basic characteristics of storage phosphors and how they differ from their analog counterpart. Computerized radiography and direct digital radiography will in the near future become the standards of radiographic imaging because of their digital link to PACS and their potential for internet connectivity. In this section we will discuss the characteristics of these storage phosphors and what is accepted as the "theoretical" mechanism by which they store and release a latent image. The structure of the phosphor screen and cassette is also important to our study, as well as the process of digitization of the storage phosphor image.


The basic component of CR image capture is the photostimulable phosphor cassette. The phosphors used to coat the screen are europium-activated barium fluorohalide crystals (BaFX:Eu2+, where X is a halogen, either iodine or bromine). These phosphors are not altogether unique to CR imaging; for years, photostimulable phosphors have been used in intensifying screens for conventional film-screen imaging. The phosphors in these screens fluoresce upon exposure to ionizing radiation emitted from the x-ray tube. Radiation energy causes the phosphors to fluoresce, releasing a high fraction of the absorbed energy from the screens; the remnant energy is stored in the phosphors as a latent image. It is this stored energy, in the form of a latent image, that is used to produce the CR image, but the image must be released from the phosphors and further processed. When stimulated with laser or white light, photostimulable phosphors release light proportional to the stored energy, which can be detected by a photomultiplier tube (PMT) as an image signal.


Mechanism(s) of image storage in phosphors

The exact mechanism of photostimulated luminescence is not completely understood; however, there are a few very good current theories that explain luminescence and the linear response of photostimulation over the wide exposure values seen in diagnostic imaging. Consider that the dynamic range of exposure for photostimulable phosphors is linear over a range of greater than 10,000 to 1, whereas for analog radiographic images produced by screens it is roughly 40 to 1. What this means is that the overexposure or underexposure of radiographic images seen in conventional film-screen imaging is virtually eliminated by photostimulable phosphor imaging. This does not mean that images acquired at extreme low and high values can be optimized into a high quality image; it simply means that all values of an exposure can be represented on the final image and be discriminated. Computed radiography can detect exposures up to and greater than 100 milliroentgen (mR), which is far beyond D max for screen-film imaging. Digital radiography has been demonstrated to produce images at the high energy values used in radiation oncology to treat cancer. It can even detect low levels of particulate radiation (0.195 alpha particles per square millimeter).

Although there are several theories on the mechanism of photostimulated luminescence, we will describe the most commonly accepted model for BaFBr:Eu2+ phosphor photostimulated luminescence. The simplest explanation is that impurities in the crystal lattice are responsible for luminescence: as the concentration of impurity ions increases, the intensity of the luminescence increases. CR screens use barium fluorohalides doped with europium (europium is the impurity in the crystal). When the phosphors are stimulated with x-ray photon energy, electron-hole pairs are created. In effect, europium is raised to an excited state, and upon luminescence it is returned to its ground Eu2+ state. This mechanism holds for both spontaneous luminescence and photostimulated luminescence.


The shifting of europium from its excited state back to its ground state, for both spontaneous and photostimulated luminescence, takes about 0.6 - 0.8 microseconds. With screen-film imaging these crystals spontaneously luminesce to expose a film; with CR imaging the spontaneous luminescence occurs, and then additional photostimulated luminescence occurs when the screen is stimulated by a narrow beam of laser light. The holes, or vacancies, in the lattice are portions of the lattice normally occupied by halogens (fluoride, bromide, or iodine). These vacancies trap free electrons when irradiated and are called Farbzentren, or F-centers. Within the BaFBr:Eu phosphors there are two potential types of F-centers that trap electrons, F(Br-) and F(F-); these represent electrons trapped in the bromide and fluoride vacancies. When the photostimulable plate is exposed to laser light, usually from a helium-neon laser, the electrons in these F-centers are liberated and cause luminescence at readout.

Structure of the phosphor screen and cassette

There are some differences in the structure of phosphor screens and cassettes between manufacturers; for example, Kodak cassettes are designed to withstand 400 lbs of pressure. The strength of a cassette system is very important: if standing (weight-bearing) foot x-ray images are routinely performed at an orthopedic clinic, the technologist must be able to safely obtain them on patients of varying weights. The cassette front is made of carbon fiber and the backing of aluminum. Notwithstanding these differences, the phosphor screens are made of a base, a phosphor layer, and a protective coating. The figure below demonstrates a cross section of a Kodak photostimulable plate and cassette.


These screens are designed slightly differently from screens for film imaging. They are balanced for x-ray absorption characteristics, light output, laser light scatter, and screen thickness. These variables affect electronic noise, image resolution properties, and the speed of the imaging system. The BaFBr:Eu2+ phosphor is coated onto a base (Estar) using polymers that act as glue to hold it. A clear-coat solvent is then applied over the phosphor to seal it, protecting it from physical damage. A black base under the phosphor helps improve image resolution by reducing dispersion of light as the laser exposes the phosphors at reading; the black base also allows for a thicker phosphor layer in which photon energy is trapped. These are all mounted onto a lead sheet that absorbs excess photons and reduces backscatter, and onto an aluminum panel that is mechanically removed from the cassette during scanning. On the back of the panel is a label indicating the speed of the cassette, which in CR imaging reflects the brightness of the phosphor; the speed is also used in calculating the exposure index.

Cassette scanning and plate reading


The three pictures above are of the Kodak series of CR units: the first picture is of the 8-cassette multi-loader, and the other two are of the single-loader reader that is sufficient for low-volume institutions. There are five processing functions of the reader that are important to the technologist: unloading of the photostimulable plate, laser scanning of the plate, light collection onto the PMT, erasing the plate for reuse, and reloading of the plate into the cassette. Unloading the cassette is entirely mechanically driven, with care taken not to touch the photostimulable phosphor side of the plate. The purpose of the reader is to scan the photostimulable phosphor plate, releasing the latent image. Within the "reader," light emitted by stimulating the phosphor to luminescence is converted to an electrical signal. The plate is then erased and reloaded into the cassette for reuse.

To recover the latent image, the screen is scanned in a raster fashion with a low-power (about 20 milliwatt) helium-neon laser with a 633 nm wavelength output. The wavelength of light required to stimulate phosphor luminescence is different from the wavelength released from the phosphors during luminescence. From the figure below we see that the wavelength of light released from the phosphor screen is about 400 nanometers, while the laser emits light in the range of 600 nanometers, which is what is required to stimulate photostimulable phosphor luminescence. Thus there is an energy difference between the light emitted at stimulation and the light emitted from the laser to cause photostimulated luminescence. The light from the laser should not be part of the CR image and must be extracted from the image data.


In addition to the difference in the wavelength of light required to stimulate phosphor luminescence and the wavelength of light thereby emitted, there is residual energy trapped in the phosphors following stimulation. To release all of the stored energy the phosphor plate must be exposed to white light following stimulation by the laser in a process that erases the plate for reuse (the picture below shows the white fluorescent bulbs used to erase the plate after acquiring the latent image).

The picture to the left shows the inside of a Kodak multi-loader reader unit. For safety reasons the helium-neon laser is enclosed; however, a sliding pull-out rack demonstrates the fluorescent light bulbs that expose the phosphor plate after it is "read" to clear any stored energy in the phosphors before reuse. The laser reads a preset number of line pairs based on the size of the cassette's screen. The Kodak CR system reader is a computer that includes software that reads the screen size and its associated image size according to the table below:

Did you notice how large the study size is for each image? This is why the storage capacity for PACS must be large enough to accommodate long-term image capture not only from CR but from all imaging modalities, especially if data is received from multiple radiology modalities such as CT, MRI, nuclear medicine, etc. Meeting the memory requirements of PACS networking while keeping archive retrieval fast requires creative solutions, such as archiving on network-attached storage (NAS) with a capacity on the order of terabytes.
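A rough, back-of-the-envelope Python sketch shows why archives are sized in terabytes; every figure below (matrix size, bytes per pixel, yearly image count) is hypothetical and for illustration only:

# Illustrative arithmetic only: uncompressed image size and yearly archive growth.
matrix_rows, matrix_cols, bytes_per_pixel = 2048, 2500, 2
image_bytes = matrix_rows * matrix_cols * bytes_per_pixel       # one uncompressed image

images_per_year = 100_000                                       # hypothetical workload
yearly_bytes = image_bytes * images_per_year

print(f"One image : {image_bytes / 1024**2:.1f} MB")
print(f"One year  : {yearly_bytes / 1024**4:.2f} TB (before compression)")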


Light is emitted in all directions as an inherent physical characteristic of screen fluorescence; the same is true of photostimulated luminescence. Therefore the emitted light must be focused by a collector onto the photomultiplier tube (PMT). The PMT is a device that converts light from the photostimulated screen into an electronic signal that can be further converted to digital "bits". Depending on the CR system there can be from one to five photomultiplier tubes. Remember, the laser's light is in the red spectrum, on the order of 633 nm, while the luminescent light is 400 nm. Therefore, an optical filter is placed in front of the collector to filter out the laser light before it reaches the PMT.

The PMT is calibrated to the storage characteristics of an exposed photostimulable phosphor plate. This calibration, which affects the overall brightness of the extracted image, is based on a delay of 15 minutes from the time of exposure to the time of scanning, since the signal in the phosphor degrades exponentially over time. This time delay is not apparent to the technologist, and the plate can be scanned anytime within 24 hours without appreciable loss of image data that would warrant a repeat exposure. Calibration of the image from the PMT is set at about 3000 pixels. All PMTs in a unit must be calibrated so that the reading across the plate is equalized and balanced.

The electronic data signal from the PMT is then sent to a device that converts the analog data to digital data. This device is called an Analogue-to-Digital Converter, and associated Input Look-Up Tables (ILUT) are referenced. These LUT contain data for manipulating the digitized signal so as to correct for any aberrations in the image data caused by converting it from a light latent image to an electronic image, and then to a digital image. The process of digitization is complex, but briefly, the signal must be amplified and passed through several filters, such as a Bessel filter for anti-aliasing. An anti-aliasing filter is used to smooth edges in an image and smooth jagged diagonal lines caused by seamed transfers to produce seamless final images. Photomultiplier tubes are only about 20-25% efficient in light collection from the stimulated luminescence; therefore, the image is acquired over four decades of exposure and requires optimization before viewing. Tone scaling is a type of contrast enhancement that involves remapping of gray-scale values using special look-up tables. Look-up tables are a common way of converting digital data from different modalities, such as ultrasound and MRI, into a common digital format.


The process of tone scaling involves transforming the raw data into a finished image in three or four steps. First, the collimated field is detected using the raw data image as a guide. Next, the anatomic region is defined; the image is then tone scaled, and final reprocessing is applied.

The top left picture shows the image as released from the phosphor and collected by the PMT; this is the first image produced by an electrical signal from the PMT. The top right picture shows the establishment of the collimated border of the film during tone scaling. The bottom left picture defines the anatomic region for the specific algorithm, and the bottom right picture demonstrates the finished image produced by tone scaling.

The process of enhancing the raw image data is called image segmentation. The CR image passes through four stages: 1) light release from the storage phosphors, 2) conversion to an electronic signal by the PMT, 3) identification of the collimated image border, and 4) tone scaling of the image. These are the post-processing functions that must take place before the image is presented on the CR reader monitor. The image must then be fixed before the data is sent to PACS and on to workstations, or is printed. The raw data is subjected to various algorithms and LUT that define areas of interest and collimated areas. The average density and the LUT control the overall density and contrast of an image. The final image is first available on the CRT monitor at the reader or on remote operator panels (ROP). What is important for the technologist to understand is that the image released from stimulating the phosphor plate is not a readable diagnostic image and requires post processing. Specific software algorithms must be applied to the image prior to presenting it as
a finished radiograph. These modifications of the image occur in the reader programs and at the workstation using look-up tables as references.
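One piece of the segmentation step, locating the exposed (non-collimated) field, can be sketched as follows. This is only a rough illustration, assuming a simple mean-signal threshold; the vendors' reader software uses far more sophisticated edge detection, and the threshold value here is an arbitrary assumption.

```python
import numpy as np

def find_exposed_region(raw: np.ndarray, threshold: float = 50.0):
    """Return (row_min, row_max, col_min, col_max) of the exposed field.

    Rows/columns whose mean pixel value exceeds the threshold are treated as
    lying inside the collimated border; everything else is assumed to be the
    unexposed, collimated margin.
    """
    rows = np.where(raw.mean(axis=1) > threshold)[0]
    cols = np.where(raw.mean(axis=0) > threshold)[0]
    if rows.size == 0 or cols.size == 0:
        return None  # nothing above threshold: treat as a segmentation failure
    return rows.min(), rows.max(), cols.min(), cols.max()

# Synthetic plate: dark collimated margins around a brighter exposed field.
plate = np.zeros((400, 500), dtype=np.uint8)
plate[50:350, 80:420] = 120
print(find_exposed_region(plate))   # approximately (50, 349, 80, 419)
```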

Inside the reader is its own central processing unit (CPU) that acts as the brains of the entire system. This unit contains various circuitries for image processing, including Input Look-Up Tables and Output Look-Up Tables; the output tables feed the digital-to-analog conversion used for monitor display of the finished image.

Left picture. A Kodak multi-loader is displayed to demonstrate that all CR images are shown on the CRT monitor following processing. Here the technologist approves the images and sends them to PACS and/or prints them. Regardless of the CR imaging system, the technologist must view the image on the CRT monitor and either accept it based on the exposure index or reject it. An accepted image is then sent to PACS for review on network workstations, or the image can be printed for conventional reading and filing.

Three different vendor CR units are shown in the pictures above. The left picture shows the Agfa CR system multi-loader and CRT monitor for approving images; the middle picture is of the Kodak remote operator panel (ROP), a remote display CRT that can be mounted anywhere in the department to reduce clutter around the reader; the right picture is of a FUJI CR system multi-loader and CRT image monitor.

Exposure Index

Because the CRT monitor image is post-processed using workstation algorithms and look-up tables, the technologist needs feedback on the exposure to the phosphor screen that produced the image. Most technologists understand that storage phosphor screen exposure can be optimized and therefore are not overly concerned with over- or underexposure. Because of the increased exposure latitude enjoyed with CR imaging, radiographers tend toward higher-than-necessary exposures in the desire to see less noise on the radiographs displayed on the CRT. The exposure index is a tool provided for the technologist to monitor plate exposure; it is analogous to the optical density used in screen-film imaging. The exposure index is not a measure of the patient's exposure; however, if the exposure is greater than the recommended exposure index range, the patient has been overexposed. The degree of over- or underexposure can be correlated with the index, but this is not commonly done beyond logging the exposure index for viewing on the workstation and on film. The PMT-calibrated exposure index is set by the manufacturer, and this calibration of the PMT is not variable. Vendors use different plate speeds (Kodak phosphor plates are roughly equivalent to 200 screen-film speed; FUJI plates are approximately 400 screen-film speed) and calibrate the exposure index differently; Kodak, for example, calibrates a 1 mR screen exposure at the PMT to an exposure index of 2000. Ideally, the technologist should strive to keep the exposure index consistent from patient to patient. Kodak recommends that the exposure index for any image fall in the range of 1800 to 2200; each increase of 300 in the exposure index represents a doubling of the screen exposure.
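The Kodak-style relationship described above (a 1 mR exposure reads about 2000, and each doubling adds roughly 300) is logarithmic and can be written out as a short worked example. The formula below is the commonly published form consistent with those two facts; treat it as illustrative rather than as any vendor's exact specification.

```python
import math

def kodak_exposure_index(exposure_mR: float) -> float:
    """Exposure index consistent with 1 mR -> 2000 and +300 per doubling."""
    return 1000 * math.log10(exposure_mR) + 2000

print(kodak_exposure_index(1.0))   # 2000.0
print(kodak_exposure_index(2.0))   # ~2301: one doubling adds ~300
print(kodak_exposure_index(0.5))   # ~1699: one halving subtracts ~300
```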


Images outside the acceptable exposure index range do not necessarily need repeating; however, the technologist should use judgment as to when an image should be repeated. CR image processing cannot compensate for far too little exposure, such as an exposure index of 300, or for an extremely overexposed image far outside the range. There are several possible factors within the technologist's control that can alter the exposure index. The primary controller is technique selection. Others include improper centering on the cassette and placing two or more views on the same cassette. Most CR readers calculate the exposure index starting from the center of the cassette and working outward, even though the cassette is read in raster fashion. Sometimes, when multiple views such as three projections of a finger or wrist are placed on one cassette, the anatomic and non-anatomic regions of the image are not correctly identified by the post-processing software. This causes an improper calculation of the exposure index, which is not taken from the relevant portions of the image, and the image may appear dark. An improper reading of the CR image due to multiple images on a plate that gives a falsely high or low exposure index is called image segmentation failure. Although in theory it is nearly impossible to over- or underexpose a CR image, the image may still appear over- or underexposed because of how the image segmentation algorithm handles the raw data. Generally speaking, a segmentation failure results in a high exposure index. What is important is how the technologist handles these awkward exposure indices when they occur. The typical scenario is that the radiographic image on the CRT monitor appears overexposed and the technologist is tempted to manipulate the raw data to make an eye-pleasing image prior to sending it to PACS.


In each of the three graphs above, the technologist adjusted the image using the raw data controls below each picture. Notice that the slope of the line also changed, indicating that raw data is being lost that may affect image detail characteristics that could otherwise be windowed at the workstation. The technologist should remember that workstation software can adjust window and level. Therefore, if the image can be windowed from the raw data on the CRT monitor, it can also be windowed to form a high-resolution image on the workstation. In this regard the image should be left alone and the data saved to be manipulated at the workstation. In this way pertinent image data is not erased just to make an eye-pleasing radiograph on the CRT, which is a low-resolution monitor.
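Window and level at the workstation is itself just another reversible remapping of the stored raw values, which is why nothing is gained by discarding data at the reader. A minimal sketch follows, assuming 12-bit raw data; the center and width values are arbitrary examples, not clinical presets.

```python
import numpy as np

def window_level(raw: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map the raw-value window [center - width/2, center + width/2] to 0-255."""
    lo, hi = center - width / 2.0, center + width / 2.0
    display = np.clip((raw.astype(float) - lo) / (hi - lo), 0.0, 1.0)
    return (display * 255).astype(np.uint8)

raw = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)  # 12-bit stand-in
soft_tissue_view = window_level(raw, center=2048, width=1024)
bone_view = window_level(raw, center=3000, width=500)
```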


Section VII: Overview on using the CR System

One of the many advantages of CR imaging is that existing radiography equipment can be used with just a few modifications in how images are acquired by the system. In this section we will look at how information flows from the Radiology Information System (RIS) and Hospital Information System (HIS) into the CR system and ultimately into PACS for local and wide area networking. For the most part there are four functions that govern the flow of information into the CR/DR system for image display: data entry, examination algorithm selection, post-processing of the image, and networking into PACS for storage and retrieval. A CR system can be added to the PACS network as a node on the bus topology, with servers that share patient file information from the HIS/RIS broker.

The first step in digital and CR imaging is that specific data fields must be entered into the CR or DR unit. This is because all digital images must have patient information such as the patient's name, medical record number, exam number, date, and time printed on each image document as it is sent to storage; otherwise it cannot be reliably retrieved from the archive. These data fields are filled by the HIS/RIS broker, a type of server that links text information from RIS to the base CR/DR unit and to PACS. Once the RIS/HIS server receives the patient's radiology request, any base unit on the network can access this information as part of the examination database using a workflow dialog box. There are several ways to begin the process of patient selection. Some units, such as the FUJI system, use a magnetic I.D. card to access the patient file. Other systems, such as advanced FUJI and Kodak systems, use a barcode reader to directly populate text fields through a hub to the RIS broker.
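The text fields supplied by the HIS/RIS broker amount to a small patient/exam record attached to every image. A hedged sketch of such a record is shown below; the field names and values are illustrative only and do not reflect any vendor's schema or DICOM's exact attribute names.

```python
from dataclasses import dataclass

@dataclass
class WorklistEntry:
    """Illustrative patient/exam text fields pushed from the HIS/RIS broker."""
    patient_name: str
    medical_record_number: str
    exam_number: str
    exam_description: str
    exam_date: str
    exam_time: str

entry = WorklistEntry(
    patient_name="DOE^JANE",
    medical_record_number="MRN0012345",
    exam_number="EX987654",
    exam_description="CHEST 2 VIEWS",
    exam_date="2004-06-15",
    exam_time="14:32",
)
```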

Above. The earlier models of FUJI CR systems rely on a magnetic card that interfaces with the ROP to enter patient data. Through the appropriate server(s) this system draws data from RIS and pushes it onto PACS images for printing and for long-term storage. Magnetic stripe cards are fully reusable; however, the downside of this technology is that the magnetic card does not reference a specific exam and is generic for patient text information transfer. The device on the left is used to create the patient data card (usually made by the radiology clerk), and the device on the right reads the swiped card to enter patient data into the CR system.

Advanced FUJI and Kodak systems use the full capabilities of the HIS/RIS server and CR units to transfer patient information directly to a workflow manager. Radiology orders are entered into the RIS computer from any link and are received by the radiology clerk. The RIS broker is a server that networks patient information directly to the base CR unit and to PACS, and it can be accessed from the workflow list functions of the base unit. This is generally done using barcode technology and the patient radiology request. Barcode linkage works throughout the PACS network because the scanned identifiers map directly onto the patient and study identifiers used by the DICOM worklist services. The technologist uses the patient request and a barcode reader to access the patient file already in the workflow list. Each study has its own request, study I.D., and barcode as part of the workflow manager function. So the technologist uses the patient exam request and a barcode reader to begin the imaging process.

Above. The image to the left is of a Kodak ROP with barcode reader. To the right is the radiology request with barcodes containing the patient medical record number, exam number, and information for interfacing with the RIS broker. The barcode device is used to access the workflow manager of the RIS and PACS servers. The radiologist only needs the radiology request and can scan its barcode to bring up the images in the patient's data file. The next step in the CR imaging process is to select the study algorithm under which the reader should process the exposed plate. This function is also controlled by a barcode reader. The technologist selects the appropriate study, e.g. ABDOMEN, CHEST, FOREARM, KNEE, etc., from the programmed list. The reader must then be told which cassette contains the image. This is done by barcoding the cassette with the appropriate algorithm selected at the reader or ROP.


These three pictures demonstrate the CR cassette and barcode system for matching the cassette to the pre-selected processing algorithm the reader is to use. The cassette is registered by barcode at the remote operator panel. This information can be entered either before or after exposure.

Once the study is selected and the cassette is bar-coded, the technologist may proceed using the cassette just as they would a screen-film cassette. In digital imaging, algorithms are selected rather than cassette types. In screen-film imaging the technologist may use a different screen-film combination for a KUB than for a forearm image. In digital imaging the same cassette is used, and the computer's software applies the appropriate processing algorithm to the photostimulable plate. This is a very important difference between screen-film imaging and CR. Being able to use any cassette for any image is a huge time savings for the technologist. It eliminates darkroom time spent loading special extremity cassettes with extremity film, and the repeats that occur when an extremity cassette is loaded with non-extremity film. Remember having to load special chest film into "chest" cassettes, where a failure to do so resulted in a high-contrast chest x-ray? These issues are eliminated by algorithms that can even be changed after the fact, for example if a chest image is processed under a foot algorithm, a feature unique to digital imaging. A minimal sketch of this cassette-to-algorithm matching follows below.
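The sketch below shows, in the simplest possible terms, how a selected study and a cassette barcode might be associated so the reader knows which algorithm to apply. The parameter names and values are illustrative assumptions, not vendor settings.

```python
# Illustrative study-to-algorithm table; real readers store far richer
# processing parameters (edge-enhancement kernels, tone-scale LUTs, etc.).
ALGORITHMS = {
    "CHEST":   {"edge_enhancement": "low",  "gray_scale": "long"},
    "ABDOMEN": {"edge_enhancement": "low",  "gray_scale": "medium"},
    "FOREARM": {"edge_enhancement": "high", "gray_scale": "short"},
}

cassette_registry: dict[str, dict] = {}

def register_cassette(barcode: str, study: str) -> None:
    """Associate an exposed cassette's barcode with the selected study algorithm."""
    cassette_registry[barcode] = ALGORITHMS[study]

def algorithm_for(barcode: str) -> dict:
    """Look up the processing algorithm when the cassette reaches the reader."""
    return cassette_registry[barcode]

register_cassette("CAS-000123", "CHEST")
print(algorithm_for("CAS-000123"))
```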

Left. This picture demonstrates all of the components of the CR system required for imaging. The patient data entry panel uses a magnetic card to enter patient information (white arrow). Once the technologist selects the proper image processing algorithm the cassette can be barcoded with the barcode reader (blue arrow) and placed into the reader for processing. The CRT monitor on the unit will display the processed image for the technologist to approve and send to PACS or to be printed.


Steps in the CR imaging process: 1) patient information data is entered into the CR unit or is accessed through RIS using a barcode or magnetic stripe card, 2) the appropriate algorithm is selected (e.g. chest, hand, C-spine, etc), 3) the cassette’s unique barcode is entered into the CR system so the reader can identify the image and process it according to the pre-selected algorithm.

Acquiring the CR Image

A characteristic that is unique to CR imaging is that there is only one screen type for all studies, so the same cassette is used for portable radiography, bucky radiography, tabletop radiography, and the like. There is no need to look for special detail cassettes for extremity work, or high-speed screens with long-scale (low) contrast for chest radiography. These functions are handled by the software's algorithm functions. Even the grid lines commonly seen with screen-film imaging can be removed from the digital image using a LUT for that specific function.


The CR cassette can be placed in the Bucky tray or used tabletop just as a screen-film cassette would be. If Automatic Exposure Control (AEC) is used, it may have to be calibrated for CR cassette exposure; otherwise the technologist must strive through manual techniques to produce a consistent exposure index in the range of 1800-2200 for Kodak CR, or 50-200 for FUJI CR. Manual techniques are extremely important in digital imaging for tabletop radiography because a variable-kVp or variable-mAs technique chart will help the technologist achieve uniform exposure indices for tabletop and portable images.
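A small helper like the one below can make the vendor target ranges quoted above concrete (Kodak exposure index 1800-2200; FUJI S-number 50-200). The dictionary keys and the idea of a programmatic check are illustrative assumptions; actual acceptance criteria are set by department policy.

```python
# Vendor target ranges taken from the text above; keys are illustrative.
TARGET_RANGES = {
    "KODAK_EI": (1800, 2200),   # Kodak exposure index
    "FUJI_S":   (50, 200),      # FUJI sensitivity (S) number
}

def within_target(vendor_key: str, reading: float) -> bool:
    """Return True if the reported index falls inside the vendor target range."""
    low, high = TARGET_RANGES[vendor_key]
    return low <= reading <= high

print(within_target("KODAK_EI", 2100))   # True  -> acceptable exposure
print(within_target("FUJI_S", 400))      # False -> review technique or segmentation
```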

Above. The same cassettes can be used in the bucky tray or tabletop.

Left. By placing ROPs in locations near the exposure console, the technologist is able to enter and approve images between radiographic exposures.

The chronology of image processing following exposure is as follows: the exposed cassette is placed in the reader, where the cassette is mechanically opened and the photostimulable plate removed. Inside the reader a laser is passed over the plate in raster fashion, using a wavelength of 633 nm to stimulate luminescence of the phosphors. This stimulated luminescence releases the latent image in the form of light that is filtered and collected onto a photomultiplier tube (PMT). The PMT converts the light signal to an electrical signal, which is then converted to digital data bits by an analog-to-digital converter. The raw data is subjected to algorithms and look-up tables (LUTs) that interpolate data points and allow manipulation of the digital information. Through the process of image segmentation the image is optimized. Finally the image is presented on the CRT monitor for technologist viewing. All of this takes place in a matter of seconds rather than the minutes required for conventional screen-film image processing.
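The whole chronology can be summarized as a short processing chain. The sketch below is purely conceptual: each stub stands in for a vendor's proprietary implementation, and all of the names and numeric choices are illustrative assumptions, not a real reader API.

```python
import numpy as np

def scan_with_laser(plate: np.ndarray) -> np.ndarray:
    """Stand-in for the 633 nm raster scan and PMT light-to-signal conversion."""
    return plate.astype(float)

def analog_to_digital(signal: np.ndarray) -> np.ndarray:
    """Stand-in for the ADC and input-LUT corrections (12-bit range assumed)."""
    return np.clip(signal, 0, 4095).astype(np.uint16)

def segment_and_tone_scale(digital: np.ndarray) -> np.ndarray:
    """Stand-in for segmentation plus tone scaling down to an 8-bit display image."""
    return (digital // 16).astype(np.uint8)

plate = np.random.randint(0, 4096, size=(2048, 2500))        # stand-in latent image
finished = segment_and_tone_scale(analog_to_digital(scan_with_laser(plate)))
```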

One of the niceties of computed radiography is that the image data is already in digital form, so it can easily be linked onto the PACS network. Because computed radiography adheres to DICOM standards, these units are compatible with the various DICOM service classes. From the reader a link can be established directly to a wet or dry laser printer using the DICOM Print Management Service Class, and to PACS archive servers using the DICOM Storage and Query/Retrieve Service Classes. The images can also be displayed at any workstation in the PACS network, which significantly decreases ER/trauma wait times.
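As a rough illustration of what "sending to PACS" looks like at the DICOM level, the sketch below performs a C-STORE of a CR image using the open-source pydicom and pynetdicom libraries. The hostname, port, AE titles, and file name are placeholders, and this is only one possible way to exercise the Storage Service Class, not a description of any particular reader's software.

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import ComputedRadiographyImageStorage

# Application entity for the sending side (e.g. the CR reader or a gateway).
ae = AE(ae_title="CR_READER")
ae.add_requested_context(ComputedRadiographyImageStorage)

# Placeholder PACS address, port, and called AE title.
assoc = ae.associate("pacs.example.org", 104, ae_title="PACS_ARCHIVE")
if assoc.is_established:
    ds = dcmread("cr_image.dcm")        # placeholder DICOM file from the reader
    status = assoc.send_c_store(ds)     # DICOM Storage Service Class (C-STORE)
    print("C-STORE status:", status.Status if status else "no response")
    assoc.release()
```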

This is a summary of the special advantages of digital computed radiography that cannot be achieved by analog screen-film imaging:

1) X-ray exposure and display of the image are uncoupled; therefore characteristics of image presentation, mainly optical density and contrast, become less significant in the raw data.

2) There is a limitless number of "original images" available for viewing, which can be output to multiple stations simultaneously without the intermediate copying required for screen-film radiographs.

3) Digital images can be transferred over a LAN or WAN without any deterioration at any spatial frequency. This includes CD-ROM, Internet, and teleradiology transfer.

4) A film cost savings is definitely possible if viewing on a workstation is the primary means of display and multiple images are printed on a single sheet when measurement is not a consideration.

5) The digital image can be adapted to any viewer's requirements by image processing algorithms and the post-processing functions of the software.


Section VIII: Concepts of Direct Digital Radiography (ddR)

Unequivocally, direct digital radiography is fast becoming the direction in which diagnostic radiographic imaging is developing. The reason for its dramatic challenge to computed radiography (CR) is that it offers full-resolution images that are displayed and stored in about 8 seconds. This translates into faster throughput of imaging procedures; some imaging centers report throughput 2-4 times faster than with traditional screen-film/darkroom technology. As with CR, direct radiography adheres to DICOM standards for connectivity and workflow operations, making it fully compatible with existing PACS sharing. Direct digital radiography is built on amorphous silicon technology that uses a cesium iodide scintillator to perform x-ray detection. These systems are well-thought-out products that allow for modification of existing x-ray equipment, such as replacement of the bucky tray with a detector array. Direct digital radiography is already proving more cost effective than purchasing CR equipment and replacing outdated conventional radiography equipment, because only one technologist is needed to handle 2-4 times the workflow of conventional cassette-based systems (CR or screen-film). Notwithstanding, direct digital radiography does not yet have the total flexibility that CR imaging has, particularly in the area of portable imaging.


Summary Points



The basis of CR and digital imaging is the optimization of image acquisition, image transmission, image display, and image storage as independent functions.



Photostimulable phosphors have demonstrated a range of exposure greater than 100 mR and as low as 0.195 alpha particles.



Eastman Kodak Company patented a thermoluminescent infrared stimulable phosphor system in 1975; however, FUJI Photo Film Company in 1980 patented the first radiographic imaging system using photostimulable phosphors.



Computers store and manipulate data in the form of binary digits or bits, or base-2 system of digits.



A bit is either a "1" or a "0", with 1 represented by a voltage (for example, 5 volts) and 0 represented by zero voltage.



A bundle of 8 bits equals one byte, and one byte can represent 256 different values, such as 256 shades of gray.



A radiographic image is laid out in rows and columns called an image matrix. Each cell in the matrix is called a pixel, which for one byte of depth can take on any of 256 possible values.



The number of pixels in an image is calculated by multiplying the number of matrix columns by the number of rows. For example a 10 x 9 matrix will contain 90 pixels.


A volume pixel element is called a voxel; it represents the volume element of a 3D image.



The Central Processing Unit (CPU) is the brains of the computer; it has an integrated microprocessor to interpret, execute, and manipulate data.



Two parts to the CPU are the Control Unit (CU) that interprets programs and executes them, and the Arithmetic/Logic Unit (ALU) that performs mathematical operations of the computer’s component programs.



RAM or random access memory is a volatile form of memory that is rapidly erased and refilled as new information is added to a document; it is temporary memory that is lost if the computer is turned off.



The digital network of PACS is of the bus topology. Network architecture does affect data transfer speed; however, the network speed must be consistent with that of other computer components such as the CPU speed and main memory.



Besides hardware, specific software is necessary to operate the CR system: Operating systems software, program software, Editor, Library of subroutines, a Linker, a Compiler, etc.



The digital imaging processor is a device that is responsible for converting analogue information produced by the base unit into digital or binary coded numbers. The device that performs this function is called an Analog-to-Digital Converter.



An array processor is a separate CPU used by the computer to gain computational speed by working in parallel rather than sequential mode. This allows for simultaneous processing rather than a linear sequence of processing functions.



Hardware components of a CR system include: Photostimulable phosphor cassettes, Cassette reader, Remote Operator Processor/Panel, Printer and/or Workstation.



PACS is a network of computers into which a CR unit may input data for display and storage.


The basic component of CR image capture is the photostimulable phosphor screen and cassette.



Photostimulable phosphor screens are composed of europium-activated barium fluorohalide crystals (BaFX:Eu2+) where X is a halogen of iodine or bromine.



Photostimulable phosphors fluoresce from radiation energy just as do analog screens; however, to release the latent image contained in the storage phosphors the screen must be subjected to light from a finely collimated laser beam.



The wavelength of light used to release a storage phosphor’s latent image is about 633 nm.



The wavelength of light emitted during photostimulation of the storage phosphor screen is about 400 nm.



The dynamic range of exposure for photostimulable phosphors is linear over a range of 10,000 to 1, versus roughly 40 to 1 for analog screens. This means it is nearly impossible to overexpose or underexpose a CR phosphor image.



Light emitted from CR screens during photostimulation is filtered and collected by photomultiplier (PMT) tube(s) and converted to an electrical signal that can be digitized.



Light energy is stored in holes in the BaFBr:Eu2+ crystals in what are called F-centers, which are fluoride and/or bromide vacancies.



The structure of a photostimulable phosphor screen from within outward is: aluminum panel, lead layer, black cellulose acetate layer, Estar support, phosphor layer, and an overcoat to protect the phosphor.



Each CR screen must be erased after use or before use if the cassette has not been used in over 24 hours. The reader erases the plate using fluorescent white light.


An optical filter is used to filter out the laser light from the luminescent light of the CR screen during read-out.



Electrical signal from the PMT is sent to the Analog-to-Digital Converter where it is converted to digital bits; an Input LUT within the device is used to correct any aberration in the data.



Anti-aliasing is a filtering process used to smooth edges in the image and to reduce jagged diagonal lines.



The raw data image is segmented and enhanced by specific imaging software before presentation on the CRT and workstations.



Image segmentation is a process by which the raw image is enhanced by software that locates the image field, the collimated field, and image edges and enhances each independently then compiles the final image from these enhancements.



The exposure index is a tool provided to the technologist to monitor the exposure to the screen. It is analogous to the averaged optical density reading of the exposed film.



The technologist sets the algorithm under which the reader is to process the CR screen. This is a software function, and the algorithm can be amended both at preprocessing and in post-processing from storage.



Digital data is converted back to analog form for printing or for display on a workstation monitor by a Digital-to-Analog Converter.


References

1. Sonoda, M., Takano, M., Miyahara, J., Kato, H., "Computed radiography utilizing scanning laser stimulated luminescence," Radiology 148:833-838, 1983.

2. Thoms, M., "Photostimulated luminescence: a tool for the determination of optical properties of defects," Journal of Luminescence, 60-61, pp. 585-77, 1994.

3. Cohen, D., Kaufman, A., "Scan Conversion Algorithms for Linear and Quadratic Objects," in Volume Visualization, IEEE Computer Society Press, Los Alamitos, CA, pp. 280-301.

4. Glassner, A. S., "Space Subdivision for Fast Ray Tracing," IEEE Computer Graphics and Applications, 4(10):15-22, October 1984.

5. Bushong, S. C., Radiologic Science for Technologists: Physics, Biology, and Protection, 7th ed., pp. 355-370, Mosby, St. Louis, Mo., 1997.

6. Philips Medical Systems, "Radiography Manual," revised edition 4512 158 04581/999, 1994.

7. Smith, R., "The digital effectiveness of CR," Journal of Imaging Technology Management, 2001. Available at: http://www.imagingeconomics.com/library/20010713.asp.

8. PC Consultant Group, Inc., "PACS & RIS, a practical outline," 2004. Available at: http://www.pccgroup.com/pacs_in_a_pic.htm.

9. Ewert, U., Heidt, H., "Current Status of European Radiological Standards for DND," ASNT Spring Conference, ANSD IIW micro symposium, Orlando, FL, March 22-27, 1999, proceedings pp. 171-173.

10. Ewert, U., Heidt, H., "Approach for Standardization of X-ray Film Digitizers and Computed Radiography," Spring Conference, ANSD IIW micro symposium, Orlando, FL, March 22-27, 1999, proceedings pp. 171-173.

11. Kodak Learning Center, 2004. Available at: http://www.kodak.com/global/en/health/learningCenter/elearn/pacs/adv_sys_con/course/pa...

October 2019 49