EL21C - Microprocessors I

Contents

Introduction
Lesson 1: Basics
Lesson 2: Decimal, Binary, & Hex
Lesson 3: Microprocessor Instructions
Lesson 4: Memory and Addressing
Lesson 5: The 8085 Microprocessor
Lesson 6: What is Assembly Language?
Lesson 7: Using an Assembler
Lesson 8: Address Decoding
Lesson 9: Interrupts
Lesson 10: The Stack
Lesson 11: A/D & D/A
Lesson 12: Serial Communications
Lesson 13: 2005: Class Test 1 and Class Test 2
Lesson 14: 2004: Class Test 1 and Class Test 2
Lesson 15: 2003: Class Test 1 and Class Test 2
Lesson 16: 2002: Class Test 1 and Class Test 2
Lesson 17: 2001: Class Test 1 and Class Test 2
Lesson 18: The 8085 Instruction Set
Lesson 19: Hints on Notes

Past papers: Final Exam 2005, 2004, 2003, 2002, 2000, 1999; 1999: Class Test 1 and Class Test 2

EL21C - Microprocessors I
Lesson 1: Basics

The majority of people think that computers are some kind of complicated device that is impossible to learn and infinitely intelligent, able to think better than a person. The truth is much less glamorous. A computer can only do what the programmer has told it to do, in the form of a program. A program is just a sequence of very simple commands that lead the computer to solve some problem. Once the program is written and debugged (you hardly ever get it right the first time), the computer can execute the instructions very fast, and always do it the same way, every time, without a mistake. And herein lies the power of a computer. Even though the program consists of very simple instructions, the overall result can be very impressive, due mostly to the speed at which the computer can process the instructions. Even though each step in the program is very simple, the sequence of instructions, executing at millions of steps per second, can appear to be very complicated when taken as a whole. The trick is not to think of it as a whole, but as a series of very simple steps, or commands.

The microprocessor itself is usually a single integrated circuit (IC). Most microprocessors, or very small computers (hereafter referred to simply as micros), have much the same commands or instructions that they can perform. They vary mostly in the names used to describe each command. In a typical micro, there are commands to move data around, do simple math (add, subtract, multiply, and divide), bring data into the micro from the outside world, and send data out of the micro to the outside world. Sounds too simple... right?

A typical micro has three basic parts inside. They are the Program Counter (PC), Memory, and Input / Output (I/O). The Program Counter keeps track of which command is to be executed. The Memory contains the commands to be executed. The Input / Output handles the transfer of data to and from the outside world (outside the micro's physical package). The micro we'll be using is housed inside a 40 pin package, or chip, or IC. There are many other parts inside our micro; however, we will learn about each and every single one, one step at a time.

A Simple Program

As stated before, a program is a sequence or series of very simple commands or instructions. A real world example program might be the problem of crossing a busy street.

Step 1: Walk up to the traffic lights and stop.
Step 2: Look at the traffic light.
Step 3: Is your light green?
Step 4: If the light is red, goto step 2 (otherwise continue to step 5).
Step 5: Look to the left.
Step 6: Are there cars still passing by?
Step 7: If yes, goto step 5 (otherwise continue to step 8).
Step 8: Look to the right.
Step 9: Are there cars still passing by? (There shouldn't be any by now, but you never know!)
Step 10: If yes, goto step 8 (otherwise continue to step 11).
Step 11: Proceed across the street, carefully!!

Now this may seem childish at first glance, but this is exactly what you do every time you cross a busy street that has a traffic light (at least, I hope you do). This is also exactly how you would tell a micro to cross the street, if one could. This is what I mean by a sequence or series of very simple steps. Taken as a whole, the steps lead you across a busy intersection, which, if a computer did it, would seem very intelligent. It is intelligence; people are intelligent. A programmer who programmed these steps into a micro would impart that intelligence to the micro. The micro would not, however, in this case, know what to do when it got to the other side, since we didn't tell it. A person, on the other hand, could decide what to do next at a moment's notice, without any apparent programming. In the case of a person, though, there has been some programming; it's called past experience.

Another program might be to fill a glass with water from a tap.

Step 1: Turn on the water.
Step 2: Put the glass under the tap.
Step 3: Look at the glass.
Step 4: Is it full?

Step 5: If no, goto step 3 (otherwise, continue to step 6).
Step 6: Remove the glass from under the tap.
Step 7: Turn off the water.

This is a simpler program, with fewer steps, but it solves a problem: to fill a glass with water. In a micro, the problems are different but the logical steps to solve the problem are similar, that is, a series of very simple steps leading to the solution of a larger problem. Also notice that since the steps are numbered, 1 through 7, that is the order in which they're executed. The Program Counter, in this case, is you, reading each line, starting with 1 and ending with 7, doing what each one says. In a micro, the Program Counter automatically advances to the next step, after doing what the current step says, unless a branch, or jump, is encountered. A branch is an instruction that directs the Program Counter to go to a specific step, other than the next in the sequence. The branch in this example is step 5. Not only is this a branch, but it is a conditional branch. In other words, based on whether the glass is full or not, the branch is taken, or not. A micro has both branch and conditional branch instructions. Without this ability to reuse instructions, in a sort of looping action, a solution would take many more steps, if it would be possible at all.

The point of this lesson is to show how a simple set of instructions can solve a bigger problem. Taken as a whole, the solution could appear to be more complicated than any of the separate steps it took to solve it. The most difficult problem to be solved in programming a micro is to define the problem you are trying to solve. Sounds silly, but I assure you, it's not. This is the Logical Thought Process I mentioned earlier. It is having a good understanding of the problem you're trying to solve. You must understand the information I'm presenting in order to pass the course. Trying to remember everything does not work at university. On to lesson 2. Table of Contents

EL21C - Microprocessors I
Decimal, Binary & Hex

Most people have learned to use the Decimal numbering system for counting and calculations. But micros use a different system. It's called Binary, and that's why our computers are called binary computers! All numbering systems follow the same rules. Decimal is Base 10, Binary is Base 2, and Hex(adecimal) is Base 16. The base of a system refers to how many possible numbers can be in each digit position. In decimal, a single digit number is 0 through 9. In binary a single digit number is 0 or 1. In hex a single digit number is 0 through 9, A, B, C, D, E, and F.

In decimal, as you count up from 0, when you reach 9 and add 1 more, you have to add another digit position to the left and carry a 1 into it to get 10 (ten). Ten is a two digit decimal number. In binary, as you count up from 0, when you reach 1 and add 1 more, you have to add another digit position to the left and carry a 1 into it to get 10B (two decimal). While this is exactly what you do in decimal, the result looks like ten, so we usually write the letter 'B' after it to distinguish it from ten decimal. [We should really denote ten decimal as 10D, but we usually leave off the 'D'.] So while decimal 10 (ten) looks like binary 10 (two decimal), they represent different values. It is still useful to think in decimal, since that's what we're used to, but we have to get used to seeing numbers represented in binary.

In hexadecimal (hex), as you count up from 0, when you reach 9, you then go to A (ten decimal), then B (eleven decimal), etc., until you reach F (15 decimal). When you reach F and add 1 more, you have to add another digit position to the left and carry a 1 into it to get 10H (sixteen decimal). While this is exactly what you do in decimal, the result looks like ten or 10B, so we usually write the letter 'H' after it to distinguish it from ten decimal and binary 10B. So while decimal 10 (ten) and binary 10B look like hex 10H, they represent different values. It is still useful to think in decimal, since that's what we're used to, but we have to get used to seeing numbers represented in binary and hex.

Another small difference between decimal terminology and binary is that in binary a digit is called a bit. It gets even more confusing by the fact that 4 bits make a nibble. Two nibbles make a byte. Two bytes make a word. Most numbers used in a micro don't go beyond this, although there are others. Using what I've just said, if two nibbles make a byte, you could also say that a byte is eight bits. To represent a binary number larger than 4 bits, or a nibble, a different numbering system is normally used. It is called hexadecimal, or Base 16. A shorter name for hexadecimal is simply hex, and that's what we'll use hereafter. In this system there are 16 possible

numbers for each digit. For the first 10, 0 through 9, it looks like decimal. Unlike decimal, when you add 1 more to 9, you get A. I know that having a letter to represent a number is really confusing, but they had to call it something, and A is what they chose. So a hex A is a decimal 10 (ten). The numbers count up from A through F. Just to clarify this here is the sequence of counting in hex from 0 to where you have to add another digit position to the left... 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F (no G). This represents a decimal count of 0 through 15. At a count of F (15 decimal), if you add 1 more you get 10 (oh no! .. not another 10 that means something else!!). Sad but true. Let's regroup here. A binary 10 (one zero) is decimal 2, a decimal 10 is ten, and a hex 10 is decimal 16. If you can get this concept, you will have conquered the most difficult part of learning micros. I need to get past one more obstacle, the idea of significance. In a decimal number like 123, 3 is the least significant digit position (the right most digit) and 1 is the most significant digit position (the left most digit). Significance means the relative value of one digit to the next. In the number 123 (one hundred twenty three) , each number in the right hand most digit position (3) is worth 1. The value of each number in the next most significant digit position (2) is worth ten and in the most significant digit position (1) each is worth a hundred. I'm not trying to insult your intelligence here, but rather to point out the rule behind this. The rule is that no matter what base you're working in, as you start adding digits to the left, each one is worth the base times (multiplied) the digit to the right. In the decimal number 123 (base 10), the 2 digit position is worth 10 times the 3 digit position and the 1 digit position is worth 10 times the 2 digit position. Hence the familiar units, tens, hundreds, and so on. For some reason, for most people, this makes sense for decimal (base 10) but not for any other base numbering system. The very same is true for binary. The only difference is the base. Binary is base 2. So in binary the least significant bit (remember bits?) is worth 1 ( this happens to be the same for all bases). The next most significant bit is worth 2, the next worth 4, the next worth 8, and so on. Each is 2 times (base 2) the previous one. So in an 8 bit binary number (or byte, remember bytes?), starting from the right and moving left, the values for each of the 8 bit positions are 1, 2, 4, 8, 16, 32, 64 , and 128. If you've got this, you have passed a major milestone, you should go celebrate your passage. If you haven't, I would re-read the above until you do, and then go celebrate!! ( By the way, if you didn't get this the first time through, join the crowd. I didn't either!!) In hex (base 16) the same rule applies. In a 4 digit hex number, starting at the right and working left, the first digit is worth 1 ( hey that's just like decimal and binary!!), the next is worth 16 (base times the previous digit), the next is worth 256 (16 X 16), and the most significant is worth 4096 (256 X 16). One last note, hex is just binary described another way. A hex digit is a binary nibble ( remember nibbles?). Both are 4 bit binary values. Trying to wrap all this confusion up in review, 4 bits is a nibble or a hex digit. Two hex digits is a byte or 8 bit binary. A 4 digit hex number is a word, or 16 bit binary or 4 nibbles, or 2 bytes. 
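To sum up the positional rule above as one formula (this summary is an addition to the notes; b is the base and d_i is the digit standing in position i, counting position 0 as the rightmost digit):

$$\text{value} = \sum_{i=0}^{n-1} d_i \times b^{\,i}$$

For example, $123_{10} = 1 \times 10^2 + 2 \times 10^1 + 3 \times 10^0$, and $1011_2 = 1 \times 2^3 + 0 \times 2^2 + 1 \times 2^1 + 1 \times 2^0 = 11_{10}$.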
You may have to review the previous paragraphs a few times (I did!!)

to get all the relationships down pat, but it is crucial that you are comfortable with all I've said, so that what follows will make more sense. Let's take a few example numbers and find the decimal equivalent of each. Let's start with the binary number 1011, a nibble. Starting at the right and moving left, the first digit is worth one, because there is a 1 there. The next is worth two, because there is a 1 in it. The next would be worth 4, but since there is a 0, it's worth zero. The last or most significant is worth eight, since there is a 1 in it. Add all these up and you get eleven. So a binary 1011 is decimal 11. Also, since this could be a hex digit, the hex value would be B. Let's take a longer binary number 10100101. Starting at the right moving left, the first is worth 1, the next is worth 0, the next is worth 4, the next is 0, the next is 0, the next is worth 32, the next is 0, and the last is 128. Adding all these up you get decimal 165. Dividing this number up into two hex digits, you get A5. So binary 10100101, decimal 165, and hex A5 are all the same value. Using hex, the rightmost digit is worth 5, and the most significant digit is worth 160 (10 X 16), resulting in decimal 165. If you understand this, you’re ready to move on, leaving the different systems behind. If you don't, keep reviewing and studying until you do. If you don't understand and still continue, you will be confused by what follows. I promise that once you get this, the rest is easier. This ends the second lesson. I hope this wasn't too daunting or difficult, but there was a lot to get through. The rest of the course should be a little easier. Learning a new numbering system is like learning a new language. It's a little cumbersome at first, but it gets easier. On to lesson 3. Table of Contents

EL21C - Microprocessors I
Microprocessor Instructions

As mentioned earlier, we refer to a binary number like 1111 as 1111b, a decimal number like 123 as 123, and a hex number like A5 as A5h. So don't be confused by the letters following numbers; we use both caps and lowercase to denote binary, decimal, or hexadecimal so there is no doubt what base a multidigit, or multibit, number is in.

Also there is another kind of memory, called flags. Flags are single bit numbers used to indicate different conditions. They are called flags because they flag the program about events or conditions. If a flag is raised, or has a 1 in it, it is said to be SET. If it is a 0, it is said to be CLEARED or RESET. One other thing: in an 8-bit byte, the 8 bits are referred to as bits 0 through 7, with bit 0 being the right most, or least significant (lsb), and bit 7 the left most, or most significant (msb).

Lastly, there are various Registers inside a microprocessor or Central Processing Unit (CPU). These vary from CPU to CPU, but all contain a register called the Accumulator. It is also referred to in some as the A register. We will be using the accumulator in the following discussion. It is a type of memory for storing temporary results and is 8 bits wide, or a byte, as are most places that data can be put inside the CPU.

In the CPU we will be using, there are 4 different types of instructions and several variations of each, resulting in over 100 different instructions. These 4 types are ARITHMETIC, LOGICAL, BRANCHING, and DATA TRANSFER.

ARITHMETIC

The arithmetic instructions usually include addition, subtraction, division, multiplication, incrementing, and decrementing, although division and multiplication were not available in most early CPUs. There are two flags used with arithmetic that tell the program what the outcome of an instruction was. One is the Carry (C) flag. The other is the Zero (Z) flag. The C flag will be explained in the following example of addition. The Z flag, if set, says that the result of the instruction left a value of 0 in the accumulator. We will see the Z flag used in a later lesson.

Addition

This is straightforward and is simply to add two numbers together and get the result. However there is one more thing. If, in the addition, the result was too big to fit into the accumulator, part of it might be lost. There is a safeguard against this.

Take the case of 11111111b (255) and 11111111b (255). These are the largest numbers that can fit into an 8-bit register or memory location. You can add these as decimal numbers, since I gave you their values in decimal also, and you would get 510. The binary value for 510 is 111111110b (9 bits). The accumulator is only 8 bits wide; it is a byte. How do you fit a 9-bit number into 8 bits of space? The answer is, you can't, and it's called an OVERFLOW condition. So how do we get around this dilemma? We do it with the CARRY (C) flag. If the result of the addition is greater than 8 bits, the CARRY (C) flag will hold the 9th bit. In this case the accumulator would have 11111110b (254) in it and the C flag would be a 1, or set. This 1 has the value of 256 because it is the 9th bit. We haven't covered 9-bit numbers, but they come up all the time as overflows in addition. Since we are using base 2, and we found out in lesson 2 that the 8th bit (bit 7) in a byte is worth 128, the 9th bit is worth 2 times that, or 256. Adding 254 and 256, we get 510, the answer, and we didn't lose anything, because of the C flag. Had the result of the addition not caused an overflow, the C flag would be 0, or cleared.

Subtraction

In the case of subtraction, the process is more difficult to explain, and as such, I'm not going to cover it here. It involves 1's complement and 2's complement representation. But I will tell you this: you can subtract two numbers and if there is an underflow, the C flag will be a 1, otherwise it will be a 0. An underflow is where you subtract a larger number from a smaller number.

Multiplication and Division

In the micro we will be using, the 8085, multiply and divide instructions are not available, so we will wait till later (EL31G-Microprocessors II) to talk about them. They do, however, do just what the names suggest.

Increment & Decrement

Two other instructions are included in the arithmetic group. They are increment and decrement. These instructions are used to count events or loops in a program. Each time an increment is executed, the value is incremented by 1. A decrement decrements the value by 1. These can be used with conditional jumps to loop a section of program a certain number of times. We will see these used later.
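As a small illustration of the carry flag (this sketch is an addition to the notes; the MVI, ADD, JC and HLT mnemonics are previewed here and covered properly in the instruction set lesson):

        MVI  A, 0FFH    ; A = 11111111b (255)
        MVI  B, 0FFH    ; B = 11111111b (255)
        ADD  B          ; A = 11111110b (254), C flag = 1 (the 9th bit)
        JC   ovflow     ; jump taken here, because the addition overflowed 8 bits
        HLT             ; no overflow: nothing more to do in this sketch
ovflow: HLT             ; overflow: a real program would handle the carry here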

LOGICAL

In micros there are other mathematical instructions called logical instructions. These are OR, AND, XOR, ROTATE, COMPLEMENT and CLEAR. These commands are usually not concerned with the value of the data they work with, but, instead, the value, or state, of each bit in the data.

OR

The OR function can be demonstrated by taking two binary numbers, 1010b and 0110b. When OR'ing two numbers, it doesn't matter at which end you start, right or left. Let's start from the left. The first bit position has a 1 in the first number and a 0 in the second number. This would result in a 1. The next bit has a 0 in the first number and a 1 in the second number. The result would be 1. The next bit has a 1 in the first number and a 1 in the second number. The result would be a 1. The last bit has a 0 in the first number and a 0 in the second number, resulting in a 0. So the answer would be 1110b. The rule that gives this answer says that with an OR, a 1 in either number results in a 1, or said another way, any 1 in gives a 1 out.

AND

AND'ing uses a different rule. The rule here is that a 0 in either number will result in a 0, for each corresponding bit position. Using the same two numbers, 1010b and 0110b, the result would be 0010b. You can see that every bit position except the third has a zero in one or the other number. Another way of defining an AND is to say that a 1 AND a 1 results in a 1.

XOR (eXclusive OR)

XOR'ing is similar to OR'ing with one exception. An OR can also be called an inclusive OR. This means that a 1 in either number or both will result in a 1. An eXclusive OR says that if either number has a 1 in it, but not both, a 1 will result. A seemingly small difference, but crucial. Using the same two numbers, the result would be 1100b. The first two bits have a 1 in either the first or the second number but not both. The third bit has a 1 in both numbers, which results in a 0. The fourth has no 1's at all, so the result is 0. The difference may seem small, even though the OR and XOR result in different answers. The main use of an XOR is to test two numbers against each other. If they are the same, the result will be all 0's; otherwise the answer will have 1's where there are differences.

Complement

Complementing a number results in the opposite state of all the 1's and 0's. Take the number 1111b. Complementing results in 0000b. This is the simplest operator of all and the easiest to understand. Its uses are varied, but necessary, as you'll see later.

Rotate

These instructions rotate bits in a byte. The rotation can be left or right, and is done one bit each instruction. An example might be where the accumulator has 11000011b in it. If we rotate left, the result will be 10000111b. You can see that bit 7 has now been moved into bit 0 and all the other bits have moved 1 bit position to the left.
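Previewing the 8085 mnemonics again (a sketch added to the notes, not part of the original; ANI, ORI, XRI, CMA and RLC are the immediate and accumulator forms of the operations just described):

        MVI  A, 0AH     ; A = 00001010b
        ORI  06H        ; OR with 00000110b  -> A = 00001110b (any 1 in gives a 1 out)
        MVI  A, 0AH     ; A = 00001010b again
        ANI  06H        ; AND with 00000110b -> A = 00000010b (a 0 in either gives a 0 out)
        MVI  A, 0AH     ; A = 00001010b again
        XRI  06H        ; XOR with 00000110b -> A = 00001100b (1's mark where the numbers differ)
        CMA             ; complement every bit -> A = 11110011b
        RLC             ; rotate left one place -> A = 11100111b, old bit 7 moves into bit 0 and into the C flag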

Clear

This instruction clears, or zeroes out, the accumulator. This is the same as moving a 0 into the accumulator. This also clears the C flag and sets the Z flag.

BRANCHING

There are also program flow commands. These are branches or jumps. They have several different names reflecting the way they do the jump or what condition causes the jump, like an overflow or underflow, or the result being zero or not zero. But all stop the normal sequential execution of the program and jump to another location, other than the next instruction in sequence.

Jump on Condition (of a Bit)

These instructions let you make a jump based on whether a certain bit is set (a 1) or cleared (a 0). This bit can be the CY (carry) flag, the Z (zero) flag, or another flag bit.

Call

There is also a variation on a jump that is referred to as a CALL. A CALL does a jump, but then eventually comes back to the place where the CALL instruction was executed and continues with the next instruction after the CALL. This allows the programmer to create little sub-programs, or subroutines, that do repetitive tasks needed by the main program. This saves programming time because once the subroutine is written, it can be used by the main program whenever it needs it; it is a kind of way to create your own instructions.

DATA TRANSFER

Moving

These instructions do exactly what you would think. They move data around between the various registers and memory.

Exchanging

Exchanging is a variation on the move instruction. Here data is exchanged between two places.

This is the end of lesson 3. I've tried to briefly explain all the possible instructions without actually showing each. I will, in a later lesson, go into each of the 100+ different instructions and explain each one. In the next lesson we will learn more about memory and all the possible ways to get to a memory location. On to lesson 4. Table of Contents

EL21C - Microprocessors I
Memory and Addressing

There are several different types of memory in a micro. One is Program memory. This is where the program is located. Another is Data memory. This is where data that might be used by the program is located. The neat (or strange) thing is that they both reside in the same memory space and can be altered by the program. That's right, a program can actually alter itself if that were necessary. Two terms are used when talking about memory. Reading (load) is getting a value from memory and Writing (store) is putting a value into memory.

There are three buses (not the kind you ride in) associated with the memory subsystem. One is the address bus, the second is the data bus, and the third is the control bus. It's important for you to know exactly how all this works, because these buses transport data and addresses everywhere. All three are connected to the memory subsystem. It's also good to know the function of each to better understand what's happening. In the 8085 CPU, the address bus is 16 bits wide. It acts to select one of the 2^16 (64K) unique memory locations. The control bus determines whether this will be a read or a write. In the case of an instruction fetch, the control bus is set up for a read operation. Data is read or written through the data bus, which is 8 bits wide. This is why all registers and memory are 8 bits wide; it's the width of the data bus on the 8085 CPU. A bus is just a group of connections that all share a common function. Instead of speaking of each bit or connection in the address separately, for example, all 16 are taken together and referred to simply as the address bus. The same is true for the control and data buses.

A byte is the most used number in a micro because each memory location or register is one byte wide. Memory has to be thought of as a sort of file cabinet with each location in it being a folder in the cabinet. In a file cabinet, you go through the tabs on the folders until you find the right one. To get to each memory location, a different method is used. Instead, a unique address is assigned to each location. In most micros this address is a word, or 16 bits, or 4 hex digits. This allows for a maximum of 65536 (2^16, or 64K) unique addresses or memory locations that can be accessed. These addresses are usually referred to by a 4 digit hex number. Memory usually starts at address 0000h and could go up to FFFFh (65536, or 64K, locations in total). To access these locations, a 16 bit address is presented to memory and the byte at that location is either read or written.

The Program Counter is what holds this address when the micro is executing instructions. The reason instructions are read sequentially is because the program counter automatically increments after fetching the current instruction. It does this even before the current instruction is acted upon. The sequence is that the program counter's contents are placed on the memory address bus, the instruction is fetched from memory through the data bus, and immediately the program counter is incremented by 1.

Then the micro looks at the instruction and starts processing it. If the instruction is not some kind of jump or call, the instruction is completed, the program counter is presented to the memory address bus again, the next instruction is fetched, the program counter is incremented, and the process starts over. This is referred to, in computer jargon, as fetch, decode, and execute. In the case of reading or writing data, the process is a little different. Data can be read from or written to memory in similar fashion to the fetch, but data does not need the decode and execute steps. We will see this in more detail in the next lesson, where we start looking at the micro we will be using – the INTEL 8085. On to lesson 5. Table of Contents

EL21C - Microprocessors I
The 8085 Microprocessor

Well, finally, this is what all the previous lessons have been getting you ready for: to start looking at the micro we will be using in this course. All the previous lessons have been laying the groundwork and basic concepts common to most micros. The one we will be using is the INTEL 8085 microprocessor. This chip was the last 8-bit general purpose CPU made by INTEL and has 40 pins. The address bus requires 16 pins and the data bus requires 8 pins, but INTEL cleverly decided to share, or multiplex, these two buses, so the data bus shares the lower 8 pins (A0-A7) of the address bus. This causes no problem since address and data are never on the bus at the same time.

Two pins are for serial communications with the 8085. Through these pins, serial data can be sent or received with another computer. This is how we will load a program into the 8085 kit used in the lab, from a PC.

Five more pins are for a different kind of input called interrupts. In our example in lesson 1 of the program where we are standing at the street corner, watching the light and the traffic, if a person walked up and tapped us on the shoulder and asked what time it is, this would be an example of an interrupt. It doesn't alter the program we are doing, it just temporarily stops us while we tell the time to the person. As soon as we tell the time, we go back to watching the lights and traffic as before. This describes the action of an interrupt. The interrupt has a program associated with it to guide the micro through a problem. In the case of the above example, this program would be to look at our watch, read the time, and then tell it to the person. This is called an interrupt service routine (ISR). Each time an interrupt occurs, the current program is temporarily stopped, the service routine is executed, and when it is complete, execution returns to the current program. We will spend a lot more time later describing interrupts and how we'll use them.

Inside the 8085 there are 10 separate registers. They are called A, B, C, D, E, H, L, PSW, PC, and SP. All but the PSW, PC, and SP registers are used for temporary storage of whatever is needed by the program. The accumulator, called A, is also different from the other registers. It is used to accumulate the results of various instructions like add or sub (subtract). The Program Counter (PC) we have already mentioned; it and the Stack Pointer (SP) hold addresses and are 16 bits wide. All the others are 8 bits wide. There are other features to be covered later, as they come up. In the next lesson we will start looking at assembly language, the method we will use to write a program.

On to lesson 6. Table of Contents

EL21C - Microprocessors I
What is Assembly Language?

Inside the 8085, instructions are really stored as binary numbers, not a very good way to look at them and extremely difficult to decipher. An assembler is a program that allows you to write instructions in, more or less, English form, much more easily read and understood, which are then converted, or assembled, into hex numbers and finally into binary numbers. The program is written with a text editor (NOTEPAD or similar), saved as an ASM file, and then assembled by the assembler (TASM or MASM or similar) program. The final result is an OBJ file you download to the 8085. Here is an example of the problem of adding 2 plus 2:

        mvi  A,2        ; move 2 into the A register
        mvi  B,2        ; move 2 into the B register
        add  B          ; add reg. B to reg. A, store result in reg. A

The first line moves a 2 into register A. The second moves a 2 into register B. This is all the data we need for the program. The third line adds the accumulator with register B and stores the result back into the accumulator, destroying the 2 that was originally in it. The accumulator has a 4 in it now and B still has a 2 in it. In the program above, all text after the ';' is treated as a comment and not executed. Writing comments is a very important habit to acquire.

Assembly language follows some rules that I will describe as they come up. With most instructions, especially those involving data transfer, the instruction is first, followed by at least 1 space, then the destination followed by a comma, and then the source. The destination is where the result of the instruction will end up and the source is where the data is coming from.

Next we will read a switch, and light an LED if the switch is pressed. This happens quite often in your lab experiments. Bit 0 of Port 0 will be the switch. When the switch is closed or pressed, bit 0 will be a 1, and if the switch is open or not pressed, bit 0 will be a 0. Bit 0 of Port 1 will be the LED. If bit 0 is a 0 the LED is off and if bit 0 is a 1, the LED will be on. All the other bits of reg. A will be ignored and assumed to be all 0's, for the sake of discussion.

start:  IN   0          ; read Port 0 into reg. A
        CPI  1          ; compare reg. A with the value 1
        JNZ  start      ; jump to start if the comparison does not yield 0
        OUT  1          ; send a 1 to Port 1, turning the LED on
        JMP  start

The first line has something new. It's called a label; in this case it is start: . A label is a way of telling the assembler that this line has a name that can be referred to later to get back to it. All labels are followed by the symbol : , which tells the assembler that this is a label. In the first line we also read the switch by reading Port 0 and putting it into the accumulator. Reg. A is the only register that can read in/send out data via ports or perform compares. Thus, we need not write 'A' in the command... it's implied! The next line compares the value in reg. A with the value 1. If they are equal, the Zero flag is set (to 1). The next line then jumps to start: only if the Zero flag is not set, ie: the value in reg. A is not 1 and therefore the switch was not pressed. The program will therefore keep looping until the switch is pressed! If the switch is pressed, then the penultimate line sends the contents of the accumulator (which must be 1, since the compare succeeded) out to Port 1, so bit 0 = 1 and the LED comes on. The last line jumps back to start. This completes the loop of reading the switch and writing to the LED.

This particular problem could have been solved with just a switch connected to an LED, like a light is connected to a wall switch in your house. But with a micro in the loop, much more could be done. We could have a clock that also turns on and off the LED based on time. Or we could monitor the temperature and turn the LED on and off based on what temperature it is. Or we could monitor several switches and turn the LED on and off based on a combination of switches, etc.... it's up to the imagination what can be controlled.

In the above example we assumed that the other bits of ports 0 and 1 were all zeros. But in reality, each of these bits could have a function assigned to them. Then we would need to look only at bit 0 in port 0 and bit 0 in port 1. This further complicates the problem. Also, we assume that port 0 was previously defined as an input port whereas port 1 was defined as an output port.

In assembly we can assign a name to a port and refer to it by that name, instead of port 0 or port 1. This is done with an equate directive. Directives are assembler commands that don't result in program code but instead direct the assembler to some action. All directives start with a period.

        .equ switch, 0  ; port 0 is now called switch
        .equ LED, 1     ; port 1 is now called LED

start:  IN   switch     ; read Port 0 into reg. A
        CPI  1          ; compare reg. A with the value 1
        JNZ  start      ; jump to start if the comparison does not yield 0
        OUT  LED        ; send a 1 to Port 1, turning the LED on
        JMP  start

This has the same result as the previous program. Also, the equate only has to be made once at the start of the program, and thereafter the name or label is used instead of the port number. This makes things much simpler for the programmer. All equates must be defined before they are used in a program. This holds true for labels also. Another advantage of naming ports with an equate is that if, later in the design process, you decide to use a different port for the LED or the switch, only the equate has to be changed, not the program itself.

Please note that comments are very important. When you initially write a program, the tendency is not to write much in the comment field because you're in a hurry. But if you have to come back to it a few weeks later, it's much easier to understand what you've written if you've taken the time to write good comments. Also, good comments help in debugging.

To digress just a little here, an instruction like add B is a one byte instruction. In other words, this instruction would end up inside the 8085 as one byte. Part of the byte is the opcode and the other part is which register is affected or used. The reason for this is that a prime concern in programming a micro is how many bytes the program will actually take up inside the micro, after it's been assembled. The idea is to cram as much as possible into as few bytes as possible. This is why implied addressing is used. It limits choices in the use of the instruction (you always have to use the accumulator as either the source or the destination), but it shrinks the size of the instruction, so that more instructions can fit inside the micro. This is a choice made by the maker of the micro, and is not up for discussion. It's a trade off of flexibility vs. size. That's why you'll see lots of instructions that use the accumulator. This is the best way to describe implied addressing.

In the case of an instruction like mvi A,1 , two bytes are assembled. The first byte says that this is a move instruction and that the accumulator is the destination. The second byte is the immediate data itself. Thus we see that an instruction can have its data 'next' to it. It is transparent to the programmer where the bytes are actually stored in memory. Once we can 'find' it by an instruction is all that matters. We will get into this again, later.

Another form of addressing variables is called register indirect, or just plain indirect, addressing. This is a little more complicated. Here the address is held in a register pair, usually the H & L registers but sometimes B & C or D & E. Since an address is 16 bits long, we need two registers (a register pair) to store an address. We will also get into this again, later.
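As a preview of register indirect addressing (this sketch is an addition to the notes, and the address 2050H is just an example; LXI, MOV and INX are covered with the full instruction set later):

        LXI  H, 2050H   ; load the H & L register pair with the 16-bit address 2050H
        MOV  A, M       ; indirect: fetch the byte stored at the address held in HL into A
        INX  H          ; point HL at the next location, 2051H
        MOV  M, A       ; indirect: write A to that new address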

Lastly, I want to explain something else about the assembler. The source file is what the above program, or any program that has been written, is referred to as. It is the source for the assembler, or the file that is going to be read by the assembler to generate the object file (the object of the assembler). The object file is the file that will be downloaded to the 8085 kit in the lab. They are two different files. One you've written with a text editor (the source or ASM file) and the other is created by the assembler (the object or OBJ file) when you assemble the source file. You use an assembler with the object in mind of generating a file to download to the micro, hence the name, object file.

I've left out some directives, for simplicity's sake, that I need to mention now. One is the .org directive. It is the originate or origin directive. This tells the assembler at what address the first byte of assembled code is to be placed inside the 8085. It is the origin, or beginning, of the program. Here's how this would look for our last example program:

        .org 2000H      ; begin using memory address 2000H
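Putting the directives together (a sketch added here for illustration, not part of the original notes), the switch-and-LED program would now begin like this, with the rest of the code exactly as before:

        .org 2000H      ; assembled code starts at address 2000H
        .equ switch, 0  ; port 0 is now called switch
        .equ LED, 1     ; port 1 is now called LED

start:  IN   switch     ; read the switch port into reg. A (the rest of the program follows as above)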

Well we've covered quite a lot in this lesson, and I hope you've gotten most of it. If not, I would suggest re-reading it until you do. I would also suggest that you print out all of these lessons so you can refer to them later. In the next lesson we will actually be assembling some programs and looking at the object files. On to lesson 7 Table of Contents

EL21C - Microprocessors I
Using an Assembler

If you are using a DOS based assembler, you'll need to get to DOS. The first step in writing a program is to define the problem as completely as possible. You will always think of things as you go along that you left out. After thinking out the problem, you start by typing up a source file. Source files all end with the .asm extension. The extensions created by the assembler are .obj, which is the object file, and .lst, which is the listing file. The listing file shows all the addresses resolved by the assembler, all the code generated, and all the comments. This file is very important for assisting you in the lab. The listing file also shows any errors in the assembly process so that you can correct them. It is crucial that you look at the listing file to be sure that there are no errors listed. The way you would use a program in real life is to write each program, assemble it, and then execute it. For instructions on how to use the 8085 microprocessor kit used in the lab (the SDK-85), please purchase the book “Using the SDK-85 Microprocessor Trainer”, which is a required text for the course and available in the university bookstore. On to Lesson 8 Table of Contents

EL21C - Microprocessors I
Address Decoding

As an assembly language programmer, you know that the CPU registers have no address. Instead, you specify them by name, eg: A, B, C, etc. As a result, it may seem confusing to you that to access an I/O port, you use an address, even though they may be thought of as similar to registers. CPU registers use the fastest (and most expensive) memory technology, then cache memory uses a slightly slower technology, and then system RAM uses a slower and cheaper technology. Nevertheless, all are forms of memory. In fact, you could think of standard RAM as a large collection of slow registers. The difference is simply in the way that we name them.

You should also realize (especially if you have installed additional memory in your own computer) that RAM consists of multiple chips, each of which contains a number of memory locations. Each chip is physically just like every other chip. There is nothing about the chip itself that makes it hold a particular range of addresses. The locations on a single chip are linearly ordered, but there is no inherent ordering among the separate chips. The ordering comes from the way the chips are connected to the address bus. When you specify a particular address, the corresponding location exists only in one of those chips. In a very real sense, part of the address selects the correct chip (the upper part of the address), while the rest of the address selects the correct location on that chip. You can look at the low order bits as forming an offset from the first location on the chip to the correct location on the chip for the address you are specifying. The method that we use to select the correct location on the correct chip is called address decoding, and we use the voltages carried on the wires of the address bus to accomplish the selection. Notice that it is critical that each address selects a unique location.

In addition to talking about memory addresses in memory space (or area), we also have an area the microprocessor treats slightly differently, called the port (I/O) space or area. On the 8085 CPU, we know the memory space is 64K bytes as there are 16 address lines. The port (I/O) space on the 8085 is 256 bytes as there are only 8 address lines used to select a port address. More on this later.

Each chip (whether it is a memory chip or a peripheral device chip) has an input called chip select (CS, active low) or similar. To activate the chip, we must send a logic "0" (0 volts) to this input because it uses negative logic. If there is a logic "1" (+5 volts) on the wire connected to this input, the chip is inactive. Some chips also have an enable (EN) input. Chips of this type must receive a logic "1" on the EN input and a logic "0" on the CS input to be active. The fact that we can "turn on" (activate) or "turn off" (deactivate) a chip using signals like these allows us to select the correct chip for a particular address.

In order to see how this works in practice, we can design an address decoder for a very tiny memory, made up in part of "conventional RAM" and in part of I/O ports. You will recall from the beginning of the semester that the number of wires on the address bus

determines the number of memory locations to which we have access. Conversely, the number of locations to which we need access determines the number of address lines we need. For this example, there will be sixteen 8-bit memory locations to which we want access. This means that we will need four address lines (2^4 = 16). Assume that we have eight memory locations in conventional memory and eight I/O ports. If we assume that each conventional memory chip contains four memory locations, we will need two chips, or banks, of memory. Assume the I/O ports are on eight separate chips, each with one register (in reality, peripheral devices typically have several registers each, but the concept is the same). Arbitrarily, we will say that the eight lowest addresses will be on the conventional memory chips and the eight highest addresses will correspond to the I/O ports on peripheral devices. First, we will specify all sixteen addresses in binary, so that we can see easily which address lines will have high voltage and which will have low voltage for any particular address.

A3 A2 A1 A0
 0  0  0  0
 0  0  0  1
 0  0  1  0
 0  0  1  1
 0  1  0  0
 0  1  0  1
 0  1  1  0
 0  1  1  1
 1  0  0  0
 1  0  0  1
 1  0  1  0
 1  0  1  1
 1  1  0  0
 1  1  0  1
 1  1  1  0
 1  1  1  1

The first thing we need to do is figure out how we can use the first eight patterns to activate the conventional memory chips and the second eight patterns to activate the I/O ports. We will use the EN input to enable the appropriate chips. Later, we will need to worry about the CS input, because both the EN and CS inputs must receive appropriate signals in order to activate a chip, but for now, we will just consider how to "enable" some of the chips. We need something that is common to all eight of the first set of patterns and that distinguishes them from the second set of eight patterns. If you look at the values of the A3 line, you can see that this line is always 0 for the first eight addresses and always 1 for the second set of eight addresses. We can use this line to provide the EN signal for both conventional memory and I/O ports (provided that the CS input also receives an appropriate signal). Since A3 = 0 for the conventional memory addresses, we must invert it to obtain a 1 for the corresponding EN inputs. Since A3 = 1 for the I/O port addresses, we simply feed A3 directly to the EN inputs of the I/O ports.

Address Line A3 Connected to the EN Inputs

Now, we need to activate a particular chip. Since we have "used up" line A3, we are down to three bits of the address bus. Logically, we have "eliminated" the most significant bit of the address and we can now consider a three-bit address that we will use to access bank 0 or bank 1 of conventional memory (when A3 is 0) or a three-bit address that we will use to access one of the I/O ports (when A3 is 1). We will consider the conventional memory chips first. We have essentially the same problem that we had before. We need to find something in common for the first four addresses that distinguishes them from the second four addresses in order to activate (via the CS input) either bank 0 or bank 1, but not both. Again, if we look at the high-order bit of these 3-bit addresses, we see that line A2 carries 0 volts for the first four addresses and +5 volts for the second four addresses. We can use this line to activate (select) one or the other of the memory banks. In this case, we want bank 0 to be active if A2 is 0 and bank 1 to be active if A2 is 1. The CS input is active low, however, so we will need to invert line A2 for bank 1 and feed it straight through to bank 0 in order to accomplish our goal.

Address Line A2 Connected to the CS Inputs of the Conventional Memory Chips

We have a different problem to solve when we look at the I/O ports. We have eight different chips and we must somehow turn on exactly one of them with the CS inputs. Obviously, we cannot use a single bit. In fact, we will need to use all three of the remaining bits to differentiate the eight inputs. We want each pattern to activate one and only one of the chips. In other words, we want the pattern 000 to put a 0 on the CS input of port 0 and a 1 on the CS inputs of all the other ports; we want the pattern 001 to put a 0 on the CS input of port 1 and a 1 on the CS inputs of all the other ports, and so on. Fortunately, there is a commercially available chip that will do exactly what we want: a 3-to-8 decoder (this is similar to the 74LS138 or 8205 chip we saw in a lecture, but without the gating signals). We can feed lines A0, A1 and A2 to the inputs of this decoder and connect one of its active-low outputs to each of the I/O ports.

Address Lines A0, A1 and A2 Connected to the CS Inputs on the I/O Ports via a 3-to-8 Decoder

Finally, we can use lines A0 and A1 to select one of the four possible memory locations on either of the banks of conventional memory. You should notice that even though all the address lines reach every chip, unless a chip is active, the values on the low-order lines are immaterial.

Full 4-bit Address Decoder
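To check your understanding, here is a short worked example (an addition to the notes, following the scheme above):

Address 1101b: A3 = 1, so the I/O side is enabled; A2 A1 A0 = 101b = 5, so the decoder pulls the CS input of port 5 low and that port is selected.
Address 0110b: A3 = 0, so conventional memory is enabled; A2 = 1 selects bank 1; A1 A0 = 10b picks location 2 on that bank.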

Acknowledgement Diane Law & Jorge Orejel On to Lesson 9 Table of Contents

EL21C - Microprocessors I
Interrupts

In a typical computer system, the software can be divided into 3 possible groups. One is the Operating Loop, another is the Interrupt Service Routines, and the last is the BIOS/OS functions and subroutines. The Operating Loop is the main part of the system. It will usually end up being a sequence of calls to BIOS/OS subroutines, arranged in an order that accomplishes what we set out to do, with a little manipulation and data transfer in between. At the same time (at least it looks like it's happening at the same time), interrupts are being serviced as they happen.

In the 8085, there are thirteen (13) possible events that can trigger an interrupt. Five of them are external hardware interrupt inputs (TRAP, RST 7.5, 6.5, 5.5, and INTR), which can come from whatever hardware we've added to the 8085 that we deem to need servicing as soon as something happens. The remainder are software instructions that cause an interrupt when they are executed (RST 0 – 7).

To digress just a moment, there are two ways to service, or act on, events that happen in the system. One is to scan or poll them and the other is to use interrupts. Scanning is just what it sounds like. Each possible event is scanned in a sequence, one at a time. This is ok for things that don't require immediate action. Interrupts, on the other hand, cause the current process to be suspended temporarily and the event that caused the interrupt is serviced, or handled, immediately. The routine that is executed as a result of an interrupt is called the interrupt service routine (ISR), or more recently, the interrupt handler routine.

In the 8085, as with any CPU that has interrupt capability, there is a method by which the interrupt gets serviced in a timely manner. When the interrupt occurs, and the current instruction that is being processed is finished, the address of the next instruction to be executed is pushed onto the Stack. Then a jump is made to a dedicated location where the ISR is located. Some interrupts have their own vector, or unique location where the service routine starts. These are hard coded into the 8085 and can't be changed (see below).

TRAP - has highest priority and cannot be masked or disabled. A rising-edge pulse will cause a jump to location 0024H.

RST 7.5 - 2nd priority and can be masked or disabled. A rising-edge pulse will cause a jump to location 7.5 * 8 = 003CH. This interrupt is latched internally and must be reset before it can be used again.

RST 6.5 - 3rd priority and can be masked or disabled. A high logic level will cause a jump to location 6.5 * 8 = 0034H.

RST 5.5 - 4th priority and can be masked or disabled. A high logic level will cause a jump to location 5.5 * 8 = 002CH.

INTR - 5th priority and can be masked or disabled. A high logic level will cause a jump to a specific location as follows: when the interrupt request (INTR) is made, the CPU first completes its current execution. Provided no other interrupts are pending, the CPU will take the INTA pin low, thereby acknowledging the interrupt. It is up to the hardware device that first triggered the interrupt to now place an 8-bit number on the data bus, as the CPU will then read whatever number it finds on that data bus and do the following: multiply it by 8 and jump to the resulting address location. Since the 8-bit data bus can hold any number from 00 – FFH (0 – 255), this interrupt can actually jump you to any area of memory between 0*8 and 255*8, ie: 0000 and 07FFH (a 2K space). N.B: This interrupt does not save the PC on the stack, unlike all other hardware and software interrupts!

You will notice that there aren't many locations between vector addresses. What is normally done is that at the start of each vector address, a jump instruction (3 bytes) is placed, that jumps to the actual start of the service routine, which may be in RAM. This way the service routines can be anywhere in program memory. The vector address jumps to the service routine. There is more than enough room between each vector address to put a jump instruction. Looking at the table above, there are at least 8 locations for each of the vectors except RST 5.5, 6.5, and 7.5. When actually writing the software, at address 0000h will be a jump instruction that jumps around the other vector locations.

Besides being able to disable/enable all of the interrupts at once (DI / EI), ie: all except TRAP, there is a way to enable or disable them individually using the SIM instruction, and also to check their status using RIM. There are other things about interrupts that we will cover as they come up, but this lesson was to get you used to the idea of interrupts and what they're used for in a typical system. It's similar to the scene where one is standing at a busy intersection waiting for the traffic light to change, when a person comes up, taps us on the shoulder, and asks what time it is. It doesn't stop us from going across the street, it just temporarily interrupts us long enough to tell them what time it is. This is the essence of interrupts. They interrupt normal program execution long enough to handle some event that has occurred in the system.

Polling, or scanning, is the other method used to handle events in the system. It is much slower than interrupts because the servicing of any single event has to wait its turn in line while other events are checked to see if they have occurred. There can be any number of polled events but a limited number of interrupt driven events. The choice of which method to use is determined by the speed at which the event must be handled.
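As a small sketch of the "jump at the vector" arrangement described above (this example is an addition to the notes; the RAM address 2100H is made up, and PUSH, POP, EI and RET are covered elsewhere):

        .org 003CH      ; RST 7.5 vector (fixed by the 8085)
        JMP  isr75      ; 3-byte jump to the real service routine

        .org 2100H      ; hypothetical address where the service routine itself lives
isr75:  PUSH PSW        ; save A and the flags
        NOP             ; (handle the event here)
        POP  PSW        ; restore A and the flags
        EI              ; re-enable interrupts
        RET             ; return to the interrupted program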

The software interrupts are the instructions RST n, where n = 0 – 7. The value n is multiplied by 8 and the result forms the address that the program jumps to as its vector address, ie: RST 4 would jump to location 4*8 = 32 (20H). On to Lesson 10 Table of Contents

EL21C - Microprocessors I
The STACK

The stack is one of the most important things you must know when programming. Think of the stack as a deck of cards. When you put a card on the deck, it will be the top card. Then you put another card, then another. When you remove the cards, you remove them backwards, the last card first and so on. The stack works the same way: you put (push) words (addresses or register pairs) on the stack and then remove (pop) them backwards. That's called LIFO, Last In First Out. The 8085 uses a 16 bit register to know where the stack top is located, and that register is called the SP (Stack Pointer). There are instructions that allow you to modify its contents, but you should NOT change the contents of that register if you don't know what you're doing!

PUSH & POP

As you may have guessed, push and pop "push" bytes onto the stack and then take them off. When you push something, the stack pointer will decrease by 2 (the stack "grows" down, from higher addresses to lower) and then the register pair is loaded onto the stack. When you pop, the register pair is first lifted off the stack, and then SP increases by 2. N.B: Push and Pop only operate on words (2 bytes, ie: 16 bits). You can push (and pop) all register pairs: BC, DE, HL and PSW (Register A and the Flags). When you pop PSW, remember that all flags may be changed. You can't push an immediate value. If you want to, you'll have to load a register pair with the value and then push it. Perhaps it's worth noting that when you push something, the contents of the registers will still be the same; they won't be erased or anything. Also, if you push DE, you can pop it back as HL (you don't have to pop it back to the same register pair you got it from).

The stack is also updated when you CALL and RETurn from subroutines. The PC (program counter, which points at the next instruction to be executed) is pushed onto the stack and the address of the subroutine being called is loaded into the PC. When returning, the PC is loaded with the word popped from the top of the stack (TOS).

So, when is this useful? It's almost always used when you call subroutines. For example, you have an often used value stored in HL. You have to call a subroutine that you know will destroy HL (by destroy I mean that HL will be changed to another value, which you perhaps don't know). Instead of first saving HL in a memory location and then loading it back after the subroutine, you can push HL before calling and pop it back directly after the call. Of course, it's often better to use the pushes and pops inside the subroutine.

subroutine. All registers you know will be changed are often pushed in the beginning of a subroutine and then popped at the end, in reverse order! Don't forget - last in first out. If you want to only push one 8 bit register, you still have to push it's "friend". Therefore, be aware that if you want to store away D with pushing and popping, remember that E will also be changed back to what it was before. In those cases, if you don't want that to happen, you should try first to change register (try to store the information in E in another register if you can) or else you have to store it in a temporary variable. Before executing a program, you should keep track of your pushes and pops, since they are responsible for 99% of all computer crashes! For example, if you push HL and then forget to pop it back, the next RET instruction will cause a jump to HL, which can be anywhere in the ROM/RAM and the ccomputer will crash. Note however, it’s also a way to jump to the location stored in HL, but then you should really use the JMP instruction, to do the same thing. Push and pop doesn't change any flags, so you can use them between a compare and jump instructions, depending on a condition, which is often very useful.
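A minimal sketch of both techniques (the label SUBR and the value 1234H are made up for illustration, and SP is assumed to have been set up already):

        LXI  H,1234H    ; HL holds a value we still need after the call
        PUSH H          ; save HL before the call
        CALL SUBR       ; SUBR is free to change HL internally
        POP  H          ; HL is 1234H again and the stack is balanced
        HLT

SUBR:   PUSH B          ; save the registers this routine is going to change
        PUSH D
        MVI  B,00H      ; the routine's real work would go here
        POP  D          ; restore in reverse order: last in, first out
        POP  B
        RET             ; RET pops the return address that CALL pushed

Note how the POPs inside SUBR mirror the PUSHes exactly; if the routine pushed something and forgot to pop it, RET would pick up the wrong word as its return address.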

Acknowledgement: Jimmy Mårdell

On to Lesson 11
Table of Contents

EL21C - Microprocessors I

Analog to Digital Conversion

A/D and D/A chips allow us to interface with the analog (real) world. Most sensors and many output devices are analog in operation. What follows below is an attempt to fill in some gaps left over from the lectures.

Basic interface

To control an A/D converter from a microprocessor, the A/D must provide at least the following three signals:

Start Conversion (SC) – tells the A/D to begin a conversion
Output Enable (OE) – places the digital reading on the microprocessor's data bus/port
End of Conversion (EOC) – tells the microprocessor whether the A/D is still busy (optional)

These signals may come under other names but their operation would be similar.
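As an illustrative sketch only (the port addresses, bit assignments and active levels below are assumptions, not those of a specific device), a polled conversion on the 8085 might look like this:

        ; Hypothetical decoding: control latch at port 80H (bit 0 = SC, bit 1 = OE),
        ; status at port 81H (bit 0 = EOC, 1 = conversion finished), data at port 82H.
ADREAD: MVI A,01H
        OUT 80H         ; take SC high to start a conversion
        XRA A
        OUT 80H         ; and return it low again
WAIT:   IN  81H
        ANI 01H         ; test the EOC bit
        JZ  WAIT        ; keep polling until the conversion is finished
        MVI A,02H
        OUT 80H         ; assert OE so the converter drives its result onto the port
        IN  82H         ; read the 8-bit result
        MOV B,A         ; keep the reading while OE is released
        XRA A
        OUT 80H         ; release OE
        MOV A,B         ; result returned in A
        RET

In a real circuit the port numbers come from the system's address decoding, and the exact handshake comes from the particular converter's data sheet.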

Resolution

The resolution of an A/D conversion is defined as the smallest voltage difference that can be detected. The resolution is also referred to as the magnitude of the least significant bit (LSB) of the conversion. The fact that an A/D conversion has a finite (non-zero) resolution results in what is called quantization error. Quantization error arises because a continuously varying analog signal is represented digitally as a series of discrete steps differing by the resolution of the conversion process. The resolution depends on two other quantities:

Number of digital bits in the conversion

This is the length of the digital word that the A/D conversion produces as its output. Typical values are 8, 12 and 16 bits. The higher the number of bits, the longer the conversion takes and the finer the result. This number is fixed for a given converter. The number of discrete values that can be represented (quantized) by a digital word of a given length is equal to 2 raised to the number of bits. For example, if the converter is a 12-bit system then 2¹² = 4096 values can be represented.

Input voltage range

This is the total range in volts of the A/D converter and depends on the amount of gain that the converter has. Typically the amount of gain is adjustable. The resolution in volts is defined as:

Resolution = (Input Voltage Range) / (2ⁿ - 1), where n is the number of bits

For example:
Input Voltage Range = ±10 V = 20 V
Number of Bits = 12
Resolution = 20 V / (2¹² - 1) = 0.0049 volts

Note that if the Input Voltage Range is decreased to ±0.050 V (a total range of 0.10 V), the resolution = 0.10 / 4095 = 0.0000244 volts, or about 2.4e-5 volts. The choice of the input range value is critical in ensuring that we obtain enough resolution to accurately measure the input signal. In both cases above, the resolution expressed as a percentage of the Input Voltage Range is about the same:

Resolution / Input Voltage Range * 100
0.0049 / 20 * 100 = 0.0245 %
2.4e-5 / 0.10 * 100 ≈ 0.024 %

This is certainly low enough for almost any application. However, unless our signals use the full input voltage range we do not obtain this percentage resolution. Consider the case of measuring a signal of 0.1 volts using the ±10 volt input range. The percentage error in our measurement of this signal using a 12-bit A/D is:

0.0049 / 0.1 * 100 = 4.9 %

which is probably not acceptable. In this case an input range of ±1.0 or ±0.1 volts should be used, giving either 0.49 or 0.049 percent resolution.

An alternative to adjusting the gain of the A/D is to provide the A/D with signals that already use the entire ±10 volt range. This is generally preferable because of signal noise: each amplification stage introduces noise into the signal, and each succeeding stage amplifies the noise from the previous one. Thus it is generally preferable to have the first stage provide as much gain as possible, while the A/D, which is the last analog stage, has the least amount of gain necessary to resolve the signals. That is, the best procedure is to choose the gain range on the load cells, etc., so that a ±10 volt range on the A/D results in acceptable resolution of the signals.
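One more worked example, with numbers chosen purely for illustration (an 8-bit converter with an assumed 0 – 5 V unipolar input range, typical of the simple converters used with an 8-bit micro):

\[
\text{Resolution} = \frac{V_{\text{range}}}{2^{n} - 1} = \frac{5\ \text{V}}{2^{8} - 1} = \frac{5}{255} \approx 0.0196\ \text{V} \quad (\text{about } 19.6\ \text{mV per step})
\]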

Sampling Frequency

The basic idea behind sampling frequency considerations is that the A/D conversions must occur quickly enough to capture the rate at which the signal being measured is changing. The primary consideration is to obtain a reasonable amount of data: enough data that the changes in the signal can be resolved (sometimes this is called the time resolution of the A/D process), but not so much data that it becomes difficult to analyze. In fact, the lack of resolution in time can sometimes be put to advantage by deliberately sampling too slowly to resolve high-frequency noise in the signal.

The time resolution of an A/D conversion is usually expressed in terms of the maximum frequency that can be resolved. This frequency is called the Nyquist frequency and it is equal to half of the sampling frequency. The basic idea is that at least two data points are required in each cycle of a waveform to just start to resolve it (one data point at the maximum and one at the minimum of the waveform). Note that any shape of signal (sinusoidal, triangular or square wave) at the Nyquist frequency will appear the same digitally. Thus in practice a signal must be significantly below the Nyquist frequency (in the area of biomechanics it is often a factor of up to ten below) for it to be accurately measured and its waveform determined. For example, audio CDs are a digital recording of sound and use a sampling frequency of 44.1 kHz; this is roughly two and a half times the maximum frequency of human hearing, which is roughly 18 kHz.

The minimum frequency that can be resolved is determined by the length of time over which data is collected. The total data collection time is equal to the number of scans collected divided by the number of scans per second. For example, if 100 scans are collected at 100 scans/second then the total data collection time = 100 scans / (100 scans/sec) = 1 second. The lowest frequency that can be resolved has a period equal to the data collection time; for this example a period of 1 second corresponds to a frequency of 1 Hz.
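In symbols (my notation, simply summarising the two limits just described, with f_s the sampling frequency and T the total data collection time):

\[
f_{\text{Nyquist}} = \frac{f_s}{2}, \qquad f_{\min} = \frac{1}{T} = \frac{\text{scans per second}}{\text{number of scans}}
\]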

Multiple Channel A/D Conversion

Typically one wishes to measure several analog signals simultaneously. There are several methods by which this can be accomplished:

• One A/D for each channel (expensive!)
• One A/D multiplexed (switched) between the channels, using either:
  o a sample and hold amplifier for each channel (also expensive)
  o a single sample and hold after the multiplexer (our case)
  o no sample and hold amplifiers

To understand this you first need the following definition:

Sample and Hold Amplifier

This is an amplifier (usually with a gain of 1.0) which can store an analog signal for a period of time, typically just longer than an A/D conversion takes. A capacitor is a crude sample and hold amplifier, in that it can store a voltage for an amount of time determined by the size of the capacitor and the resistance (load) across it.

Because an A/D conversion takes time, reading multiple channels simultaneously is a problem. It is possible with multiple A/D converters, but that is expensive. Instead, if truly simultaneous readings are required (say, for real-time control), sample and hold amplifiers can be used to capture the signals at the same instant, and a single A/D converter can then convert the stored voltages one after the other. However, this multiplexed approach reduces the maximum sampling frequency, because the conversions are done serially instead of at the same time.

A sample and hold amplifier will also tend to increase the accuracy of the conversion. If the input voltage to an A/D converter changes while the conversion takes place, errors can result, because the A/D conversion process reads the input voltage several times (typically once for each bit) during the conversion. A sample and hold keeps the voltage the A/D converter sees from changing during the conversion.

How does it work?

The typical A/D converter uses some variation of a successive approximation scheme to determine what the input voltage is. This process uses a digital to analog converter (DAC, described below) and compares the output of the DAC to the input signal. The procedure works as follows. The A/D converter sets the highest order bit of the DAC to 1 and all other bits to 0. It then compares the DAC output voltage to the input signal (using an analog comparator). If the input is higher than the DAC output, the bit is left at 1; if the input is lower, the bit is set to 0. The procedure is then repeated with the next lower order bit, leaving the higher order bit(s) at their previously determined value(s). Thus the digital value of the input signal is successively approximated until finally the least significant bit (LSB) is determined and the conversion is complete. (A small software sketch of this bit-by-bit procedure is given at the end of the successive-approximation section below.) This whole process is controlled by a clock that must run at, at least, the number of bits times the maximum sampling frequency, which is why, for a given clock speed, a conversion with more bits takes longer.

A DAC is a much simpler device, essentially consisting of a summing amplifier with an input for each bit. When a bit is set, the summing amplifier adds in the voltage corresponding to that bit. This process operates at the speed at which the summing amplifier can settle (reach a stable output value), which is extremely short compared to the time an A/D conversion takes with its iterative successive approximation process.

Here are some A/D types:

Flash Analog-to-Digital Converter

Flash analog-to-digital converters are used in systems that need the highest speeds available. Some applications of flash ADCs include radar, high speed test equipment, medical imaging and digital communication. The difference between this and other types of ADC is that the input signal is processed in a parallel fashion. Flash converters operate by simultaneously comparing the input signal with unique reference levels spaced 1 least significant bit apart. This requires many front-end comparators and a large digital encoding section. Each comparator simultaneously generates an output to a priority encoder, which then produces the digital representation of the input signal level. For this to work, one comparator is needed for each quantization level; an 8-bit flash converter therefore requires 255 comparators along with high speed logic to encode the comparator outputs.

BASIC ARCHITECTURE OF A FLASH A/D CONVERTER.

As in the diagram, note that the input signal is measured simultaneously by each comparator, each of which has a unique reference generated by a resistor ladder. This produces a series of 1s and 0s: the comparators whose references lie below the input level output 1s, and those above output 0s. The comparator output pattern is called a thermometer code. Following the comparator outputs is the digital section, consisting of logic gates for encoding the thermometer codes. The thermometer decoder determines the point where the series of 1s and 0s forms a boundary, and the priority encoder uses this boundary threshold for conversion to a binary output (a small software illustration of this decoding step is given at the end of this section). The output from the priority encoder is then available to the system memory. It is important that the memory system be designed properly to prevent lost data, since every new conversion will overwrite the previous result.

CMOS Flash ADC

There are two types of flash ADC: bipolar and CMOS. The difference is in how the front ends of the comparators are built. CMOS is used for the ease of using analog switches and capacitors. CMOS flash converters can equal the speed of all except the bipolar designs built with emitter-coupled logic. The advantage of using CMOS is that the power consumption of the N and P channel devices is lower.

Bipolar Flash ADC

Using bipolar components gives a different frequency response limitation due to the transistors. Buffers are used to prevent the input and reference signals from excessive comparator loading; these buffers are responsible for the dynamic performance of the ADC. Although it is possible to use TTL or CMOS, ECL (emitter-coupled logic) is used for the highest speed. The high speed is possible by using ECL for the encoding stage, which requires a negative supply voltage; this means that the bipolar comparators also need a negative supply voltage. ECL is faster because it keeps the logic transistors out of the saturated state (they are restricted to either cutoff or the active region), which eliminates the charge storage delays that occur when a transistor is driven into saturation.
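In the converter itself this decoding is done by logic gates, but as a purely illustrative software analogue (the port address 42H and the 7-comparator width are assumptions), counting the 1s in a thermometer code yields the same binary result:

        ; Convert a 7-comparator thermometer code, read on bits 0-6 of port 42H
        ; (1s from the bottom up), into a 3-bit binary value in A.
THERM:  IN  42H         ; read the comparator outputs
        MVI B,07H       ; 7 comparator bits to examine
        MVI C,00H       ; C counts how many comparators read 1
TLOOP:  RRC             ; move the next comparator bit into the carry flag
        JNC TSKIP
        INR C           ; that reference level was below the input
TSKIP:  DCR B
        JNZ TLOOP
        MOV A,C         ; A = binary code, 0 to 7
        RET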

Tracking Analog-to-Digital Converter

A tracking converter uses an up/down counter and is faster than the digital-ramp (single- or multi-slope) ADCs because the counter is not reset after each sample; it tracks the analog input, hence the name. For this to work, the reference voltage produced by the counter and DAC should start out lower than the analog input. While the comparator output is high, the counter counts up, which raises the stair-step reference voltage until it reaches the input voltage. When the reference voltage passes the input voltage, the comparator output switches low and the counter starts counting down. If the analog input is decreasing, the counter continues counting down to track it; if the analog input is increasing, the counter goes down one count and then resumes counting up to follow the curve, or until the comparison occurs again. (A simple software analogue of this count-up/count-down behaviour is sketched below the figure.)

An 8-bit tracking ADC
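A minimal software sketch of the same idea, assuming a hypothetical DAC latch on port 40H and a comparator on bit 0 of port 41H (1 meaning the input is above the DAC output); note that this sketch never clamps the count at 00H or FFH:

        MVI C,00H       ; start the tracking code at zero
TRACK:  MOV A,C
        OUT 40H         ; drive the DAC with the current code
        IN  41H
        ANI 01H         ; test the comparator
        JZ  DOWN
        INR C           ; input above DAC output: count up
        JMP TRACK
DOWN:   DCR C           ; input below DAC output: count down
        JMP TRACK       ; the code in C follows the input continuously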

Single-Slope Analog-to-Digital Converter

Single-slope ADCs are appropriate for high-accuracy, high-resolution measurements where the input signal bandwidth is relatively low. Besides the accuracy, these converters offer a low-cost alternative to others such as the successive-approximation approach. Typical applications are digital voltmeters, weighing scales and process control. They are also found in battery-powered instrumentation because of their capability for very low power consumption.

As the name implies, a single-slope ADC uses only one ramp cycle to measure each input signal. The single-slope ADC can be used for up to about 14-bit accuracy; the reason it is limited to this is that the single-slope ADC is more susceptible to noise. Because this converter uses a fixed ramp for comparing against the input signal, any noise present at the comparator input while the ramp is near the threshold crossing can cause errors.

The basic idea behind the single-slope converter is to time how long it takes for a ramp to equal the input signal at a comparator. Absolute measurements require an accurate reference (Vref), matching the desired accuracy, whose timing is compared with that of the unknown input. The unknown input (Vun) can then be determined from:

Vun = Vref * (Tun / Tref)

where the ratio of the two times is directly proportional to the ratio of the two magnitudes. The main part of the single-slope analog to digital converter is the ramp voltage required for comparison with the input signal. If the ramp function is highly linear, the system errors will be completely cancelled: since each input is measured with the same ramp signal and hardware, the component tolerances are exactly the same for each measurement, and regardless of initial conditions or temperature drift no calibration or auto-zero function is required.
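A quick worked example with made-up numbers (Vref assumed to be 5 V, and the ramp assumed to take 1000 clock counts to reach Vref but only 400 counts to reach the unknown input):

\[
V_{un} = V_{ref}\,\frac{T_{un}}{T_{ref}} = 5\ \text{V} \times \frac{400}{1000} = 2\ \text{V}
\]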

Dual-Slope Analog-to-Digital Converter

Dual-slope ADCs operate on the principle of integrating the unknown input and then comparing the integration time with a reference cycle. The basic approach is to use two slopes (hence "dual"), as in this diagram:

This circuit operates by switching in the unknown input signal and integrating it for a fixed, full-scale number of counts. At the end of this cycle the reference, which is of opposite polarity, is switched in, and the ramp is driven back towards ground. The time it takes for the ramp to again reach the comparator threshold at ground is directly proportional to the unknown input signal. Since the circuit uses the same integrator time constant for both cycles, the component tolerances are the same for the integrate and de-integrate phases, so the errors cancel except for the offset voltage, which is additive during both cycles. The main benefits of this type are increased range, increased accuracy and resolution, and increased speed.

Successive-Approximation Analog-to-Digital Converter

The successive-approximation ADC is also called a sampling ADC. The term successive approximation comes from testing each bit of resolution from the most significant bit down to the least significant. These are the most popular converters today because there is a wide range of performance and levels of integration to fit different tasks. Newer successive-approximation converters sample the input only once per conversion, whereas earlier ones sampled it as many times as there are bits. Sampling converters have the advantage over those that don't sample in that they can tolerate input signals that change between bit tests. A non-sampling converter's performance is degraded if the input changes by more than 1/2 of a least significant bit during the conversion, because the input is sampled n times for every conversion, where n is the number of bits of resolution.

The basic principle behind this device is to use a DAC approximation of the input and compare it with the input for each bit of resolution. The most significant bit is tested first, by generating 1/2 Vref with the DAC and comparing it to the sampled input signal. The successive-approximation register (SAR) drives the DAC to produce estimates of the input signal, continuing this process down to the least significant bit; each estimate closes in more accurately on the input level. For each bit test, the comparator output determines whether the estimate should stay as a 1 or a 0 in the result register: if the comparator indicates that the estimated value is under the input level, the bit stays set; otherwise, the bit is reset in the result register.

An Example of successive-approximation (sampling) ADC:
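Separately, as a purely illustrative software version of this bit-testing loop (not a specific device; it reuses the same hypothetical DAC port 40H and comparator input, bit 0 of port 41H with 1 meaning the input is above the DAC output, as the tracking sketch earlier):

        ; Software successive approximation: build up an 8-bit result in C.
SARCNV: MVI B,80H       ; B holds the bit currently under test, starting with the MSB
        MVI C,00H       ; C holds the result decided so far
NXTBIT: MOV A,C
        ORA B           ; tentatively set the bit under test
        MOV C,A
        OUT 40H         ; drive the DAC with the trial code
        IN  41H
        ANI 01H         ; read the comparator
        JNZ KEEP        ; input above the DAC output: the bit stays set
        MOV A,C
        XRA B           ; input below the DAC output: clear the trial bit again
        MOV C,A
KEEP:   MOV A,B
        RRC             ; move down to the next lower bit
        MOV B,A
        JNC NXTBIT      ; the carry is set only after bit 0 has been tested
        MOV A,C         ; all 8 bits decided: result returned in A
        RET

Each pass halves the remaining uncertainty, which is why an n-bit conversion needs n comparator decisions and hence takes longer for more bits, as noted above.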

Acknowledgements: Ray Smith, Fardeen Haji, Eric Jeeboo, Richard Phagu

On to Lesson 12

Table of Contents

EL21C - Microprocessors I

Serial Communications

There are a number of excellent websites that more than adequately cover this topic. They are listed below for your use, and I acknowledge their authors. Click on any of these links for detailed notes on serial communications relevant to this course.

http://www.mindspring.com/~krwatson/serial.html
http://physinfo.ulb.ac.be/cit_courseware/datacomm/dc_000.htm

Table of Contents
