[Figure: layered view of a computer system – User, Shell, Kernel, Hardware.]

A Guide to the OPERATING SYSTEM for Bachelor's Students, by Ajit Kumar Singh

Dedicated To,

Department of Computer Science & Information Technology, Bihar National College, Patna University


ACKNOWLEDGEMENT

This piece of study of the Operating System is an outcome of the encouragement, guidance, help and assistance provided to me by my colleagues, senior faculty members, friends and my family members.

Here I take this opportunity to express my deep sense of gratitude to everybody whose role was crucial in the successful completion of this study report, especially to my senior students.

I offer special thanks to Prof. Dr. B. G. Prasad, whose permission enabled me to submit this study to the premier educational organization whose advanced IT curriculum provides a platform for students to make their dreams come true.

My primary goal is to provide a sufficient introduction to, and the details of, the Operating System, so that students can gain a sound working knowledge of operating systems. It presupposes only a basic knowledge of computing principles and concepts. While reading this guide, students may find the going somewhat tough, despite my treating the topics in depth but in a simple manner. In that case, reread the guide; if that still does not do the trick, only then may you need the help of an expert. Any remaining errors and inaccuracies are my responsibility, and any suggestions in this regard are warmly welcomed!

Last but not least, I pay my sincere regards to my Mother and my Daughter, to whom I am ever obliged for bearing with me from time to time.

Ajit Kumar Singh


PREFACE

Operating systems are an important aspect of any computer system. Likewise, the study of OS is an essential part of any computer-science education, and of course of the BCA (Voc.) programme of Patna University, Patna, Bihar, India. This study is intended as a text for an introductory course on OS for junior (1st-year) and senior (2nd- and 3rd-year) students.

In my study I did not concentrate on a particular commercial OS; instead, I tried to explain fundamental concepts of OS that are applicable to various brands and versions of operating systems.

I have attempted to wash out every error in this first edition of the guide, which has been reviewed by many Computer Science bachelor's students; but, as happens with any OS, "a few difficult-to-understand bugs shall remain", and I warmly welcome suggestions from students that may lead to the improvement of the next edition in the near future. I would be glad to hear suggestions from you. Any correspondence should be sent to:

Center of Computer Science & Information Technology [CCSIT], Bihar National College (Patna University), Ashok Rajpath, Patna-800004, Bihar, India.

Ajit Kumar Singh
MCDBA / MCSE / MCSD


CONTENTS

Unit 1 – Acknowledgement

Unit 2 – Preface

Unit 3 – Introduction to various categories of software

Unit 4 – Introduction to Operating System
   Need for Operating System; Linker, Loader, Macro and Utilities; Interaction of OS with Hardware and Users; Historical Prospects of Operating System; Types of Operating System; Functions of Operating System; Components of Operating System; Services of Operating System

Unit 5 – Memory Management
   Hierarchy of storage; The OS and memory management; Allocation & deallocation of memory space; First, Best and Worst Fit; Program execution life cycle (IBM-based PC); Brief overview of registers & data transfer; Memory management strategies; Contiguous memory management; Memory interference; Memory fragmentation: internal and external, and the differences between them; Memory compaction; Partitioned memory management: fixed and variable; Paged memory management; Virtual memory, its advantages & disadvantages; Paging & demand paging; Logical, physical & virtual addresses; Conversion of logical to physical addresses; Page thrashing; Page replacement algorithms: Least Recently Used, First In First Out; Cache memory and its types: S/W- and H/W-implemented; Spooling & buffers; Types of buffering: anticipatory & demand; Buffer & cache memory; Function of cache memory; Hit ratio & locality of reference; Security of main memory; Swapping

Unit 6 – Input/Output Device Management
   Input/output organization; Buses & types of buses; System vs I/O bus; Shared vs multipoint bus; Block & character devices; Synchronous & asynchronous devices; Data transfer methods: software-polled and interrupt-driven; Types of interrupts; Traps & their function; Maskable & non-maskable interrupts; Operation of interrupts; ISR & IVT; Interrupts vs traps; Exceptions; DMAC; Device controller & device driver; Types of DMA transfer; DMAC architecture & its functions

Unit 7 – CPU Management
   Introduction to processes; Types of processes; I/O- vs CPU-bound processes; Process states & the Process Control Block; Registers; Introduction to the kernel & system calls; Modes of CPU operation & the Process Status Word register; Privileged & user modes; Process communication; The critical section problem & its solutions; Process serialization; Process synchronization; Semaphores & their types; Monitors; Process creation; Process flow graphs; CoBegin-CoEnd construction; Multitasking & timesharing; Types of multitasking; Multitasking vs multiprogramming; Time quantum; Timer interrupts; Features of the OS needed for multiprogramming; Requirements of a multitasking OS; Process status register & process descriptor; CPU scheduling; Components of CPU scheduling: dispatcher and scheduler; Scheduling algorithms & their evaluation; Types of scheduling queues: I/O, process waiting & process ready queues; Decision modes: preemptive & non-preemptive; Priority functions; Arbitration rules; Types of CPU scheduling algorithms: First Come First Served, Shortest Job First, priority-based, round-robin; Single-level process waiting queue; Multilevel feedback queue; Selection criteria for CPU scheduling algorithms; Total & average waiting times of the different algorithms; Types of scheduler: short-term, mid-term & long-term; Long-term vs short-term scheduler

Unit 8 – Disk/File Management
   Functions of the OS file system; Directory structures: single, double, tree & acyclic; Disk scheduling algorithms: First Come First Served, Shortest Seek Time First, SCAN/elevator; File system concerns; Read-ahead and read-behind

Unit 9 – Protection & Security of Resources; Deadlock
   Types of resources; Logical & physical resources; Protection; Deadlock; The deadlock environment; Conditions for a deadlock situation: mutual exclusion, hold & wait, no preemption, circular wait; Prevention of deadlock; Recovery from deadlock; Security

Unit 10 – Distributed Systems
   Loosely coupled systems; Networking; Types of distributed systems

Introduction to various categories of Software

[Figure: categories of software. Software sits between the Hardware and the Humanware (users).]

Software
 • System Software
    – Operating System
    – Translating Programs: Assembler, Interpreter, Compiler, Special Compiler
    – Utilities
 • Application Software
    – Standard Software
    – Customized Software

Introduction to Operating System

The Operating System is the logical set of programs that efficiently manages the overall operation of the computer's resources/hardware and provides an interface for the user(s) to interact with the computer with a minimum amount of intervention.

[Figure: abstract view of a system's components. Users UserA ... UserN work through system and application programs (compiler, assembler, text editor, RDBMS, utilities, ...), which run on top of the shell and the Operating System, which in turn controls the Hardware. Interaction of OS with Hardware and User.]

Objectives/Purposes of Operating System:
 • Convenience – make the computer system convenient to use.
 • Efficiency – use the computer hardware in an efficient manner.
 • Ability to evolve – permit the effective development, testing and introduction of new system functions (such as executing user programs and making it easier to solve user problems) without interfering with existing services.


Need of Operating System:

1. User Interaction – bridging the gap between the user and the hardware (memory, CPU, communication subsystem, secondary memory).

2. Utilities – utilities are general routines of the computer included with the Operating System, having fundamental functions such as:
   1. Help on commands
   2. Error logging
   3. Diagnostics
   4. Time zone and date
   5. ASCII text editor

3. Resource[1] Sharing – hiding hardware-level limitations and complexities.

4. Handling the Initial Load Problem – using a small BOOTSTRAP LOADER. A bootloader is a piece of code that runs before any operating system is running. Bootloaders are used to boot operating systems; usually each operating system has a set of bootloaders specific to it. Bootloaders usually contain several ways to boot the OS kernel and also contain commands for debugging and/or modifying the kernel environment.

In modern computers the bootstrapping process begins with the CPU executing software contained in ROM (for example, the BIOS of an IBM PC) at a predefined address (the CPU is designed to execute this software after reset without outside help). This software contains rudimentary functionality to search for devices eligible to participate in booting and to load a small program from a special section (most commonly the boot sector) of the most promising device. If the BIOS finds a bootable device during the startup process, it loads and executes its boot sector. In the case of a hard drive, this is referred to as the master boot record (MBR) and is often not operating-system specific.

Usually, the MBR code checks the partition table for an active partition. If one is found, the MBR code loads that partition's boot sector and executes it. The boot sector is often operating-system specific; however, in most operating systems its main function is to load and execute a kernel, which continues startup. If there is no active partition, or the active partition's boot sector is invalid, the MBR may load a secondary boot loader and pass control to it; this secondary boot loader will select a partition (often via user input) and load its boot sector, which usually loads the corresponding operating system kernel.

5. Security & Protection – main memory is divided into a number of blocks, and every block has a memory Protection Key (PK). A PK is a small integer value of only a few bits, e.g. 000, 001, 010. The PK is assigned by the kernel[2] during the allotment of a block to a program, e.g. 001 for Read, 010 for Write, 100 for Execute. The OS maintains the PKs[3] of all allotted blocks in a Protection Key Area (PKA[4]) inside main memory. Hence a block can only be accessed, by any other instruction/program, according to its assigned PK. For example, a block containing a constant of a program has protection key value 001, so it cannot be modified by any other instruction of the program. Whenever any instruction of a program needs to access a block, the block can only be accessed according to its protection key maintained inside the PKA.

[1] A part of the computer system (S/W & H/W) that directly communicates with the user, e.g. printer, hard disk, file, cache, I/O devices. (Main memory itself is not treated as a resource here.)
[2] A program running at all times during the session.
[3] Protection Key.
[4] Protection Key Area.
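A minimal sketch in C of the protection-key check described in point 5. The key values (001 read, 010 write, 100 execute) follow the example above; the array and function names are invented for illustration and belong to no real kernel.

#include <stdio.h>

/* Hypothetical protection-key values, as in the example above:
   001 = read, 010 = write, 100 = execute. */
enum { PK_READ = 01, PK_WRITE = 02, PK_EXEC = 04 };

#define NUM_BLOCKS 8

/* Protection Key Area (PKA): one key per main-memory block. */
static unsigned char pka[NUM_BLOCKS];

/* A block may be accessed only in a mode its key allows. */
static int access_allowed(int block, unsigned char mode)
{
    return (pka[block] & mode) != 0;
}

int main(void)
{
    pka[3] = PK_READ;   /* block 3 holds a program constant: read-only */

    printf("read block 3:  %s\n", access_allowed(3, PK_READ)  ? "ok" : "denied");
    printf("write block 3: %s\n", access_allowed(3, PK_WRITE) ? "ok" : "denied");
    return 0;
}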


6. Low-Level Activity Controller: Linker / Loader.

Compilation -> Linking -> Loading -> Execution

Linker – collects all the translated programs/functions (in object code) and links them together properly for execution as one unit, called the Absolute Load Module.

Loader – loads the Absolute Load Module from the hard disk into main memory for further execution. Address binding of instructions and data to memory addresses can happen at three different stages:
 • Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
 • Load time: relocatable code must be generated if the memory location is not known at compile time.
 • Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another; this needs hardware support for address maps.

Dynamic Loading – a routine is not loaded until it is called. This gives better memory-space utilization, since an unused routine is never loaded. It is useful when large amounts of code are needed to handle infrequently occurring cases. No special support from the operating system is required; it is implemented through program design.

Dynamic Linking – linking is postponed until execution time. A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. The stub replaces itself with the address of the routine and executes the routine. The operating system is needed to check whether the routine is in the process's memory address space.

7. Low-Level Language Processors, such as the MACRO Processor & Assembler.

Macro – assembly/high-level languages often need a sequence of instructions to be used repeatedly in a program, which can be achieved most simply by using functions/procedures. But that is not so good for a small set of instructions. Hence we define a MACRO: a method of giving a name to a piece of text/expression/statement that is used repeatedly.

E.g.

#define SUM(x, y, z) ((x) + (y) + (z))

There is a macro processor (provided by the OS) that is responsible for replacing every occurrence of the defined macro name with the defined instructions. Whenever the assembler/compiler encounters a macro definition, it immediately stores the definition in a table for later statement substitution in the program.
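A short illustration of the substitution the macro processor performs, using the SUM macro defined above; a sketch for illustration only.

#include <stdio.h>

/* The macro from the example above, parenthesized so that it also
   expands safely inside larger expressions. */
#define SUM(x, y, z) ((x) + (y) + (z))

int main(void)
{
    /* The preprocessor textually replaces SUM(1, 2, 3) with
       ((1) + (2) + (3)) before compilation; no function call occurs. */
    printf("%d\n", SUM(1, 2, 3));       /* prints 6 */
    printf("%d\n", 2 * SUM(1, 2, 3));   /* the parentheses keep this 12 */
    return 0;
}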


Types of Operating System

By Size               By Level of Resource Sharing      By Type of Interaction
Micro                 Single User, Single Tasking       Batch Processing OS
Mini                  Single User, Multi Tasking        Real-Time OS
Mainframe             Multi User, Single Tasking        Interactive OS
Super                 Multi User, Multi Tasking         Distributed OS

Historical Prospects of Operating System

Generation/Duration       Type of Operating System
Gen-I    1949-1955        Batch Processing OS
Gen-II   1956-1965        Multiprogramming/multitasking (VM); complicated I/O systems; complicated systems of interrupts
Gen-III  1966-1975        Multiprogramming & time-sharing; large on-line storage devices; resource allocation & protection of critical items
Gen-IV   1975-1998        OS for PCs, LANs, WANs, parallel systems, supercomputers;
         and beyond       Distributed OS (LOCUS-1983, MICROS, AMOEBA, MACH), with the capability to seamlessly integrate myriad hardware, networks and protocols

Functions of Operating System

The functions of an operating system can be classified into two categories:

• Resource Sharing – to share/manage the hardware resources of a computer:
   – Processor management
   – Memory management
   – Storage management
   – Device management

• Resource Utilization – to provide applications with an easy/abstract way to use the hardware resources of the computer, without the applications having to know all of those details:
   – Application interface: the application programming interface is where the operating system creates a link between the application and the system, providing an active link between the program and the hardware of the computer.
   – User interface: the user interface provides a link between the user and the computer, which is one of the main functions of an OS.


Components of Operating System

The major components[5] of an OS are as follows:

a) Storage Manager – controls the utilization of main memory by different users' programs. Its specific tasks are:
   1) Allocation/deallocation of memory to programs.
   2) Protection of program areas in main memory from unauthorized access.
   3) Organization of memory usage by making use of swapping and demand paging.
   4) Monitoring of the memory blocks.

b) Process Manager – keeps track of each program regarding:
   1) the process state,
   2) the priority associated with the process,
   and controls the job scheduling techniques.

c) File System Manager – provides information support to the OS and users' programs. It allocates the requisite space for a file on secondary storage, and it takes care of file protection against illegal access. A file is a collection of related information defined by its creator. Commonly, files represent programs (in both source and object forms) and data. The operating system is responsible for the following activities in connection with file management:
   – File creation and deletion.
   – Directory creation and deletion.
   – Support of primitives for manipulating files and directories.
   – Mapping files onto secondary storage.
   – File backup on stable (nonvolatile) storage media.

d) I/O Control System – the logical IOCS provides the access methods and data organization for the data stored in files. The physical IOCS is responsible for carrying out device-level I/O operations to implement data transfers, and it optimizes device performance by scheduling the jobs waiting for each device.

e) Console Manager – responsible for changing the operating mode, i.e. from supervisor to user mode and vice versa.

f) Logon/Logoff Manager – performs the initial task of validating users.

g) Protection System – protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources. The protection mechanism must:
   – distinguish between authorized and unauthorized usage,
   – specify the controls to be imposed,
   – provide a means of enforcement.

h) Command-Interpreter System – the program that reads and interprets control statements is known variously as the control-card interpreter, the command-line interpreter, or the shell (in UNIX). Its function is to get and execute the next command statement.

i) Networking – see Distributed Systems.

Q. Explain the various components of an Operating System.

[5] Component design varies from OS to OS.


Services of Operating Systems

Program Execution – the system's capability to load a program into memory and to run it.

[Figure: Basic Instruction Cycle. An instruction is loaded from RAM into the Instruction Register (the Program Counter supplies its address); the Instruction Decoder decodes it; the Control Unit issues operation signals, loading operand addresses into the Memory Address Register and bringing data from memory through the Memory Data Register to the Arithmetic Logic Unit; intermediate or final results are held in a temporary register and sent back to memory. Processes awaiting loading sit in the process waiting/input queue.]
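The cycle in the figure can be mimicked by a toy fetch-decode-execute loop in C. The miniature machine below (its opcodes, accumulator and memory layout) is invented purely for illustration, but the loop follows the figure: fetch into an instruction register, advance the program counter, decode, execute.

#include <stdio.h>

/* Opcodes of a toy, purely illustrative machine. */
enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };

struct ins { int opcode; int addr; };

int main(void)
{
    int mem[8] = {5, 7, 0};                  /* data memory              */
    struct ins prog[] = {                    /* mem[2] = mem[0] + mem[1] */
        {OP_LOAD, 0}, {OP_ADD, 1}, {OP_STORE, 2}, {OP_HALT, 0}
    };
    int pc = 0, acc = 0, running = 1;

    while (running) {
        struct ins ir = prog[pc++];          /* fetch: IR <- prog[PC], PC++ */
        switch (ir.opcode) {                 /* decode, then execute        */
        case OP_LOAD:  acc = mem[ir.addr];   break;  /* memory -> accumulator */
        case OP_ADD:   acc += mem[ir.addr];  break;  /* ALU operation         */
        case OP_STORE: mem[ir.addr] = acc;   break;  /* result -> memory      */
        case OP_HALT:  running = 0;          break;
        }
    }
    printf("mem[2] = %d\n", mem[2]);         /* prints 12 */
    return 0;
}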

I/O Operations – since user programs cannot execute I/O operations directly, the operating system must provide some means of performing the required I/O operations.

File-System Manipulation – the capability of programs to read, write, create and delete files.

Communication – the exchange of information among processes executing either on the same system or on different systems tied together by a network. It is implemented via shared memory or message passing.

Error Detection – ensures correct computing by detecting errors in the CPU, memory hardware, I/O devices, or user programs.

Resource Allocation – allocates the available resources among multiple users or multiple jobs running at the same time.

Accounting – keeps track/records of users' activities, i.e. which user uses how much of which kinds of computer resources, for account billing or for accumulating usage statistics.

Protection – ensures that all accesses to the system's resources are controlled.

Note: the functions of the operating system are classified into Resource Sharing & Resource Utilization.


Memory Management

[Figure: Storage Hierarchy. Registers -> Cache -> Main Memory -> Electronic Disk (DRAM) -> HD / FD / CD / Magnetic Tape.]

The main memory is a vital resource of the computer system and needs to be managed effectively, due to its limited availability and high cost set against the demand for larger main memory for the smooth functioning of large programs. Hence the OS has memory management as one of its prime functions, which includes:

1. Allocation of space in main memory for the processes kept in the process waiting/input queue for further processing. The OS must decide how to satisfy a request from a process (of size n) out of the list of available free blocks. The algorithms for memory allocation are as follows:

a) First Fit – the OS chooses the first available block from the free block list[6] that is capable (equal or larger in size) of storing a process from the process waiting/input queue[7]. The scan for a suitable block (capable of storing the process of the requested size) begins at the head of the list and continues until such a block is found. First Fit may lead to a large memory gap if the size of the program is much smaller than the allocated block.

Process Waiting/Input Queue: P8, P65, P20, P60, P40
Free Block List (unsorted): A=10, B=60, D=30, C=80, E=70

First Fit – blocks A, B and D are already allocated; hence C is allocated to process P40, even though the most suitable candidate would be E=70.

When an N-word block is needed, the free-space list is scanned for the first block that has N or more words; it is then split into an N-word block, and the remainder is returned to the free-space list.

b) Best Fit – the OS chooses the smallest block from the free block list (ordered) that is capable of storing the process from the process waiting/input queue. The scan for the smallest block capable of storing the process of the requested size begins at the head of the list and continues until that block is found. Best Fit leads to small memory gaps, resulting in minimum internal fragmentation.

Process Waiting/Input Queue: P8, P65, P20, P60, P40
Free Block List (sorted): A=18, B=38, C=48, D=68, E=78

Best Fit – since the list is sorted, the smallest among all the blocks capable of storing process P40 is C=48.

When an N-word block is needed, the free-space list is scanned for the block with the minimum number of words greater than or equal to N. It is allocated as a unit if it has exactly N words, or it is split and the remainder returned to the free-space list.

[6] Contains the set of free blocks that may be allocated against the requests made by processes. It can be sorted/ordered or unsorted/unordered.
[7] The collection of processes on the disk that are waiting to be brought into memory for execution.


c) Worst Fit – the OS chooses the largest available block from the free block list (ordered) that is capable of storing the process from the process waiting/input queue. The scan for the largest block begins at the head of the list and continues until the largest suitable block is found. Worst Fit may lead to the largest memory gap if the size of the program is much smaller than the allocated block, and it may lead to external fragmentation.

Process Waiting/Input Queue: P8, P65, P20, P60, P40
Free Block List (sorted): A=18, B=38, C=48, D=68, E=78

Worst Fit – since the list is sorted, the largest block among all the blocks is E=78; hence it is selected for process P40.

Note

In terms of the average time to search for a block in the list, first-fit, best-fit and worst-fit require on average[8] (N+1)/2, (N+1)/2 and N probes respectively, where the total number of available blocks is N; the three policies also differ in the storage space they waste.

                        First-Fit               Best-Fit                Worst-Fit
Creation of partition   Static                  Static                  Dynamic
Ordering of list        Unordered               Ordered                 Ordered
Memory gap              Larger                  Small                   Largest
Memory partition size   Fixed equal             Fixed variable          Dynamic partition
Performance             Faster                  Slower                  Slower
Leads to                Int. + ext. frag.       Int. + ext. frag.       Only ext. frag.

Disadvantages: all three methods suffer from the external-fragmentation problem when the storage method is non-contiguous, with variable-length blocks, and dynamic. All three may lead to internal & external fragmentation to some extent.

Q. Explain the First, Best and Worst Fit memory allocation/management algorithms.

2. Deallocation of space after the completion of the execution of the process that was running, so that the same memory can be allocated to other processes (kept in the process waiting/input queue). The term liberation is used for the act of freeing an allocated block of storage using a liberation algorithm.

3. Provision of Virtual Memory.

4. Linking (creating the Absolute Load Module) and Loading (loading the Absolute Load Module) of a process.

5. Load-time and run-time dynamic binding.

[Figure: a source program is translated into an object program; the Linkage Editor combines it with other object modules and the system library into a Load Module (static binding); the Loader, together with dynamically loaded system libraries (dynamic binding), produces the binary image of the process.]

[8] Linear search requires (N+1)/2 probes on average; binary search requires log2 N.


6. Protection of memory locations containing system and user programs from interference by other programs.

7. Provision of abstraction, hiding the complexities of memory from the user.

Register – a high-speed storage device in the CPU, made of flip-flops, used to store data temporarily. The types of registers vary from processor to processor, but two types are most frequently used: a. general purpose, b. special purpose.

IR – contains the instruction to be executed by the CPU.
PC – contains the address of the next instruction to be executed.
MAR – contains the address of an operand.
MDR – contains the content/value of an operand.

Decoder – decoding is the process of extracting, from an instruction, the addresses of its operands in memory and the operation to be performed on those operands; it is controlled by the instruction decoder.

Temporary Register – for temporary tasks.

Data Transfer – transferring data from one register to another is performed over a single data bus that can be shared by several registers; this is known as GATING. Whenever a source register sends data, gating signals on the common bus activate a register for input from the bus. E.g.:

MOVE R1, R2 means: R2_OUT and R1_IN gating signals.

ADD R1, R2, R3 means: R3_OUT, X_IN; R2_OUT, Y_IN; X_OUT, Y_OUT, ADD, Z_IN; Z_OUT, R1_IN.

Memory Management Strategies [MMS]

Requirements of memory management:
A. Relocation   B. Protection   C. Sharing   D. Logical organization   E. Physical organization

A) Contiguous memory management strategies:
   A. Single partition
   B. Multiple partition:
      a. Fixed equal MMS
      b. Fixed variable MMS
      c. Dynamic partition MMS

B) Non-contiguous MMS:
   A. Paged memory management
   B. Segmentation MMS
   C. Combined page-segmentation scheme
   D. Set strategies


In contiguous memory management, the instructions and data of a program are stored contiguously/sequentially in main memory.

1. Single Partition MMS -> one partition for the OS and one other partition for the user's program. There is no concept of memory fragmentation or memory interference[10].

[Figure: Operating System | User Program Area]

2. Fixed Equal Multiple[11] Partition MMS -> in this scheme, the OS occupies low memory, and the rest of the memory, which is available for users' programs, is divided into fixed partitions of equal size; the size varies from OS to OS. E.g. with a main memory of 6 MB, 1 MB is occupied by the OS and the remaining 5 MB is divided into five partitions of 1 MB each (5 x 1 MB).

Job[12]   Size of Job   Partition ID   Size of Partition        Allotment        Unused Space
OS        1024 KB       P1             1 MB = 0000-1024 KB      OS               0000 KB
A         0800 KB       P2             1 MB = 1024-2048 KB      A                0224 KB
B         1000 KB       P3             1 MB = 2048-3072 KB      B                0024 KB
C         1024 KB       P4             1 MB = 3072-4096 KB      C                0000 KB
D         1200 KB       P5             1 MB = 4096-5120 KB      External frag.   1024 KB
E         0624 KB       P6             1 MB = 5120-6144 KB      E                0400 KB

(Assuming the size of memory is 6 MB = 6144 KB.)

Total internal fragmentation = 0 + 224 + 24 + 0 + 400 = 648 KB. The wastage of the entire partition P5 (1024 KB), because no process is capable of fitting into it, is known as external fragmentation (= 1024 KB).

External Fragmentation – if an entire partition is wasted because the size of the job is greater than the size of the partition, this is said to be external fragmentation. Therefore the total external fragmentation = 1024 KB.

Internal Fragmentation – the unused space within the partitions that are occupied by jobs is known as internal fragmentation. This means that some memory in an allotted partition may remain unused during the job's steps, which causes internal fragmentation. Many small multiprogramming OSs use this approach because of its simplicity, as the OS overheads are low: no memory-allotment actions are necessary during system operation. Therefore the total internal fragmentation = 0 + 224 + 24 + 0 + 400 = 648 KB.

3. Fixed Variable Partition MMS -> in this scheme, the OS occupies low memory, and the rest of the memory, which is available for users' programs, is divided into fixed partitions of unequal size (with one partition capable of storing the biggest job step of a program).

Job   Size of Job   Partition ID   Size of Partition           Allotment        Unused Space
OS    1024 KB       P1             1024 KB = 0000-1024 KB      OS               0000 KB
A     0590 KB       P2             0700 KB = 1024-1724 KB      A                0110 KB
B     0550 KB       P3             0600 KB = 1724-2324 KB      B                0050 KB
C     2000 KB       P4             1700 KB = 2324-4024 KB      External frag.   1700 KB
D     1200 KB       P5             1300 KB = 4024-5324 KB      D                0100 KB
E     0900 KB       P6             0820 KB = 5324-6144 KB      External frag.   0820 KB

(Assuming the size of memory is 6 MB = 6144 KB.)

The jobs arrive in the order A -> B -> C -> D -> E.

[10] The situation in which an instruction or data of one program attempts to occupy space in main memory that is already occupied by another program's instructions or data.
[11] A type of partitioned MMS.
[12] A set of computational steps packaged to run as a unit.


As per First Fit, job A (590 KB) is allotted to partition P2 (700 KB), wasting 110 KB as internal fragmentation, although allotting it to P3 (600 KB) would have been more efficient (wasting only 10 KB). The scheduling of jobs from the ready queue is based upon their arrival time in the queue. Total internal fragmentation = 0 + 110 + 50 + 100 = 260 KB. The wastage of the entire partitions P4 and P6 (1700 + 820 = 2520 KB), because no arriving process fits into them (C = 2000 KB and E = 900 KB are both too large), is external fragmentation (= 2520 KB).

4. Dynamic Partition Memory Management -> in this scheme, the OS occupies low memory and the rest of the memory, available for users' programs, is called the big hole. Partitions are created dynamically, so that each process is loaded into a partition of exactly the same size as that process. A partition is created, according to the size of the process to be loaded, from the available memory; otherwise the process has to wait until a large enough partition becomes available or is deallocated (by the compaction process) from some other process, which may increase the throughput time of the process. This scheme never gives rise to internal fragmentation, but it does suffer from external fragmentation.

To keep a record of allocated and deallocated memory partitions, the OS tracks them in a table known as the Partition Table:

Partition ID   Size
P1             1000 KB
P2             1460 KB
Pn             0456 KB

Job   Size of Job   Partition ID   Size of Partition          Allotment        Unused Space
OS    1024 KB       P1             0000-1024 = 1024 KB        OS               000 KB
A     0800 KB       P2             1024-1824 = 0800 KB        A                000 KB
B     0590 KB       P3             1824-2414 = 0590 KB        B                000 KB
C     1000 KB       P4             2414-3414 = 1000 KB        C                000 KB
D     0600 KB       P5             3414-4014 = 0600 KB        D                000 KB
E     1450 KB       P6             4014-5464 = 1450 KB        E                000 KB
F     1000 KB       P7             5464-6144 = 0680 KB        External frag.   680 KB

(Assuming the size of memory is 6 MB = 6144 KB.)

Total internal fragmentation = 0 + 0 + 0 + 0 + 0 + 0 = 0 KB. The wastage of the entire remaining area P7 (680 KB), because no process fits into it, is external fragmentation (= 680 KB). But whenever a job completes, the compaction process can make a larger room of 680 KB + the space of the deallocated process = N KB, where job F might then be allocated.

Memory Fragmentation -> the situation in which unused memory areas exist in main memory. Fragmentation occurs when space is available in memory but the instructions and data of a program are unable to use it. Fragmentation leads to poor performance of program execution. There are two types of fragmentation, as given below:

a) Internal Fragmentation -> whenever a job claims a memory block/partition of either fixed or variable size for allocation and gets the claimed partition (in the case Partition Size > Job Size) while leaving some unused memory space in that partition, the unused memory space is known as internal fragmentation. It generally occurs in the fixed equal/variable multiple-partition MMS. Either memory compaction, or the implementation of virtual memory[13], or the paging scheme can handle it.

[13] Please see the notes on Virtual Memory.


b) External Fragmentation -> whenever a job claims a memory block/partition of either fixed or variable size for allocation and does not get the claimed partition (in the case Partition Size < Job Size), so that the entire memory space of that partition is left unused, that entire unused space is known as external fragmentation. It is generally found in the fixed equal/variable/dynamic multiple-partition MMS. Implementing virtual memory, memory compaction or the paging scheme can handle it.

Another example of fragmentation:

Internal Fragmentation -> caused by an already allocated (internal) and running program in fixed-partition memory[14]; it can be handled by the implementation of virtual memory. E.g. suppose there are three programs, A, B and C, ready to be executed, each occupying adequate space for execution. Suppose A is the larger one and occupies 60 KB, while B and C are smaller and occupy 40 KB each. A is larger because it is expected to link and load program C; just after linking and loading C, program A becomes smaller and gives up part of its partition, which then lies unused. This is a clear case of internal fragmentation.

External Fragmentation -> caused by an external program during the allocation of that program in dynamic/variable-partition memory[15]; either memory compaction or the implementation of virtual memory[16] can handle it. E.g. suppose the task of A is over and its space is allocated to program X, which has a size of only 30 KB, leaving the remaining 30 KB of that block free. The same happens to program C, which is replaced by Z, of size 20 KB, leaving 20 KB free. Altogether there is 50 KB free inside the memory, but a program R of size 45 KB is unable to use these locations as one unit; this causes external memory fragmentation.

[Figure: three snapshots of memory. (1) A = 60 KB, B = 40 KB, C = 40 KB. (2) A shrinks, leaving unused space inside its own partition: internal fragmentation. (3) X = 30 KB and Z = 20 KB replace A and C, leaving two small non-contiguous holes: external fragmentation.]

Q. Explain the term memory fragmentation and the process by which it arises.

[14] Please see the notes on Fixed Partition Memory.
[15] Please see the notes on Variable Partition Memory.
[16] Please see the notes on Virtual Memory.


Internal Fragmentation vs External Fragmentation

1. Internal: free space is wasted within an allocated partition/block. External: an entire partition is wasted, or many small non-contiguous free blocks remain.
2. Internal: more complicated and dangerous; it causes poor performance of process execution and of the system. External: less complicated and dangerous compared with internal fragmentation.
3. Internal: generally caused by internal resources, i.e. the process currently loaded in memory, and by the paging (page-break) scheme. External: generally caused by external resources, i.e. a program expected to be loaded next into memory that requires more memory than the size of the partition; in segmentation, the wastage of an entire partition is external fragmentation.
4. Internal: generally found in the fixed equal/variable multiple-partition MMS. External: generally found in the fixed equal/variable/dynamic multiple-partition MMS.
5. Internal: washed out by implementing the segmentation scheme, memory compaction and dynamic-partition MMS. External: washed out by implementing virtual memory, memory compaction and the paging scheme.

Memory Compaction/Relocation -> the technique of collecting all the free spaces from internal and external fragmentation together into one block, so as to have one larger block as a unit. The mechanism involved is to shift processes so as to make a larger contiguous room in which a larger program from the waiting queue can be loaded as a unit, instead of in fragmented allocations. It is also known as memory relocation, where the logical address[17] is relocated dynamically at execution time. The compaction process can act upon Fixed and Variable Partition MMS.

Example, for fixed variable partition memory management:

Job   Size of Job   Partition ID   Size of Partition          Allotment            Unused Space
OS    1024 KB       P1             1024 KB = 0000-1024 KB     OS                   0000 KB
A     0550 KB       P2             0700 KB = 1024-1724 KB     A                    0150 KB
B     0590 KB       P3             0600 KB = 1724-2324 KB     B                    0010 KB
C     1900 KB       P4             1200 KB = 2324-3524 KB     External frag.[18]   1200 KB
D     0600 KB       P5             1000 KB = 3524-4524 KB     D                    0400 KB
E     1450 KB       P6             1600 KB = 4524-6144 KB     E                    0150 KB

(Assuming the size of memory is 6 MB = 6144 KB. Total internal fragmentation = 0 + 150 + 10 + 400 + 150 = 710 KB; total external fragmentation, due to job C not fitting anywhere, = 1200 KB.)

The internally and externally fragmented memory space can be collected together by relocating the boundary of each partition; the memory is then said to be compacted, yielding a partition of 710 + 1200 = 1910 KB, which is enough for the allocation of job C. The compaction process is an additional overhead on the OS.

In non-contiguous memory management, the instructions and data of a program are stored non-adjacently (randomly) in main memory.

a) Paged Memory Management: Virtual Memory -> whenever a large program does not fit inside any one partition of main memory, the program/user job is divided into fixed-size blocks known as overlays/pages and stored on the hard disk.

Job   Size of Job   Partition ID   Size of Partition        Allotment
OS    1024 KB       P1             0000-1024 = 1024 KB      OS
A     1163 KB       P2             Free area                Unable to load
B     1050 KB       P3             Free area                Unable to load

(Assuming the size of memory is only 2 MB = 2048 KB.)

[17] Also known as the virtual address.
[18] Frag. = fragmentation.

Both jobs A and B are unable to be allocated using any contiguous memory allocation strategy.

[Figure: virtual memory. The job is divided into fixed-size pages/overlays held on the hard disk (at addresses 0, 4, 8, ..., 4096); individual pages are brought into frames of main memory as needed. Assumption: the capacity of each block/page is only 4 KB and the capacity of main memory is 4 MB.]

No more increment in the response or turnaround time. Additional load on OS to manage the virtual memory.

Paging[20] -> the technique that automatically performs memory management (allocation) for overlays/pages, from virtual memory (secondary storage) into arbitrary (random) locations in main memory, according to the memory words available in the system. (A word-addressable machine of 16 bits can have altogether 2^16 = 65536 locations.)

Word Address   Byte Addresses
0              0   1   2   3
4              4   5   6   7
8              8   9   10  11
12             12  13  14  15

Machines typically use word addresses such as 0, 4, 8, and only rarely addresses such as 2, 5, 7.

Page – in the virtual storage technique, the logical address space is divided into fixed-size blocks; each such block is used as a page.

Paging is an efficient MMS using a non-contiguous memory allocation method. The basic idea behind paging is that physical memory is divided into fixed-size blocks called frames, where the size of a page in virtual memory is the same as the size[21] of a frame in main memory. The OS maintains a data structure, the Page Table[22], for mapping purposes. The page table specifies some useful information: which frame is allocated, which frames are available, how many total frames there are, and so on.

[19] Overlaying is the technique of repeatedly using the same area of internal storage during different stages of a program.
[20] A high paging rate may cause a high I/O rate.
[21] Generally the frame and page size is 4 KB, but it may vary from OS to OS.
[22] Each OS has its own method of storing page tables; most OSs allocate a page table for each process.


Logical Memory (Job 1)     Page Map Table for Job 1      Main Memory
Page 0                     Page No.   Frame No.          Frame 0: OS
Page 1                     0          3                  Frame 1: Page 3 (J1)
Page 2                     1          6                  Frame 2: Page 2 (J1)
Page 3                     2          2                  Frame 3: Page 0 (J1)
                           3          1                  Frame 4: free
                                                         Frame 5: free
                                                         Frame 6: Page 1 (J1)

Logical Address -> generated by the CPU; a user program can deal only with logical addresses. It consists of a segment/page number along with an offset value/displacement. Logical address space -> the set of all logical addresses.

Physical Address -> a memory location available in main memory, numbered from 0 upward (e.g. 0 to 4095 within a 4 KB frame). The physical address is kept in the memory address register and is seen by the memory unit. Physical address space -> the set of all physical addresses.

Q. Explain paged memory management & the concept of virtual memory.
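How a page map table turns a logical address into a physical address can be shown in a few lines of C. This sketch uses the Job-1 table from the figure above and assumes 4 KB pages; the function name is invented for illustration.

#include <stdio.h>

#define PAGE_SIZE 4096   /* 4 KB pages/frames, as assumed in the text */

/* Page map table for Job 1, from the figure above:
   page 0 -> frame 3, page 1 -> frame 6, page 2 -> frame 2, page 3 -> frame 1. */
static const int pmt[] = {3, 6, 2, 1};

/* physical = frame(page) * frame size + offset */
static unsigned long to_physical(unsigned long logical)
{
    unsigned long page   = logical / PAGE_SIZE;
    unsigned long offset = logical % PAGE_SIZE;
    return (unsigned long)pmt[page] * PAGE_SIZE + offset;
}

int main(void)
{
    /* Byte 100 of page 1 lands at offset 100 of frame 6. */
    unsigned long logical = 1UL * PAGE_SIZE + 100;
    printf("logical %lu -> physical %lu\n",
           logical, to_physical(logical));   /* 4196 -> 6*4096+100 = 24676 */
    return 0;
}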

Segment – a logical grouping of instructions, such as a subroutine, array or data area, of which every program consists. Dividing a program into segments and managing those segments is known as segmentation. Each segment has a name and a length (segments are of variable size).

Address within a segment = segment no. + offset within the segment.

Seg No.   Base Address   Limit[23]
0         1500           1000
1         0900           150
2         3000           400
3         4200           550

Segment Map Table – used for address relocation.

Segmentation / Segment Scheme – the memory allocation scheme subject to external fragmentation is called segmentation; main memory is divided into a number of segments whose sizes need not be the same. Each process is divided into a number of segments. A process is loaded by loading all of its segments into dynamic partitions, which need not be contiguous.

[23] The limits need not be the same.


Q. A 2500-byte-long segment begins at physical address 9010. Assuming that a location can hold one byte, what will be the physical addresses of byte 187 and byte 3004 of a file named seg.txt?

Given: the segment size is 2500 bytes, and Segment-1 begins at 9010.
Physical address of byte 187 = 9010 + 187 = 9197, a location within the same segment.
Segment-2 begins at 9010 + 2500 = 11510.
Physical address of byte 3004 = 9010 + 3004 = 12014, a location that falls in the next segment.

Address Mapping/Conversion -> as soon as the scheduler schedules a process from the process ready queue, the address of the scheduled page/segment (containing instructions and operands), generated by the CPU, is known as the logical address. As soon as the address of an operand is brought into the MAR, it becomes a physical address, formed by adding the base address of the segment to the offset/displacement value of the page. E.g. if a page (P2) is to be loaded from the process ready queue, the CPU automatically assigns it a logical address, i.e. segment no. 1 and offset 280. The CPU takes care that the offset is less than the segment length kept in the limit register. Now the base address of segment 1, e.g. 7000, is obtained from the base register, and the offset 280 is added to it to give the physical address 7280 of that page/segment. This entire process is known as address relocation, and it loads the page into memory at the evaluated physical address.

Logical Address (generated by the CPU):
Page No.   Segment No. (20 bits)   Offset Value (12 bits)
P1         0                       140
P2         1                       280
P3         2                       360

Physical Address (held on the CPU):
Segment No.   Base Address
0             00000
1             07000
2             14000

[Figure: dynamic relocation. The CPU generates a logical address, which is checked against the Limit Register; the Base Register value is added to it to form the physical address used to access RAM. Processes await loading in the process waiting/input queue.]

There is a segment[24] register in the CPU that comprises two registers:

A) Base Register / Relocation Register – points to/keeps the starting address of each segment inside main memory, e.g. 0, 4, 8, 12, ..., 4096 if the capacity of every segment is 4 KB, held in tabular form (the Segment Table).

[24] Memory is divided into a number of blocks of similar capacity known as segments, and the process is known as segmentation. Segmentation is the memory-management scheme that supports the user's view of memory. A program is a collection of segments. A segment is a logical unit such as: main program, procedure, function, local variables, global variables, common block, stack, symbol table, arrays.


B) Limit Register – contains the length of every segment, i.e. how long a segment is; the offset in a logical address must be less than the length of the respective segment. Hence each segment in main memory has corresponding base & limit register entries, kept in the Segment Register, specifying the starting address and length of each segment of main memory. Every page of virtual memory has a logical address that consists of two parts:

a) 20-bit Segment No. – contains the position of the segment (1, 2, 3, 4, ...) inside main memory where the corresponding page is expected to be loaded from virtual memory into main memory; it is used as an index into the Segment Table.

b) 12-bit Offset – contains the position of the page inside a particular segment of main memory, which can be at most 2^12 for a 32-bit addressable machine.

Whenever a page is loaded from virtual memory into a particular segment of main memory, the address in the concerned segment's base register is added to the offset of the page to obtain the physical address. Hence a segment can hold several disk pages, which are differentiated in terms of their physical addresses by using the offset and the limit register of a program.

Q. Assuming the size of the logical address space to be 2^32 and the page size to be 2^12, find the bits required for the page number and the page offset in the logical address.
As we know, logical address space = 2^m = 2^32, hence m = 32; page size = 2^n = 2^12, hence n = 12.
Number of bits for the page number = high-order bits = m - n = 32 - 12 = 20 bits.
Number of bits for the page offset = low-order bits = n = 12 bits.
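The split worked out above (20 bits of page number, 12 bits of offset) amounts to a pair of bit operations in C; a sketch for illustration, with an arbitrary example address.

#include <stdio.h>

#define OFFSET_BITS 12
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)   /* 0xFFF: low 12 bits */

int main(void)
{
    unsigned int logical = 0x00017ABCu;            /* an arbitrary 32-bit address */
    unsigned int page    = logical >> OFFSET_BITS; /* high-order 20 bits */
    unsigned int offset  = logical & OFFSET_MASK;  /* low-order 12 bits  */

    printf("page = 0x%X, offset = 0x%X\n", page, offset); /* 0x17, 0xABC */
    return 0;
}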

Virtual Address -> the compile-time and load-time address-binding methods generate identical logical and physical addresses, whereas the execution-time address-binding scheme generates differing logical and physical addresses. In that situation the logical address is called a virtual address.

Demand Paging -> whenever a referenced page is not available in main memory, a Page Fault occurs, and the OS treats it by reading the required page from secondary storage while mapping memory and making entries in the page table. It then repeats the instruction that caused the page fault. This method of handling main memory in such situations is known as demand paging. E.g. when the execution of a program begins, a page fault occurs, since the required page is not yet available in memory; the OS demands and loads the required page into main memory, replacing an existing page if necessary by using a specific page replacement algorithm. Thus, whenever a program begins its execution, page faults initially occur, and the OS performs the task of demand paging using the OS's Page Fault Handler program.

Belady's Anomaly – Belady devised page reference sequences (e.g. 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5) for which the page fault rate increases as the number of frames increases. Under the FIFO page replacement policy, Belady's anomaly occurs.

Q. Explain, with diagrams, segmentation & demand paging.

Swapping – a process needs to be in memory to be executed. In a multiprogramming environment there are situations in which the CPU must switch from one program to another for their execution; whenever the CPU switches from one process to another, the space occupied by the previous process needlessly consumes main memory, which may lead to memory scarcity. To overcome this problem, as soon as the quantum time of a process expires, the memory manager swaps out the process that has just finished its quantum and swaps in another process from the storage medium, so as to make more and more space available in main memory for the next processes. When each process completes its quantum time, it is swapped with another process by the memory manager. Normally, a process that is swapped out will be swapped back in to the same location that it occupied previously.

[Figure: swapping. Processes Px, Py, ... are swapped out of the user area of RAM to the backing store and swapped back in; the Operating System region stays resident, and arriving processes wait in the process waiting/input queue.]

Swapping requires a backing store that must be large enough to accommodate copies of all memory images for all users, and it must provide direct access to these memory images.

Q. Explain the swapping management scheme.

Page Thrashing[25] -> physical memory is divided into a number of fixed-size blocks called frames, while logical memory is also divided into a number of blocks of the same size, known as pages. The backing store is likewise divided into fixed-size blocks of the same size as the memory frames. A poor paging algorithm may cause thrashing. Thrashing is a situation in which two (or more) processes, A and B, are currently being executed (multiprogramming). If the number of frames required by process A is more than the number of free frames available in memory, A encounters page faults and swaps out occupied pages of the other process B to make room for its own. The context then switches to process B, which likewise encounters page faults and swaps out pages of process A from the occupied frames. Because page faults recur repeatedly at very short intervals, the effective level of multiprogramming decreases. Such high-frequency page faulting requires more time for swap-out and swap-in operations than for the execution of the processes themselves; this causes thrashing.

[Figure: six frames in total, five of them occupied by process B, with process A competing for frames.]

1. Initially, process B occupies 5 frames.
2. Process A requires 2 frames, but only one frame is available; hence process A incurs a page fault.
3. One frame (a page of B) is swapped out.
4. Process A gets loaded and executed.
5. The context switches to process B, which encounters a page fault; a page of process B is loaded and executed.
6. A page of process A is swapped out.

When swap-ins and swap-outs of processes occur repeatedly, again and again on each context switch, thrashing results. The situation in which system storage-management routines, such as a garbage collector, are executing almost all the time is also called thrashing. Consequently, CPU utilization decreases, because the CPU is engaged in swapping operations rather than in the execution of processes.

Note -> the effective memory access time increases due to the high frequency of page faults.

[25] High paging activity.


How does the OS do Demand Paging? Whenever a page fault occurs inside main memory, the Page Fault Handler (a small program) of the OS checks the Page Table (which contains the list of all pages currently available in main memory, maintained by the OS), determines the page to be read, and assigns an available free page frame, obtained from the Free Page Frame List (which contains the list of all available free page frames inside memory, maintained by the OS), to the disk driver. The disk driver reads the required page from virtual memory and puts it into the page frame; the page fault situation is thereby ruled out and the concept of demand paging is employed.

[Figure: physical memory model. OS memory area and user pages, with the Page Table, Free Page Frame List, Modified Frame List and the page replacement algorithm managing the available physical memory.]
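The steps above can be condensed into a toy demand-paging simulation in C. Everything here is a simplified stand-in, not a real kernel's data structure: the page table maps pages to frames (-1 = not present), a free frame is preferred, and when none is free a victim is chosen by a trivial rotating pointer standing in for the page replacement algorithm.

#include <stdio.h>

#define NPAGES  8
#define NFRAMES 3

static int page_table[NPAGES];   /* page -> frame, or -1 if not present    */
static int frame_owner[NFRAMES]; /* frame -> page, or -1 if free           */
static int next_victim = 0;      /* trivial stand-in for page replacement  */
static int faults = 0;

static int page_fault_handler(int page)
{
    int frame = -1;
    for (int f = 0; f < NFRAMES; f++)         /* consult the free frame list */
        if (frame_owner[f] < 0) { frame = f; break; }
    if (frame < 0) {                          /* none free: evict a victim   */
        frame = next_victim;
        next_victim = (next_victim + 1) % NFRAMES;
        page_table[frame_owner[frame]] = -1;  /* victim page is gone         */
    }
    /* here the disk driver would read the page into the frame */
    frame_owner[frame] = page;
    page_table[page] = frame;                 /* update the page table       */
    faults++;
    return frame;                             /* the faulting instruction is retried */
}

static int access(int page)
{
    if (page_table[page] >= 0) return page_table[page];  /* hit           */
    return page_fault_handler(page);                     /* miss -> fault */
}

int main(void)
{
    for (int i = 0; i < NPAGES;  i++) page_table[i]  = -1;
    for (int f = 0; f < NFRAMES; f++) frame_owner[f] = -1;

    int refs[] = {1, 2, 3, 1, 4, 1};
    for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
        access(refs[i]);
    printf("page faults: %d\n", faults);   /* 5: only the second '1' hits */
    return 0;
}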

Paging Scheme
1. Main memory is partitioned into frames/blocks of fixed, equal size.
2. The logical address space is divided into pages by the compiler or the memory management unit.
3. Suffers from internal fragmentation, or page break.
4. Avoids external fragmentation.
5. The OS maintains a free-frame list and need not search for a free frame.
6. The OS maintains a page table for the mapping between frames and pages.
7. Does not support the user's view of memory.
8. The processor uses the page number and the displacement/offset to calculate the absolute/physical address.
9. Multilevel paging is possible.
10. Sharing of pages is allowed.
11. Compaction is applicable.

Segmentation[26] Scheme
1. Main memory is partitioned into segments of fixed but unequal (variable) size.
2. The logical address space is divided into segments, specified by the programmer.
3. Suffers from external fragmentation.
4. Avoids internal fragmentation.
5. The OS maintains the particulars of the available memory.
6. The OS maintains a segment map table for the mapping purpose.
7. Supports the user's view of memory.
8. The processor uses the segment number and the displacement/offset to calculate the absolute/physical address.
9. Multilevel segmentation is possible, but of no use.
10. Sharing of segments is allowed.
11. Compaction is applicable.

[26] Segmented memory can be paged.

Page Replacement Algorithms -> the OS replaces an existing page in main memory with a page from virtual memory whenever a page fault occurs and demand paging is in use. An effective page replacement algorithm is one that is capable of producing the fewest page faults and that is simple. The situation of page replacement arises only when there is no more free room/frame available in main memory for demanding and loading the required page from virtual memory. A few of the page replacement algorithms frequently used by most OSs are as follows:

1) Least Recently Used (LRU) Page Replacement Algorithm -> in the LRU algorithm, the page that has been least recently used is the one replaced by the demanded page from virtual memory. But how can we determine the page that has been least recently used? By LRU Approximation: the process of determining and deciding the least recently used page among all the available pages inside main memory. The main objective of such an algorithm is to perform the task of LRU approximation; the LRU page is then replaced so that another page can be loaded. Some of the LRU approximation algorithms most frequently used by OSs are as follows:

a) Counter Approach -> most OSs use the counter approach for LRU approximation. In the counter-approach LRU approximation, every page inside main memory has a corresponding counter (an integer value), initially set to 0 whenever the page is created/loaded in main memory. Whenever a page is referenced, the counter of that page is reset to 0, and the counters of all other pages are incremented by 1 by the daemon process[27] of the algorithm. Hence, whenever a page fault occurs, the page having the highest counter value is the one to be replaced.

b) Reference Bit Approach -> in the reference-bit LRU approximation, every page inside main memory has a reference bit, containing 1 for a used page and 0 for an unused one. If a page is currently referenced, its reference bit is set to 1 and the other pages' reference bits are set to 0. Pages having their reference bit set to 0 are considered candidates for replacement. But this algorithm is quite puzzling to apply, and so most OSs do not prefer it compared with the counter approximation.
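The counter approach can be sketched directly. In this minimal illustration (the frame arrays and reference string are invented), every reference increments all counters and resets the referenced page's counter to 0, so the largest counter always marks the least recently used page.

#include <stdio.h>

#define NFRAMES 3

static int page_in[NFRAMES] = {-1, -1, -1};  /* page held by each frame    */
static int counter[NFRAMES];                 /* LRU-approximation counters */

static void touch(int frame)                 /* the "daemon" bookkeeping   */
{
    for (int f = 0; f < NFRAMES; f++) counter[f]++;
    counter[frame] = 0;                      /* most recently used         */
}

static int victim(void)                      /* frame with the highest counter */
{
    int v = 0;
    for (int f = 1; f < NFRAMES; f++)
        if (counter[f] > counter[v]) v = f;
    return v;
}

int main(void)
{
    int refs[] = {10, 20, 30, 10, 40};       /* page numbers */
    for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++) {
        int hit = -1;
        for (int f = 0; f < NFRAMES; f++)
            if (page_in[f] == refs[i]) hit = f;
        if (hit < 0) {                       /* page fault */
            for (int f = 0; f < NFRAMES; f++)
                if (page_in[f] < 0) { hit = f; break; }  /* free frame?     */
            if (hit < 0) hit = victim();     /* else evict the LRU page     */
            page_in[hit] = refs[i];
        }
        touch(hit);
    }
    printf("frames now hold pages %d %d %d\n",
           page_in[0], page_in[1], page_in[2]);  /* 10 40 30: 20 was LRU */
    return 0;
}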

2) First In First Out (FIFO) Page Replacement Algorithm -> in the FIFO algorithm, the page that was loaded first among the several pages kept in main memory is the one replaced by the page from virtual memory. All blocks inside main memory have their physical addresses in ascending order. Suppose there are four blocks in main memory, addressed a1 < a2 < a3 < a4, holding pages as follows:

a1 <- p1,  a2 <- p2,  a3 <- p3,  a4 <- p4

Consequently, the first page to be replaced is p1, because it was first in and hence will be first out; thereafter p2, then p3, and then p4.

Q. Consider the following page references during a given time interval, for a memory consisting of 3 frames: B, C, A, A, C, D, B, E, F, A, B, D, A, F, A. Using (a) FIFO and (b) LRU replacement strategies, compare the results.

FIFO Replacement Algorithm: replaces the page that entered main memory first.

Ref:      B  C  A  A  C  D  B  E  F  A  B  D  A  F  A
Frame-1:  B  B  B  B  B  D  D  D  F  F  F  D  D  D  D
Frame-2:  -  C  C  C  C  C  B  B  B  A  A  A  A  F  F
Frame-3:  -  -  A  A  A  A  A  E  E  E  B  B  B  B  A
Fault:    ^  ^  ^        ^  ^  ^  ^  ^  ^  ^     ^  ^

Total page references = 15; page hits = 3; page faults (misses) = 12.

LRU Replacement28 Algorithm: replaces the page that has not been used for the longest period of time.

Ref:      B  C  A  A  C  D  B  E  F  A  B  D  A  F  A
Frame-1:  B  B  B  B  B  D  D  D  F  F  F  D  D  D  D
Frame-2:  -  C  C  C  C  C  C  E  E  E  B  B  B  F  F
Frame-3:  -  -  A  A  A  A  B  B  B  A  A  A  A  A  A
Fault:    ^  ^  ^        ^  ^  ^  ^  ^  ^  ^     ^

Total references = 15; page faults (^) = 11; page hits = 4.

Comparison: on this reference string, LRU causes one page fault fewer (11) than FIFO (12).
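These counts can be checked mechanically. Below is a small, self-contained C program (our own sketch, not part of the syllabus) that replays the reference string under both policies; for FIFO the timestamp records load time and is never refreshed, while for LRU it records the last use:

    #include <stdio.h>
    #include <string.h>

    #define NFRAMES 3

    /* lru = 0 -> FIFO (evict oldest load); lru = 1 -> LRU (evict oldest use). */
    static int faults(const char *refs, int n, int lru) {
        char frame[NFRAMES];
        int  stamp[NFRAMES];      /* load time (FIFO) or last-use time (LRU) */
        int  used = 0, miss = 0;

        for (int t = 0; t < n; t++) {
            int i, hit = -1;
            for (i = 0; i < used; i++)
                if (frame[i] == refs[t]) { hit = i; break; }
            if (hit >= 0) {               /* page hit                          */
                if (lru) stamp[hit] = t;  /* LRU refreshes recency; FIFO not   */
                continue;
            }
            miss++;
            if (used < NFRAMES) {         /* a free frame is still available   */
                frame[used] = refs[t]; stamp[used] = t; used++;
            } else {                      /* evict the frame with oldest stamp */
                int v = 0;
                for (i = 1; i < NFRAMES; i++)
                    if (stamp[i] < stamp[v]) v = i;
                frame[v] = refs[t]; stamp[v] = t;
            }
        }
        return miss;
    }

    int main(void) {
        const char *refs = "BCAACDBEFABDAFA";
        int n = (int)strlen(refs);
        printf("FIFO faults: %d\n", faults(refs, n, 0));  /* prints 12 */
        printf("LRU  faults: %d\n", faults(refs, n, 1));  /* prints 11 */
        return 0;
    }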

Cache Memory  By now it is clear that memory is the dominant unit in the computer system affecting the execution time of a program's instructions. Hence, to speed up execution, a fast memory known as Cache Memory is placed between the CPU and main memory. Cache is typically 5 to 10 times faster than main memory on a standard scale. It is built from fast static RAM, which holds its contents as stable voltage levels (no refresh needed, unlike the dynamic RAM of main memory).

27 The daemon process is executed periodically, at regular intervals.
28 Whenever a page fault happens, page replacement is used.


Diagram (memory hierarchy): CPU — CACHE — RAM — DISK; the CACHE–RAM path is H/W implemented, the RAM–DISK path is S/W controlled.

The cache is divided into a number of blocks known as cache lines, whose size typically varies from 16 to 64 bytes. E.g., suppose there are two memories, one slow with an 80 ns access time (the time taken to access an instruction) and one fast with a 10 ns access time, and suppose a program requires 1000 memory accesses. Using the slow memory alone, the 1000 accesses take 80 µs (80 ns × 1000); using the fast memory alone, they take 10 µs (10 ns × 1000). Now suppose the accesses are split between slow and fast memory in the proportion 20:80 (20% : 80%). Then the 1000 accesses take 24 µs (200 × 80 ns + 800 × 10 ns). The straightforward conclusion is that "by mixing slow and fast memory, the execution of an entire program can be considerably speeded up."
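A quick check of this arithmetic in C (the 20:80 split and the access times are the figures from the example above):

    #include <stdio.h>

    int main(void) {
        double slow_ns = 80.0, fast_ns = 10.0;
        int accesses = 1000;

        double all_slow = accesses * slow_ns;            /* 80000 ns = 80 us */
        double all_fast = accesses * fast_ns;            /* 10000 ns = 10 us */
        double mixed    = 200 * slow_ns + 800 * fast_ns; /* 24000 ns = 24 us */

        printf("slow only: %.0f us\n", all_slow / 1000);
        printf("fast only: %.0f us\n", all_fast / 1000);
        printf("20:80 mix: %.0f us\n", mixed / 1000);
        return 0;
    }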


Key elements in cache design;
1. Cache size.
2. Block size.
3. Mapping function.
4. Replacement algorithm.
5. Write policy.


Types of Cache Memory
a) S/W Controlled  A portion of main memory acts as cache memory, which is certainly fast with respect to the hard disk; its functioning is controlled by the OS. It is mainly used for data read/write operations in the communication between the disk and the cache.


b) H/W Implemented  One or more dedicated cache devices (cache slots) are used, which are certainly fast with respect to RAM; their functioning is controlled by the cache controller. It is used in the communication between RAM and the CPU.
Reading - When the kernel needs a particular piece of information, it looks in the cache first. If the information is available in the cache, the kernel gets the benefit of locality of reference. Otherwise it looks in RAM, brings a copy of the information from RAM into the cache, and continues the read operation. Failing that, it looks on the hard disk and, if the data is available there, brings a copy from disk to RAM and then into the cache.
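A toy sketch of that lookup order in C (the tiny arrays and "present" flags are our stand-ins for the real cache, RAM and disk):

    #include <stdio.h>

    #define N 8                          /* tiny address space for the sketch */

    static int disk[N]  = {10, 11, 12, 13, 14, 15, 16, 17};
    static int ram[N],   ram_ok[N];      /* value + "present" flag            */
    static int cache[N], cache_ok[N];

    /* Kernel read path: look in the cache, then RAM, then disk,
       copying the data inward at each miss. */
    int kernel_read(int addr) {
        if (cache_ok[addr])              /* cache hit                          */
            return cache[addr];
        if (!ram_ok[addr]) {             /* miss in RAM too: go to disk        */
            ram[addr] = disk[addr];
            ram_ok[addr] = 1;            /* copy disk -> RAM                   */
        }
        cache[addr] = ram[addr];         /* copy RAM -> cache                  */
        cache_ok[addr] = 1;
        return cache[addr];
    }

    int main(void) {
        printf("%d\n", kernel_read(3));  /* first read: misses, loads 13       */
        printf("%d\n", kernel_read(3));  /* second read: cache hit             */
        return 0;
    }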


Writing - The inverse of the operations explained above is performed for a write, for the safekeeping of data on the disk.
Sequence of the writing operation: 1.1 CPU → CACHE (H/W implemented), then 1.2 CACHE → RAM, then 1.3 RAM → DISK (S/W controlled).

Function of Cache – The CPU requests a required block from cache memory. If the request finds the block in the cache, a Hit is said to occur; if it does not, a Miss is said to occur. The performance of a cache is determined by its hit ratio;

                 No. of Hits
Hit Ratio = ---------------------------
            No. of Hits + No. of Misses

In the best case the hit ratio is 1: every request is a hit and the miss count is 0, giving h/(h+0) = 1. In general the ratio is h/(h+m), where h is the number of hits, m the number of misses, and h+m the total number of requests. Needless to say, a cache of small capacity has a greater possibility of misses, while a cache of large capacity has a smaller possibility of misses and an enhanced possibility of hits.
Q. Does the number of page hits always increase if the number of frames is increased? Give reasons for your answer.
No – it depends upon the size of the process and its respective number of pages to be loaded into the frames.

Locality Of Reference – The repeatable instructions of a program are kept in the cache to take advantage of locality of reference, i.e. the time taken to access an instruction from cache memory is definitely less than the time taken to access the same instruction from main memory.

Hence the execution time of a program containing several repeatable instructions can be improved by placing those instructions in cache memory instead of main memory. It is recommended to cache data that is shared among several applications and expected to be executed frequently and repeatedly. Thanks to locality of reference, and to its own underlying structure of static RAM, the cache performs better than RAM; the cache can also improve I/O performance. In virtual memory, the page-fault frequency is reduced when locality of reference applies to the process.
Q. Describe any one technique of memory management?

SPOOLING  Simultaneous Peripheral Operation On-Line – The process of transferring data intended for a peripheral device onto a disk (a temporary storage area) so that it can be passed to the peripheral in bulk at a more convenient time. It is the use of a buffer on secondary storage to reduce the effect of timing differences between the CPU and peripheral equipment. It is most beneficial in a multiprogramming environment where jobs are evenly divided between I/O-bound and CPU-bound.

Q. What do you mean by SPOOLING?

Diagram: Keyboard → CPU → DISK → Printer (output destined for the printer is spooled on the disk).

Buffer  A temporary storage area, reserved for use in performing I/O operations, into which data is read and from which data is written. It is used to compensate for a difference in the rate of flow of data between mechanical and electronic devices (transfers between two devices, or between a device and an application), thereby increasing the effective data-transfer rate, and it reduces disk overhead.
Types of Buffering:
a) Anticipatory Buffering – A buffer-management technique whereby data is read into a buffer before it is actually requested by the program.
b) Demand Buffering – A buffer-management technique whereby data is read into a buffer only when it is actually requested by the program.
Single Buffering – Only one buffer is used to transfer the data between two devices; the data-transfer rate is very slow, because the producer29 has to wait while the consumer30 is consuming the data.

Diagram: Producer → [Single Buffer, managed by the OS] → Consumer

Double Buffering – Two buffers are used in place of one to transfer the data between two devices: the producer fills one buffer while, at the same time, the consumer drains the other, so the producer need not wait for a buffer to empty. It is a partial solution to the single-buffer problem.

Diagram: Producer → [Buffer-1 / Buffer-2, managed by the OS] → Consumer

29 All the input devices.
30 All the output devices.


Circular Buffering – More than two buffers are used to transfer the data between two devices; the collection of buffers is itself referred to as a circular buffer. The producer fills one buffer while, at the same time, the consumer drains another, so the producer need not wait; this generalizes double buffering to n buffers.

Diagram: Producer → [Buffer-1, Buffer-2, …, Buffer-n, managed by the OS] → Consumer
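A minimal single-threaded sketch of a circular buffer in C (the size and names are illustrative; a real OS would add locking or disable interrupts around these operations):

    #include <stdio.h>

    #define NBUF 4

    static char buf[NBUF];
    static int  head = 0, tail = 0, count = 0;

    /* Producer side: returns 0 if the ring is full, else stores one item. */
    int produce(char c) {
        if (count == NBUF) return 0;
        buf[head] = c;
        head = (head + 1) % NBUF;   /* wrap-around makes the buffer circular */
        count++;
        return 1;
    }

    /* Consumer side: returns 0 if the ring is empty, else removes one item. */
    int consume(char *c) {
        if (count == 0) return 0;
        *c = buf[tail];
        tail = (tail + 1) % NBUF;
        count--;
        return 1;
    }

    int main(void) {
        char c;
        produce('a'); produce('b');
        while (consume(&c)) printf("%c\n", c);
        return 0;
    }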

Buffer Vs. Cache – If the cache memory is implemented as software-controlled, then the buffer and the cache can both occupy portions of RAM, one after another, entirely controlled by the OS.

BUFFER:
- Used and implemented in the communication between disk and RAM.
- Can hold data that is used either once only or repeatedly again and again.
- Recommended for single tasking.
- Less costly; slower.
- Placed far from the CPU.
- Gets its data from the disk.

CACHE:
- Used and implemented in the communication between CPU and RAM.
- Holds only data that is used repeatedly again and again, and that is expected to be shared among multiple applications.
- Recommended for multi-tasking.
- Costly; faster.
- Placed near the CPU.
- Gets its data from RAM.

Placement in the memory hierarchy: DISK — BUFFER — RAM — CACHE — CPU (with the buffer and a S/W-controlled cache both residing in RAM).

Protection & Security of Main Memory – Every page inside main memory has a corresponding small 3-bit integer known as a protection key, mainly used to guard and protect the page against unauthorized access by the instructions of a program. The possible combinations of the three bits and their respective meanings are as follows;

000  Empty page
001  Read access
010  Write access
100  Execute access
101  Execute and read access
110  Execute and write access
011  Read and write access

All such protection keys are maintained in the page table, kept in a reserved area of main memory. Whenever an instruction wants to access a particular page, the page table is first checked for the current protection status of the page, and the access proceeds accordingly. Hence the OS maintains and protects memory locations from unauthorized access by a program's instructions.
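A sketch in C of how such a 3-bit key might be tested (the bit layout follows the table above; the macro and function names are ours):

    #include <stdio.h>

    /* Bit layout from the table above: execute, write, read. */
    #define PROT_READ  0x1   /* 001 */
    #define PROT_WRITE 0x2   /* 010 */
    #define PROT_EXEC  0x4   /* 100 */

    /* Returns 1 if the requested access kind is permitted by the key. */
    int allowed(unsigned key, unsigned access) {
        return (key & access) == access;
    }

    int main(void) {
        unsigned key = PROT_EXEC | PROT_READ;            /* 101 */
        printf("read:  %d\n", allowed(key, PROT_READ));  /* 1   */
        printf("write: %d\n", allowed(key, PROT_WRITE)); /* 0   */
        return 0;
    }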


I/O Management

Input Output Organization – Every I/O device has a corresponding port31 that provides an interface for communicating with the device. I/O from a processor is carried out by reading data from an input port or by writing data to an output port.

PC Bus Architecture

Diagram (PC bus architecture): the processor and its cache sit on the system bus together with the memory controller (memory) and the graphics controller (monitor); the PCI bus links the Small Computer System Interface (SCSI) controller (several disks), the disk controller (disks) and an expansion-bus interface; the expansion bus carries the keyboard, the serial port and the parallel port.

Controller – A collection of electronics that can operate a port, a bus, or a device. It has one or more registers for data and control signals, and the processor communicates with the controller by reading and writing bit patterns in those registers. Serial Port Controller – An example of a simple device controller: a single chip in a computer that controls the signals on the wires of a serial port. Port And Bus – A port is a connection point where a device communicates with the system; if one or more devices use a common set of wires, the connection is called a bus.

31 A channel through which data is transferred between a device and the microprocessor. The port appears to the microprocessor as one or more memory addresses that it can use to send or receive data. An input/output port is also called an I/O port.


PCI Bus – Connects the processor–memory subsystem to the fast devices, while an expansion bus connects relatively slow devices such as the keyboard and the serial and parallel ports.

Diagram (System bus vs. I/O bus): the CPU and memory share the system bus with two I/O controllers; I/O Controller 1 drives Hard Disk 1 and Hard Disk 2 over a multipoint I/O bus, while I/O Controller 2 drives the floppy disk, CD-ROM, printer, keyboard, mouse and modem over a shared I/O bus.

System Vs. I/O Bus – Types of I/O Bus:
A) Shared (serial).
B) Multipoint (parallel).

Types of I/O Devices

Block Devices – Capture all the essential aspects for accessing disk drives and other block-oriented devices; viewed as a linear array of blocks. Used where block-oriented data transfer is required, e.g. RDBMS products.

Character Devices – Capture all the essential aspects for accessing byte/character-oriented devices, e.g. the keyboard. Used where character input is expected spontaneously, as the application requires.

Q. Explain block and character devices?

Differences b/w I/O devices in various aspects:
Application – The function/purpose of an I/O device depends upon the application.
Complexity of control – Some devices require simple control, while others require more complexity.
Data rate – Varies from device to device, measured in KB/sec; e.g. the transmission rate of a keyboard is 0.01 KB/sec and of a monitor is 30,000 KB/sec.
Data representation – Different data encoding and decoding schemes are used by different devices, including differences in character code and parity convention.
Error condition – The nature of errors, the way they are reported, their consequences and the available range of responses differ widely from one device to another.
Unit of transfer – Data may be transferred as a stream of bytes (characters) or in large blocks (a disk).

Synchronous Device Vs. Asynchronous Device:
Synchronous device – one that performs data transfers with predictable response times.
Asynchronous device – one that performs data transfers with unpredictable response times.


I/O Communication Techniques / Data-Transfer Methods / Organization of I/O Devices
Data transfer between the CPU and I/O devices may be handled by a variety of techniques. Some techniques use the CPU as an intermediate path; others transfer the data directly to and from memory. Data transferred to and from an I/O device is said to be I/O communication. The organization of I/O devices and their respective data-transfer methods may use any of the following;
 Software Polled / Programmed I/O Method
 Interrupt Driven Method
 Direct Memory Access Controller [DMAC] Method

A.) Software Polled/Programmed I/O – In S/W-polled I/O, an operation (read/write) transferring data from an I/O device to memory requires the execution of a control program. The CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer, which is a time-consuming process for the CPU. It is suitable for small, low-speed computers.

Functioning of S/W Polled/Programmed I/O:
1. The CPU issues a Read command to the I/O device to get the device's status, and waits in a program loop until it receives a response. (CPU  I/O)
2. The I/O device sends status information indicating whether it is ready or not. (I/O  CPU)
3. The CPU checks the status information: if the device is ready, the CPU reads the word from the I/O device; otherwise it keeps waiting for a positive signal.
4. The CPU reads the word from the I/O device, processed/controlled by the control program. (I/O  CPU)
5. The word is loaded into memory. (CPU  Memory)
6. The CPU then repeats the same steps for the next word to be read/written.

Hence we can say that all the I/O operations on the corresponding I/O devices of the concerned program are managed under the control of the control program. Because of this intervention the CPU has to busy-wait for the ready signal from the I/O device, so the transmission of large amounts of data becomes very slow and hurts CPU performance: it is an indirect approach to transferring data among devices, memory, and the CPU.
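A sketch of that busy-wait loop in C (the "device" here is a fake model standing in for real status/data registers; the READY bit and function names are our assumptions):

    #include <stdint.h>
    #include <stdio.h>

    #define READY 0x01u   /* hypothetical "device ready" status bit */

    /* Fake device model standing in for real hardware registers. */
    static uint8_t status_reg = 0;
    static uint8_t data_reg   = 0;
    static int     ticks      = 0;

    static uint8_t read_status(void) {
        if (++ticks % 3 == 0) status_reg |= READY;  /* device becomes ready */
        return status_reg;
    }
    static uint8_t read_data(void) {
        status_reg &= (uint8_t)~READY;              /* consume the word     */
        return data_reg++;
    }

    /* Programmed I/O: busy-wait on the status register before each word. */
    static void pio_read(uint8_t *dst, int n) {
        for (int i = 0; i < n; i++) {
            while ((read_status() & READY) == 0)
                ;                    /* CPU spins here, doing no useful work */
            dst[i] = read_data();
        }
    }

    int main(void) {
        uint8_t buf[4];
        pio_read(buf, 4);
        for (int i = 0; i < 4; i++) printf("%d\n", buf[i]);
        return 0;
    }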


B.) Interrupt Driven I/O – In interrupt-driven I/O, a device, when ready for input or output, generates a signal known as an interrupt to the CPU, asking for the attention of the CPU towards the device that generated the interrupt.
Types of Interrupt
1.) Traps – Traps are a sort of interrupt (S/W interrupts); they are processor instructions, mainly caused by the internal resources of the CPU in response to program errors or to requests for operating-system services, say the printer's spooler service, the scheduler, etc.
Operation of a Trap – Whenever an application wants a service offered by the kernel, it makes a request along with a proper argument (specifying the particular service); the system call32 accepts the request, checks it for validity, and only a valid request is passed to the kernel. Simultaneously, the system call executes a software instruction known as a trap, which identifies the required kernel service, say Event Log, Scheduler, RPC, Computer Browser, Printer Spooler, Net Logon, Messenger.

Functioning of system calls (diagram): the application/process requests an OS service with an argument (n) → the system call validates the request and executes a special S/W instruction (trap) identifying the requested kernel service → the kernel executes the service for the application.

The trap is usually caught by the debugger.
2.) Exception/Program – Exceptions are interrupts generated by faults in the (external) instructions of a program, such as Divide By Zero, Arithmetic Overflow, Page Fault, etc.
3.) Timer – Generated by the clock in the processor. Whenever the time slot of the running process expires, the OS sends an interrupt signal to the processor to suspend the current job.
4.) Hardware Interrupt – Generated by various hardware devices to request service or to report problems to the CPU. A hierarchy of interrupt priorities determines which interrupt request is handled first if more than one request is made. A program can temporarily disable some interrupts if it needs the full attention of the processor to complete a particular task.
5.) I/O Interrupt – Signals caused/generated by I/O devices or by a program's instructions to bring the attention of the CPU to the device or the program. In other words, by using interrupts the devices announce to the CPU their readiness for a read or write operation.

32 System calls are low-level routines, kept in the kernel, that provide the interface between a process and the OS. Generally they are available as assembly-language instructions, but some of them can also be written in a high-level language. System calls, e.g. fork(), exec(), are also invoked by a S/W interrupt (trap).


On encountering an interrupt, the CPU immediately responds: it interleaves the task currently being processed and looks up the IVT to search for and execute the ISR. Most CPUs have two interrupt lines, which are sampled by the CPU after executing every instruction; a device/controller/program puts the interrupt signal on these lines.

Maskable Interrupt – Can be turned off/deferred by the CPU before the execution of critical instruction sequences, to protect them from interrupts of any type.
Non-Maskable Interrupt – Reserved for events/situations such as an unrecoverable memory error.
(The CPU thus has one maskable wire and one non-maskable wire.)

Disabled Interrupt – A condition, usually created by the OS, during which the processor will ignore interrupt request signals of a specific class.

Functioning of Interrupt-Driven I/O:
1. The CPU issues a Read command to the I/O device to get the device's status, and then goes on to do other useful work. (CPU  I/O)
2. When the I/O device is ready, it sends an interrupt signal to the processor to announce its readiness. (I/O  CPU)
3. When the CPU receives the interrupt signal from the I/O device, it checks the status.
4. If the status is ready, the CPU reads the word from the I/O device; otherwise an error is raised. (I/O  CPU)
5. The CPU then writes/loads the word into memory. (CPU  Memory)
6. The cycle repeats for the next word.


Operation of Interrupts – Whenever a device generates an interrupt to announce its readiness to the CPU, the processor receives the interrupt, suspends its current operations, saves the status of its work, and transfers control to the Interrupt Vector Table33 to search for and execute a special routine known as an interrupt handler or Interrupt Service Routine34, which contains the instructions for dealing with the particular situation that caused the interrupt. E.g., suppose interrupt number 12 is generated by an output device, say the monitor. The probable location of the ISR address for interrupt 12 is the 48th byte (12 × 4 = 48) of the IVT in main memory, because ISR addresses are generally 4 bytes long; the address therefore occupies not only location 48 but also locations 49, 50 and 51 of the IVT.

IVT layout:
Address of ISR for interrupt 0:   0 × 4  = 0    (bytes 0–3,   ADD-00)
Address of ISR for interrupt 1:   1 × 4  = 4    (bytes 4–7,   ADD-01)
Address of ISR for interrupt 2:   2 × 4  = 8    (bytes 8–11,  ADD-02)
...
Address of ISR for interrupt 12:  12 × 4 = 48   (bytes 48–51, ADD-12)

Interrupt Request (IRQ) – Hardware lines over which devices can send signals to get the attention of the processor when the device is ready to accept or send information. IRQ lines are numbered from 0 to 15; each device must have a unique IRQ line.

Intel Pentium processor event vector table:
Vector    Event
0         Divide Error
1         Debug Exception
2         Null Interrupt
3         Breakpoint
4         INTO-detected Overflow
5         Bound Range Exception
6         Invalid Opcode
7         Device Not Available
8         Double Fault
9         Coprocessor Segment Overrun
10        Invalid Task State Segment
11        Segment Not Present
12        Stack Fault
13        General Protection
14        Page Fault
15        Intel Reserved
16        Floating-Point Error
17        Alignment Check
18        Machine Check
19-31     Intel Reserved
32-255    Maskable interrupts & device-generated interrupts

Interrupt Vector Table

Q. What is the difference between a trap and an interrupt? What is the use of each function?

Interrupt vs. Trap vs. Exception (summary):
Interrupt – a H/W instruction (signal); caused by external resources, e.g. devices; handled by an ISR; an asynchronous event; raised on a H/W fault or device-controller fault; used for time-sensitive processing; execution priority high.
Trap – a S/W instruction (signal); caused by internal resources, e.g. the CPU; handled by the OS's services; a synchronous event; raised when the application requires/requests a kernel service (during execution of a system call); execution priority low.
Exception – caused by external resources, e.g. programs; handled by an exception handler, if available; an asynchronous event.

33 The IVT contains the addresses of the Interrupt Service Routines. The IVT, along with all probable ISRs and their addresses, consists of ROM-BIOS routines supplied with the BIOS; it is loaded into memory by the ROM-BIOS routine at system start-up.
34 The ISR is the interrupt handler that handles the interrupt generated by an I/O device or an external program.


INTERRUPTS Vs. TRAPS

A) Interrupts are special H/W instructions/signals, generally caused by external resources such as an I/O device, an instruction of a program, etc.
   Traps are special S/W instructions/signals, generally caused by internal resources such as an instruction of the processor placed in a register, etc.

B) Interrupts are asynchronous to a program: whenever an interrupt is generated, the execution of the program is halted and control goes to the ISR to handle the interrupt; just after executing the ISR, control resumes the execution of the program at the place where it was halted.
   Traps are synchronous to a program: whenever a trap is generated, the execution of the current program continues alongside the trap handler.

C) Interrupts acknowledge the CPU.
   Traps call the OS's services, e.g. printer spooler, RPC, scheduler, linker, loader, computer browser, net logon, messenger, etc. (Your suggestions?)

Due to the intervention of interrupts and the ISR, the transmission of large amounts of data becomes very slow, because this too is an indirect approach to transferring data among devices, memory and the CPU.
C) Direct Memory Access Controller – Once again, needless to say: "Due to the intervention of program control instructions (S/W-polled I/O) and the ISR (interrupt-driven I/O), the transmission of large amounts of data among the CPU, I/O devices and memory becomes very slow." Both approaches are indirect, hence slow; there should be a method for direct data transfer among memory, I/O devices and the CPU, and that method is the DMAC. Before proceeding towards the DMAC we should have a proper understanding of the device controller and the device driver. DMA provides memory access that does not involve the microprocessor, hence avoiding the burden placed on the main CPU by programmed I/O methods. DMA is frequently used for high-speed data transfer directly between memory and a peripheral device such as a disk drive.
Device Controller – Hardware responsible for converting electronic signals inside the computer into a form that the device can understand.
Device Driver – A program (S/W) that handles the hardware-specific details of the device controller. It is the part of the OS specific to a particular computer system; it mainly handles, and hides, the details of the device controller and devices from the rest of the OS.

Diagram: DEVICE ↔ DEVICE CONTROLLER ↔ Device Driver ↔ Operating System

Role of Device Controller – Needless to say, the speed of the CPU is quite high compared to the speed of I/O devices such as the hard disk, keyboard and printers. Hence the device controller accepts data at the same speed as it was sent from the CPU, holds the data for buffering, and thereafter sends it on to the corresponding device (say, the hard disk), so as to free up the CPU and fill the gap b/w the speeds of the I/O devices and the CPU. So we can say: "The device controller is responsible for the task of data buffering and takes care of the differences in speed b/w slower and faster devices."
Q. Explain software and hardware protocols?


I/O Scheduling –
Diagram (layered I/O path): User → user application program → user interface → API → operating system (device-driver software) → CPU, device controller and memory → I/O devices (hardware).

DMAC – The data transfer speed inside the computer system can be enhanced, and overall performance increased, because no intervention of control instructions or interrupts using the CPU is needed while transferring data from a device to memory or vice versa, or from memory to memory. The DMA controller takes over the buses to manage the transfer directly b/w the I/O device and memory, without any control by the CPU. The DMAC, as a controller device, has multiple DMA channels, so several channels can communicate directly with several I/O devices and with memory. Functionally, the DMAC provides read/write controls for devices and memory. Typically there are several types of DMA transfer;
a) Device to Memory
b) Memory to Device
c) Memory to Memory
d) Device to Device

Diagram (DMAC architecture): the DMAC sits on the control lines (bus) linking the CPU, RAM and devices such as the printer and disk.

Functioning of DMAC – The DMAC has 8 DMA channels that can be used for data transfer b/w memory and peripheral devices. A channel is a device that connects the CPU and main memory with the I/O control units. On the x86 family the channels are numbered 0,1,2…7 and are typically assigned as follows;
0 – DMA refresh
1 – Network card
2 – FD controller
3 – Scanner card
4 – Cascade channel
5 – SCSI controller
6 – Tape backup controller
7 – Proprietary printer card

The device controller puts a signal on the DMA-request wire when a word is to be transferred. This signal causes the DMA controller to seize the memory bus and place the required address on the memory-address wires. The DMAC then places a signal on the DMA-acknowledge wire to the device controller. As soon as the device controller receives the ACK signal, it immediately transfers the data to memory and removes the DMA-request signal. The DMA controller interrupts the CPU when the entire transfer of data is over.

Diagram: DMAC ↔ RAM over the memory-address wires; DMAC ↔ device controller (device) over the DMA-request and DMA-acknowledge wires.

Note: The DMA transfer scheme that transfers one data word at a time is called Cycle Stealing.

Before an I/O operation begins, a path b/w main memory and the device must be established. If the channel, the control unit or the addressed device is busy, the pathway construction must wait. In order to avoid such waits, the DMAC has several channels, which are fast processors. The responsibilities of a channel are as follows;
1. Test I/O, to determine whether or not the pathway to the addressed device is busy.
2. Start I/O on a particular device.
3. Halt I/O on a particular device.
The CPU first initializes the DMA, sending the DMAC some essential information that includes;
1. The starting address of the memory block where the data is available (for read) or is to be stored (for write).
2. The word count, i.e. the number of words in the block.
3. The mode of transfer, such as read or write.
4. A control signal to start the DMA transfer.
When the DMAC receives this information, the DMA starts and continues to transfer data b/w memory and the I/O device until the entire block is transferred. When its assigned job is done, it informs the CPU of its termination by an interrupt.
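A sketch in C of the initialization handshake just listed (the register layout and all names here are invented for illustration; every real DMA controller defines its own):

    #include <stdint.h>

    /* Hypothetical memory-mapped DMAC register block. */
    struct dmac_regs {
        volatile uint32_t block_addr;   /* starting address of the block */
        volatile uint32_t word_count;   /* number of words to transfer   */
        volatile uint32_t mode;         /* read or write                 */
        volatile uint32_t control;      /* start bit                     */
    };

    #define DMA_MODE_READ  0u
    #define DMA_MODE_WRITE 1u
    #define DMA_START      1u

    /* Steps 1-4 from the list above: program the DMAC, then let it run.
       Completion is signalled later by the DMAC's interrupt. */
    void dma_start(struct dmac_regs *d, uint32_t addr, uint32_t words,
                   uint32_t mode) {
        d->block_addr = addr;       /* 1. starting address */
        d->word_count = words;      /* 2. word count       */
        d->mode       = mode;       /* 3. transfer mode    */
        d->control    = DMA_START;  /* 4. start transfer   */
    }

    int main(void) {
        struct dmac_regs fake = {0, 0, 0, 0};  /* stand-in for real MMIO */
        dma_start(&fake, 0x8000u, 512u, DMA_MODE_READ);
        return 0;
    }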

Diagram (DMA transfer sequence b/w CPU, DMAC, device and memory):
1. The CPU initializes the DMA controller (block address, word count, control).
2. The I/O device (e.g. the H/D controller) issues a DMA request to transfer data into memory.
3. The data is transferred directly b/w the device and memory.
4. The DMAC acknowledges the device.
5. The DMAC sends an interrupt signal to the CPU when done.

All the channels commonly communicate through interrupts. Whenever a program requests an R/W operation on a device, the following steps are taken;
1. It interrupts the respective I/O controller.
2. The I/O controller builds a channel program in main memory.
3. The channel program35 is executed by the addressed channel.
4. Appropriate signals are transmitted to the addressed control unit.
5. These signals are interpreted by the control unit and used to control the device operation.
6. An interrupt is issued by the channel to signal continuation of the program's execution.
7. Finally, control returns to the program.

35 The instructions that are executed by a channel to access devices and control the pathway of data; a channel program also controls a single read/write operation, to only one device at a time.


Process Management

Program vs. Process:
                 Program                                   Process
Consists of      Instructions in a programming language.   Instruction executions in machine code.
Is               A static object.                          A dynamic object.
Resides in       Secondary storage devices.                Main memory (RAM).
Span of time     Unlimited.                                Limited.
Entity           Passive.                                  Active.

Process - A program/task36 in execution is known as a process; it requires/consumes the time of the CPU. Now it is time to clear up the differences between a process and a program: several processes may share a common program, or a program may have several processes, which can be processed sequentially or virtually in parallel. A process can be independent of any other process, or it may interact with other processes. Interaction between processes occurs when processes share data or exchange messages about their activities, using synchronization. Synchronization, offered by the OS, is the mechanism by which a process holds/acquires a lock on shared data while other processes wait until that process releases the lock. A process is accomplished by a program when the program gets executed by one or more processors. Virtually, a multitasking CPU executes multiple programs simultaneously by switching from one process to another: processes run one after another, turn-wise, a few instructions of every program in each turn. Needless to say, the CPU is the fastest component, so processing a few instructions of each program happens very fast, and it looks as if all the programs were being executed simultaneously. On switching from one process to another, the status of the last processed process is stored in its Process Control Block (handled by the OS). The stored information is needed again when the corresponding process is given CPU time once more, to resume execution after its last processed instruction.

Process Control Block (PCB) - Information associated with each process:
 Process state
 Program counter
 CPU registers
 CPU scheduling information
 Memory-management information
 Accounting information
 I/O status information

The operating system is responsible for the following activities in connection with process management:
 Process creation and deletion.
 Process suspension and resumption.
 Provision of mechanisms for process synchronization and process communication.

Fields of the PCB (diagram): pointer to the parent process; pointer to the child process; process state; process identification number (PID); process priority; program counter; pointer to process memory; memory limits; list of open files.

A process includes:
 Program counter
 Stack
 Data section

Attributes of a process:
 Process identification
 Process state information
 Process control information
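A sketch of a PCB as a C structure (the field names loosely follow the lists above and are illustrative; real kernels use far larger structures):

    struct pcb {
        int           pid;             /* process identification number   */
        int           state;           /* new, ready, running, waiting... */
        int           priority;        /* scheduling priority             */
        void         *program_counter; /* where to resume execution       */
        unsigned long registers[16];   /* saved CPU registers             */
        void         *mem_base;        /* pointer to process memory       */
        unsigned long mem_limit;       /* memory limits                   */
        int           open_files[16];  /* list of open file descriptors   */
        struct pcb   *parent;          /* pointer to the parent process   */
        struct pcb   *child;           /* pointer to a child process      */
    };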

36 The word process, loosely referred to as a task, is acceptable in much of the literature on OS. A program is passive while a process is an active entity. Another difference: a program is treated as a static object, while a process is treated as a dynamic object. The number of programs may be unlimited, but the number of processes is limited on any specific machine.


Types of Process – A process may be classified in two ways: by resource consumption, and by resource sharing and communication.

Resource Consumption:
a) I/O Bound Process – A process that mainly requires I/O operations, using the I/O processor.
b) CPU Bound Process – A process that mainly requires computational operations, using the CPU.

Resource Sharing & Communication:
a) Independent process – cannot affect or be affected by the execution of another process.
b) Cooperating process – can affect or be affected by the execution of another process.
Advantages of process cooperation:
• Information sharing
• Computation speed-up
• Modularity
• Convenience

State of Process – A process may be in the following states while execution is going on;
a) New – The process is being created.
b) Running – The instructions of the process are currently being executed by the CPU.
c) Waiting – The process is waiting for some event (e.g. an I/O operation) to be completed.
d) Ready37 – The process has completed its I/O operation and is waiting to participate in the next execution cycle of the CPU.
e) Terminated – The process has finished execution.
The traffic controller (a part of the OS) constantly checks the processor and the status of processes.

Diagram of process states: NEW –(admitted)→ READY –(scheduler dispatch)→ RUNNING; RUNNING –(interrupt)→ READY; RUNNING –(I/O or event wait)→ WAITING/BLOCKING –(I/O or event completion)→ READY; RUNNING –(exit)→ TERMINATED. (Alongside, a UNIX view of the cycle: fork, exec, wait, kill.)

Q. Explain process state?

Role of Kernel – The kernel of the OS is responsible for virtualizing the physical CPU38 among multiple processes, creating the illusion of a separate CPU for each process; it can also provide virtual memory for processes. Hence, the kernel hides the physical CPU from the user by performing the functions associated with CPU allotment and management.

37 The ready queue is implemented as a linked list.
38 It is the responsibility of the KERNEL to divide all the available processor time among all the processes, so as to provide/create a virtual CPU for every active process.


System Programs - System programs provide a convenient environment for program development and execution. They can be divided into:
– File manipulation
– Status information
– File modification
– Programming-language support
– Program loading and execution
– Communications
Most users' view of the operating system is defined by system programs, not the actual system calls.

System Calls – There are primitive functions built into the kernel of the OS and invoked by S/W interrupts, known as system calls, which are responsible for;
1) Creating a process
2) Starting a process
3) Stopping a process
4) Destroying a process
All the library functions and utilities are written using the system calls. A system call is a basic routine, defined in the kernel, which lets a user remain transparent to all the complex processes in the system (UNIX). Interaction through system calls represents an efficient means of communication with the system; it is an interface between a process and the OS. Generally, system calls are available/written in either assembly language or a high-level language.

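For instance, on UNIX the read and write library wrappers ultimately trap into the kernel through the corresponding system calls; a minimal C program invoking one directly:

    #include <unistd.h>

    int main(void) {
        /* write(2) is a system call: the C library places the arguments
           and traps into the kernel, which validates them and does the I/O. */
        const char msg[] = "hello via a system call\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        return 0;
    }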

Q. Explain system calls, OS commands and their uses?
Q. Differentiate between a system call and a system process?
Q. What is a user program? Explain.

Types of System Calls:
1. Job control system calls.
2. File manipulation system calls.
3. Information maintenance system calls.

System call Vs. System Program:

System calls provide the interface between a running program and the operating system. They are generally available as assembly-language instructions; languages designed to replace assembly language for systems programming allow system calls to be made directly (e.g., C, Bliss, PL/360). Three general methods are used to pass parameters between a running program and the operating system:
 Pass the parameters in registers.
 Store the parameters in a table in memory, and pass the table address as a parameter in a register.
 Push (store) the parameters onto the stack by the program, and pop them off the stack by the operating system.

System programs provide a convenient environment for program development and execution. They can be divided into:
 File manipulation
 Status information
 File modification
 Programming-language support
 Program loading and execution
 Communications
 Application programs
Most users' view of the operating system is defined by system programs, not the actual system calls.


Modes of CPU Operation – The CPU may be in one of two modes while processing;
a. Privilege/Kernel Mode – The CPU runs in privileged mode for the execution of system programs (the OS); it has full control to access the control registers of the CPU, the memory tables and the I/O devices.
b. User Mode – The CPU runs in user mode for the execution of the instructions of a user's program, which has no ability to access the control registers of the CPU, the memory tables or the I/O devices. Hence, any resource allocation and sharing among multiple users is totally controlled by the OS.
The mode of the CPU is kept as a bit (0 for privileged mode, 1 for user mode) inside the Processor Status Word (PSW) register of the CPU. Whenever a H/W interrupt is delivered to the CPU, the mode of the CPU changes from user mode to privileged mode, the status bit changing from 1 to 0. All the I/O devices are entirely controlled by the OS in kernel mode.

Process Communication – While working with several processes, two or more processes can communicate with each other by sharing common data or by sending signals/messages to each other, but there may be a critical-section problem between two processes sharing common data or resources. Whenever a critical-section39 problem arises it may lead to unwanted/unexpected results. E.g., suppose there are two processes, A and B, both sharing a common variable X having the value 10, and both want to increase the value of X by 1, as follows;

X = 10;   (shared resource/data)

Process A                           Process B
[Entry Section]                     [Entry Section]
R1 = X          \                   R2 = X          \
R1 = R1 + 1      | critical section R2 = R2 + 1      | critical section
X  = R1         /                   X  = R2         /
Remainder Section of Process A      Remainder Section of Process B
[Exit Section]                      [Exit Section]

Process A assumes the value of X will be 11 but gets 12; Process B likewise assumes 11 but gets 12.

The result is unexpected due to the critical-section problem: Process B attempts to access the common variable X while it is already being accessed by Process A.
Solution of the Critical-Section Problem – Design the processes in such a way that if one process is in its critical section, no other process may be in its critical section for the same common resource.
1. Do not let a process enter its critical section while its remainder instructions are waiting for execution.
2. Ensure that only one process among the others is in its critical section at a time.
3. Mutual exclusion for interprocess communication must be preserved.
4. If any one process requires continuation for a moment, it must be allowed.

39 A critical section is a sequence of program statements within a task where the task is operating on some data object shared with other tasks. Critical sections may be implemented in tasks by associating a semaphore with each shared data object.


Process Serialization – We may prevent the critical-section problem by designing the execution of processes in such a way that one process gets access to the common resource at a time, and thereafter another, and so on, serially; that is, the processes must be scheduled serially.
X = 10
Process A:
  Entry Section
  R1 = X
  R1 = R1 + 1
  X = R1
  Exit Section
Process B:
  Entry Section
  R2 = X
  R2 = R2 + 1
  X = R2
  Exit Section
Process A expects the value of X (initially 10) to be 11, and gets it. Process B expects the value of X (now 11) to be 12, and gets it.

Process Synchronization – The task of acquiring a lock on a resource by the process currently accessing it, so as to prevent simultaneous access by other processes and hence to overcome the situation of inconsistent updates. During the concurrent execution of several tasks, each task proceeds asynchronously with the others: each executes at its own speed, independently of the rest. Thus, when task A has executed 10 statements, task B, initiated at the same time, may have executed only seven statements, or none, or it may already have run to completion and terminated. In order for two tasks running asynchronously to coordinate their activities, there must be a means of synchronization, so that one task can pass a signal to the other when it completes the execution of a particular section of its code. The signals sent between the tasks allow them to synchronize their activities, so that the second does not start processing the data before the first has finished, and vice versa. The OS may adopt any of several synchronization methods, such as interrupts, monitors and semaphores, but the most commonly used method is the semaphore.
Monitor – A fundamental construct for process synchronization: a set of programmer-defined operators.
Semaphores – Non-negative integer variables, mainly used by the OS for process synchronization among several processes. A semaphore maintains the status of a resource (available/not available), such as a memory location, a file or an I/O device, so as to avoid critical-section problems.
Types of semaphore:
Binary Semaphore – A semaphore that can contain only two values, either 0 or 1.
General Semaphore – A semaphore variable that can be accessed by only one process at a time: the OS holds a lock on the semaphore variable so as to prevent simultaneous access and deadlock.


The OS maintains semaphore variables for the prominent resources of the system, keeping the status of those resources by means of binary semaphores. As soon as a process is directed at a resource of the system, the corresponding semaphore is consulted to check the status of that particular resource. It is the task of the OS to maintain the status of each semaphore variable as the situation changes. Altogether, we can say that no two processes can share or attempt to access a resource simultaneously; hence the OS overcomes critical-section problems.
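As an illustration, the shared-variable race shown earlier disappears if the increment is bracketed by semaphore operations. A sketch using POSIX semaphores (one possible API; the discussion above is API-agnostic):

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    static int   X = 10;    /* the shared data                 */
    static sem_t mutex;     /* binary semaphore guarding X     */

    static void *increment(void *arg) {
        (void)arg;
        sem_wait(&mutex);   /* entry section: acquire the lock */
        X = X + 1;          /* critical section                */
        sem_post(&mutex);   /* exit section: release the lock  */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        sem_init(&mutex, 0, 1);           /* initial value 1 = free */
        pthread_create(&a, NULL, increment, NULL);
        pthread_create(&b, NULL, increment, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("X = %d\n", X);            /* always 12 now          */
        return 0;
    }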

Synchronous & Asynchronous Message Passing

Synchronous Message Passing (Blocking) – The sending process is blocked until the message is received by the receiving process/mailbox; the receiver blocks itself until a message is available.

Asynchronous Message Passing (Non-Blocking) – The sending process sends the message and resumes/continues; the receiver receives messages one by one, continuously.

Inter Process Communication (IPC) & Message Passing

Direct Method (Symmetric) – Both the sender and the receiver know the name/address of each other in advance, prior to communication. It uses the following primitives for transferring a message;
SEND (R42, msg)
RECEIVE (S43, msg)
Only one link is established between two processes, and only two processes can share a link.

Direct Method (Asymmetric) – Only the sender knows the name/address of the receiver in advance, prior to communication (delivery of the message to the receiver).

Indirect Method – The message is sent by the sender using a MAILBOX as an intermediary, and received from the MAILBOX by the receiver. A MAILBOX40 is an object with a unique ID that can receive, hold and send messages from and to the sender and receiver. Its primitives are;
SEND (M, msg)
RECEIVE (M, msg)41
A link can be shared by two or more processes. A mailbox is owned uniquely by a user, the OS or a process; its lifetime lasts until the entire execution of the process is over. The OS provides the facility to
1. Create a mailbox.
2. Send / receive message(s).
3. Delete a mailbox.
A separate mailbox is created by the OS for every active process.

Q. Differentiate between symmetric & asymmetric interprocess communication. What is a mailbox?
Q. Why are special H/W instructions necessary for solving the critical-section problem in a multiprocessor environment?

40 The common mailbox.
41 Process P1 has its separate mailbox, identified uniquely by M20 and assigned by the OS for the lifetime of process P1. Process P1 gets a message from its mailbox as soon as the message arrives there.
42 Receiver.
43 Sender.


Deadlock Management

Deadlock - The OS uses the concept of LOCKING to protect resources44 so that they can be accessed by only one process at a time, avoiding simultaneous access and the critical situation of deadlock45. Deadlock = infinite waiting of process(es) for resource(s) locked/consumed by some other process(es): a set of blocked processes, each holding a lock on a resource and waiting to acquire a lock on some other resource already locked by another process in the set, waiting infinitely.
Example - A system has 2 hard-disk drives. Pr1 and Pr2 each hold a lock on one disk drive, and each needs access to the other one.

Types of resources that may cause deadlock:
A) Physical resources – printer, tape drives, CPU cycles, cache memory space, etc.
B) Logical resources – files, semaphores, monitors, etc.
Systems where deadlock may be encountered:
a) A large number of processes in the system.
b) Multithreaded programs in the system.
c) Infinite waiting.
d) A great many resources within the system.
e) Long-lived files and database/application servers.

A process utilizes a resource in the sequence: REQUEST → USE → RELEASE.

Necessary conditions for a deadlock situation;
1) Mutual Exclusion – At least one resource must be held in a protected (non-sharable) mode, so that only one process may use the resource while any other process waits until the resource has been released. In other words, "only one process at a time can use a resource".
2) Hold & Wait – A process holds a lock on one resource while looking for another resource that is already locked by another process (linear wait).

Diagram (linear wait): X holds R1 and requests R2; Y holds R2.

3) No Preemption – A process continues to hold its lock on the resources until completion of its task; no other process can access or preempt the locked resource while the first process is working with it.
Resource Preemption Method:
 Selecting a victim (for eviction).
 Rollback – return to some safe state and then restart the process from that state.
 Starvation – the same process may always be picked as the victim.

44 A part of the computer system (S/W & H/W) that directly communicates with the user, e.g. printer, H/D, files, cache, I/O devices. Main memory, however, is not such a resource; it is managed by a sophisticated locking mechanism.
45 A critical situation that occurs whenever a process (X) acquires a lock on a resource (R1) and tries to access another resource (R2) that is locked by another process (Y), while process (Y) is trying to access the resource (R1) locked by process X.


Q. Explain the necessary conditions for deadlock?
Q. Mention the necessary conditions for a deadlock situation?
Q. Discuss the criteria for a deadlock situation?

4) Circular Wait - A process (X) holds a lock on one resource (R1) and is looking for another resource (R2) that is already locked by another process (Y), while process (Y) is likewise looking for access to resource (R1), in a cyclic mode.

Diagram (circular wait): X holds R1 and requests R2; Y holds R2 and requests R1.

Note: All four of the above conditions must occur simultaneously in the system for deadlock to arise.

Prevention From Deadlock;
1.) Ensure that at least one of the necessary conditions can never occur/hold, e.g. by imposing a linear ordering of all resource types and letting each process request resources only in increasing order of enumeration.
2.) Require mutual exclusion only on non-sharable resources.
3.) Most deadlocks occur due to a lack of available resources. To prevent this, each process must declare the maximum number of resources of each type that it may require. Having such prior information, the developer may design an algorithm ensuring that the system will never enter a deadlock state; this is known as a Deadlock-Avoidance algorithm.
4.) No circular wait – serialize/arrange the requests made by processes in the same order as the resources are ordered (a safe state); avoid cross-referencing of the processes' requests for resources.
Recovery from Deadlock;
1) Abort all the deadlocked processes to break the deadlock cycle, known as ROLLBACK.
2) Abort one process at a time until the deadlock cycle is eliminated, so as to reduce the overhead, and re-submit all the processes for execution, even though some of them may have been partially completed.
Note: Aborting/rolling back a process is costly, especially when the aborted process had fully or partially completed its work.
Bridge-Crossing Example - Traffic flows in only one direction, and each section of the bridge can be viewed as a resource. If a deadlock occurs, it can be resolved if one truck backs up (preempt resources and roll back); several trucks may have to back up if a deadlock occurs. Starvation is possible.
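A sketch of prevention by resource ordering in C (the lock names are ours): every thread acquires the two locks in the same global order, so a circular wait can never form.

    #include <pthread.h>

    static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource R1 */
    static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource R2 */

    /* All processes/threads lock R1 before R2 (the enumeration order),
       so the circular-wait condition can never arise. */
    void use_both(void) {
        pthread_mutex_lock(&r1);
        pthread_mutex_lock(&r2);
        /* ... use both resources ... */
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
    }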

Detection of Deadlock
Q. What are the various methods of concurrency control? Which of these may lead to deadlock, and why?


Process Design & Creation

System Structure – Layered Approach
The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface. With modularity, layers are selected such that each uses the functions (operations) and services of only lower-level layers.
Layered structure of the THE OS – a layered design was first used in the THE operating system. Its six layers are as follows:
Layer 5: User programs
Layer 4: Buffering for input and output devices
Layer 3: Operator-console device driver
Layer 2: Memory management
Layer 1: CPU scheduling
Layer 0: Hardware

Process Creation – In the early days of operating systems, only the OS could create, maintain and destroy a process, but nowadays the user has the capability to do it. A parent process creates child processes which, in turn, create other processes, forming a tree of processes.
Resource sharing (possible policies):
 Parent and children share all resources.
 Children share a subset of the parent's resources.
 Parent and child share no resources.
Execution:
 Parent and children execute concurrently.
 Parent waits until children terminate.
Address space:
 The child is a duplicate of the parent.
 The child has a program loaded into it.
UNIX examples – the fork system call creates a new process; the exec system call is used after a fork to replace the process's memory space with a new program.
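A minimal C illustration of that fork/exec pair (the ls command here is just an arbitrary program to exec):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();              /* create a child process         */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {                  /* child: replace memory image    */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");            /* reached only if exec failed    */
            exit(1);
        }
        wait(NULL);                      /* parent: wait for child to exit */
        printf("child finished\n");
        return 0;
    }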


Process creation can be done either statically or dynamically, using the following process constructs;
A) Process Flow Graph – A method of describing process creation that visualizes the precedence relationships among processes. In a process flow graph, the execution of a process is represented by a directed edge of the graph, generally labelled with the process ID (PID) and the execution time of the process. The execution of processes, and their representation using a process flow graph, may be of the following types;

1. Serial – P1, P2, P3 execute one after another from Start to End. Represented by: S(P1, P2, P3)
2. Parallel – P1, P2, P3, P4 all execute concurrently between Start and End. Represented by: P(P1, P2, P3, P4)
3. Serial-Parallel – P1 executes first; then P2, P3, P4 execute in parallel. Represented by: S(P1, P(P2, P3, P4))

B) Cobegin-Coend Construct – The process flow graph cannot properly represent interleaved processes, where the first operation of one process starts before the last operation of the other process has completed (e.g. interleaved processes A and B, where A = A1 + A2 and B = B1 + B2, interleaving as A1, B1, A2, B2).

The Cobegin-Coend construct is simply an extension of the process flow graph; additionally, it properly describes nested process flow. For example;

Cobegin
  S1;
  Begin
    S2;
    Cobegin S3; S4; S5; Coend
    S6;
  End
Coend

Cobegin…Coend denotes concurrent (possibly nested) process flow: S1 runs concurrently with the sequence S2; (S3 || S4 || S5); S6.

As another example, the evaluation of the expression (a+b)*(c+d)-(e/f) shall be as follows;

Begin
  Cobegin
    T1 = a + b;
    T2 = c + d;
    T3 = e / f;
  Coend
  T4 = T1 * T2;
  T5 = T4 - T3;
End

Here T1, T2 and T3 are computed in parallel, after which T4 and T5 complete the result serially.

Q. Under what circumstances does a parent terminate the execution of its child(ren)?
 The child has exceeded its allocated resources.
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its parent terminates (cascading termination).

Process termination in general:
 The process executes its last statement and asks the operating system to delete it (exit).
 Output data is returned from the child to the parent (via wait).
 The process's resources are deallocated by the operating system.
 A parent may terminate the execution of its child processes (abort).

Simple Batch Systems
_ Hire an operator.
_ User ≠ operator.
_ Add a card reader.
_ Reduce setup time by batching similar jobs.
_ Automatic job sequencing – automatically transfers control46 from one job to another; the first rudimentary operating system.
_ Resident monitor – initial control is in the monitor; control transfers to the job; when the job completes, control transfers back to the monitor.

46 Control Cards – Special cards that tell the resident monitor which programs to run.
Parts of the resident monitor:
– Control card interpreter – responsible for reading and carrying out the instructions on the cards.
– Loader – loads system programs and application programs into memory.
– Device drivers – know the special characteristics and properties of each of the system's I/O devices.
Problem: slow performance – I/O and the CPU could not overlap, and the card reader was very slow.
Solution: off-line operation – speed up computation by loading jobs into memory from tapes, with card reading and line printing done off-line.


Multitasking, Multiprogramming, Multiprocessing:
Time Sharing (Fair Share) – A facility provided by the OS that presents a virtual-machine view to the user: the user has the feeling that the entire set of resources of the system is dedicated to him alone at a given instant of time, while the CPU in fact switches from one virtual machine to another, serving one user after another. Altogether, we can say that CPU time is shared among the various processes of the user(s); this feature is called time sharing. Multitasking is based on the concept of time sharing.
Time Quantum – The time for which a process occupies the CPU in one go is known as the quantum time of that process.
Multitasking – A concept adopted by the OS for the virtually concurrent execution of several processes, achieved by dividing the CPU's execution time into several small time frames known as time slices47. In multitasking, the CPU executes the instructions of one process in a given time slice and then switches to another process, one after another, in cyclic mode; the cycle is repeated until the completion of the processes.

CPU time:  TS1 → P1,  TS2 → P2,  TS3 → P3,  …,  TSn → Pn   (TSn = time slice n; Pn = process n)

Single User Multitasking OS – An OS that allows several processes of a single user to run concurrently (virtually) by time sharing, e.g. Windows 9x.
Multi User Multitasking OS – An OS that allows several processes of multiple users to run concurrently (virtually) by time sharing, e.g. Unix, Linux.
Multitasking Vs. Multiprogramming – Needless to say, a program may comprise several processes (several processes can be put together to build a program), and equally a program may have only one process.

Multiprogramming:
- Functions around the concept of serving an entire program.
- The simultaneous availability of multiple programs in shared memory for execution; CPU time (time slices) is shared along with the other resources.
- Designed to obtain better CPU utilization.
- Implements contiguous memory management.

Multitasking / Time Sharing:
- A logical extension of multiprogramming.
- Functions around the concept of serving an individual process.
- The simultaneous availability of several processes, each with its separate memory area (and it may use virtual memory when memory runs short, which multiprogramming does not), sharing CPU time (time slices).
- Designed for interactive use of the system.
- Implements segmented memory management.

Note: Both increases the CPU Throughput i.e. no of instruction processed per unit of time. 47

The CPU Time is divided into number of small time frames that are known as Time Slices.


Features of the OS Needed for Multiprogramming
1. I/O routines supplied by the system.
2. Memory management – the system must allocate memory to several jobs.
3. CPU scheduling – the system must choose among several jobs ready to run.
4. Allocation of devices.

Requirements of a Multitasking OS – A multitasking OS requires suitable hardware and other support, including:
1. Direct Memory Access (DMA) Controller.
2. Timer.
3. Storage & instruction protection.
4. Dynamic address relocation.
5. Segmented memory management.
6. Priority interrupt mechanism.

Multiprogramming: Multiprogramming is the technique of keeping several programs available to run at a time; it allows a computer to do several things apparently at once, creating logical parallelism. The operating system keeps several jobs in memory simultaneously, selects a job from the job pool and starts executing it; when that job needs to wait for an I/O operation, the CPU is switched to another job. The main idea is that the CPU is never idle.
Multitasking: Multitasking is the logical extension of multiprogramming. The concept is quite similar, but the difference is that switching between jobs occurs so frequently that users can interact with each program while it is running. This is also known as a time-sharing system: a time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the time-shared system.
Multithreading: An application is typically implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks; for example, a web server accepts client requests for web pages, images, sound, and so forth, and a busy web server may have many clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the time a client might have to wait for its request to be serviced could be enormous. It is more efficient to have one process that contains multiple threads: the multithreaded web-server process has a thread that listens for client requests, and when a request is made it creates another thread to service the request, rather than creating another process. Multithreading is used to obtain advantages such as responsiveness, resource sharing, economy, and utilization of multiprocessor architectures.
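A minimal sketch of the thread-per-request idea using POSIX threads; accept_request() is a hypothetical stand-in for real network I/O, not part of any library:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* hypothetical stand-in for blocking until a client request arrives */
static int accept_request(void) { static int next = 1; return next++; }

static void *handle_request(void *arg) {
    int req = *(int *)arg;
    free(arg);
    printf("servicing request %d\n", req);  /* real servicing work goes here */
    return NULL;
}

int main(void) {
    for (int i = 0; i < 3; i++) {           /* a real server would loop forever */
        int *req = malloc(sizeof *req);
        *req = accept_request();
        pthread_t tid;
        /* one thread per request, instead of one process per request */
        pthread_create(&tid, NULL, handle_request, req);
        pthread_detach(tid);
    }
    pthread_exit(NULL);                     /* let outstanding threads finish */
}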

Threads – A thread (or lightweight process) is a basic unit of CPU utilization; it consists of:
 - Program counter
 - Register set
 - Stack space
A thread shares with its peer threads:
 - Code section
 - Data section
 - Operating-system resources
collectively known as a task. A traditional or heavyweight process is equal to a task with one thread.

In a multiple-threaded task, while one server thread is blocked and waiting, a second thread in the same task can run. Cooperation of multiple threads in the same job confers higher throughput and improved performance. Applications that require sharing a common buffer (i.e., producer–consumer) benefit from thread utilization. Threads provide a mechanism that allows sequential processes to make blocking system calls while also achieving parallelism. There are kernel-supported threads (Mach and OS/2); user-level threads, supported above the kernel via a set of library calls at the user level (Project Andrew from CMU); and a hybrid approach that implements both user-level and kernel-supported threads (Solaris 2).

A multiprocessor system is a parallel system with more than one CPU in close communication; it allows the same computer to have multiple processors. Multiple-processor configurations are very efficient, but only on some applications. A multiprocessing OS cannot be implemented on hardware that does not support address translation. Tightly coupled system – processors share memory and a clock; communication usually takes place through the shared memory.
Advantages: increased throughput; economical; increased reliability; graceful degradation; fail-safe system.
Types of Multiprocessing:
 - Symmetric multiprocessing – each processor runs an identical copy of the operating system; many processes can run at once without performance deterioration.
 - Asymmetric multiprocessing – each processor is assigned a specific task; a master processor schedules and allocates work to slave processors. More commonly used in extremely large systems.

Real-Time Systems – Used as a control device in a dedicated application such as controlling scientific experiments, medical imaging systems, industrial control systems, and some display systems, with well-defined fixed-time constraints. A program that must interact with input-output devices or other tasks within some fixed time constraints is said to be operating in real time. For example, a program used to monitor the pressure within a nuclear reactor may be required to receive and process pressure information from an external pressure sensor attached to the reactor every 100 ms; a program that controls the rocket engines on a spacecraft may be required to produce start and stop commands as needed within intervals of ¼ second. When tasks execute concurrently, their interactions are often subject to timing constraints if the overall computer system is being used for real-time processing. For example, a task A that wishes to rendezvous with another task B may not be able to delay more than a fixed length of time before proceeding, even without starting the rendezvous. In a real-time computer system, failure of part of the hardware or of an external input-output device often leads to a task being abruptly terminated; if other tasks wait on such a failed task, the entire system of tasks may deadlock and cause a crash of the system. The special demands of real-time systems require some explicit notion of time that can be used in the control and synchronization of tasks.

Hard Real-Time system: Secondary storage limited or absent; data stored in short-term memory or read-only memory (ROM). Conflicts with time-sharing systems; not supported by general-purpose operating systems.
Soft Real-Time system: Limited utility in industrial control or robotics. Useful in applications (multimedia, virtual reality) requiring advanced operating-system features.

Q. Explain multi-user, multitasking, multiprocessing and real time?
Q. Explain multiprogramming and time sharing techniques of improving CPU utilization?

Time-Sharing Systems – Interactive Computing
The CPU is multiplexed among several jobs that are kept in memory and on disk (the CPU is allocated to a job only if the job is in memory); a job is swapped in and out of memory to the disk. On-line communication between the user and the system is provided: when the operating system finishes the execution of one command, it seeks the next "control statement" not from a card reader, but from the user's keyboard. An on-line file system must be available for users to access data and code.
Timer Interrupt – A timer interrupt denotes that the time slice assigned to a process has expired; it saves the state of the process and invokes the dispatcher to bring another process into the scheme. Hence the timer interrupt is responsible only for maintaining the duration of the time slice assigned to a process: as soon as the duration of a particular time slice expires, it saves the processed state and status of the process to the process status register (containing the process table) and invokes the dispatcher to bring another process into the scheme.
Process Status Register & Process Descriptor – The Process Status Register is a register that contains a process table, with an entry, known as a process descriptor, for each running, waiting and ready-to-run process. The OS uses the process descriptor while managing processes.

The process descriptor holds, among other fields: Process Id, Process Name, Open Files For Process, Process State, Memory Allocated For Process, User Id, Owner Id, Processed Time Used So Far, Parent Process Id.

[Figure: Process Status Register (process table containing process descriptors).]
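In C, such a process descriptor could be modelled as a struct; the field names and sizes below are illustrative only, not those of any particular OS:

/* Illustrative process descriptor (one process-table entry) */
struct process_descriptor {
    int   pid;              /* Process Id */
    char  name[32];         /* Process Name */
    int   state;            /* Process State: running / ready / waiting */
    int   user_id;          /* User Id */
    int   owner_id;         /* Owner Id */
    long  cpu_time_used;    /* Processed time used so far */
    int   parent_pid;       /* Parent Process Id */
    void *memory_base;      /* Memory allocated for the process */
    int   open_files[16];   /* Open files for the process */
};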

The components of CPU scheduling are the Dispatcher and the Scheduler.

D I S P A T C H E R – It is a low-level routine (built into the OS), an internal procedure of the OS, that brings a process that is in the ready-to-run state into the running state by loading the process state from the process control register, where the state of the process was saved by the timer interrupt in the previous turn of the cycle. The selection of the process to be dispatched is based upon some fair scheduling discipline and priority. It gives control of the CPU to the process selected by the short-term scheduler.


As soon as the dispatcher gets a signal from the timer interrupt, it dispatches a process based on the scheduling algorithm and priority, and loads the state of the process that was previously saved. It is responsible for providing the following services:
 - Context switch.
 - Switching from privileged to user mode.
 - Jumping to the proper location in the user program.
Dispatch Latency – The time taken by the dispatcher to stop one process and start another running (context switch).

S C H E D U L E R – The scheduler is a low-level routine (supplied with the OS) that is responsible for selecting a particular process, from a number of active processes larger than the available time slices, kept in the ready queue and ready to run. A process migrates between the various scheduling queues:
 - Job queue – the set of all processes in the system.
 - Ready queue – the set of all processes residing in main memory, ready and waiting to execute.
 - Device queues – the sets of processes waiting for an I/O device.
Scheduling a process from among the multiple active processes in the ready queue may be based upon several scheduling algorithms and policies. All the scheduling algorithms have one motive: to decide/evaluate the priority of the processes at a given instance of time, and the process that has the higher priority at that instance is scheduled, provided there is no priority confliction (if two or more processes have the same priority, there is a conflict while selecting a process, known as process confliction). Such scheduling algorithms are based on the following three criteria:
A) Decision Mode – Specifies the instance of time at which the priorities of processes are compared, evaluated and decided for scheduling. The decision mode can be of two types:
1) Non-Preemptive Decision Mode – Once a process is started, it is allowed to run to completion; only after the completion of a process can a scheduling decision be made to assign the CPU time slice to another process. E.g. batch processing.
2) Preemptive Decision Mode – A process currently running may be halted using a priority interrupt, and a decision can be made to assign the CPU time slice to another process of higher or equal priority. It is used in all time-sharing environments: the user considers neither the resources that a certain task will consume, nor the completion of one task before starting another. E.g. Windows NT.

B) Priority Function – It is responsible for evaluating and deciding the priority of a process based on the following process parameters:
1) Number of instructions.
2) Memory requirement.
3) System involvement load.
4) Total service time.
A process can have higher priority only if it has smaller values of all of the above process parameters.

C) Arbitration Rule – Whenever two processes have similar priorities, there is a conflict while deciding and selecting a process for scheduling. To resolve such confliction, the arbitration rule allows selecting the processes on a round-robin (cyclic) basis.

CPU scheduling decisions may take place when a process (non-preemptive):
1. Switches from the running to the waiting state.
2. Switches from the running to the ready state.
3. Switches from the waiting to the ready state.
4. Terminates.

Q. Explain the different schedulers and their usages?
Q. Differentiate between the Short and Long Term Scheduler?
Q. What do you understand by the Short and Long Term Scheduler?

Types of Scheduler
A process migrates between the various scheduling queues throughout its lifetime. It is the responsibility of the OS to select a process from a queue, using its specific low-level routine known as the scheduler. The most commonly used schedulers are of three types:

a) Long Term Scheduler (Job Scheduler) – Most appropriate for and used in batch-processing OSs, where several processes are submitted collectively and executed sequentially one after another. In a batch-processing system, all submitted processes are spooled to a mass storage device, say the disk, where they are kept for later execution. It is the responsibility of the Long Term Scheduler (Job Scheduler) to select a process from this pool and load it into memory for execution. Consequently, we can say that the LTS selects a process from the pool kept on the disk and loads it into memory for execution. It can also maintain a stable degree of multiprogramming, where the average rate of process creation equals the average rate of process departure from the system. The LTS is usually invoked when a process leaves the system entirely, so during the execution of a process (a long interval) the LTS has enough time to select a process from the process pool kept on the disk as a queue. The SJF algorithm, which can be preemptive or non-preemptive, is frequently used by the long-term scheduler.

b) Short Term Scheduler (CPU Scheduler) – Most appropriate for and used in time-sharing OSs, where several processes are submitted collectively and executed concurrently in context-switch mode, using the available time slices of the CPU. It is the responsibility of the Short Term Scheduler (CPU Scheduler) to select a process from the ready queue and hand it to the CPU for execution. Consequently, we can say that the STS selects a process from the ready queue.

c) Medium Term Scheduler – An intermediate level (long – medium – short) of scheduling used by some time-sharing systems, where the scheduling frequency is also intermediate. The MTS is responsible for performing two tasks:
 - It removes a process from memory (swap-out) so that another process can be executed; and
 - at a later time, the process is brought back into memory (swap-in) from the swapping area, to continue its execution where it left off.

[Figure: Queueing diagram – the long-term scheduler loads processes into the ready queue; the short-term scheduler dispatches them from the ready queue to the CPU; a process that requests I/O moves to the I/O waiting queue(s) and, when the I/O completes, returns to the ready queue; a finished process exits the system.]

Long Term Scheduler (Job Scheduler) Vs Short Term Scheduler (CPU Scheduler)
The difference between the LTS (Long Term Scheduler) and the STS (Short Term Scheduler) is based upon the frequency with which the corresponding scheduler is called to bring a process into its execution state. The STS executes very frequently, at least once every 100 ms, because it has to bring in another process as soon as the brief duration of time (the time slice) of the current process expires. The LTS executes much less frequently, roughly once after the successful completion of a process. Consequently, the execution frequency of the STS is much greater than that of the LTS.

Criterion | Long Term Scheduler | Short Term Scheduler
Supported environment | Batch processing | Time sharing
Mode of execution | Non-preemptive | Preemptive
Selects a process for scheduling from | Process pool kept on the disk | Ready queue of processes
Frequency of execution | Low | High
Wastage of CPU time | Low | High

Frequent execution of the STS has a disadvantage: wastage of CPU time during the scheduling of processes. If the STS takes 10 ms to decide and a process then executes for 100 ms, then 10/(100+10) ≈ 9% of the CPU is being used by the STS itself, and that is wasted CPU time. Such wastage of CPU time is not significant with the LTS, because it executes far less frequently than the STS.

Short-term scheduler – selects jobs from the ready queue and connects them to the CPU.
Medium-term scheduler – selects jobs from the blocked/waiting queue and returns them to the ready queue.
Long-term scheduler – selects jobs from the pool of jobs and loads them into the ready queue.

Suppose there is a process Pr-1 having a total execution time of 1000 ms. It can be processed in either processing environment: batch processing (non-preemptive) using the LTS, or time-sharing processing (preemptive) using the STS. We compare the wasted CPU time in both cases.



Scenario-1: Time Sharing using the STS
Suppose process Pr-1 needs 10 time slices with a quantum time of 100 ms each, and the execution time of the STS is 1 ms. The STS therefore executes 10 times, for a total of 10 ms. Total time for the execution of process Pr-1 = actual execution time + scheduler execution time = 1000 + 10 = 1010 ms. Wasted CPU time = total execution time including the scheduler − actual execution time of Pr-1 = 1010 − 1000 = 10 ms.

Scenario-2: Batch Processing using the LTS
Suppose the execution time of the LTS is 1 ms. The LTS executes only once for processing Pr-1, for a total of 1 ms. Total time for the execution of process Pr-1 = actual execution time + scheduler execution time = 1000 + 1 = 1001 ms. Wasted CPU time = 1001 − 1000 = 1 ms.

Scheduling Algorithms
The following types of scheduling algorithm can be selected to schedule a process from among several processes:
1.) First Job First Serve Scheduling – In the FJFS scheduling algorithm, the process that arrives first among the several processes at a particular instance of time is given the higher priority, i.e. is selected to be scheduled for execution.

Process ID | Arrival Time (ms) | CPU Burst Time (ms)
P1 | T0 | 10
P2 | T1 | 7
P3 | T2 | 23
P4 | T3 | 12

[Figure: FJFS – processes P1, P2, P3, … Pn wait in the queue in order of arrival and are serviced by the CPU in that order, where T0 < T1 < T2 < T3 are the arrival times of processes P1, P2, P3 and P4.]

2.) Short Job First Serve Scheduling – Whenever the scheduling of a process cannot be decided by FJFS because more than one job arrived at the same instance of time, the scheduler opts for SJFS. In the SJFS scheduling algorithm, the process that has the least load (evaluated from its burst time) among the several processes is given priority, i.e. is selected to be scheduled for execution. SJF yields the least average waiting time. It can be preemptive or non-preemptive, and may lead to indefinite blocking.
3.) Priority Based Scheduling – A special case of SJF where a priority is associated with each process and the CPU is allocated according to the highest priority; equal priorities are scheduled on an FCFS basis. It can be preemptive or non-preemptive, and may lead to indefinite blocking.
4.) Round Robin Scheduling Algorithm – Whenever the scheduling of a process cannot be decided by FJFS or SJFS because more than one job arrived at the same instance of time with equal load/priority, the scheduler opts for Round Robin scheduling. In the Round Robin scheduling algorithm, a process is arbitrarily (randomly) selected from among the several processes having equal load and the same arrival time. Execution is scheduled for all the processes one after another in time-sharing mode; the algorithm is mainly designed for time-sharing systems, with preemption between processes. In RR scheduling the ready queue is treated as a circular queue; it is most suitable for a time-shared OS and is the preemptive version of the FIFO algorithm. The performance depends heavily upon the quantum time and varies from one quantum time to another on the same system. Round Robin scheduling is implemented using a process waiting queue, which can be of the following types:
a) Single Level Process Waiting Queue – In a single-level queue, a process whose time quantum has expired is inserted at the back of the process ready queue and another process is called for execution from the same queue. The cycle is repeated in first-in-first-out manner until the completion of all the processes.

P2

P3

Pn

P1 In

Out

Process Queue

b) Multi Level Queue – Partitions the ready queue into several queues of varying priorities (lower to higher and vice versa), according to the kind of process (i.e. whether a process is a system, interactive, batch or normal process), which forms priority groups of processes. The processes of a group are assigned permanently to a particular queue as per their priority, and each queue has its own scheduling algorithm (RR, FCFS, etc.). The queues, from high to low priority, are: system processes (high priority), interactive processes, batch processes, normal processes (low priority).
 - No promotion/demotion of a process from one queue to another at any level.
 - Can cause starvation.
 - Inflexible.
 - Low scheduling overhead between queues.

c) Multi Level Feedback Queue – There might be two types of processes in a single process waiting queue: one having a large load, which gets low priority, and another having a small load, which gets high priority. The multilevel feedback queue partitions the process ready queue into several queues according to CPU burst time: a process that requires less CPU time (low CPU burst) is assigned to a higher-priority queue, and one that requires more CPU time is assigned to a lower-priority queue at a lower level. Without feedback, a high-priority process might have to wait for a low-priority process; such waiting may cause starvation, so the OS maintains separate queues: one for the high-priority processes and another for the low-priority processes. The high-priority queue contains only the processes with higher priority (less load), and the low-priority queue contains the processes with large load. As soon as the high-priority queue becomes empty after completing its processes, processes from the low-priority queue are promoted to the high-priority queue for execution, which avoids the problem of starvation (infinite waiting).

Note: Preemptive – Round Robin, Priority Based scheduling algorithms. Non-preemptive – FIFO, SJF scheduling algorithms.

Q. State an advantage in having different time-quantum sizes for different levels of a multilevel queuing system?
The priority of a queue is based upon the assigned quantum time, which varies from level to level, so as to:
a. Get the CPU quickly, as per the burst time of the process, so that it finishes its CPU burst quickly.
b. Prevent starvation by promoting processes from the lowest-priority to the highest-priority queue.
c. -------------------------
d. -------------------------

Hence high-priority processes need not wait for the processes of the low-priority queue. If a process in a queue at any level completes its execution, the matter is settled; otherwise the process is preempted when its quantum expires and is demoted to the end of the immediately lower-level queue. If the quantum expires again at the lower level and the process still does not complete, it is demoted to the last queue, which works on an FCFS basis. If queue-0 is empty, the execution of queue-1 begins; if queue-0 and queue-1 are both empty, queue-2 begins its execution on an FCFS basis. If queue-0 is empty and queue-1 is currently being scheduled, and meanwhile a process belonging to queue-0 arrives, the execution of queue-1 is preempted immediately; the same applies between queue-2 and queue-1/queue-0.

[Figure: Multilevel feedback queues – Queue-0 (interactive, quantum = 8) holds high-priority processes H1 … Hn; Queue-1 (batch, quantum = 16) holds low-priority processes L1 … Ln; Queue-2 (first come first serve) holds processes F1 … Fn.]
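A minimal sketch of this demotion rule for one process, assuming the quanta shown above (8 and 16) and an example burst of 30 units:

#include <stdio.h>

int main(void) {
    int quantum[2] = {8, 16};   /* quanta of queue-0 and queue-1 */
    int burst = 30;             /* example CPU burst of one process */
    int level = 0;
    while (burst > 0) {
        if (level < 2) {
            int run = burst < quantum[level] ? burst : quantum[level];
            printf("queue-%d: runs %d units\n", level, run);
            burst -= run;
            if (burst > 0)
                level++;        /* quantum expired: demote to the next queue */
        } else {
            printf("queue-2 (FCFS): runs %d units to completion\n", burst);
            burst = 0;
        }
    }
    return 0;
}

For burst = 30 this prints 8 units in queue-0, 16 in queue-1 and the remaining 6 in queue-2.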

Selecting a CPU scheduling algorithm for a particular system – The selection criteria are based upon the following factors:
a) CPU Utilization – The percentage of time that the processor is busy. We want to utilize the CPU as much as possible, because it is a costly resource of the system; utilization may range from 0–100%.
b) Response Time (Rt) – For an interactive system, the time from the submission of a request until the first response is produced. It is the amount of time it takes to start responding to a submitted process, not the time it takes to output the response.
Response Time (Rt) of a process P1 = First Response Time (P1) − Arrival Time (P1)
Average Response Time (ARt) = Sum of the response times of all processes / Number of processes.
c) Waiting Time (Wt) – The time a process spends waiting in the ready queue to get into the memory for execution. TWt = sum of the time previously spent waiting in the ready/waiting/I/O queues.

The CPU scheduling algorithm, especially RR, does not affect the amount of time during which a process executes or does I/O operations.
Note: The waiting time of a process does not count the execution time of that process during its cyclic execution by the CPU in RR.
Waiting Time (Wt) of process P1 = Starting Time (P1) − Arrival Time (P1)
Average Waiting Time (AWt) = Sum of the waiting times of all processes / Number of processes.
d) Throughput – The measure of work done: the number of processes completed per unit of time, e.g. 10 processes/second. It measures the performance of the CPU.
e) Turnaround Time (Tt) – The interval from the submission of a process to the time of its completion. Mostly preferred in batch processing. The turnaround time of a process counts the execution time of that process during its cyclic execution by the CPU in RR, along with the waiting time.
Turnaround Time (Tt) = Waiting time in the waiting/ready/I/O queues + Execution time + Context-switch time
Turnaround Time (Tt) of process P1 = Finish Time (P1) − Arrival Time (P1)
Average Turnaround Time (ATt) = Sum of the turnaround times of all processes / Number of processes
f) Context-Switch Time (Ct) – The time for switching the CPU from the current process to another by saving the state of the old process and loading the saved state of the new process from the process control block (PCB) register. Context-switch time is pure overhead, because the system does no useful work while switching from one process to another; it is a part of interrupt handling, and it is highly dependent upon the hardware support.

Process Control Block (PCB) – Information associated with each process:
 - Process state
 - Program counter
 - CPU registers
 - CPU scheduling information
 - Memory-management information
 - Accounting information
 - I/O status information

IF (CPU burst time < time quantum) THEN
    the process itself releases the CPU voluntarily
ELSE IF (CPU burst time > time quantum) THEN
    the timer interrupts the running process
ENDIF

ATt = SUM(waiting time of a process to get into memory + waiting time in the ready queue + execution time on the CPU + context-switch time + time spent on I/O operations)

Optimization criteria for selecting the best algorithm:
 - Max CPU utilization
 - Max throughput
 - Min turnaround time
 - Min waiting time
 - Min response time

To determine a scheduling algorithm, there is an evaluation method called analytical evaluation, which uses the given algorithm and the system workload to produce a number that evaluates the performance of that algorithm for that workload. This is also known as deterministic modeling: deterministic modeling takes a particular predetermined workload and defines the performance of each algorithm for that workload. It is a fast and exact method.


E.g. assume we have a workload of 5 processes, given below along with their CPU burst times and priorities. All 5 processes arrive in the given order, with the given lengths of CPU burst time in milliseconds; for the waiting-time calculations that follow, all are taken to arrive at time 0, as stated. A larger CPU burst time causes a process to have lower priority.

Process ID | Arrival Time | CPU Burst Time (ms) | Priority
P1 | 0 | 5  | 3
P2 | 2 | 14 | 1
P3 | 4 | 13 | 3
P4 | 7 | 7  | 1
P5 | 9 | 11 | 4

Now we shall evaluate the best scheduling algorithm for the given set of processes using the average waiting time of the processes.

1. First Come First Serve (FCFS) – The FCFS schedule for the given set of processes is represented by the Gantt chart:

Gantt chart: | P1 | P2 | P3 | P4 | P5 |  with boundaries at 0, 5, 19, 32, 39, 50 ms.

Waiting Time Of Process | Completion Time Of Process(es) | Duration in ms
P1 | X | 0
P2 | P1 | 0+5 = 5
P3 | P1+P2 | 0+5+14 = 19
P4 | P1+P2+P3 | 0+5+14+13 = 32
P5 | P1+P2+P3+P4 | 0+5+14+13+7 = 39
Total Waiting Time (P1+P2+P3+P4+P5) | | 0+5+19+32+39 = 95 ms
Avg Waiting Time = Total Waiting Time / Total No. of Processes | | 95/5 = 19.00 ms

Q. Explain briefly the Short Term scheduling algorithm?
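The FCFS computation above can be checked with a few lines of C; the burst times are hard-coded from the table, and all processes are taken to arrive at time 0:

#include <stdio.h>

int main(void) {
    int burst[] = {5, 14, 13, 7, 11};        /* P1..P5, all arriving at time 0 */
    int n = 5, clock = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += clock;                 /* waiting time = time already elapsed */
        clock += burst[i];                   /* process runs to completion (non-preemptive) */
    }
    printf("total wait %d ms, average %.2f ms\n", total_wait, (double)total_wait / n);
    /* prints: total wait 95 ms, average 19.00 ms */
    return 0;
}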

2) Short Job First Serve / Shortest Job First Scheduling (SJF) – Is of two types: Non-Preemptive (shortest next CPU burst) and Preemptive (shortest remaining time first).

Non-Preemptive (Shortest Next CPU Burst) – The scheduling is performed by examining the length of the next CPU burst instead of its total length: when the CPU is available, it is assigned to the process that has the smallest next CPU burst. The SJFS can be preemptive or non-preemptive, and is frequently used by the long-term scheduler. Represented by the Gantt chart:

Process ID | CPU Burst Time (ms)
P1 | 5
P2 | 14
P3 | 13
P4 | 7
P5 | 11

Gantt chart: | P1 | P4 | P5 | P3 | P2 |  with boundaries at 0, 5, 12, 23, 36, 50 ms.

Waiting Time Of Process | Completion Time Of Process(es) | Duration in ms
P1 | X | 0
P4 | P1 | 0+5 = 5
P5 | P1+P4 | 0+5+7 = 12
P3 | P1+P4+P5 | 0+5+7+11 = 23
P2 | P1+P4+P5+P3 | 0+5+7+11+13 = 36
Total Waiting Time (P1+P4+P5+P3+P2) | | 0+5+12+23+36 = 76 ms
Avg Waiting Time = Total Waiting Time / Total No. of Processes | | 76/5 = 15.20 ms

Preemptive (Shortest Remaining Time First) – Preempts the currently executing process when a new process with a smaller burst time / shorter remaining time than the current one arrives at the ready queue.

Process ID | Arrival Time | CPU Burst Time (ms)
P1 | 0 | 6
P2 | 1 | 2
P3 | 2 | 7
P4 | 3 | 3

The remaining time of P1 (5 ms) is larger than the bursts of P2 and P4, hence both are scheduled while preempting P1; but the remaining time of P1 (5 ms) is smaller than that of P3, hence P1 is executed prior to P3. Represented by the Gantt chart:

Gantt chart: | P1 | P2 | P4 | P1 | P3 |  with boundaries at 0, 1, 3, 6, 11, 18 ms.

3) Priority Based Scheduling – A special case of SJF where a priority (integer) is associated with each process and the CPU is allocated to the process with the highest priority; equal priorities are scheduled on an FCFS basis. It can be preemptive or non-preemptive. In many textbooks the smallest integer denotes the highest priority.

Q. What is the problem with the Priority Scheduling algorithm? Also state the solution?
Problem: Starvation – low-priority processes may never execute, i.e. wait infinitely.
Solution: Aging – as time progresses, increment the priority of the process.

The priority is some fixed range of numbers, e.g. (a) 0–7 or (b) 0–4095. But there is no general standard/scale of priority representation, i.e. whether 0 represents the highest or the lowest priority, which leads to confusion. In this text, high numbers represent high priority and low numbers represent low priority. A larger CPU burst time causes a process to have lower priority, and processes of equal priority are scheduled on a first come first serve basis.

For the Priority Based scheduling algorithm, the given set of processes is arranged according to the associated priorities in descending order, so as to evaluate the highest priority, and represented by the Gantt chart as follows:


Process ID | CPU Burst Time (ms) | Priority
P5 | 11 | 4
P1 | 5  | 3
P3 | 13 | 3
P4 | 7  | 1
P2 | 14 | 1

Gantt chart: | P5 | P1 | P3 | P4 | P2 |  with boundaries at 0, 11, 16, 29, 36, 50 ms.

Waiting Time Of Process | Completion Time Of Process(es) | Duration in ms
P5 | X | 0
P1 | P5 | 0+11 = 11
P3 | P5+P1 | 0+11+5 = 16
P4 | P5+P1+P3 | 0+11+5+13 = 29
P2 | P5+P1+P3+P4 | 0+11+5+13+7 = 36
Total Waiting Time (P5+P1+P3+P4+P2) | | 0+11+16+29+36 = 92 ms
Avg Waiting Time = Total Waiting Time / Total No. of Processes | | 92/5 = 18.40 ms

4) Round Robin Scheduling (with quantum time = 10 ms) – Each process gets a small unit of CPU time (time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n − 1)q time units.

Process ID | CPU Burst Time (ms) | Priority
P1 | 5  | 3
P2 | 14 | 1
P3 | 13 | 3
P4 | 7  | 1
P5 | 11 | 4

The RR schedule for the given set of processes with quantum time 10 ms is represented by the Gantt chart below; the fraction over each slot (executed/total burst) shows how much of the burst has completed, and the turn in which the fraction reaches its total is the last/final turn of that process just prior to its completion.

Gantt chart: | P1 (5/5) | P2 (10/14) | P3 (10/13) | P4 (7/7) | P5 (10/11) | P2 (14/14) | P3 (13/13) | P5 (11/11) |  with boundaries at 0, 5, 15, 25, 32, 42, 46, 49, 50 ms.

Note: The waiting time of a process does not count the execution time of that process during its cyclic execution by the CPU in RR.

Waiting Time Of Process | Completion Time Of Process(es) | Duration in ms
P1 | X | 0
P2 | P1+P3+P4+P5 | 0+5+10+7+10 = 32
P3 | P1+P2+P4+P5+P2 | 0+5+10+7+10+4 = 36
P4 | P1+P2+P3 | 0+5+10+10 = 25
P5 | P1+P2+P3+P4+P2+P3 | 0+5+10+10+7+4+3 = 39
Total Waiting Time (P1+P2+P3+P4+P5) | | 0+32+36+25+39 = 132 ms
Avg Waiting Time = Total Waiting Time / Total No. of Processes | | 132/5 = 26.40 ms
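The RR waiting times above can be reproduced by a small simulation; since all processes arrive at time 0, cycling over the array in order behaves exactly like the circular ready queue:

#include <stdio.h>

int main(void) {
    int burst[]  = {5, 14, 13, 7, 11};       /* P1..P5, all arriving at time 0 */
    int remain[] = {5, 14, 13, 7, 11};
    int wait[5]  = {0};
    int n = 5, quantum = 10, clock = 0, done = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0) continue;    /* already finished */
            int run = remain[i] < quantum ? remain[i] : quantum;
            clock += run;
            remain[i] -= run;
            if (remain[i] == 0) {
                wait[i] = clock - burst[i];  /* arrival 0: wait = completion - burst */
                done++;
            }
        }
    }
    int total = 0;
    for (int i = 0; i < n; i++) total += wait[i];
    printf("total wait %d ms, average %.2f ms\n", total, (double)total / n);
    /* prints: total wait 132 ms, average 26.40 ms */
    return 0;
}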

Comparison & Conclusion – Now we can easily compare and find the most suitable scheduling algorithm using the total and average waiting times:

Types of Scheduling | Total Waiting Time | Average Waiting Time
FCFS | 95.00 ms  | 19.00 ms
SJFS | 76.00 ms  | 15.20 ms
PB   | 92.00 ms  | 18.40 ms
RR   | 132.00 ms | 26.40 ms

Ranked by total waiting time (and equally by average waiting time), the best algorithms are, in order:

Types of Scheduling | Total Waiting Time | Average Waiting Time
SJFS | 76.00 ms  | 15.20 ms
PB   | 92.00 ms  | 18.40 ms
FCFS | 95.00 ms  | 19.00 ms
RR   | 132.00 ms | 26.40 ms

[Figure: Bar charts – scheduling performance evaluation using total waiting time (SJFS 76, PB 92, FCFS 95, RR 132 ms) and using average waiting time (SJFS 15.2, PB 18.4, FCFS 19, RR 26.4 ms).]

Self-Practice – Compute the average turnaround time using each of the following scheduling policies: a) FCFS c) RR with time quantum = 5 units


Disk/File Management
The OS is responsible for providing services to perform the following tasks:
1) Providing the structure of the file system (a set of routines that manage directories, device accesses and buffers, enabling programs to access files without being concerned with the details of physical storage characteristics and device timing) for the placement of file(s) and directories.


Types Of Directory Structure
 - Single Level – Each file must have a unique name, because there is no concept of multiple directories; hence a filename can belong to only one user (single user) at a time.
 - Double Level – A separate directory for each user (multi-user), each having its own files along with attributes (H+R+W+A).
 - Tree Structure – Allows users to create their own sub-directories to organize their own files. E.g. SCO UNIX.
 - Acyclic Graph – Allows sub-directories and files to be shared, but it is complicated in terms of searching for files, and of deletion too. E.g. Windows OS.

[Figures: directory trees, where Rd = root directory; U1/U2 = users; F1/F2/Fn = files; D1/D2/Dn = directories; S1/S2/Sn = sub-directories; Sh = shared directory.]

2) Providing a byte-stream interface for creating/opening/closing/reading/writing/getting information about files and directories.
3) Providing the file system structure – A disk may be partitioned into one or more logical disks; a file system occupies one logical disk and contains the following information (e.g. FAT16, FAT32, NTFS, i-node based file systems):


A. Method for accessing disk contents.
B. Available disk space.
C. File(s) and directory(ies) information.
D. Method for creating data blocks.

Disk Performance Parameters –
Seek Time – The time taken to position the head at the desired track.
Estimated Seek Time = (constant that depends upon the disk drive × number of tracks traversed) + startup time

Rotational Delay / Latency Time – The time required to position the head above the desired block; on average, it equals the time needed to make one half of a revolution.

Transfer Time = Number of bytes to be transferred / (Rotation speed in revolutions/sec × Number of bytes on a track)

Access Time = Seek time + Rotational latency/delay + Transfer time
Total Avg. Access Time = Avg. seek time + 1 / (2 × rotation speed in revolutions/sec) + Number of bytes to be transferred / (rotation speed in revolutions/sec × number of bytes on a track)
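A minimal numeric check of these formulas in C, using the figures of the E.G. that follows (40 ms seek, 3000 rpm, 4 KB at 1000 KB/s):

#include <stdio.h>

int main(void) {
    double seek_ms     = 40.0;                          /* average seek time */
    double rev_per_sec = 3000.0 / 60.0;                 /* 3000 rpm = 50 rev/s */
    double latency_ms  = 1000.0 / (2.0 * rev_per_sec);  /* half a revolution = 10 ms */
    double transfer_ms = 4.0 / 1000.0 * 1000.0;         /* 4 KB at 1000 KB/s = 4 ms */
    printf("access time = %.0f ms\n", seek_ms + latency_ms + transfer_ms);  /* 54 ms */
    return 0;
}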

E.G. The time to transfer 4 KB from a disk requiring 40 ms to locate a track, rotating at 3000 revolutions per minute, with a data transfer rate of 1000 KB per second is:
Access Time = 40 ms (seek) + 10 ms (half-revolution latency) + 4 ms (transfer) = 54 ms.

4) Disk Scheduling – It is the responsibility of the OS to use the disk drive and disk controller efficiently, so as to have fast access times and better disk bandwidth. The OS improves the access time and disk bandwidth by scheduling the servicing of I/O requests. Whenever a process needs an I/O operation to or from the disk, it issues a system call to the OS with the following arguments (Mop, Dadd, Madd, Nby):
a) The mode of operation, i.e. whether it is input or output.
b) The disk address for the data transfer.
c) The memory address for the data transfer.
d) The number of bytes to be transferred.

In a multiprogramming/multitasking environment there may be several processes requesting input or output operations concurrently, which leads either to immediate service, if the drive and controller are free, or to pending requests until the drive and controller become free. While the drive and controller are busy, the requests are kept in the disk queue maintained by the OS; as soon as the drive and controller become free, the OS schedules a request from the queue. The selection of a request for input or output from among several requests is based upon one of several disk-scheduling algorithms, which are as follows:


 - FCFS Scheduling
 - SSTF Scheduling
 - SCAN Scheduling
 - C-SCAN Scheduling
 - LOOK Scheduling

1) First Come First Serve Scheduling – The simplest form of disk scheduling, where the request that arrives first is processed/serviced first. Suppose disk requests kept in the disk queue come to the disk driver for I/O operations to and from the tracks in the order (earliest arrival T0 first, latest Tn last):
32  40  70  67  90  84  32  77

The disk arm/head is initially at track 66, so it will first move to track 32 for I/O operations, then to 40, and so on. The algorithm is very simple to implement, but the performance is not satisfactory: the average head movement is large.

[Figure: FCFS disk scheduling – head position (cylinder) versus request order, starting at cylinder 66 and visiting 32, 40, 70, 67, 90, 84, 32, 77 in arrival order.]

2) Shortest Seek Time First Scheduling – In SSTF, the request with the shortest seek time relative to the current position of the disk arm/head is selected from the disk queue; hence the disk arm/head only moves to the pending request nearest its current position. Suppose disk requests kept in the disk queue come to the disk driver for I/O operations to and from the tracks in the order:
32  40  70  67  90  84  32  77

The disk arm/head is initially at track 66, so it will move to 67 first, then 70, 77, 84, 90, 40 and finally 32 for I/O operations, i.e. the service order is:
67  70  77  84  90  40  32  32
SSTF is a substantial improvement in performance over FCFS.

[Figure: SSTF scheduling – head position (cylinder) versus service order, visiting cylinders 67, 70, 77, 84, 90, 40, 32, 32.]
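The head movements of FCFS and SSTF can be compared with a short C sketch; the request list and the initial head position 66 are taken from the examples above:

#include <stdio.h>
#include <stdlib.h>

/* total head movement when requests are serviced in the given order (FCFS) */
static int fcfs(int head, const int *req, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) { total += abs(req[i] - head); head = req[i]; }
    return total;
}

/* SSTF: always service the pending request nearest the current head position */
static int sstf(int head, int *req, int n) {
    int total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (req[i] >= 0 && (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        total += abs(req[best] - head);
        head = req[best];
        req[best] = -1;                     /* mark this request as serviced */
    }
    return total;
}

int main(void) {
    int a[] = {32, 40, 70, 67, 90, 84, 32, 77};
    int b[] = {32, 40, 70, 67, 90, 84, 32, 77};
    printf("FCFS head movement: %d tracks\n", fcfs(66, a, 8));  /* 201 */
    printf("SSTF head movement: %d tracks\n", sstf(66, b, 8));  /* 82 */
    return 0;
}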


3) SCAN / Elevator Algorithm – (Like a lift: first go to the ground floor (0) and then move/elevate to the top floor; C-SCAN is a variation of SCAN.) The disk arm starts I/O operations at one end of the disk and moves towards the other end, servicing all the requests in the disk queue for I/O operations on the underlying cylinders (a set of tracks at one arm position forms a cylinder). At the other end, the direction of head movement is reversed back towards the beginning of the disk, servicing the underlying requests on the way, and servicing continues. Suppose disk requests kept in the disk queue come to the disk driver for I/O operations to and from the tracks in the order:
32  40  70  67  90  84  32  77

The disk head is initially at track 66, so it first moves towards the upper end, servicing track 67, then 70, 77, 84 and 90; thereafter the arm/head reverses from the end back towards track 0, servicing 40, then 32, then 32. The service order is:
67  70  77  84  90  40  32  32
SCAN is a substantial improvement in performance over FCFS, but not better than SSTF here.

[Figure: SCAN algorithm – cylinder number versus disk request order.]
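A minimal sketch of SCAN in C; as described above, the head turns around at the last request rather than at the physical end of the disk (strictly speaking that is the LOOK variant):

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *x, const void *y) { return *(const int *)x - *(const int *)y; }

int main(void) {
    int req[] = {32, 40, 70, 67, 90, 84, 32, 77};
    int n = 8, head = 66, start = 66, total = 0;
    qsort(req, n, sizeof req[0], cmp);
    /* sweep upward: service every request at or above the starting position */
    for (int i = 0; i < n; i++)
        if (req[i] >= start) { total += req[i] - head; head = req[i]; }
    /* reverse direction: service the remaining requests on the way down */
    for (int i = n - 1; i >= 0; i--)
        if (req[i] < start) { total += head - req[i]; head = req[i]; }
    printf("SCAN head movement: %d tracks\n", total);   /* 82 */
    return 0;
}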

4) C-SCAN Algorithm – The disk arm starts I/O operations at one end of the disk and moves towards the other end, servicing all the requests in the disk queue for I/O operations on the underlying cylinders. On reaching the other end, the head immediately returns to the beginning of the disk without servicing any requests along the way; it treats the cylinders as a circular list. The average head movement in this algorithm is more than in SCAN scheduling, but it provides a more uniform waiting time. Suppose disk requests kept in the disk queue come to the disk driver for I/O operations to and from the tracks in the order:
32  40  70  67  90  84  32  77
The disk head is initially at track 66, so it first moves towards the upper end, servicing track 67, then 70, 77, 84 and 90; thereafter the arm/head jumps back from the end to track 0 and then services track 32, then 32, then 40.

[Figure: C-SCAN algorithm – cylinder number versus request number.]

Q. Disk requests come into the disk driver for tracks in the order 55, 58, 39, 18, 90, 160, 150, 38, 184. The disk arm is initially placed at track 100. A seek takes 5 msec per track moved. Compare the average seek lengths.

File System Concern – Several constraints bind a file system to a particular OS; hence the following characteristics vary from one file system (and OS) to another:
Size – The size of files and of the entire file system varies from one OS to another.
Performance – Read/write access time also varies from one file system to another, and hence from one OS to another, because the R/W access methods adopted by the file system vary.
Data Availability – The recovery of data/files from the disk after a disk failure is known as data availability; it also varies from one file system to another, because the method adopted for data availability (say checksums, mirroring, etc.) varies from one FS to another.
Read Ahead & Read Behind – In read-ahead, the read operation for the next block of the file system is initiated while one block is still being processed by the CPU; in read-behind, the read of the next block is not initiated until the processing of the current block has completed. These concepts apply only to read operations and are provided by either the OS or the disk controller.

CPU Scheduling vs Disk Scheduling

CPU Scheduling:
 - The OS is responsible for the efficient use of the CPU, so as to obtain maximum CPU utilization, by scheduling the processes kept in the ready queue and ready to run.
 - While the CPU is busy, the processes are kept in the waiting/ready queue maintained by the OS; as soon as the CPU becomes free, the OS schedules a process from the ready queue.
 - The selection of a process for execution from among several processes is based upon one of several CPU-scheduling algorithms: First Job First Serve, Short Job First Serve, Priority Based, Round Robin.
 - The context-switch time and the execution time of the scheduler (long/short term), as well as maintaining the process control block register, are overheads.
 - Requires a timer interrupt.

Disk Scheduling:
 - The OS is responsible for the efficient use of the disk drive and disk controller, so as to obtain fast access times and better disk bandwidth, by scheduling the servicing of I/O requests.
 - While the drive and controller are busy, the requests are kept in the disk queue maintained by the OS; as soon as the drive and controller become free, the OS schedules a request from the queue.
 - The selection of an input or output request from among several requests is based upon one of several disk-scheduling algorithms: FCFS, SSTF, SCAN, C-SCAN, LOOK.
 - There is no concept of context-switch time, of scheduler (long/short term) execution time, or of maintaining a process control block register.
 - Does not require a timer interrupt.

Protection & Security
Assets of the Computer System & the Access Matrix – The resources of a computer system are treated as (known as) objects, as follows:
a. Software – files, programs, semaphores, processes, etc.
b. Hardware – CPU, printer, scanner, memory segments, etc.
Each object has a unique name that differentiates it from all the others, and each object has a limited set of permissible operations (e.g. a CPU can only be executed on; memory/disk can only be read or written), which makes an object an abstract data type.
Access Right – The ability of a process to execute an operation on an object, denoted e.g. <O1, Execute>.
Domain – A set/list of access rights available to a process, e.g. Domain D1 = { <P1, Print>, <File F, {r, w}> }.

The association of process and domain can be of two types:
Static – The domain contains more and more access rights, to fulfil all the requirements of a process in one place/domain. It is more complex than dynamic association, but simpler to use.
Dynamic – The domain contains only a limited set of access rights, which has less capability to satisfy the requirements of a process; the process is free to migrate from one domain to another as its requirements demand. It is simpler than static association, but more complex in operation.
Domain Qualifiers –
 - Each user may be a domain, where the objects are identified.
 - Each process may be a domain, identified by the user/process identity.
 - Each procedure may be a domain.
Access Matrix – A matrix consisting of rows and columns, where a row represents a domain and a column represents an object. Furthermore, it is a way to define the access rights of domains over objects for the process/user. It specifies a variety of policies (what will be done) and provides an appropriate mechanism (how it will be done) for defining and implementing strict control for both static and dynamic association of processes and domains, as well as for the owner of an object.

[Figure: Access matrix – rows are the domains D1–D4, columns are the objects O1, O2, Printer and File X; each cell holds the access rights (e.g. Owner/Read/Write, X, Print, R, W, W*, Switch) that the domain has over the object. The row of domain D1 is its capability list; the column of object O2 is its access list.]

Switch means the process is able to migrate from one domain to another; e.g. a process that has access to domain D3 can migrate to domains D1 and D2 as per its requirements. Apart from the domains, the owner also has special access rights over particular object(s), e.g. {Owner: Read, Write}.
* – A right marked with a star can be copied to the other domains in the same column (the copy access right); e.g. the write permission on File X held by domain D3 can be copied to D2 and D4 as per requirements.
Access List – Specifies all the access rights/permissible operations on an object, for all the domains, represented as a column of the matrix; e.g. for object O1, the access list enumerates, for each domain, the rights that domain holds over O1.
Capability List – Specifies the objects and the access rights/permissible operations allowed for a particular domain (the physical name of an object is known as a capability), represented as a row of the matrix. It is maintained by the OS rather than the process, but can be accessed by the user. E.g. domain D1 has a capability list that specifies:
<> The domain D1 can execute the object O2.
<> The domain can access the printer to print.
<> The domain can read from and write to File X.
<> The process associated with domain D1 can migrate to D2 as per requirement (dynamic association).
A process having access to D1 has the capability to exercise the access rights over all the objects that belong to domain D1.
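A dense access matrix is often stored as a two-dimensional array of permission bit masks, with domains as rows and objects as columns. In the sketch below only the D1 row follows the capability list described above; the other rows are made-up illustrations:

#include <stdio.h>

enum { R = 1, W = 2, X = 4, PRINT = 8 };  /* access-right bits */

/* matrix[d][o]: rights of domain d over object o.
   Columns: O1, O2, Printer, File X. Only the D1 row matches the
   capability list in the text; the rest is illustrative. */
static int matrix[4][4] = {
    /* O1    O2     Printer  File X           */
    {  0,    X,     PRINT,   R | W },  /* D1  */
    {  W,    0,     0,       0     },  /* D2  */
    {  0,    0,     PRINT,   W     },  /* D3  */
    {  X,    R | X, 0,       R     },  /* D4  */
};

static int allowed(int dom, int obj, int right) {
    return (matrix[dom][obj] & right) != 0;   /* test one permission bit */
}

int main(void) {
    /* domain D1 (row 0) asks to write File X (column 3) */
    printf("D1 may write File X: %s\n", allowed(0, 3, W) ? "yes" : "no");
    return 0;
}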


Criterion | Access List | Capability List
Belongs to | The object(s) | The domain(s)
Represented by | The column(s) | The row(s)
Maintained by | The process | The OS
Specifies | The accessing of an object and the permissible operations by all the domains | All the objects and permissible operations belonging to the domain
Objective | Object specification | Domain specification

Security – The OS is responsible for providing security for the resources of the single/multiple users. The OS ensures the following:
1) Ownership of resources must be respected.
2) Disk files must be protected from users other than the owner.
3) Use of passwords to control access.
4) Encryption of files.
5) Encryption of data sent across the network.
In support of security, some OSs (say Windows 2000) maintain an Access Control List (a list, part of an object's security, that grants or denies permissions to specific users) containing the permissions on the corresponding resources.
Possible categories of attack on the security of a computer system or network –

Encryption – Types of encryption:
a) Symmetric – uses the same key for encryption and decryption.
b) Asymmetric – uses a public–private key pair, as in the RSA algorithm below.

Rivest–Shamir–Adleman (RSA) Algorithm – public–private key encryption.
Sender:
1. Use the number N = the product of two prime numbers.
2. Encode the data to be encrypted as a number to create the plaintext (P). E.g. the character F is encoded as the number 6; then P = 6.
3. Calculate the ciphertext C = P^Kpu MOD N = 6^5 MOD 119 = 41.
4. Send C to the destination.
Receiver:
1. Receive C sent by the sender.
2. Calculate the plaintext P = C^Kpr MOD N = 41^77 MOD 119 = 6.
3. Decode plaintext 6 as F.
How to select the public key (Kpu), the private key (Kpr) and N:
1. Select two prime numbers, say x and y; e.g. 7 and 17.
2. Calculate N = x × y = 7 × 17 = 119.
3. Select Kpu such that it has no factor in common with (x−1) × (y−1) = 6 × 16 = 96, whose prime factors are 2, 2, 2, 2, 2, 3; hence select 5.
4. Select Kpr such that (Kpu × Kpr) MOD ((x−1) × (y−1)) = 1; hence Kpr = 77.
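The worked example can be verified with a few lines of C using square-and-multiply modular exponentiation; the numbers (N = 119, Kpu = 5, Kpr = 77, P = 6) are those above:

#include <stdio.h>

/* modular exponentiation: (base^exp) mod m, small-number version */
static long modpow(long base, long exp, long m) {
    long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = result * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    long n = 119, kpu = 5, kpr = 77;     /* N = 7 * 17, keys from the example */
    long p = 6;                          /* plaintext: 'F' encoded as 6 */
    long c = modpow(p, kpu, n);          /* encrypt: 6^5 mod 119 = 41 */
    long back = modpow(c, kpr, n);       /* decrypt: 41^77 mod 119 = 6 */
    printf("cipher = %ld, decrypted = %ld\n", c, back);
    return 0;
}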

Distributed System
Distributed Systems – A collection of processors, each having its own local memory, which communicate with each other through communication lines such as high-speed buses or telephone lines; the computation is distributed among several physical processors.
Loosely coupled system – Each processor has its own local memory; processors communicate with one another through various communication lines, such as high-speed buses or telephone lines.
Tightly coupled system – Several processors share a common memory and a clock; the communication is performed through the common memory.
Advantages of distributed systems:
• Resource sharing
• Computation speed-up – load sharing
• Reliability
• Communication
Networking (a type of distributed system) OS – A distributed system is a collection of processors that do not share memory or a clock; each processor has its own local memory, and the processors in the system are connected through a communication network. A distributed system provides user access to various system resources. Access to a shared resource allows:
 - Computation speed-up
 - Increased data availability
 - Enhanced reliability
Disadvantages:
 - Suffers from hardware failures.
 - Overhead of reconfiguring the system after recovery from a failure.
Types of Distributed System:
• LAN
• WAN
Both are composed of processors distributed geographically, over a small area and a large area respectively; the difference between them is in the way they are distributed geographically.
Failure Recovery in a Distributed System – To detect link and site failures: Handshaking.
1. At a regular time interval (T), sites A and B send each other an "I Am Up" message.
2. If either site A or B does not receive the message within the time interval T, it assumes that the other site has failed.
3. The working site then either waits for another time period T or sends an "Are You Up?" message to the failed site.
4. If it still does not get a reply, it assumes a link failure and sends the same message along another route, to be acknowledged regarding the site failure, using a "time-out scheme".
Q. What aspects of a distributed system would you select for a system running on a totally reliable network?
a) Resource sharing b) Computation speed-up c) Reliability d) Communication e) Robustness
Q. What are the advantages of a distributed system?
a) Data migration b) Computation migration c) Resource sharing d) Reliability e) Communication f) Process migration
1) Load balancing 2) Computation speed-up 3) Hardware performance 4) Software performance 5) Data access

