IDEAL INSTITUTE OF TECHNOLOGY
DEPARTMENT OF INFORMATION TECHNOLOGY
B Tech Semester 6th Operating Systems (TCS-601) Model Paper 1
1. Attempt any two of the following:
a) Describe the terms shell, kernel and thread.
Ans. Shell: A shell is a piece of software that provides an interface for users. Typically, the term refers to an operating-system shell, which provides access to the services of a kernel. However, the term is also applied loosely to applications, and may include any software that is "built around" a particular component, such as web browsers and email clients that are "shells" for HTML rendering engines. The name 'shell' originates from shells being an outer layer of interface between the user and the innards of the operating system (the kernel).
Operating-system shells generally fall into one of two categories: command-line and graphical. Command-line shells provide a command-line interface (CLI) to the operating system, while graphical shells provide a graphical user interface (GUI). In either category the primary purpose of the shell is to invoke or "launch" another program; however, shells frequently have additional capabilities such as viewing the contents of directories. The relative merits of CLI- and GUI-based shells are often debated. CLI proponents claim that certain operations can be performed much faster under CLI shells than under GUI shells (such as moving files, for example). However, GUI proponents advocate the comparative usability and simplicity of GUI shells. The best choice is often determined by the way in which a computer will be used. On a server mainly used for data transfers and processing with expert administration, a CLI is likely to be the best choice. On the other hand, a GUI would be more appropriate for a computer used for image or video editing and similar interactive work. Notable historic or popular Unix shells include: Bourne shell (sh), C shell (csh), Z shell (zsh).
Kernel: The kernel is the central component of most computer operating systems.
Its responsibilities include managing the system's resources (the communication between hardware and software components).[1] As a basic component of an operating system, a kernel provides the lowest-level abstraction layer for the resources (especially memory, processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls. These tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels will try to achieve these goals by executing all the
code in the same address space to increase the performance of the system, microkernels run most of their services in user space, aiming to improve maintainability and modularity of the codebase.[2] A range of possibilities exists between these two extremes.
Thread: A thread of execution is a fork of a computer program into two or more concurrently running tasks. The implementation of threads and processes differs from one operating system to another, but in most cases a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share this data. On a single processor, multithreading generally occurs by time-division multiplexing (as in multitasking): the processor switches between different threads. This context switching generally happens frequently enough that the user perceives the threads or tasks to be running at the same time. On a multiprocessor or multi-core system, the threads or tasks will generally run at the same time, with each processor or core running a particular thread or task. Support for threads in programming languages varies: a number of languages simply do not support having more than one execution context inside the same program executing at the same time. Examples of such languages include Python and OCaml, because the parallel support of their runtimes is limited by the use of a central lock, called the "Global Interpreter Lock" in Python and the "master lock" in OCaml. Other languages may be limited because they use user threads, which are not visible to the kernel and thus cannot be scheduled to run concurrently. Kernel threads, on the other hand, which are visible to the kernel, can run concurrently.
b) Describe briefly any two operating system structures.
Ans. Simple structure: Many commercial operating systems do not have a well-defined structure. MS-DOS is an example of an operating system with a simple structure.
It provides the most functionality in the least space. It is not divided into modules. Although MS-DOS has some structure, its interfaces and levels of functionality are not well separated.
MS-DOS layer structure.
Unix is another system that was initially limited by hardware functionality. It consists of two parts: the kernel and the system programs. Everything below the system-call interface and above
the physical hardware is the kernel. It provides the file system, CPU scheduling, memory management, and other operating-system functions: a large number of functions for one level.
UNIX System Structure
Microkernel Approach: As the UNIX operating system expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers developed operating systems using the microkernel approach. This method structures the operating system by removing all the nonessential components from the kernel and implementing them as system- and user-level programs. The result is a smaller kernel. The main function of the microkernel is to provide a communication facility between the client program and the various services that are also running in user space. Communication takes place between user modules using message passing. Benefits:
• Easier to extend a microkernel
• Easier to port the operating system to new architectures
• More reliable (less code is running in kernel mode)
• More secure
MacOS X Structure
Tru64 UNIX is implemented with the Mach kernel. The Apple MacOS X Server operating system is also based on the Mach kernel. Most modern operating systems implement kernel modules using an object-oriented approach: each core component is separate, each talks to the others over known interfaces, and each is loadable as needed within the kernel. Windows NT uses a hybrid approach.
c) Explain the following terms:
i. Multiprogramming
ii. Multitasking
iii. Racing
Ans. i) Multiprogramming: In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the CPU would have to stop executing program instructions while the peripheral processed the data. This was deemed very inefficient. The first efforts to create multiprogramming systems took place in the 1960s. Several different programs in a batch were loaded into the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running. Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the very first program may well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced the waiting. The early OS/360 primary control program (PCP) followed the above model but was replaced the very next year, 1967, by MFT, which limited the amount of CPU time any single process could consume before being switched out.
ii) Multitasking: Multitasking is a method by which multiple tasks, also known as processes, share common processing resources such as a CPU. In the case of a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is actively executing instructions for that task. Multitasking solves the problem by scheduling which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning a CPU from one task to another is called a context switch.
When context switches occur frequently enough, the illusion of parallelism is achieved. Even on computers with more than one CPU (called multiprocessor machines), multitasking allows many more tasks to be run than there are CPUs.
iii) Racing: A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition. To guard against race conditions, we need to ensure that only one process at a time can be manipulating the variables common to these various processes. Such situations occur frequently in operating systems as different parts of the
system manipulate resources, and we want the changes not to interfere with one another. To ensure that the race condition does not arise, we use tools such as synchronization and coordination.
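As an illustrative sketch (not part of the model answer), a race condition on a shared counter, and its prevention with mutual exclusion, can be shown with Python's threading module; the counter names are invented for the example:

```python
import threading

counter = 0

def unsafe_increment(n):
    # Read-modify-write with no synchronization: a race condition.
    # Concurrent runs of this function may lose updates.
    global counter
    for _ in range(n):
        counter += 1

lock = threading.Lock()
safe_counter = 0

def safe_increment(n):
    # The lock makes the increment a critical section: only one
    # process (thread) at a time manipulates the shared variable.
    global safe_counter
    for _ in range(n):
        with lock:
            safe_counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(safe_counter)  # always 400000 with the lock
```

Without the lock, the final value depends on the interleaving of the threads, which is exactly the race condition described above.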
2. Attempt any two of the following:
a) Define inter-process communication (IPC) and its need. Also explain the various methods used to implement IPC.
Ans. Inter-process communication provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. IPC is especially useful in a distributed environment where the communicating processes may reside on different computers connected by a network. An example is a chat program used on the World Wide Web. IPC is best provided by a message-passing system.
Message-Passing System: The function of a message system is to allow processes to communicate with one another without the need to resort to shared data. In this scheme, services are provided as ordinary user processes; that is, the services operate outside of the kernel. Communication among user processes is accomplished through the passing of messages. An IPC facility provides at least two operations: send(message) and receive(message). Messages sent by a process can be of either fixed or variable size. If only fixed-sized messages can be sent, the system-level implementation is straightforward. This restriction, however, makes the task of programming more difficult. On the other hand, variable-sized messages require a more complex system-level implementation, but the programming task becomes simpler. If processes P and Q want to communicate, they must send messages to and receive messages from each other; a communication link must exist between them. This link can be implemented in a variety of ways:
1. Direct or indirect communication
2. Symmetric or asymmetric communication
3. Automatic or explicit buffering
4. Send by copy or send by reference
5. Fixed-sized or variable-sized messages
Naming: Processes that want to communicate must have a way to refer to each other. They can use either direct or indirect communication.
1. Direct Communication: With direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send and receive primitives are defined as:
send(P, message) - Send a message to process P.
receive(Q, message) - Receive a message from process Q.
A communication link in this scheme has the following properties:
1. A link is established automatically between every pair of processes that want to communicate. The processes need to know only each other's identity to communicate.
2. A link is associated with exactly two processes.
3. Exactly one link exists between each pair of processes.
This scheme exhibits symmetry in addressing; that is, both the sender and receiver processes must name the other to communicate. A variant of this scheme employs asymmetry in addressing: only the sender names the recipient; the recipient is not required to name the sender. In this scheme, the send and receive primitives are defined as follows:
send(P, message) - Send a message to process P.
receive(id, message) - Receive a message from any process; the variable id is set to the name of the process with which communication has taken place.
The disadvantage in both symmetric and asymmetric schemes is the limited modularity of the resulting process definitions. Changing the name of a process may necessitate examining all other process definitions. All references to the old name must be found, so that they can be modified to the new name. This situation is not desirable from the viewpoint of separate compilation.
2. Indirect Communication: With indirect communication, the messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification. In this scheme, a process can communicate with some other process via a number of different mailboxes. Two processes can communicate only if they share a mailbox. The send and receive primitives are defined as follows:
• send(A, message) - Send a message to mailbox A.
• receive(A, message) - Receive a message from mailbox A.
In this scheme, a communication link has the following properties: 1.A link is established between a pair of processes only if both members of the pair have a shared mailbox. 2.A link may be associated with more than two processes. 3.A number of different links may exist between each pair of communication processes, with each link corresponding to one mailbox.
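As an illustrative sketch (not part of the model answer), indirect communication through a shared mailbox can be modelled in Python with a thread-safe queue; the mailbox and message names are invented for the example:

```python
import queue
import threading

# A mailbox (port) modelled as a shared queue: two processes can
# communicate only because they share this mailbox object.
mailbox_A = queue.Queue()

def sender():
    mailbox_A.put("hello")        # send(A, message)

def receiver(out):
    out.append(mailbox_A.get())   # receive(A, message) - a blocking receive

received = []
r = threading.Thread(target=receiver, args=(received,))
s = threading.Thread(target=sender)
r.start()
s.start()
s.join()
r.join()
print(received)  # ['hello']
```

Note that the receiver blocks on get() until a message is available, which corresponds to the blocking receive discussed in the Synchronization section below.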
Synchronization: Communication between processes takes place by calls to send and receive primitives. There are different design options for implementing each primitive. Message passing may be either blocking or nonblocking —also known as synchronous and asynchronous. • Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox. • Nonblocking send: The sending process sends the message and resumes operation. • Blocking receive: The receiver blocks until a message is available.
• Nonblocking receive: The receiver retrieves either a valid message or a null. b) What is a semaphore? Write down the algorithm for ‘the readers-writers
problem’ using semaphores.
Ans. Semaphore: The general solutions to the critical-section problem are not easily applicable to complex problems. To overcome this difficulty, we use a synchronization tool called a semaphore. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait and signal. The classical definition of wait in pseudocode is
wait(S) {
    while (S <= 0)
        ;  // no-op
    S--;
}
The classical definition of signal in pseudocode is
signal(S) {
    S++;
}
Modifications to the integer value of the semaphore in the wait and signal operations must be executed indivisibly. That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value. In addition, in the case of wait(S), the testing of the integer value of S (S <= 0) and its possible modification (S--) must also be executed without interruption.
Readers-Writers Problem: The structure of a reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
    ... reading is performed ...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);
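As an illustrative sketch (not part of the model answer), the reader structure above, together with the writer structure given next, maps directly onto Python's threading.Semaphore (whose acquire/release correspond to wait/signal); shared_data and the demo at the end are invented for the example:

```python
import threading

mutex = threading.Semaphore(1)  # protects readcount
wrt = threading.Semaphore(1)    # held by a writer, or by the group of readers
readcount = 0
shared_data = []

def reader(results):
    global readcount
    mutex.acquire()              # wait(mutex)
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()              # signal(mutex)
    results.append(list(shared_data))  # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers in again
    mutex.release()

def writer(item):
    wrt.acquire()                # wait(wrt)
    shared_data.append(item)     # writing is performed
    wrt.release()                # signal(wrt)

writer(1)
out = []
readers = [threading.Thread(target=reader, args=(out,)) for _ in range(2)]
for t in readers:
    t.start()
for t in readers:
    t.join()
print(out)  # [[1], [1]] - both readers see the written value
```

Because both readers run after the write completes, each observes the same shared data; any number of readers may hold wrt collectively, but a writer requires exclusive access.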
The structure of a writer process: wait(wrt);
---------------------
writing is performed
signal(wrt);
c) State the critical section problem. What are the basic criteria for deducing a correct solution of the critical section problem?
Ans. Critical section problem: Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that when one process is executing in its critical section, no other process is allowed to execute in its critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.
do {
    ENTRY SECTION
        critical section
    EXIT SECTION
        remainder section
} while (1);
General structure of a typical process Pi.
A solution to the critical-section problem must satisfy the following requirements:
1. Mutual Exclusion: If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
We assume that each process is executing at a nonzero speed. However, we can make no assumption concerning the relative speed of the n processes.
The solutions do not rely on any assumptions concerning the hardware instructions or the number of processes that the hardware supports. We do, however, assume that the basic machine-language instructions (the primitive instructions such as load, store and test) are executed atomically. That is, if two such instructions are executed concurrently, the result is equivalent to their sequential execution in some unknown order. Thus, if a load and a store are executed concurrently, load will get either the old value or the new value, but not some combination of the two.
A simple solution to the critical section problem (for 2 processes) is shown below.
ENTRY SECTION:
flag[i] = true;
turn = j;
while (flag[j] && turn == j);
EXIT SECTION:
flag[i] = false;
This algorithm is known as Peterson's algorithm. flag[0] and flag[1] are set to false initially.
3. Attempt any two of the following:
a)
Process | Arrival Time | Burst Time
P1      | 0.0          | 8
P2      | 0.4          | 4
P3      | 1.0          | 1
What is the average turnaround and waiting time for these processes with:
i) FCFS
ii) SJF
Ans.
i) FCFS:
| P1 | P2 | P3 |
0    8    12   13
Gantt Chart for FCFS
Average turnaround time = ((8-0)+(12-0.4)+(13-1.0))/3 = 10.53
Average waiting time = ((0-0)+(8-0.4)+(12-1.0))/3 = 6.2
ii)
SJF (non-preemptive):
| P1 | P3 | P2 |
0    8    9    13
Gantt Chart for SJF (non-preemptive)
Average turnaround time = ((8-0)+(13-0.4)+(9-1.0))/3 = 9.53
Average waiting time = ((0-0)+(9-0.4)+(8-1.0))/3 = 5.2
SJF (preemptive):
| P1 | P2 | P3 | P2 | P1 |
0   0.4  1.0  2.0  5.4   13
Gantt Chart for SJF (preemptive)
Average turnaround time = ((13-0)+(5.4-0.4)+(2.0-1.0))/3 = 6.33
Average waiting time = ((5.4-0.4-0)+(2.0-0.4-0.6)+(1.0-1.0))/3 = 2
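As an illustrative sketch (not part of the model answer), the FCFS averages above can be checked mechanically; the function name fcfs is invented for the example:

```python
def fcfs(procs):
    """procs: list of (arrival, burst) in queue order.
    Returns (average turnaround time, average waiting time)."""
    t = 0
    tat, wt = [], []
    for arrival, burst in procs:
        t = max(t, arrival) + burst   # completion time of this process
        tat.append(t - arrival)       # turnaround = completion - arrival
        wt.append(t - arrival - burst)  # waiting = turnaround - burst
    return sum(tat) / len(tat), sum(wt) / len(wt)

procs = [(0.0, 8), (0.4, 4), (1.0, 1)]
avg_tat, avg_wt = fcfs(procs)
print(round(avg_tat, 2), round(avg_wt, 2))  # 10.53 6.2
```

The same completion-time bookkeeping extends to SJF by reordering (or, for the preemptive variant, re-deciding at every arrival) which process runs next.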
b) Deduce whether there is a deadlock in the following figure or not?
[Figure: resource-allocation graph with resources R1, R2, R3, R4 and processes P1, P2, P3]
Ans.
The resource allocation states will be:

Process | Allocation  | Request     | Available
        | R1 R2 R3 R4 | R1 R2 R3 R4 | R1 R2 R3 R4
P1      | 1  0  0  0  | 0  0  0  1  | 0  0  0  0
P2      | 1  1  1  0  | 0  0  0  1  |
P3      | 1  0  0  0  | 0  0  1  0  |
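As an illustrative sketch (not part of the model answer), the standard deadlock-detection algorithm can be run over these vectors in Python; the vector values are taken from the table as best reconstructed, and with Available = 0 0 0 0 and every request nonzero, no process can proceed:

```python
def detect_deadlock(allocation, request, available):
    """Standard detection algorithm: repeatedly 'finish' any process whose
    request can be met from Work; returns the indices left unfinished."""
    n = len(allocation)
    m = len(available)
    work = list(available)
    # A process with no allocation cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]  # i finishes, releasing resources
                finish[i] = True
                changed = True
    return [i for i in range(n) if not finish[i]]

allocation = [[1, 0, 0, 0], [1, 1, 1, 0], [1, 0, 0, 0]]
request    = [[0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 1, 0]]
available  = [0, 0, 0, 0]
print(detect_deadlock(allocation, request, available))  # [0, 1, 2] - all deadlocked
```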
Now applying the deadlock detection algorithm, we get:
Step 1. Let Work and Finish be vectors.
Work := Available;  // Work = 0 0 0 0
If Allocation[i] != 0 then Finish[i] := false;  // i = 1, 2, 3
Step 2. There is no i for which Finish[i] = false and Request_i <= Work, so go to Step 4.
Step 4. Finish[i] = false for all i. Therefore the system is in a deadlocked state, and all the processes are deadlocked.
c) i) What advantage is there in having different time quantum sizes on different levels of a multilevel queuing system?
ii) Consider a system consisting of 4 resources of the same type being shared by 3 processes, each of which needs at most 2 resources. Show that the system is deadlock-free.
Ans. i) In a multilevel queuing system (having different time quantum size on its different levels),
processes are classified into different groups. Each queue in the system gets a certain portion of the CPU time, which it can then schedule among the various processes in the queue. A common classification is into foreground (interactive) processes and background (batch) processes. These two types have different response-time requirements, and so may have different scheduling needs. The interactive processes can be put in a queue that has a smaller time quantum, and the batch processes can be put in a queue that has a bigger time quantum, since the batch processes need not be interrupted frequently.
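As an illustrative sketch (not part of the model answer), the effect of quantum size on responsiveness can be seen by computing when each job first gets the CPU under round-robin; the function name and burst values are invented for the example:

```python
from collections import deque

def rr_first_response(bursts, quantum):
    """Return the time at which each job first receives the CPU
    under round-robin scheduling (all jobs arrive at time 0)."""
    ready = deque(range(len(bursts)))
    remaining = list(bursts)
    t = 0
    first = {}
    while ready:
        i = ready.popleft()
        first.setdefault(i, t)            # record first time on the CPU
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i]:
            ready.append(i)               # not finished: back of the queue
    return [first[i] for i in range(len(bursts))]

# Three jobs of 10 time units each: a small quantum gives every job the
# CPU quickly (good for interactive work); a large quantum delays later jobs.
print(rr_first_response([10, 10, 10], 2))  # [0, 2, 4]
print(rr_first_response([10, 10, 10], 8))  # [0, 8, 16]
```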
ii) Suppose, for contradiction, that the system is deadlocked. In a deadlock every process must be blocked: each holds at least one resource and is waiting for at least one more. Since each process needs at most 2 resources, a blocked process holds exactly 1 resource and requests 1 more; a process already holding 2 resources has everything it needs, so it can run to completion and release its resources. The three blocked processes therefore hold at most 3 of the 4 resources, leaving at least one resource available. That available resource can be granted to one of the waiting processes, which then holds its maximum of 2 resources, finishes, and releases both; the remaining processes can then finish one by one. This contradicts the assumption of deadlock. In every possible allocation, requests can eventually be satisfied and circular wait cannot hold, so the system is deadlock-free.
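The argument above can also be checked exhaustively; this brute-force sketch (not part of the model answer) enumerates every reachable allocation and confirms that no state blocks all three processes:

```python
from itertools import product

def deadlock_possible(total=4, nproc=3, max_need=2):
    """Deadlock requires every process to be blocked: each still needs a
    resource (holds < max_need) while none is available."""
    for holds in product(range(max_need + 1), repeat=nproc):
        if sum(holds) > total:
            continue  # more resources held than exist: unreachable
        available = total - sum(holds)
        # A process holding max_need can finish, so all-blocked requires
        # every process to hold fewer than max_need AND nothing available.
        if all(h < max_need for h in holds) and available == 0:
            return True
    return False

print(deadlock_possible())  # False - no reachable state blocks all processes
```

With each blocked process holding at most 1 resource, at most 3 of the 4 are held, so `available == 0` can never coincide with all processes being blocked, which is exactly the pigeonhole argument given above.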
4. Attempt any two of the following:
a) Consider a paging system with the page table stored in memory.
i) If a memory reference takes 200 nsec, how long does a paged memory reference take?
ii) If we add TLBs, and 75 percent of all page-table references are found in the TLBs, what is the effective memory-reference time? (Assume that finding a page-table entry in the TLB takes zero time, if the entry is there.)
Ans.
i) 1 memory reference = 200 nsec. Since the page table is in memory,
paged memory reference = 200 (for the page table) + 200 (for the page) = 400 nsec.
ii) Effective access time = (75/100)*200 (since the page-table entry is in the TLB) + (25/100)*400 (200 for the page table and 200 for the page) = 250 nsec.
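As an illustrative sketch (not part of the model answer), the effective-access-time formula generalizes to a TLB with nonzero lookup time; the function name is invented for the example:

```python
def effective_access_time(mem_ns, tlb_hit_ratio, tlb_ns=0):
    """Weighted average of the TLB-hit path (one memory access) and the
    TLB-miss path (page-table access plus the memory access itself)."""
    hit = tlb_hit_ratio * (tlb_ns + mem_ns)
    miss = (1 - tlb_hit_ratio) * (tlb_ns + 2 * mem_ns)
    return hit + miss

print(effective_access_time(200, 0.75))  # 250.0 nsec, matching part (ii)
```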
b) Define segmentation. Describe its various advantages and disadvantages in detail. Consider the following segment table:

Segment | Base | Length
0       | 219  | 600
1       | 2300 | 14
2       | 90   | 100
3       | 1327 | 580
4       | 1952 | 96

What are the physical addresses for the following logical addresses?
i) 0430
ii) 110
iii) 2500
iv) 3400
v) 4112
Ans.
i) 0430: Segment no. = 0 => base = 219 and limit = 600. Since 430 < 600, the physical address for the logical address 0430 = base + 430 = 219 + 430 = 649.
ii) 110: Segment no. = 1 => base = 2300 and limit = 14. Since 10 < 14, the physical address for the logical address 110 = 2300 + 10 = 2310.
iii) 2500: Segment no. = 2 => limit = 100. Since 500 > 100, the logical address 2500 will result in a trap to the operating system.
iv) 3400: Segment no. = 3 => base = 1327 and limit = 580. Since 400 < 580, the physical address for the logical address 3400 = 1327 + 400 = 1727.
v) 4112: Segment no. = 4 => limit = 96. Since 112 > 96, the logical address 4112 will result in a trap to the operating system.
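As an illustrative sketch (not part of the model answer), the translation rule applied in each case above can be written as a small function over the segment table:

```python
segment_table = {0: (219, 600), 1: (2300, 14), 2: (90, 100),
                 3: (1327, 580), 4: (1952, 96)}  # segment: (base, limit)

def translate(segment, offset):
    """Segmentation address translation: offset must be within the
    segment limit, otherwise the hardware traps to the OS."""
    base, limit = segment_table[segment]
    if offset >= limit:
        return "trap"  # addressing error
    return base + offset

print(translate(0, 430))  # 649
print(translate(1, 10))   # 2310
print(translate(2, 500))  # trap
print(translate(3, 400))  # 1727
print(translate(4, 112))  # trap
```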
c) Describe the least recently used (LRU) page-replacement algorithm and the LRU-approximation page-replacement algorithm in detail using relevant examples. Describe their hardware implementation.
5. Attempt any two of the following:
a) Discuss the contiguous, linked, indexed and multi-level indexing file-allocation schemes. Which allocation scheme will minimize the amount of space required in the directory structure?
b) Write down a comparative description of each of the following disk-scheduling algorithms:
i) FCFS
ii) SCAN
iii) SSTF
iv) C-SCAN
c) Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive is currently serving a request at cylinder 143, and the previous request was at cylinder 125. The queue of pending requests, in FIFO order, is: 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. Starting from the current head position, what is the total distance (in cylinders) that the disk arm moves to satisfy all the pending requests for each of the following disk-scheduling algorithms?
i) FCFS
ii) SSTF
iii) SCAN
iv) LOOK
v) C-SCAN
Ans.
i) FCFS: the head services the requests in queue order:
143 -> 86 -> 1470 -> 913 -> 1774 -> 948 -> 1509 -> 1022 -> 1750 -> 130
Total seek distance = 57 + 1384 + 557 + 861 + 826 + 561 + 487 + 728 + 1620 = 7081 cylinders.
ii) SSTF: the head always services the nearest pending request:
143 -> 130 -> 86 -> 913 -> 948 -> 1022 -> 1470 -> 1509 -> 1750 -> 1774
Total seek distance = 13 + 44 + 827 + 35 + 74 + 448 + 39 + 241 + 24 = 1745 cylinders.
iii) SCAN: the head is moving toward higher cylinder numbers (143 > 125), so it services requests upward to the end of the disk and then reverses:
143 -> 913 -> 948 -> 1022 -> 1470 -> 1509 -> 1750 -> 1774 -> 4999 -> 130 -> 86
Total seek distance = (4999 - 143) + (4999 - 86) = 4856 + 4913 = 9769 cylinders.
iv) LOOK: as SCAN, but the head reverses at the last pending request (1774) instead of the end of the disk:
143 -> 913 -> 948 -> 1022 -> 1470 -> 1509 -> 1750 -> 1774 -> 130 -> 86
Total seek distance = (1774 - 143) + (1774 - 86) = 1631 + 1688 = 3319 cylinders.
v) C-SCAN: the head services requests upward to the end of the disk, returns to cylinder 0, and continues upward:
143 -> 913 -> 948 -> 1022 -> 1470 -> 1509 -> 1750 -> 1774 -> 4999 -> 0 -> 86 -> 130
Total seek distance = (4999 - 143) + 4999 + 130 = 9985 cylinders.
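As an illustrative sketch (not part of the model answer), all five totals can be verified by summing absolute head movements over each service order:

```python
def seek_distance(start, order):
    """Total cylinders the arm moves when servicing `order` from `start`."""
    total, pos = 0, start
    for c in order:
        total += abs(pos - c)
        pos = c
    return total

queue = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]
head = 143

# FCFS: queue (arrival) order.
fcfs = seek_distance(head, queue)

# SSTF: repeatedly pick the pending request nearest the current head.
remaining, pos, order = list(queue), head, []
while remaining:
    nxt = min(remaining, key=lambda c: abs(c - pos))
    order.append(nxt)
    remaining.remove(nxt)
    pos = nxt
sstf = seek_distance(head, order)

# The head is moving upward, since 143 > 125 (the previous request).
up = sorted(c for c in queue if c >= head)
down = sorted((c for c in queue if c < head), reverse=True)

scan = seek_distance(head, up + [4999] + down)            # sweep to the edge, reverse
look = seek_distance(head, up + down)                     # reverse at the last request
cscan = seek_distance(head, up + [4999, 0] + down[::-1])  # jump back to 0, continue up

print(fcfs, sstf, scan, look, cscan)  # 7081 1745 9769 3319 9985
```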