UNIT III

1. Consider the following two transactions and schedule (time goes from top to bottom). Is this schedule conflict-serializable? Explain why or why not.

Transaction T0    Transaction T1
r0[A]
w0[A]
                  r1[A]
                  r1[B]
                  c1
r0[B]
w0[B]
c0

The schedule is not conflict-serializable because the precedence graph contains a cycle. The graph has an edge T0 → T1 because the schedule contains w0[A] before r1[A]. The graph has an edge T1 → T0 because the schedule contains r1[B] before w0[B].
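The answer above can be checked mechanically by building the precedence graph and looking for a cycle. The sketch below is only an illustration of that check; the operation list and helper names are my own, not part of the original question.

```python
# Interleaved schedule from the question: (transaction, operation, data item).
schedule = [
    ("T0", "r", "A"), ("T0", "w", "A"),
    ("T1", "r", "A"), ("T1", "r", "B"),
    ("T0", "r", "B"), ("T0", "w", "B"),
]

# Precedence-graph edges: Ti -> Tj if an operation of Ti conflicts with a
# later operation of Tj (different transactions, same item, at least one write).
edges = set()
for i in range(len(schedule)):
    for j in range(i + 1, len(schedule)):
        ti, op_i, item_i = schedule[i]
        tj, op_j, item_j = schedule[j]
        if ti != tj and item_i == item_j and "w" in (op_i, op_j):
            edges.add((ti, tj))

def has_cycle(edges):
    """Detect a cycle with a simple depth-first search."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, ()):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

print(edges)             # {('T0', 'T1'), ('T1', 'T0')}
print(has_cycle(edges))  # True -> not conflict-serializable
```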

2. What type of locking is needed for insert and delete operations? When you execute an INSERT, UPDATE, or DELETE statement, the database server uses exclusive locks. An exclusive lock means that no other user can update or delete the item until the database server removes the lock. In addition, no other user can view the row unless they are using the Dirty Read isolation level.
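To make the rule concrete, here is a minimal lock-compatibility sketch (my own, not any particular database server's implementation): shared (S) locks coexist with each other, while the exclusive (X) lock taken for INSERT, UPDATE, or DELETE conflicts with every other request on the same item.

```python
# 'S' = shared (read) lock, 'X' = exclusive (write) lock. Hypothetical helper.
COMPATIBLE = {
    ("S", "S"): True,   # two readers may share an item
    ("S", "X"): False,  # a writer must wait for existing readers
    ("X", "S"): False,  # a reader must wait for the writer (unless Dirty Read)
    ("X", "X"): False,  # only one writer at a time
}

def can_grant(requested, held_modes):
    """Grant the requested lock only if it is compatible with every lock already held."""
    return all(COMPATIBLE[(held, requested)] for held in held_modes)

print(can_grant("S", ["S"]))  # True  - concurrent readers are fine
print(can_grant("S", ["X"]))  # False - row is being inserted/updated/deleted
print(can_grant("X", ["S"]))  # False - cannot delete while someone is reading
```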

3. State the need for concurrency.
1. Concurrency control is needed so that data can be updated correctly when multiple transactions execute concurrently. Concurrent execution gives increased processor and disk utilization, leading to better transaction throughput; for example, one transaction can be using the CPU while another is reading from or writing to the disk.
2. Reduced average response time for transactions: short transactions need not wait behind long ones. If the transactions ran serially, a short transaction might have to wait for a preceding long transaction to complete, which can lead to unpredictable delays in running a transaction. So concurrent execution reduces the unpredictable delays in running transactions.

4. Differentiate the strict two-phase locking protocol and the rigorous two-phase locking protocol. In strict two-phase locking the shared locks are released in the shrinking phase, but in rigorous two-phase locking all the shared and exclusive locks are kept until the end of the transaction.

The strict two-phase locking protocol provides only cascadeless schedules, so recovery is very easy. However, the set of schedules obtainable is a subset of those obtainable from plain two-phase locking, so concurrency is reduced. In rigorous two-phase locking we do not need to know the access pattern of locks on data items beforehand, so it is more appropriate for dynamic environments, while in strict two-phase locking the access pattern of locks should be specified at the start of the transaction.

5. Describe the wait-die and wound-wait schemes.
Wait-die scheme (non-preemptive): an older transaction may wait for a younger one to release a data item (older means smaller timestamp). Younger transactions never wait for older ones; they are rolled back instead. A transaction may die several times before acquiring the needed data item.
Wound-wait scheme (preemptive): an older transaction wounds (forces the rollback of) a younger transaction instead of waiting for it. Younger transactions may wait for older ones. There may be fewer rollbacks than in the wait-die scheme.
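The two schemes differ only in what happens when the requesting transaction is older or younger than the lock holder. The following sketch uses hypothetical helper names and assumes timestamps are assigned at transaction start, with a smaller timestamp meaning older.

```python
def wait_die(requester_ts, holder_ts):
    """Non-preemptive: older requesters wait, younger requesters die (roll back)."""
    if requester_ts < holder_ts:      # requester is older
        return "WAIT"
    return "ROLLBACK_REQUESTER"       # requester is younger: it dies

def wound_wait(requester_ts, holder_ts):
    """Preemptive: older requesters wound (roll back) the holder, younger ones wait."""
    if requester_ts < holder_ts:      # requester is older
        return "ROLLBACK_HOLDER"      # wound the younger holder
    return "WAIT"                     # requester is younger: it waits

# T5 (older, timestamp 5) requests an item held by T9 (younger, timestamp 9):
print(wait_die(5, 9), wound_wait(5, 9))   # WAIT ROLLBACK_HOLDER
# T9 (younger) requests an item held by T5 (older):
print(wait_die(9, 5), wound_wait(9, 5))   # ROLLBACK_REQUESTER WAIT
```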

6. List the SQL statements used for transaction control. (Nov/Dec 2011)
Transaction Control Language (TCL) commands are used to manage transactions in a database; they manage the changes made by DML statements.
1. COMMIT: permanently saves a transaction's changes into the database. Syntax: COMMIT;
2. ROLLBACK: restores the database to the last committed state. It is also used with the SAVEPOINT command to jump back to a savepoint within a transaction. Syntax: ROLLBACK TO savepoint_name;
3. SAVEPOINT: temporarily saves a point within a transaction so that you can roll back to it whenever necessary. Syntax: SAVEPOINT savepoint_name;
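As a runnable illustration, these commands can be issued like any other SQL statement. The sketch below uses Python's built-in sqlite3 in manual-transaction mode; the table and savepoint names are made up for the example.

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode, so we control
# the transaction explicitly with SQL statements.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE account (name TEXT, balance INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO account VALUES ('alice', 100)")
conn.execute("SAVEPOINT sp1")                    # temporarily save this point
conn.execute("INSERT INTO account VALUES ('bob', 200)")
conn.execute("ROLLBACK TO sp1")                  # undo work done after the savepoint
conn.execute("COMMIT")                           # make alice's row permanent

print(conn.execute("SELECT * FROM account").fetchall())  # [('alice', 100)]
conn.close()
```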

7. What is serializability? What are its types? Why is it needed in a database?
Serializability is a concept that helps to identify which non-serial schedules are correct and will maintain the consistency of the database. The two types are:
o Conflict serializability
o View serializability
It ensures that a schedule for executing concurrent transactions is equivalent to one that executes the transactions serially in some order. It assumes that all accesses to the database are done using read and write operations.

8. Under what conditions is it less expensive to avoid deadlock than to allow deadlocks to occur and then detect them?
Deadlock avoidance is preferable if the consequences of an abort are serious (as in interactive transactions), and if there is high contention and a resulting high probability of deadlock.

9. Specify the three different ways of handling deadlocks.
o Deadlock prevention
o Deadlock detection
o Deadlock recovery
Deadlock prevention or avoidance: do not allow the system to get into a deadlocked state. Deadlock detection and recovery: abort a process or preempt some resources when a deadlock is detected.

10. Define the term ACID properties.
ACID stands for Atomicity, Consistency, Isolation, Durability.
Atomicity: either all operations of the transaction are reflected properly in the database, or none are.
Consistency: the database remains consistent before and after the transaction; for example, the total of the data items involved is the same before and after the transaction.
Isolation: every transaction is independent of every other transaction.
Durability: after a transaction completes successfully, its values are stored on disk, so no loss of data occurs.

UNIT IV

1. Difference between sparse index and dense index.
Dense index: an index record appears for every search-key value in the file. The record contains the search-key value and a pointer to the actual record.
Sparse index: index records are created only for some of the records. To locate a record, we find the index record with the largest search-key value less than or equal to the search-key value we are looking for, then scan forward from there (see the lookup sketch after question 2).

2. Define data-transfer rate and mean time to failure.
Data-transfer rate: the rate at which data can be retrieved from or stored to the disk; roughly 25 to 100 MB per second at the maximum rate, and lower for inner tracks.
Mean time to failure (MTTF): the average time the disk is expected to run continuously without any failure.
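The sparse-index lookup rule from question 1 (find the largest indexed key less than or equal to the search key, then scan forward) can be sketched as follows; the index entries and data layout here are invented purely for illustration.

```python
import bisect

# Sorted data file, one entry per record: (search key, record).
data_file = [(5, "rec5"), (10, "rec10"), (15, "rec15"),
             (20, "rec20"), (25, "rec25"), (30, "rec30")]

# Sparse index: an entry only for some records, each pointing at a position
# in the data file (here, every third record).
sparse_index = [(5, 0), (20, 3)]

def sparse_lookup(key):
    """Find the largest indexed key <= key, then scan the data file forward."""
    keys = [k for k, _ in sparse_index]
    i = bisect.bisect_right(keys, key) - 1
    if i < 0:
        return None                   # key is smaller than every indexed key
    start = sparse_index[i][1]
    for k, record in data_file[start:]:
        if k == key:
            return record
        if k > key:
            break                     # passed the spot where the key would be
    return None

print(sparse_lookup(25))  # 'rec25' (reached via the index entry for key 20)
print(sparse_lookup(12))  # None    (key not present)
```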

3. How does dynamic hashing differ from static hashing?
The problem with static hashing is that it does not expand or shrink dynamically as the size of the database grows or shrinks. Dynamic hashing provides a mechanism in which data buckets are added and removed dynamically and on demand. Dynamic hashing is also known as extended hashing.

STATIC HASHING                                      DYNAMIC HASHING
The number of buckets is fixed.                     The number of buckets is not fixed.
Performance decreases as the file grows.            Performance does not degrade as the file grows.
Space overhead is more.                             Space overhead is minimal.
No bucket address table is used.                    A bucket address table is used.
Open hashing and closed hashing are forms of it.    Extendable hashing and linear hashing are forms of it.
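A minimal way to see the "buckets are not fixed" point is a toy hash table that doubles its bucket count and rehashes when a bucket overflows. This is only a simplification I am using to illustrate the idea; it is not a full implementation of extendible or linear hashing, and the capacity value is made up.

```python
class GrowingHashTable:
    """Toy dynamic hash table: the number of buckets doubles on overflow."""

    BUCKET_CAPACITY = 2  # assumed small capacity so splits are easy to see

    def __init__(self, n_buckets=2):
        self.buckets = [[] for _ in range(n_buckets)]

    def insert(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        bucket.append(key)
        if len(bucket) > self.BUCKET_CAPACITY:
            self._grow()

    def _grow(self):
        # Double the bucket count and rehash every key: static hashing cannot
        # do this, which is why its performance degrades as the file grows.
        old_keys = [k for b in self.buckets for k in b]
        self.buckets = [[] for _ in range(2 * len(self.buckets))]
        for k in old_keys:
            self.buckets[hash(k) % len(self.buckets)].append(k)

table = GrowingHashTable()
for k in range(10):
    table.insert(k)
print(len(table.buckets))  # more buckets than the initial 2
```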

4. Define software and hardware RAID systems.
With hardware RAID, logical disks are configured and mirrored outside of the system: a physical controller, typically residing on a PCI bus, manages the array and presents data to applications and operating systems as logical units. The difference between hardware RAID and software RAID is where the RAID processing happens: software RAID performs the processing on the host's CPU. Because it does not use the server processor, hardware RAID provides better performance; however, because of the need for additional hardware, it is more expensive than software RAID.

5. What are the two types of striping? Give an example for both.
If we need to read a large file, instead of pulling it all from a single hard disk it is much faster to chop it up into pieces, store some of the pieces on each of the drives in an array, and then use all the disks to read back the file when needed. This technique is called striping, after the pattern that would be visible if you could see these "chopped up pieces" on the various drives with a different colour used for each file. It is similar in concept to the memory performance-enhancing technique called interleaving.
Striping can be done at the byte level or in blocks. Byte-level striping means that the file is broken into "byte-sized pieces": the first byte of the file is sent to the first drive, the second to the second drive, and so on. Sometimes byte-level striping is done as a sector of 512 bytes. Block-level striping means that each file is split into blocks of a certain size and those blocks are distributed to the various drives.
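Block-level striping as described above is simply a round-robin split of the file across the drives. The sketch below uses an in-memory list of "drives" and a made-up block size purely for illustration.

```python
def stripe(data: bytes, n_drives: int, block_size: int = 4):
    """Split data into blocks and deal them round-robin across the drives."""
    drives = [[] for _ in range(n_drives)]
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for i, block in enumerate(blocks):
        drives[i % n_drives].append(block)
    return drives

def read_back(drives):
    """Re-interleave the blocks in the same round-robin order to rebuild the file."""
    out = []
    for i in range(max(len(d) for d in drives)):
        for d in drives:
            if i < len(d):
                out.append(d[i])
    return b"".join(out)

drives = stripe(b"ABCDEFGHIJKLMNOP", n_drives=3)  # 4-byte blocks across 3 drives
print(drives)                                     # drive 0 holds b'ABCD' and b'MNOP'
print(read_back(drives))                          # b'ABCDEFGHIJKLMNOP'
```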
