SNMR IA2 AB

1. List and explain NAS file sharing protocols.

Two common NAS file sharing protocols are:
1. NFS (Network File System): the traditional UNIX-environment file sharing protocol.
2. CIFS (Common Internet File System): the traditional Microsoft-environment file sharing protocol, based on the Server Message Block (SMB) protocol.

CIFS
CIFS is a client/server application protocol that enables client programs to make requests for files and services on remote computers over the Internet. CIFS is a public (open) variation of Microsoft's Server Message Block (SMB) protocol, which is widely used on LANs. Like SMB, CIFS runs at a higher level than, and uses, the Internet's TCP/IP protocol. CIFS is viewed as a complement to existing Internet application protocols such as the File Transfer Protocol (FTP) and the Hypertext Transfer Protocol (HTTP).

The CIFS protocol allows the client to:
i) Get access to files that are local to the server and read and write to them
ii) Share files with other clients using special locks
iii) Restore connections automatically in case of network failure
iv) Use Unicode file names

In general, CIFS gives the client user better control of files than FTP. It provides a potentially more direct interface to server programs than is currently available through a web browser and the HTTP protocol. CIFS runs over TCP/IP and uses DNS (Domain Name System) for name resolution.

These file system protocols allow users to share file data across different operating environments and provide a means for users to migrate transparently from one operating system to another.
1. The file system is mounted remotely using the NFS or CIFS protocol.
2. Application I/O requests are transparently transmitted to the remote file system by the NFS/CIFS protocol. This is also known as redirection.
3. They utilize mature data transport (TCP/IP) and media access protocols.

The NAS device assumes responsibility for organizing block-level data (reads and writes) on disk and managing the cache, ensuring efficient management and data security. NFS and CIFS enable the owner of a file to set the required type of access, such as read-only or read-write, for a particular user or group of users. In both implementations, users are unaware of the location of the file system. In addition, a name service, such as the Domain Name System (DNS), Lightweight Directory Access Protocol (LDAP), or Network Information Service (NIS), helps users identify and access a unique resource over the network. A naming service protocol creates a namespace, which holds the unique name of every network resource and helps recognize resources on the network.

NFS
NFS is a client/server application that enables a computer user to view and, optionally, store and update files on a remote computer as though they were on the user's own computer. It uses Remote Procedure Calls (RPC) to communicate between computers. The user's system requires an NFS client to connect to the NFS server. Because the NFS server and client use TCP/IP to transfer files, TCP/IP must be installed on both systems. Using NFS, the user or system administrator can mount all or a portion of a file system (a portion of the hierarchical tree in any file directory and subdirectory). The portion of the file system that is mounted (designated as accessible) can be controlled using permissions (e.g., read-only or read-write).
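A minimal sketch of the redirection described above, assuming a hypothetical NFS or CIFS export already mounted at /mnt/nfs_share: the application uses ordinary local-file APIs, and the OS-level NFS/CIFS client forwards the I/O to the remote file server.

```python
# Minimal sketch: an application reads and writes through a (hypothetical)
# NFS/CIFS mount exactly as it would on a local disk. The NFS/CIFS client
# in the OS redirects these I/O requests to the remote file server.
from pathlib import Path

MOUNT_POINT = Path("/mnt/nfs_share")   # hypothetical mount point (mounted via NFS or CIFS)

def write_report(name: str, text: str) -> Path:
    """Write a file on the remote share using ordinary local-file semantics."""
    target = MOUNT_POINT / name
    target.write_text(text)            # transparently redirected to the NFS/CIFS server
    return target

def read_report(name: str) -> str:
    """Read the file back; the application never deals with NFS/CIFS directly."""
    return (MOUNT_POINT / name).read_text()

if __name__ == "__main__":
    path = write_report("report.txt", "quarterly numbers\n")
    print(read_report(path.name))
```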

2. Draw BC planning lifecycle.

3. List the different components of an Information System.

Components of an Information System:
Technology: Technology can be thought of as the application of scientific knowledge for practical purposes. The first three components of information systems (hardware, software, and data) all fall under the category of technology.
People: A focus on the people involved in information systems is the next component, from front-line help-desk workers, to systems analysts, to programmers, all the way up to the chief information officer.
Process: The last component of information systems is process. A process is a series of steps undertaken to achieve a desired outcome or goal. Information systems are becoming more and more integrated with organizational processes, bringing more productivity and better control to those processes.

4. Define RTO and RPO.

Recovery Point Objective (RPO) defines the criticality of the data and determines how much data the business can acceptably lose if a failure occurs. The more critical the data, the lower the RPO.
Recovery Time Objective (RTO) is the estimated repair time and determines the acceptable amount of time needed for recovery. A worked check of both objectives is sketched below.
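A minimal sketch, assuming purely illustrative figures (an RPO of 1 hour, an RTO of 4 hours, and a hypothetical backup schedule): it checks whether a given backup interval and estimated recovery time meet the stated objectives.

```python
from datetime import timedelta

# Hypothetical, illustrative objectives: the business tolerates at most
# 1 hour of lost data (RPO) and at most 4 hours of downtime (RTO).
RPO = timedelta(hours=1)
RTO = timedelta(hours=4)

def meets_objectives(backup_interval: timedelta, estimated_recovery: timedelta) -> bool:
    """Worst-case data loss equals the backup interval; recovery time must fit the RTO."""
    meets_rpo = backup_interval <= RPO       # at most RPO worth of data can be lost
    meets_rto = estimated_recovery <= RTO    # service restored within the RTO
    return meets_rpo and meets_rto

# Example: hourly backups and a 3-hour restore satisfy both objectives.
print(meets_objectives(timedelta(hours=1), timedelta(hours=3)))   # True
print(meets_objectives(timedelta(hours=6), timedelta(hours=3)))   # False (violates RPO)
```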

5. Define Query Structure.

A query consists of one or more conditions that must be met by each work item in the result set.

6. Explain the different forms of virtualization.

The various forms of virtualization are memory virtualization, network virtualization, server virtualization, and storage virtualization.

1. Memory Virtualization
Virtual memory makes an application appear as if it has its own contiguous logical memory, independent of the existing physical memory resources. With technological advancements, memory technology has changed and the cost of memory has decreased. Virtual memory managers (VMMs) have evolved, enabling multiple applications to be hosted and processed simultaneously. In a virtual memory implementation, a memory address space is divided into contiguous blocks of fixed-size pages. A process known as paging saves inactive memory pages onto the disk and brings them back to physical memory when required. This enables efficient use of the available physical memory among different processes. The space used by VMMs on the disk is known as a swap file. A swap file (also known as a page file or swap space) is a portion of the hard disk that functions like physical memory (RAM) to the operating system. The operating system typically moves the least used data into the swap file so that RAM remains available for processes that are more active. Because the space allocated to the swap file is on the hard disk (which is slower than physical memory), access to this file is slower. A conceptual sketch of this paging behaviour is given below.
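A minimal, purely conceptual sketch of paging, assuming a tiny physical memory of a few frames and least-recently-used eviction into a simulated swap file; a real virtual memory manager is far more sophisticated than this.

```python
from collections import OrderedDict

# Conceptual paging simulation (not how a real OS/VMM is implemented):
# a small "physical memory" holds a few pages; the least recently used
# page is evicted to a simulated swap file when memory is full.
PHYSICAL_FRAMES = 3

physical_memory = OrderedDict()   # page_id -> data, ordered by recency of use
swap_file = {}                    # page_id -> data, pages paged out to disk

def access_page(page_id: str, data: str = "") -> str:
    """Touch a page, paging it in from swap (or allocating it) if necessary."""
    if page_id in physical_memory:
        physical_memory.move_to_end(page_id)          # mark as most recently used
        return physical_memory[page_id]
    if len(physical_memory) >= PHYSICAL_FRAMES:       # memory full: page out the LRU page
        victim, victim_data = physical_memory.popitem(last=False)
        swap_file[victim] = victim_data
    physical_memory[page_id] = swap_file.pop(page_id, data)  # page in, or allocate new
    return physical_memory[page_id]

for p in ["A", "B", "C", "D", "A"]:   # "D" evicts "A"; re-accessing "A" then evicts "B"
    access_page(p, data=f"contents of {p}")
print("in RAM:", list(physical_memory), "| swapped out:", list(swap_file))
```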

2. Network Virtualization
Network virtualization creates virtual networks whereby each application sees its own logical network, independent of the physical network. A virtual LAN (VLAN) is an example of network virtualization that provides an easy, flexible, and less expensive way to manage networks. VLANs make large networks more manageable by enabling centralized configuration of devices located in physically diverse locations. Consider a company in which the users of a department are spread over a metropolitan area, with their resources centrally located at one office. In a typical network, each location has its own network connected to the others through routers. When network packets cross routers, latency influences network performance. With VLANs, users with similar access requirements can be grouped together into the same virtual network. This setup eliminates the need for network routing. As a result, although users are physically located at disparate locations, they appear to be at the same location, accessing resources locally. In addition to improving network performance, VLANs also provide enhanced security by isolating sensitive data from the other networks and by restricting access to the resources located within the networks. A virtual SAN (virtual fabric) is a recent evolution of the SAN and, conceptually, functions in the same way as a VLAN.

3. Server Virtualization
Server virtualization enables multiple operating systems and applications to run simultaneously on different virtual machines created on the same physical server (or group of servers). Virtual machines provide a layer of abstraction between the operating system and the underlying hardware. Within a physical server, any number of virtual servers can be established, depending on hardware capabilities. Each virtual server seems like a physical machine to the operating system, although all virtual servers share the same underlying physical hardware in an isolated manner. For example, physical memory is shared between virtual servers, but the address space is not. Individual virtual servers can be restarted, upgraded, or even crashed without affecting the other virtual servers on the same physical machine.

4. Storage Virtualization
Storage virtualization is the process of presenting a logical view of the physical storage resources to a host. This logical storage appears and behaves as physical storage directly connected to the host. Throughout the evolution of storage technology, some form of storage virtualization has been implemented. Some examples of storage virtualization are host-based volume management, LUN creation, tape storage virtualization, and disk addressing (CHS to LBA); a sketch of the CHS-to-LBA mapping is given below.
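As an illustration of the disk-addressing example mentioned above, a minimal sketch of the standard CHS-to-LBA mapping, assuming an illustrative geometry of 16 heads per cylinder and 63 sectors per track (not tied to any particular drive):

```python
# Standard CHS-to-LBA mapping: the logical block address hides the
# physical cylinder/head/sector geometry from the host.
HEADS_PER_CYLINDER = 16   # illustrative geometry, not a real drive's values
SECTORS_PER_TRACK = 63    # sectors are numbered starting at 1 in CHS addressing

def chs_to_lba(cylinder: int, head: int, sector: int) -> int:
    """LBA = (C * HPC + H) * SPT + (S - 1)."""
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

print(chs_to_lba(0, 0, 1))    # 0    -> first block on the disk
print(chs_to_lba(0, 1, 1))    # 63   -> first sector of the second track
print(chs_to_lba(1, 0, 1))    # 1008 -> 16 * 63
```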

The key benefits of storage virtualization include increased storage utilization, the ability to add or delete storage without affecting an application's availability, and non-disruptive data migration (access to files and storage while migrations are in progress). A storage virtualization solution must be capable of addressing issues such as scalability, functionality, manageability, and support.

Scalability: Consider the scalability of an environment with no virtualization. Such an environment may have several storage arrays that provide storage independently of each other. Each array is managed independently and meets application requirements in terms of IOPS and capacity. After virtualization, a storage array can no longer be viewed as an individual entity; the environment as a whole must now be analysed. As a result, the infrastructure implemented, both at the physical level and from a virtualization perspective, must be able to handle the workload adequately, which may consist of different types of processing and traffic distribution. Greater care must be exercised to ensure that the storage devices perform to meet the appropriate requirements.


Functionality: Functionality is another challenge in storage virtualization. Currently, the storage array provides a wide range of advanced functionality necessary for meeting an application's service levels, including local replication, extended-distance remote replication, and the capability to provide application consistency across multiple volumes and arrays. In a virtualized environment, the virtual device must provide the same or better functionality than what is currently available on the storage array, and it must continue to leverage existing functionality on the arrays. It should also protect existing investments in processes, skills, training, and human resources.


Manageability: The management of the storage infrastructure in a virtualized environment is an important consideration for storage administrators. A key advantage of today's storage resource management tools in an environment without virtualization is that they provide an end-to-end view that integrates all the resources in the storage environment, along with efficient and effective monitoring, reporting, planning, and provisioning services. Introducing a virtualization device breaks this end-to-end view into three distinct domains: the server to the virtualization device, the virtualization device to the physical storage, and the virtualization device itself. The virtualized storage environment must be capable of meeting these challenges and must integrate with existing management tools to enable management of an end-to-end virtualized environment.

Support: Virtualization is not a stand-alone technology but something that has to work within an existing environment. This environment may include technologies from multiple vendors, such as switches and storage arrays, adding to its complexity. Addressing such complexity often requires multiple management tools and introduces interoperability issues. Without a virtualization solution, many companies try to consolidate products from a single vendor to ease these challenges. Introducing a virtualization solution reduces the need to standardize on a single vendor. However, supportability issues in a virtualized heterogeneous environment introduce challenges in the coordination and compatibility of products and solutions from different manufacturers and vendors.

7. Types of Indexing.

1. Primary index
2. Secondary index
3. Clustered index
4. Dense index
5. Sparse index
6. Multilevel index
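The question only lists the index types; as one illustration, a minimal sketch of a sparse index over a sorted data file, assuming one index entry per fixed-size block, where a lookup binary-searches the index and then scans only the selected block:

```python
import bisect

# Minimal sparse-index sketch: the data file is sorted on the key, and the
# index stores one entry (first key, block number) per block rather than
# one entry per record, as a dense index would.
BLOCK_SIZE = 4
records = sorted([(k, f"record-{k}") for k in range(0, 40, 3)])  # toy sorted data file

blocks = [records[i:i + BLOCK_SIZE] for i in range(0, len(records), BLOCK_SIZE)]
sparse_index = [block[0][0] for block in blocks]   # first key of each block

def lookup(key: int):
    """Binary-search the sparse index, then scan only one block of the data file."""
    block_no = bisect.bisect_right(sparse_index, key) - 1
    if block_no < 0:
        return None
    for k, value in blocks[block_no]:
        if k == key:
            return value
    return None

print(lookup(9))    # 'record-9'
print(lookup(10))   # None (10 is not in the data file)
```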

8. Explain any one NAS implementation method.

NAS implementations:



There are two types of NAS implementations: integrated and gateway. An integrated NAS device has all of its components and storage in a single enclosure (package). In a gateway implementation, the NAS head shares its storage with the SAN environment.

1. Integrated NAS:

An integrated NAS device has all the components of NAS, such as the NAS head and storage, in a single enclosure, or frame. This makes integrated NAS a self-contained environment. The NAS head connects to the IP network to provide connectivity to the clients and to service file I/O requests. The storage consists of a number of disks that can range from low-cost ATA to high-throughput FC disk drives. Management software manages the NAS head and storage configurations.


An integrated NAS solution ranges from a low-end device, which is a single enclosure, to a high-end solution that can have an externally connected storage array.


A low-end appliance-type NAS solution is suitable for applications that a small department may use, where the primary need is consolidation of storage rather than high performance or advanced features such as disaster recovery and business continuity. This solution is fixed in capacity and might not be upgradable beyond its original configuration. To expand the capacity, the solution must be scaled by deploying additional units, a task that increases management overhead because multiple devices have to be administered. In a high-end NAS solution, external and dedicated storage can be used. This enables independent scaling of capacity in terms of NAS heads or storage. However, there is a limit to the scalability of this solution.

2. Gateway NAS: 

A gateway NAS device consists of an independent NAS head and one or more storage arrays. The NAS head performs the same functions that it does in the integrated solution, while the storage is shared with other applications that require block-level I/O. Management functions in this type of solution are more complex than those in an integrated environment because there are separate administrative tasks for the NAS head and the storage. In addition to the components that are explicitly tied to the NAS solution, a gateway solution can also utilize the FC infrastructure, such as switches, directors, or direct-attached storage arrays.



Gateway NAS is the most scalable implementation because NAS heads and storage arrays can be scaled independently when required. For example, processing capacity can be added to the NAS gateway, or storage capacity can be added on the SAN independently of the NAS head. Administrators can increase performance and I/O processing capabilities for their environments without purchasing additional interconnect devices and storage. Gateway NAS also enables high utilization of storage capacity by sharing it with the SAN environment.

9. Explain symmetric and asymmetric virtualization with a neat labelled diagram.

Symmetric storage virtualisation
In symmetric storage virtualisation, the data flow and the control flow travel down the same path (see figure). This means that the abstraction from physical to logical storage necessary for virtualisation must take place within the data flow. As a result, the metadata controller is positioned directly in the data path between the servers and the storage devices, which is why symmetric virtualisation is also called in-band virtualisation.

The advantages of symmetric virtualisation are:
• The application servers can easily be provided with data access at both block and file level, regardless of the underlying physical storage devices.
• The administrator has complete control, at a central point, over which storage resources are available to which servers. This increases security and eases administration.
• Assuming that the appropriate protocols are supported, symmetric virtualisation does not place any limit on specific operating system platforms. It can thus also be used in heterogeneous environments.
• The performance of existing storage networks can be improved by the use of caching and clustering in the metadata controllers.

• The use of a metadata controller means that techniques such as snapshots or mirroring can be implemented in a simple manner, since the controller manages storage access directly. These techniques can also be used on storage devices, such as JBODs or simple RAID arrays, that do not provide them themselves.

The disadvantages of symmetric virtualisation are:
• Each individual metadata controller must be administered. If several metadata controllers are used in a cluster arrangement, administration is relatively complex and time-consuming, particularly due to the cross-computer data access layer. This disadvantage can, however, be reduced by the use of a central administration console for the metadata controllers.
• Several controllers plus cluster technology are indispensable to guarantee the fault-tolerance of data access.
• As an additional element in the data path, the controller can lead to performance problems, which makes the use of caching or load distribution over several controllers indispensable.
• It can sometimes be difficult to move data between storage devices if they are managed by different metadata controllers.

Asymmetric storage virtualisation
In contrast to symmetric virtualisation, in asymmetric virtualisation the data flow is separated from the control flow. This is achieved by moving all mapping operations from logical to physical drives to a metadata controller outside the data path. The metadata controller now only has to look after the administrative and control tasks of virtualisation; the flow of data takes place directly from the application servers to the storage devices. As a result, this approach is also called out-band virtualisation.

The following advantages of asymmetric virtualisation can be established:
• Complete control of storage resources through fully centralised management on the metadata controller.
• Maximum throughput between servers and storage devices, due to the separation of the control flow from the data flow, which avoids additional devices in the data path.
• In comparison to the development and administration of a fully functional volume manager on every server, porting the agent software involves a low cost.
• As in the symmetric approach, advanced storage functions such as snapshots or mirroring can be used on storage devices that do not themselves support these functions.
• To improve fault-tolerance, several metadata controllers can be brought together to form a cluster. This is easier than in the symmetric approach, since no physical connection from the servers to the metadata controllers is necessary for the data flow.

The disadvantages of asymmetric virtualisation are:

• Special agent software is required on the servers or the host bus adapters. This can make it more difficult to use this approach in heterogeneous environments, since such software or a suitable host bus adapter must be present for every platform. Incompatibilities between the agent software and existing applications may sometimes make the use of asymmetric virtualisation impossible.
• The development cost increases further if the agent software and the metadata controller are also to permit access at file level in addition to access at block level.
• A performance bottleneck can arise as a result of the frequent communication between the agent software and the metadata controller. These bottlenecks can be remedied by caching the physical storage information; however, caching to increase performance requires an ingenious distributed caching algorithm to avoid data inconsistencies. A further option is the installation of a dedicated cache server in the storage network.

10. Write a note on Business Impact Analysis.

A business impact analysis (BIA) is a way to predict the consequences of disruptions to a business and its processes and systems by collecting relevant data, which can then be used to develop strategies for the business to recover in the case of an emergency. Scenarios that could potentially cause losses to the business are identified; these can include suppliers not delivering, delays in service, and so on. The list of possibilities is long, but it is key to explore them thoroughly in order to assess risk as accurately as possible. By identifying and evaluating these potential risk scenarios, a business can come up with a plan of investment for recovery and mitigation strategies, along with outright prevention.

11. Difference between block-level virtualization and file-level virtualization.

Block-level virtualization:
• Storage capacity is made available to the operating system or the applications in the form of virtual disks.
• The task of file system management remains the responsibility of the operating system or the applications.
• The task of the virtualisation entity is to map these virtual blocks to the physical blocks of the real storage devices.

File-level virtualization:
• The virtualisation entity provides virtual storage to the operating systems or applications in the form of files and directories.
• The applications work with files instead of blocks, and the conversion of the files to virtual blocks is performed by the virtualisation entity itself. (That is, the task of file system management is performed by the virtualisation entity, unlike in block-level virtualization, where it is done by the OS or application.)
• The physical blocks are presented in the form of a virtual file system rather than in the form of virtual blocks.
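A minimal sketch of the core mapping in block-level virtualization, assuming a hypothetical virtual disk whose virtual block numbers are translated to (physical device, physical block) pairs by a simple table; a real virtualisation entity keeps far more elaborate metadata.

```python
# Toy mapping table for one virtual disk: virtual block number ->
# (physical device, physical block). The virtualisation entity consults
# this table on every I/O; the host only ever sees virtual block numbers.
virtual_disk_map = {
    0: ("array-A", 1000),
    1: ("array-A", 1001),
    2: ("array-B", 52),     # a virtual disk can span physical devices
    3: ("array-B", 53),
}

def resolve(virtual_block: int) -> tuple[str, int]:
    """Translate a virtual block address to its physical location."""
    try:
        return virtual_disk_map[virtual_block]
    except KeyError:
        raise ValueError(f"virtual block {virtual_block} is not mapped") from None

print(resolve(2))   # ('array-B', 52) -- the host never sees this detail
```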

12. Document Surrogate.

The most common way to show results for a query is to list information about the documents in order of their computed relevance to the query. Alternatively, for pure Boolean ranking, documents are listed according to a metadata attribute, such as date. Typically, the document list consists of each document's title and a subset of important metadata, such as the date, source, and length of the article. In systems with statistical ranking, a numerical score or percentage is also often shown alongside the title, where the score indicates a computed degree of match or probability of relevance.

This kind of information is sometimes referred to as a document surrogate.
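A minimal sketch of building document surrogates for a ranked result list, assuming hypothetical fields (title, date, source, length, and a relevance score) like those described above:

```python
from dataclasses import dataclass

@dataclass
class DocumentSurrogate:
    """Compact stand-in for a document in a result list: title, key metadata, score."""
    title: str
    date: str
    source: str
    length_words: int
    score: float    # computed degree of match / probability of relevance

# Hypothetical ranked results for some query, highest score first.
results = [
    DocumentSurrogate("Storage Virtualization Basics", "2019-03-01", "Tech Weekly", 1200, 0.92),
    DocumentSurrogate("Intro to NAS and SAN", "2018-11-15", "IT Journal", 950, 0.81),
]

for rank, doc in enumerate(sorted(results, key=lambda d: d.score, reverse=True), start=1):
    print(f"{rank}. {doc.title} ({doc.date}, {doc.source}, {doc.length_words} words) "
          f"score={doc.score:.0%}")
```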

13. Explain the SCSI client-server model.

SCSI-3 Client-Server Model
The SCSI-3 architecture derives its base from the client-server relationship, in which a client directs a service request to a server, which then fulfills the client's request. In a SCSI environment, the initiator-target concept represents the client-server model. In the SCSI-3 client-server model, a particular SCSI device acts as a SCSI target device, a SCSI initiator device, or a SCSI target/initiator device.



Each device performs the following functions:

1. SCSI initiator device: issues a command to the SCSI target device to perform a task. A SCSI host adaptor is an example of an initiator.
2. SCSI target device: executes commands received from a SCSI initiator to perform the task. Typically, a SCSI peripheral device acts as a target device. However, in certain implementations, the host adaptor can also be a target device.

The following figure displays the SCSI-3 client-server model, in which a SCSI initiator (the client) sends a request to a SCSI target (the server). The target performs the requested tasks and sends the output to the initiator using the protocol service interface.



A SCSI target device contains one or more logical units. A logical unit is an object that implements one of the device functional models described in the SCSI command standards. The logical unit processes the commands sent by a SCSI initiator. A logical unit has two components, a device server and a task manager, as shown in the figure: the device server addresses the client's requests, and the task manager performs management functions.
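A minimal conceptual sketch of the initiator-target-logical-unit relationship described above, using invented class and method names purely for illustration (this is not a real SCSI stack):

```python
# Conceptual model only: invented classes illustrating how an initiator's
# command reaches a logical unit's device server inside a target.
class LogicalUnit:
    def __init__(self, lun: int):
        self.lun = lun

    def device_server(self, command: str) -> str:
        """The device server component executes the client's requests."""
        return f"LUN {self.lun}: executed '{command}'"

    def task_manager(self, action: str) -> str:
        """The task manager component performs management functions."""
        return f"LUN {self.lun}: task management action '{action}' done"

class SCSITarget:
    """A SCSI target (server role) contains one or more logical units."""
    def __init__(self, luns):
        self.luns = {lu.lun: lu for lu in luns}

    def handle(self, lun: int, command: str) -> str:
        return self.luns[lun].device_server(command)

class SCSIInitiator:
    """A SCSI initiator (client role), e.g. a host adaptor, issues commands to a target."""
    def send_command(self, target: SCSITarget, lun: int, command: str) -> str:
        return target.handle(lun, command)   # response returns over the service interface

target = SCSITarget([LogicalUnit(0), LogicalUnit(1)])
print(SCSIInitiator().send_command(target, lun=0, command="READ(10)"))
```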

14. What is Stemming? Explain with an example.
15. What is a Multi-Linguistic Retrieval System?
16. BC terminologies.
17. Explain the pyramid model of information systems.
