CHAPTER-1 INTRODUCTION
1.1 INTRODUCTION:
Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the common use of a cloud-shaped symbol as an abstraction for the complex infrastructure it represents in system diagrams. Cloud computing entrusts remote services with a user's data, software and computation. It consists of hardware and software resources made available on the Internet as managed third-party services. These services typically provide access to advanced software applications and high-end networks of server computers.
FIG 1.1: STRUCTURE OF CLOUD COMPUTING
HOW CLOUD COMPUTING WORKS? The goal of cloud computing is to apply traditional supercomputing, or high-performance computing power, normally used by military and research facilities, to perform tens of trillions of computations per second in customer-oriented applications such as financial portfolios, to deliver personalised information, to provide data storage, or to power large, immersive computer games. Cloud computing uses networks of large groups of servers, typically running low-cost consumer PC technology with specialized connections, to spread data-processing chores across them. This shared IT infrastructure contains large pools of systems that are linked together. Virtualization techniques are often used to maximize the power of cloud computing.
5 Essential Characteristics of Cloud Computing
FIG 1.2: Characteristics of Cloud Computing
CHARACTERISTICS AND SERVICE MODELS: The salient characteristics of cloud computing, based on the definitions provided by the National Institute of Standards and Technology (NIST), are outlined below:
ON-DEMAND SELF-SERVICE: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically, without requiring human interaction with each service provider.
BROAD NETWORK ACCESS: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
RESOURCE POOLING: The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location-independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
RAPID ELASTICITY: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
MEASURED SERVICE: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be managed, controlled, and reported providing transparency for both the provider and consumer of the utilized service.
SERVICE MODELS: Cloud computing comprises three different service models, namely Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). The three service models, or layers, are complemented by an end-user layer that encapsulates the end-user perspective on cloud services. If a cloud user accesses services on the infrastructure layer, for instance, she can run her own applications on the resources of a cloud infrastructure and remains responsible for the support, maintenance, and security of these applications herself. If she accesses a service on the application layer, these tasks are normally taken care of by the cloud service provider.
Fig 1.1.2: Structure of service models
BENEFITS OF CLOUD COMPUTING:
1. ACHIEVE ECONOMIES OF SCALE. Increase volume output or productivity with fewer people; your cost per unit, project or product plummets.
2. REDUCE SPENDING ON TECHNOLOGY INFRASTRUCTURE. Maintain easy access to your information with minimal upfront spending. Pay as you go (weekly, quarterly or yearly), based on demand.
3. GLOBALIZE YOUR WORKFORCE ON THE CHEAP. People worldwide can access the cloud, provided they have an Internet connection.
4. STREAMLINE PROCESSES. Get more work done in less time with fewer people.
5. REDUCE CAPITAL COSTS. There's no need to spend big money on hardware, software or licensing fees.
6. IMPROVE ACCESSIBILITY. You have access anytime, anywhere, making your life much easier.
7. MONITOR PROJECTS MORE EFFECTIVELY. Stay within budget and ahead of completion cycle times.
8. LESS PERSONNEL TRAINING IS NEEDED. It takes fewer people to do more work on a cloud, with a minimal learning curve on hardware and software issues.
9. MINIMIZE LICENSING OF NEW SOFTWARE. Stretch and grow without the need to buy expensive software licenses or programs.
10. IMPROVE FLEXIBILITY. You can change direction without serious "people" or "financial" issues at stake.
ADVANTAGES:
1. PRICE: Pay only for the resources used.
2. SECURITY: Cloud instances are isolated in the network from other instances for improved security.
3. PERFORMANCE: Instances can be added instantly for improved performance. Clients have access to the total resources of the cloud's core hardware.
4. SCALABILITY: Auto-deploy cloud instances when needed.
5. UPTIME: Uses multiple servers for maximum redundancy. In case of server failure, instances can be automatically created on another server.
6. CONTROL: Able to log in from any location. Server snapshots and a software library let you deploy custom instances.
7. TRAFFIC: Deals with spikes in traffic through quick deployment of additional instances to handle the load.
1.2 EXISTING SYSTEM: Zhao et al. propose a cross-domain single sign-on authentication protocol for cloud users, whose security was also proven mathematically. In this approach, the CSP is responsible for verifying the user's identity and making access control decisions. As computing resources are shared between tenants and used in an on-demand manner, both known and zero-day system security vulnerabilities could be exploited by attackers (e.g. using side-channel and timing attacks). In the existing work, a fine-grained data-level access control model (FDACM), designed to provide role-based and data-based access control for multi-tenant applications, was presented. Relatively lightweight expressions were used to represent complex policy rules.
1.3 DISADVANTAGES OF EXISTING SYSTEM:
Traditional access control models, such as role-based access control, are generally unable to adequately deal with cross-tenant resource access requests.
Specification-level security is difficult to achieve at the user and provider ends.
A security proof of the approach was not provided.
1.4 PROPOSED SYSTEM: We use model checking to exhaustively explore the system and verify finite-state concurrent systems. Specifically, we use High Level Petri Nets (HLPN) and the Z language for the modeling and analysis of the CTAC model. We present a CTAC model for collaboration, and the CRMS to facilitate resource sharing amongst various tenants and their users. We also present four different algorithms in the CTAC model, namely: activation, delegation, forward revocation and backward revocation. We then provide a detailed presentation of the modeling, analysis and automated verification of the CTAC model using the Bounded Model Checking technique with SMT-LIB and the Z3 solver, in order to demonstrate the correctness and security of the CTAC model.
1.5 ADVANTAGES OF PROPOSED SYSTEM: HLPN provides graphical and mathematical representations of the system, which facilitates the analysis of its reactions to a given input. Therefore, we are able to understand the links between different system entities and how information is processed. We then verify the model by translating the HLPN using bounded model checking. For this purpose, we use the Satisfiability Modulo Theories Library (SMT-Lib) and the Z3 solver. We remark that such formal verification has previously been used to evaluate security protocols.
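For illustration only, the sketch below shows how a single, highly simplified access rule could be checked with the Z3 solver's Java bindings (the com.microsoft.z3 package is assumed to be on the classpath). The actual verification in this work translates the full HLPN model into SMT-LIB; the variable names used here (userAuthenticated, policySatisfied, accessGranted) are hypothetical.

```java
import com.microsoft.z3.BoolExpr;
import com.microsoft.z3.Context;
import com.microsoft.z3.Solver;
import com.microsoft.z3.Status;

public class AccessRuleCheck {
    public static void main(String[] args) {
        Context ctx = new Context();

        // Boolean state variables of a (hypothetical) simplified access rule.
        BoolExpr userAuthenticated = ctx.mkBoolConst("userAuthenticated");
        BoolExpr policySatisfied = ctx.mkBoolConst("policySatisfied");
        BoolExpr accessGranted = ctx.mkBoolConst("accessGranted");

        // Rule: access is granted exactly when the user is authenticated
        // and the providing tenant's published policy is satisfied.
        BoolExpr rule = ctx.mkEq(accessGranted,
                ctx.mkAnd(userAuthenticated, policySatisfied));

        // Property violation to search for: access granted without authentication.
        BoolExpr violation = ctx.mkAnd(accessGranted, ctx.mkNot(userAuthenticated));

        Solver solver = ctx.mkSolver();
        solver.add(rule, violation);

        // UNSATISFIABLE means no assignment of the variables violates the property.
        if (solver.check() == Status.UNSATISFIABLE) {
            System.out.println("Property holds: no unauthenticated access is possible.");
        } else {
            System.out.println("Counterexample found: " + solver.getModel());
        }
    }
}
```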
CHAPTER-2 ANALYSIS
2.1 INTRODUCTION: While there are a number of benefits afforded by the use of cloud computing to facilitate collaboration between users and organizations, security and privacy of cloud services and of user data may deter some users and organizations from using cloud services (on a larger scale), and these remain topics of interest to researchers. Typically, a cloud service provider (CSP) provides a web interface where a cloud user can manage resources and settings (e.g. allowing a particular service and/or data to selected users). A CSP then implements these access control features on consumer data and other related resources. However, traditional access control models, such as role-based access control, are generally unable to adequately deal with cross-tenant resource access requests. In particular, cross-tenant access requests pose three key challenges. Firstly, each tenant must have some prior understanding and knowledge about the external users who will access the resources. Thus, an administrator of each tenant must have a list of users to whom access will be allowed. This process is static in nature; in other words, tenants cannot leave and join the cloud as they wish, which is a typical setting for a real-world deployment. Secondly, each tenant must be allowed to define cross-tenant access for other tenants as and when needed. Finally, as each tenant has its own administration, the trust management issue among tenants can be challenging to address, particularly for hundreds or thousands of tenants. To provide a secure cross-tenant resource access service, a fine-grained cross-tenant access control model is required. In this project, we propose a cloud resource mediation service (CRMS) to be offered by a CSP, since the CSP plays a pivotal role in managing different tenants and a cloud user entrusts the data to the CSP. We posit that a CRMS can provide the CSP a competitive advantage, since the CSP can provide users with secure access control services in a cross-tenant access environment (hereafter referred to as cross-tenant access control, CTAC). From a privacy perspective, the CTAC model has two advantages. The privacy of a tenant, say T2, is protected from another tenant, say T1, and from the CRMS, since T2's attributes are not provided to T1; T2's attributes are evaluated only by the CRMS. Furthermore, a user does not provide authentication credentials to the CRMS. Therefore, the privacy of T2 is also protected, as the CRMS has no knowledge of the permissions that T2 is requesting from T1. The security policies defined by T1 use pseudonyms of the permissions without revealing the actual information to the CRMS during publication of the policies. To demonstrate the correctness and security of the proposed approach, we use model checking to exhaustively explore the system and verify finite-state concurrent systems. Specifically, we use High Level Petri Nets (HLPN) and the Z language for the modeling and analysis of the CTAC model. HLPN provides graphical and mathematical representations of the system, which facilitates the analysis of its reactions to a given input. Therefore, we are able to understand the links between different system entities and how information is processed. We then verify the model by translating the HLPN using bounded model checking. For this purpose, we use the Satisfiability Modulo Theories Library (SMT-Lib) and the Z3 solver. We remark that such formal verification has previously been used to evaluate security protocols. We regard the key contributions of this project to be as follows:
We present a CTAC model for collaboration, and the CRMS to facilitate resource sharing amongst various tenants and their users.
We also present four different algorithms in the CTAC model, namely: activation, delegation, forward revocation and backward revocation.
We then provide a detailed presentation of the modeling, analysis and automated verification of the CTAC model using the Bounded Model Checking technique with SMT-LIB and the Z3 solver, in order to demonstrate the correctness and security of the CTAC model.
2.1.1 ANALYSIS MODEL: The model being followed is the SPIRAL MODEL, in which the phases are organized in a linear order within each iteration. First of all, the feasibility study is done. Once that part is over, the requirement analysis and project planning begin. If a system already exists and modification or addition of new modules is needed, analysis of the present system can be used as the basic model. The design starts after the requirement analysis is complete, and the coding begins after the design is complete. Once the programming is completed, the testing is done. In this model the sequence of activities performed in a software development project is:
Requirement Analysis
Project Planning
System Design
Detail Design
Coding
Unit Testing
The Spiral Model was defined by Barry Boehm in his 1988 article, "A Spiral Model of Software Development and Enhancement". This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters. The following diagram shows how the spiral model works:
Fig 2.1: Spiral Model
2.2 FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. The three key considerations involved in the feasibility analysis are:
ECONOMIC FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
2.2.1 ECONOMIC FEASIBILITY: This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
2.2.2 TECHNICAL FEASIBILITY: This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
2.2.3 SOCIAL FEASIBILITY: This aspect of the study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to offer some constructive criticism, which is welcomed, as he is the final user of the system.
2.3 SYSTEM REQUIREMENT SPECIFICATION
2.3.1 INTRODUCTION: The purpose of this document is to present a detailed description of the system. It explains the purpose and features of the system, the interfaces of the system, what the system will do, the constraints under which it must operate, and how the system will react to external stimuli. This document is intended for both the stakeholders and the developers of the system and will be put forward for approval.
PURPOSE: The purpose of this Software Requirement Specification (SRS) is to guide the project. It records the requirements placed on the system, and the design, coding and testing are all prepared with the help of the SRS. The SRS also serves as a contract between the customer and the developers as to what is expected of the system and how its components are to work with each other and with external systems. This document will be reviewed by the project supervisor and corrected by the team members if the supervisor so directs.
DEVELOPERS RESPONSIBILITIES OVERVIEW: The developer is responsible for: developing the system so that it meets the SRS and satisfies all the requirements of the system; demonstrating the system and installing it at the client's location after the acceptance testing is successful; submitting the required user manual describing the system interfaces, along with the other documents of the system; and conducting any user training that might be needed for using the system.
2.3.2 USER REQUIREMENTS:
FUNCTIONAL REQUIREMENTS: Following is a list of functionalities of the browsing-enabled system:
A screen with a UI that allows the user to browse the set of shared resources.
A second screen that allows users to access a share, with permission from the administrator.
The application lifecycle should be handled appropriately; a precondition for acceptance is code that compiles and runs.
The application should allow a user to browse the shares and access them, together with their specific metadata.
The application requires a UI for browsing and a UI for access, and the two must be integrated. NetBeans provides a number of useful layout components, views, and tools that can be used to create the browser. The application should use only the keyboard and mouse as input.
NON-FUNCTIONAL REQUIREMENTS:
The system should support NetBeans.
Members should use the system through a browser.
Each member should have a separate system.
The system should ask for a username and password to open the application; it does not permit unregistered users to access the system.
The system should provide role-based access to system functions, and an approval process has to be defined.
The system should have modular customization components so that they can be reused across the implementation.
The main non-functional requirements are: secure access to confidential data (employee details); 24 x 7 availability; better component design to obtain better performance at peak time; and a flexible service-based architecture, which is highly desirable for future extension.
PERFORMANCE REQUIREMENTS: Performance is measured in terms of the output provided by the application. Requirement specification plays an important part in the analysis of a system. Only when the requirement specifications are properly given is it possible to design a system which will fit into the required environment. It rests largely with the users of the existing system to give the requirement specifications, because they are the people who will finally use the system. The requirements have to be known during the initial stages so that the system can be designed according to them. It is very difficult to change the system once it has been designed; on the other hand, designing a system which does not cater to the requirements of the user is of no use. The requirement specifications for any system can be broadly stated as given below:
The system should be able to interface with the existing system.
The system should be accurate.
The system should be better than the existing system, which is completely dependent on the user to perform all the duties.
2.3.3 SOFTWARE REQUIREMENTS:
• Operating system : Windows family
• Coding language : Java/JEE
• Tool : NetBeans 8.1 or above
• Server : Apache Tomcat 8.1 or above
• Database : MySQL
• Database GUI : SQLyog
• Design : HTML, CSS
2.3.4 HARDWARE REQUIREMENTS:
• System : Pentium IV 2.4 GHz
• Hard disk : 10 GB (minimum)
• RAM : 1 GB (minimum)
2.4 CONTEXT DIAGRAM OF PROJECT:
FIG:2.4 STRUCTURE DIAGRAM
In the above architecture, we declare that the two tenants request each other's information through the third party (the CRMS). The CRMS then performs a cross-check in the cloud to verify whether the registered tenant's status is activated or not.
CHAPTER-3 DESIGN
3.1 INTRODUCTION: Software design sits at the technical kernel of the software engineering process and is applied regardless of the development paradigm and area of application. Design is the first step in the development phase for any engineered product or system. The designer's goal is to produce a model or representation of an entity that will later be built. Once system requirements have been specified and analyzed, system design is the first of the three technical activities (design, code and test) that are required to build and verify software. The importance can be stated with a single word: "quality". Design is the place where quality is fostered in software development. Design provides us with representations of software that can be assessed for quality. Design is the only way that we can accurately translate a customer's view into a finished software product or system. Software design serves as a foundation for all the software engineering steps that follow. Without a strong design we risk building an unstable system, one that will be difficult to test and whose quality cannot be assessed until the last stage. During design, progressive refinements of data structure, program structure, and procedural detail are developed, reviewed and documented. System design can be viewed from either a technical or a project management perspective. From the technical point of view, design comprises four activities: architectural design, data structure design, interface design and procedural design.
3.2 DATA FLOW DIAGRAM:
1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system processes, the data used by the processes, the external entities that interact with the system and the information flows in the system.
3. The DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
4. A DFD may be used to represent a system at any level of abstraction. A DFD may be partitioned into levels that represent increasing information flow and functional detail.
FIG:3.1 DFD LEVEL0
FIG: 3.2 DFD LEVEL1
FIG :3.3 DFD LEVEL2
FIG:3.4 DFD LEVEL3
UML DIAGRAMS: UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created by, the Object Management Group. The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML. The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of software systems, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.
GOALS: The primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns and components.
7. Integrate best practices.
USE CASE DIAGRAM: A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases.
FIG:3.5 USE CASE DIAGRAM
CLASS DIAGRAM: In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It explains which class contains information.
The class diagram comprises four classes:
Cloud: attributes username and password; operations login(), view cross-check policies(), view uploaded files(), view tenant users(), logout().
Tenant 1 (user): attributes name, password, email id, choose tenant, gender, dob, country; operations register(), login(), file upload(), give permission to tenant T2 user(), user activation request(), permission revocation(), logout().
Tenant 2 (user): attributes name, password, email id, choose tenant, gender, file access policy, dob, country; operations register(), login(), view cross-tenant files(), authentication request(), same-tenant file download(), logout().
CRMS: attributes user name and password; operations login(), view tenant user(), view tenant T2 request(), authentication request from T2(), send result to T1(), logout().
FIG:3.6 CLASS DIAGRAM
SEQUENCE DIAGRAM: A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing diagrams.
The sequence diagram involves five participants: Tenant 1, Tenant 2, CRMS, Cloud and Database. The main interactions, in order, are: register with details; login; file upload; give permission to a Tenant 2 user; user activation request; user authentication request; result returned with the required attributes; results sent to Tenant 1; view cross-check policies; result returned and permission granted; logout.
FIG:3.7 SEQUENCE DIAGRAM
ACTIVITY DIAGRAM: Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.
FIG:3.8 ACTIVITY DIAGRAM
3.3 IMPLEMENTATION:
MODULES:
System Framework
The Responsibilities of Entities
Steps Involved for Initiating a Permission Activation Request
Revocation
MODULES DESCRIPTION:
System Framework: This project formally specifies the resource sharing mechanism between two different tenants in the presence of our proposed cloud resource mediation service. There are three main entities. To explain the service, we use an example involving two tenants, T1 and T2, together with the CRMS, where T1 is the Service Provider (SP) and T2 is the Service Requester (SR), i.e. the user's parent tenant. T1 must own some permission pi for which a user of T2 can generate a cross-tenant request. The resource request from a user of T2 must be submitted to T1, which then hands over the request to the CRMS for authentication and authorization decisions. The CRMS evaluates the request based on the security policies provided by T1.
The Responsibilities of Entities: a) Tenant T1 responsibilities: T1 is responsible for publishing cross-tenant policies on the CRMS. T1 receives access requests from T2 and redirects them to the CRMS for further processing. b) Tenant T2 responsibilities: The CRMS redirects access requests to T2 for authentication. Once the redirected access request is received, the responsibility of T2 is to authenticate the identity of the particular user. In response, T2 sends the user authentication response (valid or invalid) and the tenant authentication response to the CRMS. c) CRMS responsibilities: The CRMS receives the permission-activation request redirected from T1. Once an access request is received, the CRMS evaluates the request against the pre-published policies and responds to T1.
Steps Involved for Initiating a Permission Activation Request:
Step 1: Permission activation request: A user wishes to access a resource at T1. The user is presented with a directory where a list of shared services, along with their descriptions, is available.
Step 2: Request redirection to the CRMS: Upon selection of the shared service the user wishes to access, the user is redirected to the CRMS site. On the site, the user is asked for the parent tenant. The user selects the parent tenant and the CRMS redirects the user's request to the selected tenant (T2 in this case).
Step 3: Tenant T2 authentication: The user has to authenticate at her parent tenant, T2. Upon successful authentication, the user is redirected again to the CRMS with the attributes requested by the CRMS for cross-tenant policy execution.
Step 4: CRMS redirection to tenant T1 and permission activation: The user's attributes are evaluated against the T1 policy and, if the policy criteria are successfully fulfilled, the user is granted access to the service at T1; otherwise, the access request is denied. The CRMS also takes into account any conflict-of-interest policies, such as the Chinese Wall policy. A simplified sketch of this evaluation is shown below.
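The following Java fragment is an illustrative sketch only; the CTAC algorithms themselves are specified formally with HLPN and are not implemented this way in the text. It shows how a CRMS-style mediator could evaluate a redirected permission-activation request against pre-published policies. All names (CrmsMediator, activate, the attribute-map layout) are hypothetical.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the CRMS evaluating a cross-tenant activation request.
public class CrmsMediator {

    // Policies published by the service-providing tenant (T1), keyed by a
    // permission pseudonym and mapping to the attribute values it requires.
    private final Map<String, Map<String, String>> publishedPolicies;

    public CrmsMediator(Map<String, Map<String, String>> publishedPolicies) {
        this.publishedPolicies = publishedPolicies;
    }

    /**
     * Evaluates a permission-activation request redirected from T1.
     * The requesting user's attributes are supplied by the parent tenant (T2)
     * after it has authenticated the user (Step 3 above).
     */
    public boolean activate(String permissionPseudonym,
                            boolean userAuthenticatedByT2,
                            Map<String, String> userAttributes,
                            Set<String> conflictOfInterestSet) {
        if (!userAuthenticatedByT2) {
            return false;                      // T2 rejected the user's identity
        }
        if (conflictOfInterestSet.contains(permissionPseudonym)) {
            return false;                      // e.g. Chinese Wall style conflict
        }
        Map<String, String> required = publishedPolicies.get(permissionPseudonym);
        if (required == null) {
            return false;                      // no policy published for this permission
        }
        // Every attribute demanded by T1's policy must match the user's attributes.
        return required.entrySet().stream()
                .allMatch(e -> e.getValue().equals(userAttributes.get(e.getKey())));
    }
}
```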
Revocation: There are two ways in which we can revoke a previously granted permission from a cross-tenant user or tenant. To achieve permission revocation, we introduce the Forward Revocation Algorithm and the Backward Revocation Algorithm. A forward revocation query defines a request in which an intra-tenant user revokes a permission, or a set of permissions, from a cross-tenant user/tenant along with the deactivation of the delegation policy. A backward revocation query defines an action that is triggered when the attributes of the delegatee no longer match; in this case, an intra-tenant user revokes a permission, or a set of permissions, from a cross-tenant user/tenant and also deactivates the delegation policy. A sketch of both operations follows.
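As a companion sketch under the same assumptions (hypothetical class and method names, not the formal HLPN specification), forward and backward revocation might be expressed as follows: forward revocation is an explicit withdrawal by the intra-tenant user, while backward revocation is triggered by an attribute mismatch.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of forward and backward revocation in a CTAC-style model.
public class RevocationManager {

    // Active cross-tenant delegations: delegatee user -> set of permission pseudonyms.
    private final Map<String, Set<String>> delegations = new HashMap<>();
    // Whether the delegation policy is currently active for each delegatee.
    private final Map<String, Boolean> delegationPolicyActive = new HashMap<>();

    public void delegate(String delegatee, Set<String> permissions) {
        delegations.put(delegatee, permissions);
        delegationPolicyActive.put(delegatee, Boolean.TRUE);
    }

    // Forward revocation: the intra-tenant user explicitly withdraws permissions
    // from the cross-tenant user/tenant and deactivates the delegation policy.
    public void forwardRevoke(String delegatee, Set<String> permissionsToRevoke) {
        Set<String> granted = delegations.get(delegatee);
        if (granted != null) {
            granted.removeAll(permissionsToRevoke);
        }
        delegationPolicyActive.put(delegatee, Boolean.FALSE);
    }

    // Backward revocation: triggered automatically when the delegatee's attributes
    // no longer match the policy; all delegated permissions are withdrawn.
    public void backwardRevoke(String delegatee, boolean attributesStillMatch) {
        if (!attributesStillMatch) {
            delegations.remove(delegatee);
            delegationPolicyActive.put(delegatee, Boolean.FALSE);
        }
    }
}
```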
CHAPTER-4 IMPLEMENTATION & RESULT
4.1 INTRODUCTION
ABOUT JAVA: Initially the language was called "Oak", but it was renamed "Java" in 1995. The primary motivation for this language was the need for a platform-independent (i.e. architecture-neutral) language that could be used to create software to be embedded in various consumer electronic devices.
Java is a programmer's language.
Java is cohesive and consistent.
Except for those constraints imposed by the Internet environment, Java gives the programmer full control.
Finally, Java is to Internet programming what C was to systems programming.
IMPORTANCE OF JAVA TO THE INTERNET: Java has had a profound effect on the Internet. This is because Java expands the universe of objects that can move about freely in cyberspace. In a network, two categories of objects are transmitted between the server and the personal computer: passive information and dynamic, active programs. Dynamic, self-executing programs raise serious concerns in the areas of security and portability, but Java addresses these concerns and, by doing so, has opened the door to an exciting new form of program called the applet.
APPLICATIONS AND APPLETS: An application is a program that runs on our computer under the operating system of that computer. It is more or less like one created using C or C++. Java's ability to create applets makes it important. An applet is an application designed to be transmitted over the Internet and executed by a Java-compatible web browser. An applet is actually a tiny Java program, dynamically downloaded across the network, just like an image. But the difference is that it is an intelligent program, not just a media file. It can react to user input and dynamically change.
JAVA ARCHITECTURE
Java architecture provides a portable, robust, high-performing environment for development. Java provides portability by compiling source code to byte codes for the Java Virtual Machine, which are then interpreted on each platform by the run-time environment. Java is a dynamic system, able to load code when needed from a machine in the same room or across the planet.
COMPILING AND INTERPRETING JAVA SOURCE CODE:
FIG 4.1 JAVA ARCHITECTURE
During run time, the Java interpreter tricks the byte code file into thinking that it is running on a Java Virtual Machine. In reality, this could be an Intel Pentium running Windows 95, a Sun SPARCstation running Solaris, or an Apple Macintosh running its own system software; all of them could receive code from any computer through the Internet and run the applets.
SIMPLE:
Java was designed to be easy for the professional programmer to learn and to use effectively. If you are an experienced C++ programmer, learning Java will require little effort, because Java inherits the C/C++ syntax and many of the object-oriented features of C++. Most of the confusing concepts from C++ are either left out of Java or implemented in a cleaner, more approachable manner. In Java there are a small number of clearly defined ways to accomplish a given task.
OBJECT ORIENTED: Java was not designed to be source-code compatible with any other language. This allowed the Java team the freedom to design with a blank slate. One outcome of this was a clean, usable, pragmatic approach to objects. The object model in Java is simple and easy to extend, while simple types, such as integers, are kept as high-performance non-objects.
ROBUST: The multi-platform environment of the web places extraordinary demands on a program, because the program must execute reliably in a variety of systems. The ability to create robust programs was given a high priority in the design of Java. Java is a strictly typed language; it checks your code at compile time and at run time. Java virtually eliminates the problems of memory management and deallocation, which is completely automatic. In a well-written Java program, all run-time errors can and should be managed by your program.
4.2 EXPLANATION OF KEY FUNCTIONS
SERVLETS/JSP: A servlet is a generic server extension: a Java class that can be loaded dynamically to expand the functionality of a server. Servlets are commonly used with web servers, where they can take the place of CGI scripts. A servlet is similar to a proprietary server extension, except that it runs inside a Java Virtual Machine (JVM) on the server, so it is safe and portable. Servlets operate solely within the domain of the server. Unlike CGI and FastCGI, which use multiple processes to handle separate programs or separate requests, separate threads within the web server process handle all servlets. This means that servlets are efficient and scalable. Servlets are portable, both across operating systems and across web servers. Java servlets offer the best possible platform for web application development. Although servlets are most often used as a replacement for CGI scripts on a web server, they can extend any sort of server, such as a mail server that allows servlets to extend its functionality, perhaps by performing a virus scan on all attached documents or handling mail filtering tasks. Servlets provide a Java-based solution used to address the problems currently associated with doing server-side programming, including inextensible scripting solutions, platform-specific APIs, and incomplete interfaces. Servlets are objects that conform to a specific interface and can be plugged into a Java-based server. Servlets are to the server side what applets are to the client side: object byte codes that can be dynamically loaded off the net. They differ from applets in that they are faceless objects (without graphics or a GUI component). They serve as platform-independent, dynamically loadable, pluggable helper byte code objects on the server side that can be used to dynamically extend server-side functionality. For example, an HTTP servlet can be used to generate dynamic HTML content. When you use servlets to produce dynamic content you get the following benefits:
They are faster and cleaner than CGI scripts.
They use a standard API (the Servlet API).
They provide all the advantages of Java (they run on a variety of servers without needing to be rewritten).
There are many features of servlets that make them easy and attractive to use. These include:
Easily configured using the GUI-based admin tool.
Can be loaded and invoked from a local disk or remotely across the network.
Can be linked together, or chained, so that one servlet can call another servlet, or several servlets in sequence.
Can be called dynamically from within HTML pages using server-side include tags.
Are secure, even when downloading across the network; the servlet security model and the servlet sandbox protect your system from unfriendly behavior.
ADVANTAGES OF THE SERVLET API: One of the great advantages of the Servlet API is that it is protocol independent. It assumes nothing about:
The protocol being used to transmit over the net
How it is loaded
The server environment it will be running in
These qualities are important, because they allow the Servlet API to be embedded in many different kinds of servers. There are other advantages to the Servlet API as well. These include:
It is extensible: you can inherit all your functionality from the base classes made available to you.
It is simple, small, and easy to use.
FEATURES OF SERVLETS:
Servlets are persistent. Servlets are loaded only once by the web server and can maintain services between requests.
Servlets are fast. Since servlets only need to be loaded once, they offer much better performance than their CGI counterparts.
Servlets are platform independent.
Servlets are extensible. Java is a robust, object-oriented programming language, which can easily be extended to suit your needs.
Servlets are secure.
Servlets can be used with a variety of clients.
Servlets are built from classes and interfaces in two packages, javax.servlet and javax.servlet.http. The javax.servlet package contains classes to support generic, protocol-independent servlets; the classes in the javax.servlet.http package extend these classes with HTTP-specific functionality. Every servlet must implement the javax.servlet.Servlet interface. Most servlets implement it by extending one of two classes, javax.servlet.GenericServlet or javax.servlet.http.HttpServlet. A protocol-independent servlet should subclass GenericServlet, while an HTTP servlet should subclass HttpServlet, which is itself a subclass of GenericServlet with added HTTP-specific functionality. Unlike a Java program, a servlet does not have a main() method; instead, the server invokes certain methods of the servlet in the process of handling requests. Each time the server dispatches a request to a servlet, it invokes the servlet's service() method. A generic servlet should override its service() method to handle requests as appropriate for the servlet. The service() method accepts two parameters, a request object and a response object: the request object tells the servlet about the request, while the response object is used to return a response. In contrast, an HTTP servlet usually does not override the service() method. Instead, it overrides doGet() to handle GET requests and doPost() to handle POST requests; an HTTP servlet can override either or both of these methods. The service() method of HttpServlet handles the setup and dispatching to all the doXXX() methods, which is why it usually should not be overridden. The remaining classes in the javax.servlet and javax.servlet.http packages are largely support classes. The ServletRequest and ServletResponse classes in javax.servlet provide access to generic server requests and responses, while HttpServletRequest and HttpServletResponse in javax.servlet.http provide access to HTTP requests and responses. The javax.servlet.http package also contains an HttpSession class that provides built-in session tracking functionality and a Cookie class that allows quick setup and processing of HTTP cookies.
Loading Servlets: Servlets can be loaded from three places:
From a directory that is on the CLASSPATH. The CLASSPATH of the JavaWebServer includes service root/classes/, which is where the system classes reside.
From the <SERVICE_ROOT>/servlets/ directory. This is not in the server's classpath. A class loader is used to create servlets from this directory. New servlets can be added and existing servlets can be recompiled, and the server will notice these changes.
From a remote location. For this, a code base like http://nine.eng/classes/foo/ is required in addition to the servlet's class name. Refer to the admin GUI docs on the servlet section to see how to set this up.
Loading Remote Servlets: Remote servlets can be loaded by:
Configuring the Admin Tool to set up automatic loading of remote servlets.
Setting up server-side include tags in .html files.
Defining a filter chain configuration.
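A minimal, self-contained example of the HttpServlet pattern described above (doGet() handling GET requests and doPost() handling POST requests) is sketched below; the class name, URL parameter and greeting text are illustrative only.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal HTTP servlet: HttpServlet's service() method dispatches
// GET requests to doGet() and POST requests to doPost().
public class HelloServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        String name = request.getParameter("name");   // illustrative request parameter
        out.println("<html><body>");
        out.println("<h1>Hello " + (name != null ? name : "tenant user") + "</h1>");
        out.println("</body></html>");
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // POST requests are handled the same way in this sketch.
        doGet(request, response);
    }
}
```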
INVOKING SERVLETS: A servlet invoker is a servlet that invokes the service() method on a named servlet. If the servlet is not loaded in the server, then the invoker first loads the servlet (either from local disk or from the network) and then invokes the service() method. Also like applets, local servlets in the server can be identified by just the class name; in other words, if a servlet name is not absolute, it is treated as local.
A client can invoke servlets in the following ways:
The client can ask for a document that is served by the servlet.
The client (browser) can invoke the servlet directly using a URL, once it has been mapped using the SERVLET ALIASES section of the admin GUI.
The servlet can be invoked through server-side include tags.
The servlet can be invoked by placing it in the servlets/ directory.
The servlet can be invoked by using it in a filter chain.
THE SERVLET LIFE CYCLE: The servlet life cycle is one of the most exciting features of servlets. This life cycle is a powerful hybrid of the life cycles used in CGI programming and lower-level NSAPI and ISAPI programming. The servlet life cycle allows servlet engines to address both the performance and resource problems of CGI and the security concerns of low-level server API programming. The servlet life cycle is highly flexible; servers have significant leeway in how they choose to support servlets. The only hard and fast rule is that a servlet engine must conform to the following life cycle contract:
Create and initialize the servlet.
Handle zero or more service requests from clients.
Destroy the servlet and then garbage collect it.
It is perfectly legal for a servlet to be loaded, created and initialized in its own JVM, only to be destroyed and garbage collected without handling any client request, or after handling just one request. The most common and most sensible life cycle implementation for HTTP servlets is a single Java virtual machine with persistent servlet instances.
INIT AND DESTROY: Just like applets, servlets can define init() and destroy() methods. A servlet's init(ServletConfig) method is called by the server immediately after the server constructs the servlet's instance. Depending on the server and its configuration, this can be at any of these times:
When the server starts
When the servlet is first requested, just before the service() method is invoked
At the request of the server administrator
In any case, init() is guaranteed to be called before the servlet handles its first request. The init() method is typically used to perform servlet initialization: creating or loading objects that are used by the servlet in the handling of its requests. In order to provide a new servlet with information about itself and its environment, the server calls the servlet's init() method and passes an object that implements the ServletConfig interface. This ServletConfig object supplies the servlet with information about its initialization parameters. These parameters are given to the servlet and are not associated with any single request. They can specify initial values, such as where a counter should begin counting, or default values, perhaps a template to use when not specified by the request. The server calls a servlet's destroy() method when the servlet is about to be unloaded. In the destroy() method, a servlet should free any resources it has acquired that will not be garbage collected. The destroy() method also gives a servlet a chance to write out its unsaved, cached information or any persistent information that should be read during the next call to init().
SESSION TRACKING: HTTP is a stateless protocol; it provides no way for a server to recognize that a sequence of requests all come from the same client. This causes a problem for applications such as shopping cart applications. Even in a chat application, the server cannot know exactly who is making a request among several clients. The solution is for the client to introduce itself as it makes each request: each client needs to provide a unique identifier that lets the server identify it, or it needs to give some information that the server can use to properly handle the request.
There are several ways to send this introductory information with each request, such as:
USER AUTHORIZATION: One way to perform session tracking is to leverage the information that comes with user authorization, when a web server restricts access to some of its resources to only those clients that log in using a recognized username and password. After the client logs in, the username is available to a servlet through getRemoteUser(). We can use the username to track the session. Once a user has logged in, the browser remembers her username and resends the name and password as the user views new pages on the site. A servlet can identify the user through her username and thereby track her session. The biggest advantage of using user authorization to perform session tracking is that it is easy to implement: simply tell the server to protect a set of pages, and use getRemoteUser() to identify each client. Another advantage is that the technique works even if the user strays from your site or exits her browser before coming back. The biggest disadvantage of user authorization is that it requires each user to register for an account and then log in each time she starts visiting your site. Most users will tolerate registering and logging in as a necessary evil when they are accessing sensitive information, but it is overkill for simple session tracking. Another problem with user authorization is that a user cannot simultaneously maintain more than one session at the same site.
HIDDEN FORM FIELDS: One way to support anonymous session tracking is to use hidden form fields. As the name implies, these are fields added to an HTML form that are not displayed in the client's browser. They are sent back to the server when the form that contains them is submitted. In a sense, hidden form fields define constant variables for a form. To a servlet receiving a submitted form, there is no difference between a hidden field and a visible field.
As more and more information is associated with a client's session, it can become burdensome to pass it all using hidden form fields. In these situations it is possible to pass just a unique session ID that identifies a particular client's session. That session ID can be associated with complete information about the session that is stored on the server. The advantage of hidden form fields is their ubiquity and support for anonymity. Hidden fields are supported in all the popular browsers, they demand no special server requirements, and they can be used with clients that haven't registered or logged in. The major disadvantage with this technique, however, is that it works only for a sequence of dynamically generated forms; the technique breaks down immediately with static documents, emailed documents, bookmarked documents and browser shutdowns.
URL REWRITING: URL rewriting is another way to support anonymous session tracking. With URL rewriting, every local URL the user might click on is dynamically modified, or rewritten, to include extra information. The extra information can be in the form of extra path information, added parameters, or some custom, server-specific URL change. Due to the limited space available in rewriting a URL, the extra information is usually limited to a unique session ID. Each rewriting technique has its own advantages and disadvantages. Using extra path information works on all servers, and it works as a target for forms that use both the GET and POST methods; it does not work well if the servlet has to use the extra path information as true path information. The advantages and disadvantages of URL rewriting closely match those of hidden form fields. The major difference is that URL rewriting works for all dynamically created documents, such as the Help servlet, not just forms. With the right server support, custom URL rewriting can even work for static documents.
PERSISTENT COOKIES: A fourth technique to perform session tracking involves persistent cookies. A cookie is a bit of information sent by a web server to a browser that can later be read back from that browser. When a browser receives a cookie, it saves the cookie and thereafter sends the cookie back to the server each time it accesses a page on that server, subject to certain rules. Because a cookie's value can uniquely identify a client, cookies are often used for session tracking. Persistent cookies offer an elegant, efficient, easy way to implement session tracking. Cookies provide as automatic an introduction for each request as we could hope for: for each request, a cookie can automatically provide a client's session ID or perhaps a list of the client's preferences. The ability to customize cookies gives them extra power and versatility. The biggest problem with cookies is that browsers don't always accept cookies; sometimes this is because the browser doesn't support cookies, but more often it is because the user has specifically configured the browser to refuse cookies.
The power of servlets is nothing but the advantages of servlets over other approaches, which include portability, power, efficiency, endurance, safety, elegance, integration, extensibility and flexibility.
PORTABILITY: As servlets are written in Java and conform to a well-defined and widely accepted API, they are highly portable across operating systems and across server implementations. We can develop a servlet on a Windows NT machine running the Java Web Server and later deploy it effortlessly on a high-end Unix server running Apache. With servlets we can really "write once, serve everywhere". Servlet portability is not the stumbling block it so often is with applets, for two reasons. First, servlet portability is not mandatory, i.e. servlets have to work only on the server machines that we are using for development and deployment. Second, servlets avoid the most error-prone and inconsistently implemented portions of the Java language.
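Tying together the session-tracking techniques discussed above, the sketch below uses the built-in HttpSession (which is cookie-backed by default, with URL rewriting as a fallback) together with an explicit persistent cookie; the servlet name, session attribute name and cookie name are illustrative only.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Sketch of session tracking with HttpSession plus a persistent cookie.
public class VisitCounterServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Obtain (or create) the session; the container typically issues a
        // session-ID cookie or falls back to URL rewriting.
        HttpSession session = request.getSession(true);

        Integer visits = (Integer) session.getAttribute("visits");
        visits = (visits == null) ? 1 : visits + 1;
        session.setAttribute("visits", visits);

        // A plain persistent cookie can also be set directly (illustrative name).
        Cookie lastSeen = new Cookie("lastSeen", String.valueOf(System.currentTimeMillis()));
        lastSeen.setMaxAge(60 * 60 * 24);          // keep for one day
        response.addCookie(lastSeen);

        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();
        out.println("Visits in this session: " + visits);
    }
}
```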
POWER: Servlets can harness the full power of the core Java APIs, such as networking and URL access, multithreading, image manipulation, data compression, database connectivity, internationalization, Remote Method Invocation (RMI), CORBA connectivity, and object serialization, among others.
EFFICIENCY AND ENDURANCE: Servlet invocation is highly efficient. Once a servlet is loaded, it generally remains in the server's memory as a single object instance; thereafter the server invokes the servlet to handle a request using a simple, lightweight method invocation. Unlike CGI, there is no process to spawn or interpreter to invoke, so the servlet can begin handling the request almost immediately. Multiple, concurrent requests are handled by separate threads, so servlets are highly scalable. Servlets in general are enduring objects: because a servlet stays in the server's memory as a single object instance, it automatically maintains its state and can hold onto external resources, such as database connections.
SAFETY: Servlets support safe programming practices on a number of levels. As they are written in Java, servlets inherit the strong type safety of the Java language. In addition, the Servlet API is implemented to be type safe. Java's automatic garbage collection and lack of pointers mean that servlets are generally safe from memory management problems like dangling pointers, invalid pointer references and memory leaks. Servlets can handle errors safely, due to Java's exception-handling mechanism: if a servlet divides by zero or performs some illegal operation, it throws an exception that can be safely caught and handled by the server. A server can further protect itself from servlets through the use of a Java security manager; a server can execute its servlets under the watch of a strict security manager.
ELEGANCE: The elegance of servlet code is striking. Servlet code is clean, object-oriented, modular and amazingly simple. One reason for this simplicity is the Servlet API itself, which includes methods and classes to handle many of the routine chores of servlet development. Even advanced operations like cookie handling and session tracking are abstracted into convenient classes.
INTEGRATION: Servlets are tightly integrated with the server. This integration allows a servlet to cooperate closely with the server. For example, a servlet can use the server to translate file paths, perform logging, check authorization, perform MIME type mapping and, in some cases, even add users to the server's user database.
EXTENSIBILITY AND FLEXIBILITY: The Servlet API is designed to be easily extensible. As it stands today, the API includes classes that are optimized for HTTP servlets, but later it can be extended and optimized for other types of servlets. It is also possible that its support for HTTP servlets could be further enhanced. Servlets are also quite flexible: Sun also introduced JavaServer Pages, which offer a way to write snippets of servlet code directly within a static HTML page, using syntax similar to Microsoft's Active Server Pages (ASP).
JDBC WHAT IS JDBC? Any relational database. One can write a single program using the JDBC API,and the JDBC is a Java API for executing SQL,Statements(As a point of interest JDBC is trademarked name and is not an acronym; nevertheless,Jdbc is often thought of as standing for Java Database Connectivity. It consists of a set of classes and interfaces written in the Java Programming language.JDBC provides a standard API
43
for tool/database developers and makes it possible to write database applications using a pure Java API Using JDBC, it is easy to send SQL statements to virtually program will be able to send SQL .statements to the appropriate database. The Combination of Java and JDBC lets a programmer writes it once and run it anywhere. What Does JDBC Do? Simply put,JDBC makes it possible to do three things Establish a connection with a database Send SQL statements Process the results JDBC Driver Types The JDBC drivers that we are aware of this time fit into one of four categories JDBC-ODBC Bridge plus ODBC driver Native-API party-java driver JDBC-Net pure java driver Native-protocol pure Java driver An individual database system is accessed via a specific JDBC driver that implements the java.sql.Driver interface. Drivers exist for nearly all-popular RDBMS systems, through few are available for free. Sun bundles a free JDBC-ODBC bridge driver with the JDK to allow access to a standard ODBC,data sources, such as a Microsoft Access database, Sun advises against using the bridge driver for anything other than development and very limited development. JDBC drivers are available for most database platforms, from a number of vendors and in a number of different flavors. There are four driver categories Type 01-JDBC-ODBC Bridge Driver Type 01 drivers use a bridge technology to connect a java client to an ODBC database service. Sun’s JDBC-ODBC bridge is the most common type 01 driver. These drivers implemented using native code. Type 02-Native-API party-java Driver 44
Type 02 drivers wrap a thin layer of Java around database-specific native code libraries. For Oracle databases, the native code libraries might be based on the OCI (Oracle Call Interface) libraries, which were originally designed for C/C++ programmers. Because Type 02 drivers are implemented using native code, in some cases they have better performance than their all-Java counterparts. They add an element of risk, however, because a defect in a driver's native code section can crash the entire server.
Type 03 - Net-Protocol All-Java Driver: Type 03 drivers communicate via a generic network protocol to a piece of custom middleware. The middleware component might use any type of driver to provide the actual database access. These drivers are all Java, which makes them useful for applet deployment and safe for servlet deployment.
Type 04 - Native-Protocol All-Java Driver: Type 04 drivers are the most direct of the lot. Written entirely in Java, they understand database-specific networking protocols and can access the database directly without any additional software.
JDBC-ODBC BRIDGE: If possible, use a pure Java JDBC driver instead of the bridge and an ODBC driver. This completely eliminates the client configuration required by ODBC. It also eliminates the potential that the Java VM could be corrupted by an error in the native code brought in by the bridge (that is, the bridge native library, the ODBC driver manager library, the ODBC driver library, and the database client library).
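The following minimal sketch shows the three JDBC steps named above: establish a connection, send an SQL statement, and process the results. It is illustrative only; the JDBC URL, credentials, and table name are assumptions rather than this project's configuration, and it presumes a suitable JDBC driver (for example, a Type 04 driver) is on the classpath.

// Minimal JDBC sketch (hypothetical URL, credentials, and table; not project code).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcSketch {
    public static void main(String[] args) {
        String url = "jdbc:mysql://localhost:3306/ctacdb";   // hypothetical database URL
        String user = "dbuser";                              // hypothetical credentials
        String password = "dbpassword";
        String sql = "SELECT username FROM tenant_users WHERE tenant_id = ?";

        // 1. Establish a connection; 2. send an SQL statement; 3. process the results.
        try (Connection con = DriverManager.getConnection(url, user, password);
             PreparedStatement ps = con.prepareStatement(sql)) {

            ps.setInt(1, 1);                                  // parameter for tenant 1
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {                           // iterate over result rows
                    System.out.println(rs.getString("username"));
                }
            }
        } catch (SQLException e) {
            e.printStackTrace();                              // report any database error
        }
    }
}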
HTML
Hypertext Markup Language (HTML), the language of the World Wide Web (WWW), allows users to produce web pages that include text, graphics, and pointers to other web pages (hyperlinks). HTML is not a programming language; it is an application of ISO Standard 8879, SGML (Standard Generalized Markup Language), specialized to hypertext and adapted to the Web. The idea behind hypertext is that a document can link from one point to another, so we can navigate through the information based on our interest and preference.
A markup language is simply a series of elements, each delimited by special characters, that define how the text or other items enclosed within the elements should be displayed. Hyperlinks are underlined or emphasized words that lead to other documents or to some portion of the same document. HTML can be used to display any type of document on the host computer, which can be geographically at a different location. It is a versatile language and can be used on any platform or desktop. HTML provides tags (special codes) to make the document look attractive; HTML tags are not case-sensitive. Using graphics, fonts, different sizes, colour, and so on can enhance the presentation of the document. Anything that is not a tag is part of the document itself. BASIC HTML TAGS:
<!-- -->                Specifies comments
<A>………</A>              Creates hypertext links
<BIG>………</BIG>          Formats text in a large font
<BODY>………</BODY>        Contains all tags and text in the HTML document
<CENTER>………</CENTER>    Creates text
<DD>………</DD>            Definition of a term
<TABLE>………</TABLE>      Creates a table
<TD>………</TD>            Indicates table data in a table
ADVANTAGES:
An HTML document is small and hence easy to send over the net; it is small
because it does not include formatting information.
HTML is platform independent
HTML tags are not case-sensitive.
JAVASCRIPT
The JavaScript Language: JavaScript is a compact, object-based scripting language for developing client and server Internet applications. Netscape Navigator 2.0 interprets JavaScript statements embedded directly in an HTML page, and LiveWire enables you to create server-based applications similar to Common Gateway Interface (CGI) programs. In a client application for Navigator, JavaScript statements embedded in an HTML page can recognize and respond to user events such as mouse clicks, form input, and page navigation. For example, you can write a JavaScript function to verify that users enter valid information into a form requesting a telephone number or zip code. Without any network transmission, an HTML page with embedded JavaScript can interpret the entered text and alert the user with a message dialog if the input is invalid. You can also use JavaScript to perform an action (such as playing an audio file, executing an applet, or communicating with a plug-in) in response to the user opening or exiting a page.
4.3 METHOD OF IMPLEMENTATION
4.3.1 FORMS
A database is a collection of interrelated data stored with a minimum of redundancy to serve many applications. The database design is used to group data into a number of tables and minimizes the artificiality embedded in using separate files. The tables are organized to:
Reduce duplication of data.
Simplify functions like adding, deleting, and modifying data.
Make retrieving data easier.
Provide clarity and ease of use.
Give more information at low cost.
Normalization
Normalization is built around the concept of normal forms. A relation is said to be in a particular normal form if it satisfies a certain specified set of constraints on the kind of functional dependencies that can be associated with the relation. The normal forms are used to ensure that various types of anomalies and inconsistencies are not introduced into the database.
First Normal Form: A relation R is in first normal form if and only if all underlying domains contain atomic values only.
Second Normal Form: A relation R is in second normal form if and only if it is in first normal form and every non-key attribute is fully dependent on the primary key.
Third Normal Form: A relation R is in third normal form if and only if it is in second normal form and every non-key attribute is non-transitively dependent on the primary key.
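As a brief worked illustration (the relation and attribute names here are hypothetical and not taken from this project's schema), consider a relation FileGrant(TenantID, FileID, TenantName, FileName) whose primary key is the pair {TenantID, FileID}. The functional dependency TenantID -> TenantName means TenantName depends on only part of the key, so the relation is in first normal form but not in second normal form, and the tenant name would be repeated for every file. Decomposing it into Tenant(TenantID, TenantName) and FileGrant(TenantID, FileID, FileName) removes the partial dependency; since neither resulting relation has a transitive dependency on its key, both are also in third normal form.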
4.3.2 OUTPUT SCREENS
FIG 4.3.2.1 CTAC HOME PAGE
FIG 4.3.2.2 TENANT1 REGISTRATION
FIG 4.3.2.3 TENANT2 REGISTRATION
FIG 4.3.2.4 TENANT1 LOGIN PAGE
FIG 4.3.2.5 TENANT1 HOME PAGE
FIG 4.3.2.6 TENANT1 UPLOAD FILE
FIG 4.3.2.7 TENANT1 UPLOAD FILE
FIG 4.3.2.8 TENANT1 VIEW UPLOADED FILES
FIG 4.3.2.9 TENANT2 LOGIN PAGE
FIG 4.3.2.10 TENANT2 HOME PAGE
FIG 4.3.2.11 TENANT2 KEY
FIG 4.3.2.12 TENANT2 VERIFICATION FOR FILE ACCESS
FIG 4.3.2.13 TENANT2 CROSS TENANT FILES
FIG 4.3.2.14 TENANT2 REQUEST SENT
FIG 4.3.2.15 TENANT2 REQUEST SENT
FIG 4.3.2.16 TENANT1 LOGIN & FILE ACCESS ACTIVATION REQUEST
FIG 4.3.2.17 TENANT1 FORWARD TO CRMS
FIG 4.3.2.18 CRMS LOGIN PAGE
FIG 4.3.2.19 CRMS VIEW TENANT USERS
FIG 4.3.2.20 CRMS VIEW TENANT USERS
FIG 4.3.2.21 TENANT2 LOGIN & AUTHENTICATE REQUEST
FIG 4.3.2.22 TENANT2 PRIVATE KEY
FIG 4.3.2.23 TENANT2 AUTHENTICATE
FIG 4.3.2.24 TENANT2 AUTHENTICATED SUCCESSFULLY
FIG 4.3.2.25 CRMS LOGIN & VIEW TENANT2 REQUEST
FIG 4.3.2.26 CRMS CROSS CHECK THE CLOUD
FIG 4.3.2.27 CLOUD LOGIN PAGE
FIG 4.3.2.28 CLOUD HOME PAGE
FIG 4.3.2.29 CLOUD VIEW CROSS CHECK POLICIES
FIG 4.3.2.30 CROSS CHECK SUCCESS
FIG 4.3.2.31 CROSS CHECK FAILED
FIG 4.3.2.32 CLOUD SERVER KEY
FIG 4.3.2.33 CLOUD UPLOAD FILES
FIG 4.3.2.34 IN CLOUD TENANT1 USER DETAILS
FIG 4.3.2.35 IN CLOUD TENANT2 USER DETAILS
FIG 4.3.2.36 CRMS LOGIN & SEND RESULT TO T1
FIG 4.3.2.37 CRMS SENT RESULT TO T1 SUCCESSFULLY
FIG 4.3.2.38 IN CRMS TENANT1 USER DETAILS
FIG 4.3.2.39 IN CRMS TENANT2 USER DETAILS
FIG 4.3.2.40 TENANT1 LOGIN & GIVE PERMISSION TO T2 USER
FIG 4.3.2.41 TENANT1 PERMISSION GRANTED
FIG 4.3.2.42 TENANT2 LOGIN & VIEW CROSS TENANT FILES
FIG 4.3.2.43 TENANT2 DOWNLOAD AND PRIVATE KEY
FIG 4.3.2.44 TENANT2 DOWNLOAD AND PRIVATE KEY
FIG 4.3.2.45 TENANT2 DOWNLOAD FILE
FIG 4.3.2.46 TENANT2 DOWNLOADED FILE HISTORY
FIG 4.3.2.47 TENANT1 LOGIN & PERMISSION REVOCATION
FIG 4.3.2.48 TENANT1 PERMISSION REVOKED SUCCESSFULLY
FIG 4.3.2.49 TENANT2 LOGIN & VIEW CROSS TENANT FILES
FIG 4.3.2.50 TENANT2 REVOKED
4.3.3 RESULT ANALYSIS
Cloud computing refers to using the computing power of the network in place of software installed on one's own computer, or in place of data stored on a local hard disk. Operations are carried out over the network, and files can be stored directly in a huge virtual storage space. On this basis, cloud computing can also be combined with a variety of network services to store information on network servers and then browse and access that information through a browser. The essence of cloud computing is therefore to provide reasonable access to dynamically changing resources and services, and to use idle computing resources to improve the efficiency of large-scale data computation and storage.
A data storage system based on a cloud computing platform has the following advantages in practice. It makes data storage deployment intelligent and adaptive, and provides unified coordination and management of all data and information resources. It further improves the efficiency of reading, writing, and storing data, and uses virtualization algorithms to use storage space efficiently and to reallocate data, so that the utilization of physical storage space is improved while good load balancing and fault redundancy are maintained. It also achieves large-scale effects and flexible expansion, which reduces operating and maintenance costs and maximizes the value of the available network resources. The data storage management model based on cloud computing can therefore substantially change the nature of storage while effectively expanding data storage capacity.
In recent years, with the continued development of information technology and cloud computing, data storage systems have come to affect all Internet users directly. With the rapid increase in the number of Internet users, higher throughput is required of these systems, so data storage capacity must be expanded and throughput must grow further. At the present stage of development, cloud data storage systems also need to further enhance the durability of data storage, and must fully guarantee the integrity and consistency of data during data migration and fault tolerance, so that a good data storage effect is achieved and users have a better application experience.
CHAPTER 5 TESTING AND VALIDATION
5.1 INTRODUCTION
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies, and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each type addresses a specific testing requirement.
5.2 DESIGN OF TEST CASES AND SCENARIOS
TYPES OF TESTS
UNIT TESTING: Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit and before integration. This is structural testing, which relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
INTEGRATION TESTING: Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that, although the components were individually satisfactory (as shown by successful unit testing), the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
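To illustrate unit testing at component level, the following is a minimal sketch of a unit test. It assumes JUnit 4 is available on the classpath, and the class under test (LoginValidator) and its validation rule are hypothetical, shown only to demonstrate exercising one path of a unit in isolation; they are not part of this project's code.

// Minimal unit-test sketch (assumes JUnit 4; LoginValidator is hypothetical).
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class LoginValidatorTest {

    // Hypothetical component under test: accepts usernames of at least 4 characters.
    static class LoginValidator {
        boolean isValidUsername(String username) {
            return username != null && username.trim().length() >= 4;
        }
    }

    @Test
    public void acceptsWellFormedUsername() {
        assertTrue(new LoginValidator().isValidUsername("tenant1"));
    }

    @Test
    public void rejectsEmptyUsername() {
        assertFalse(new LoginValidator().isValidUsername(""));
    }
}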
FUNCTIONAL TEST
Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items:
Valid input: identified classes of valid input must be accepted.
Invalid input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage of identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
SYSTEM TEST
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
WHITE BOX TESTING
White box testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.
BLACK BOX TESTING
Black box testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. The software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
UNIT TESTING
Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.
TEST STRATEGY AND APPROACH
Field testing will be performed manually and functional tests will be written in detail.
TEST OBJECTIVES
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
FEATURES TO BE TESTED
Verify that the entries are of the correct format
No duplicate entries should be allowed
All links should take the user to the correct page.
INTEGRATION TESTING Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects. The task of the integration test is to check that components or software applications, e.g. components in a software system or – one step up – software applications at the company level – interact without error.
TEST RESULTS: All the test cases mentioned above passed successfully. No defects were encountered.
ACCEPTANCE TESTING: User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.
5.3 VALIDATION TEST CASES
Each test case below records the test case ID, test scenario, type of test case, prerequisites (if any), test steps, result, and pass/fail status.

Test Case ID: 1
Test Scenario: The new tenant1 has to register with their details
Type of Test Case: Functional
Prerequisites, if any: Login as authorized person
Test steps: The system accepts the username and password and checks them against the database.
Result: Welcome to the account
Pass/Fail: Pass

Test Case ID: 2
Test Scenario: The new tenant2 has to register with their details
Type of Test Case: Functional
Prerequisites, if any: Login as authorized person
Test steps: To log in to the account, the user types his username and password; the system accepts them and checks them against the database.
Result: Invalid login
Pass/Fail: Pass

Test Case ID: 3
Test Scenario: Uploads as authorized person
Type of Test Case: Functional
Prerequisites, if any: Login into the tenant1 form
Test steps: The system verifies and uploads the files.
Result: Successfully uploads
Pass/Fail: Pass

Test Case ID: 4
Test Scenario: Checking that authorized persons can view uploaded files
Type of Test Case: Functional
Prerequisites, if any: Upload the file
Test steps: The system accepts the input and displays the users.
Result: The users list is displayed
Pass/Fail: Pass

Test Case ID: 5
Test Scenario: Checking the anonymity users
Type of Test Case: Functional
Prerequisites, if any: Tenant2 login
Test steps: The system accepts the input and displays the anonymity users.
Result: The anonymity users list is displayed
Pass/Fail: Pass

Test Case ID: 6
Test Scenario: Verification for file access
Type of Test Case: Functional
Prerequisites, if any: Registering to the cloud server
Test steps: The system stores the details in the database.
Result: Successful registration
Pass/Fail: Pass

Test Case ID: 7
Test Scenario: Login as authorized person
Type of Test Case: Functional
Test steps: The system accepts the username and password and checks them against the database.
Result: Welcome to the account
Pass/Fail: Pass

Test Case ID: 8
Test Scenario: Make updates to profiles
Type of Test Case: Functional
Test steps: The system accepts the changes and updates the database.
Result: Successful update
Pass/Fail: Pass

Test Case ID: 9
Test Scenario: Downloads as authorized user
Type of Test Case: Functional
Test steps: The system verifies and downloads the files.
Result: Successfully downloaded
Pass/Fail: Pass

Test Case ID: 10
Test Scenario: The user logs out of their account
Type of Test Case: Functional
Test steps: The system logs out the user account and updates anything done by the user to the database.
Result: Logout successful
Pass/Fail: Pass

Test Case ID: 11
Test Scenario: The anonymity user enters with their name and mail id
Type of Test Case: Functional
Prerequisites, if any: The anonymity user login
Test steps: The system accepts the details of the anonymity user.
Result: Authentication takes place
Pass/Fail: Pass

Test Case ID: 12
Test Scenario: Verifying the access code
Type of Test Case: Functional
Prerequisites, if any: The access code is sent for authentication of anonymity users
Test steps: The system verifies the access code of a valid user.
Result: Welcome to the account
Pass/Fail: Pass

Test Case ID: 13
Test Scenario: Verifying the access code
Type of Test Case: Functional
Prerequisites, if any: The access code is sent for authentication of anonymity users
Test steps: The system verifies the access code of an invalid user.
Result: Not an authorized user
Pass/Fail: Pass

Test Case ID: 14
Test Scenario: The anonymity user tries to download the file
Type of Test Case: Functional
Prerequisites, if any: Download the file as an authorized user
Test steps: The system accepts the changes and updates the database.
Result: Successful update
Pass/Fail: Pass

Test Case ID: 15
Test Scenario: The anonymity user downloads the files
Type of Test Case: Functional
Prerequisites, if any: Downloads as authorized user
Test steps: The system verifies and downloads the files.
Result: Successfully downloaded
Pass/Fail: Pass
CHAPTER 6 CONCLUSION AND FUTURE ENHANCEMENT
6.1 CONCLUSION
In this project, we proposed a cross-tenant cloud resource mediation service (CRMS), which can act as a trusted third party for fine-grained access control in a cross-tenant environment. For example, users who belong to an intra-tenant cloud can allow other cross-tenant users to activate permissions in their tenant via the CRMS. We also presented a formal model, CTAC, with four algorithms designed to handle requests for permission activation. We then modeled the algorithms using HLPN, formally analyzed them in the Z language, and verified them using the Z3 theorem-proving solver. The results obtained after executing the solver demonstrated that the asserted algorithm-specific access control properties were satisfied, allowing secure execution of permission activation on the cloud via the CRMS.
6.2 FUTURE ENHANCEMENT
Future work will include a comparative analysis of the proposed CTAC model with other state-of-the-art cross-domain access control protocols using real-world evaluations. For example, one could implement the protocols in a closed or small-scale environment, such as a department within a university. This would allow the researchers to evaluate the performance, and potentially the (in)security, of the various approaches under different real-world settings.
7. REFERENCES
[1] Akhunzada, A., Gani, A., Anuar, N. B., Abdelaziz, A., Khan, M. K., Hayat, A., & Khan, S. U. (2016). Secure and dependable software defined networks. Journal of Network and Computer Applications, 61, 199-221.
[2] Alam, Q., Tabbasum, S., Malik, S., Alam, M., Tanveer, T., Akhunzada, A., Khan, S., Vasilakos, A., & Buyya, R. (2016). Formal verification of the xDAuth protocol. IEEE Transactions on Information Forensics and Security, 11(9), 1956-1969.
[3] Ali, M., Malik, S., & Khan, S. DaSCE: Data security for cloud environment with semi-trusted third party.
[4] Barrett, C., Conway, C. L., Deters, M., Hadarean, L., Jovanović, D., King, T., Reynolds, A., & Tinelli, C. (2011, July). CVC4. In International Conference on Computer Aided Verification (pp. 171-177). Springer Berlin Heidelberg.
[5] Bofill, M., Nieuwenhuis, R., Oliveras, A., Rodríguez-Carbonell, E., & Rubio, A. (2008, July). The Barcelogic SMT solver. In International Conference on Computer Aided Verification (pp. 294-298). Springer Berlin Heidelberg.
[6] Bruttomesso, R., Cimatti, A., Franzén, A., Griggio, A., & Sebastiani, R. (2008, July). The MathSAT 4 SMT solver. In International Conference on Computer Aided Verification (pp. 299-303). Springer Berlin Heidelberg.
[7] Choo, K.-K. R. (2006). Refuting security proofs for tripartite key exchange with model checker in planning problem setting. In 19th IEEE Computer Security Foundations Workshop (CSFW'06). IEEE.
[8] Choo, K.-K. R., Domingo-Ferrer, J., & Zhang, L. (2016). Cloud cryptography: Theory, practice and future research directions. Future Generation Computer Systems, 62, 51-53.
[9] De Moura, L., & Bjørner, N. (2011). Satisfiability modulo theories: Introduction and applications. Communications of the ACM, 54(9), 69-77.
[10] Dutertre, B., & De Moura, L. (2006). The Yices SMT solver. Tool paper, http://yices.csl.sri.com/tool-paper.pdf.
[11] Heiser, J. (2009). What you need to know about cloud computing security and compliance. Gartner Research, ID G00168345.
[12] Jung, T., Li, X. Y., Wan, Z., & Wan, M. (2015). Control cloud data access privilege and anonymity with fully anonymous attribute-based encryption. IEEE Transactions on Information Forensics and Security, 10(1), 190-199.
[13] Lin, Y., Malik, S. U., Bilal, K., Yang, Q., Wang, Y., & Khan, S. U. (2016). Designing and modeling of covert channels in operating systems. IEEE Transactions on Computers, 65(6), 1706-1719.
[14] Liu, J. K., Au, M. H., Huang, X., Lu, R., & Li, J. (2016). Fine-grained two-factor access control for web-based cloud computing services. IEEE Transactions on Information Forensics and Security, 11(3), 484-497.
[15] Liu, X., Deng, R. H., Choo, K.-K. R., & Weng, J. (2016). An efficient privacy-preserving outsourced calculation toolkit with multiple keys. IEEE Transactions on Information Forensics and Security, 11(11), 2401-2414.
[16] Ma, K., Zhang, W., & Tang, Z. (2014). Toward fine-grained data-level access control model for multi-tenant applications. International Journal of Grid and Distributed Computing, 7(2), 79-88.
[17] Murata, T. (1989). Petri nets: Properties, analysis and applications. Proceedings of the IEEE, 77(4), 541-580.
[18] Sayler, A., Keller, E., & Grunwald, D. (2013). Jobber: Automating inter-tenant trust in the cloud. In 5th USENIX Workshop on Hot Topics in Cloud Computing.
[19] SMT-LIB. http://smtlib.cs.uiowa.edu/, 2015.
[20] Tang, B., & Sandhu, R. (2013, August). Cross-tenant trust models in cloud computing. In 2013 IEEE 14th International Conference on Information Reuse and Integration (IRI) (pp. 129-136). IEEE.
[21] Yang, Y., Zhu, H., Lu, H., Weng, J., Zhang, Y., & Choo, K.-K. R. (2016). Cloud based data sharing with fine-grained proxy re-encryption. Pervasive and Mobile Computing, 28, 122-134.