1. INTRODUCTION

Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the common use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts remote services with a user's data, software, and computation. Cloud computing consists of hardware and software resources made available on the Internet as managed third-party services. These services typically provide access to advanced software applications and high-end networks of server computers.

Fig1: Structure of cloud computing

1.1 Working of cloud computing

The goal of cloud computing is to apply traditional supercomputing, or high-performance computing power, normally used by military and research facilities to perform tens of trillions of computations per second, to consumer-oriented applications such as financial portfolios, delivering personalized information, providing data storage, or powering large, immersive computer games. Cloud computing uses networks of large groups of servers, typically running low-cost consumer PC technology, with specialized connections to spread data-processing chores across them. This shared IT infrastructure contains large pools of systems that are linked together. Virtualization techniques are often used to maximize the power of cloud computing.

1.2 Characteristics and Service Models

The salient characteristics of cloud computing, based on the definitions provided by the National Institute of Standards and Technology (NIST), are outlined below:

On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically, without requiring human interaction with each service provider.

Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and the consumer of the utilized service.

Fig2:Characteristics of cloud computing

1.3 Service Models

Cloud computing comprises three different service models, namely Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). The three service models, or layers, are completed by an end-user layer that encapsulates the end-user perspective on cloud services.

Fig3: Structure of service models

The model is shown in the figure below. If a cloud user accesses services on the infrastructure layer, for instance, she can run her own applications on the resources of a cloud infrastructure and remain responsible for the support, maintenance, and security of these applications herself. If she accesses a service on the application layer, these tasks are normally taken care of by the cloud service provider.

1.4 Benefits of cloud computing

Achieve economies of scale: Increase volume output or productivity with fewer people, so your cost per unit, project, or product plummets.
Reduce spending on technology infrastructure: Maintain easy access to your information with minimal upfront spending, and pay as you go (weekly, quarterly, or yearly) based on demand.
Globalize your workforce on the cheap: People worldwide can access the cloud, provided they have an Internet connection.
Streamline processes: Get more work done in less time with fewer people.

1. Reduce capital costs: There is no need to spend big money on hardware, software, or licensing fees.
2. Improve accessibility: You have access anytime, anywhere, making your life much easier.
3. Monitor projects more effectively: Stay within budget and ahead of completion cycle times.
4. Less personnel training is needed: It takes fewer people to do more work on a cloud, with a minimal learning curve on hardware and software issues.
5. Minimize licensing of new software: Stretch and grow without the need to buy expensive software licenses or programs.
6. Improve flexibility: You can change direction without serious "people" or "financial" issues at stake.

1.5 Advantages

1. Price: Pay only for the resources used.
2. Security: Cloud instances are isolated in the network from other instances for improved security.
3. Performance: Instances can be added instantly for improved performance, and clients have access to the total resources of the cloud's core hardware.
4. Scalability: Auto-deploy cloud instances when needed.
5. Uptime: Uses multiple servers for maximum redundancy; in case of server failure, instances can be automatically created on another server.
6. Control: Able to log in from any location; server snapshots and a software library let you deploy custom instances.
7. Traffic: Deals with spikes in traffic through quick deployment of additional instances to handle the load.

2. SYSTEM ANALYSIS

2.1 Existing System

The off-site data storage cloud utility requires users to move data into the cloud's virtualized and shared environment, which may result in various security concerns. The pooling and elasticity of a cloud allow the physical resources to be shared among many users. The data outsourced to a public cloud must be secured, and unauthorized data access by other users and processes (whether accidental or deliberate) must be prevented. As discussed above, any weak entity can put the whole cloud at risk. In such a scenario, the security mechanism must substantially increase an attacker's effort to retrieve a reasonable amount of data even after a successful intrusion into the cloud.

2.2 Disadvantages of Existing System

• Data compromise may occur due to attacks by other users and nodes within the cloud.
• The employed security strategy must also take into account the optimization of the data retrieval time.

2.3 Proposed System

We collectively approach the issue of security and performance as a secure data replication problem. We present Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which judiciously fragments user files into pieces and replicates them at strategic locations within the cloud. The division of a file into fragments is performed based on user-specified criteria such that the individual fragments do not contain any meaningful information. Each of the cloud nodes (we use the term node to represent computing, storage, physical, and virtual machines) contains a distinct fragment to increase data security.

The aim of Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS) is to collectively address the security and performance issues.

The scope of this paper is the DROPS methodology: we divide a file into fragments and replicate the fragmented data over the cloud nodes. Each of the nodes stores only a single fragment of a particular data file, which ensures that even in the case of a successful attack, no meaningful information is revealed to the attacker.

2.4 Advantages of Proposed System

• The implications of TCP incast over the DROPS methodology, which are relevant to distributed data storage and access, need to be studied.
• To improve data retrieval time, the nodes are selected based on centrality measures that ensure an improved access time (a small illustrative sketch follows this list).
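To make the centrality idea above concrete, the short Java sketch below ranks nodes by degree centrality (the number of direct neighbours) and prefers the most central ones. The node names and adjacency lists are purely illustrative and are not part of the project's actual implementation.

    import java.util.*;

    // Hypothetical illustration: rank cloud nodes by degree centrality
    // (number of direct neighbours) and prefer the most central ones
    // for fragment placement, as described above.
    public class CentralityRank {
        public static void main(String[] args) {
            Map<String, List<String>> adjacency = new HashMap<>();
            adjacency.put("node1", Arrays.asList("node2", "node3", "node4"));
            adjacency.put("node2", Arrays.asList("node1", "node3"));
            adjacency.put("node3", Arrays.asList("node1", "node2", "node4"));
            adjacency.put("node4", Arrays.asList("node1", "node3"));

            // Sort node names by descending degree (neighbour count).
            List<String> ranked = new ArrayList<>(adjacency.keySet());
            ranked.sort(Comparator.comparingInt(
                    (String n) -> adjacency.get(n).size()).reversed());

            System.out.println("Placement preference: " + ranked);
        }
    }

Other centrality measures (closeness, betweenness, or eccentricity) could be substituted by replacing the comparator.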

3. SYSTEM REQUIREMENTS

3.1 Hardware Requirements

• Processor : Pentium IV, 2.4 GHz
• Hard Disk : 40 GB
• Monitor : 15" VGA colour
• Mouse : Logitech
• RAM : 512 MB

3.2 Software Requirements

• Operating System : Windows XP/7/8/10
• Coding Language : Java 1.8 / J2EE
• Database : MySQL
• Web Server : Apache Tomcat
• Other Tools : EditPlus and SQLyog
• Front End : HTML, CSS, JavaScript, and JSP
• Backend : JDBC
• IDE/Tool : NetBeans 8.2

3.3 Functional Requirements

A. Data Owner
Different types of data are stored on the cloud; the data is created by the user before the file is uploaded into the cloud. The owner has their own services such as register, login, file upload, view file, select cloud, update file, delete file, and logout.

B. User
A user is any person who uses the cloud. The users have their own services, which are provided by the data owner, such as register, login, view file, select cloud, download file, and logout.

C. Fragmentation
The security of a large-scale system such as the cloud depends on the security of the system as a whole and on the security of individual nodes. A successful intrusion into a single node may have severe consequences, not only for data and applications on the victim node, but also for the other nodes. The data on the victim node may be revealed fully because of the presence of the whole file. A successful intrusion may be the result of some software or administrative vulnerability. The file owner specifies the fragmentation threshold according to which the fragments of the data file are generated. The file owner can specify the fragmentation threshold in terms of either a percentage or the number and size of different fragments.

3.4 Fragment Placement

To provide security while placing the fragments, the concept of T-coloring is used, which was originally developed for the channel assignment problem. The scheme generates a non-negative random number and builds the set T containing the integers from zero up to the generated number. The set T is used to restrict the node selection to those nodes that are at hop distances not belonging to T. For this purpose, colors are assigned to the nodes such that, initially, all of the nodes are given the open color. When a fragment is placed on a node, all of the node's neighboring nodes at a distance belonging to T are assigned the close color.

A. Replication
To increase data availability and reliability and to improve data retrieval time, controlled replication is also performed. Each fragment is placed on the node that provides the lowest access cost, with the objective of improving the retrieval time when the fragments are accessed to reconstruct the original file. While replicating a fragment, the separation of fragments enforced by the placement technique through T-coloring is also maintained. In the case of a large number of fragments or a small number of nodes, it is possible that some of the fragments are left without being replicated because of the T-coloring. T-coloring prohibits storing a fragment in the neighborhood of a node already storing a fragment, eliminating a number of nodes from being used for storage. In such a case, for the remaining fragments only, nodes that are not holding any fragment are selected for storage at random.

B. T-Coloring
Suppose we have a graph G = (V, E) and a set T of non-negative integers containing 0. A T-coloring is a mapping function f from the vertices of V to the set of non-negative integers such that |f(x) - f(y)| ∉ T for every edge (x, y) ∈ E. The mapping function f assigns a color to a vertex. In simple words, the difference between the colors of adjacent vertices must not belong to T. Formulated by Hale, the T-coloring problem for channel assignment assigns channels to the nodes such that the channels are separated by a distance large enough to avoid interference.
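The following Java sketch is a simplified, hypothetical rendering of the open/close colouring described above: it computes hop distances with a breadth-first search over an in-memory adjacency list and marks nodes at distances belonging to T as close. It only illustrates the idea and is not the project's actual placement code.

    import java.util.*;

    // Simplified sketch of T-coloring based fragment placement: after a fragment
    // is stored on a node, every node whose hop distance from it belongs to the
    // set T is marked "close" and is skipped for the remaining fragments.
    class TColoringPlacement {
        private final Map<Integer, List<Integer>> graph;    // node -> neighbours
        private final Set<Integer> t;                        // forbidden hop distances
        private final Set<Integer> closed = new HashSet<>(); // nodes given the close colour

        TColoringPlacement(Map<Integer, List<Integer>> graph, Set<Integer> t) {
            this.graph = graph;
            this.t = t;
        }

        // Place each fragment on the first still-open node, then close the
        // nodes at forbidden hop distances around it.
        List<Integer> place(int fragments) {
            List<Integer> chosen = new ArrayList<>();
            for (int f = 0; f < fragments; f++) {
                for (Integer node : graph.keySet()) {
                    if (!closed.contains(node)) {
                        chosen.add(node);
                        closeNeighbourhood(node);
                        break;
                    }
                }
            }
            return chosen;
        }

        private void closeNeighbourhood(int start) {
            // Breadth-first search to find hop distances from the chosen node.
            Map<Integer, Integer> dist = new HashMap<>();
            Deque<Integer> queue = new ArrayDeque<>();
            dist.put(start, 0);
            queue.add(start);
            while (!queue.isEmpty()) {
                int u = queue.poll();
                for (int v : graph.get(u)) {
                    if (!dist.containsKey(v)) {
                        dist.put(v, dist.get(u) + 1);
                        queue.add(v);
                    }
                }
            }
            // Close every node whose distance from the stored fragment lies in T.
            for (Map.Entry<Integer, Integer> e : dist.entrySet()) {
                if (t.contains(e.getValue())) {
                    closed.add(e.getKey());
                }
            }
        }

        public static void main(String[] args) {
            Map<Integer, List<Integer>> g = new HashMap<>();
            g.put(1, Arrays.asList(2, 3));
            g.put(2, Arrays.asList(1, 3));
            g.put(3, Arrays.asList(1, 2, 4));
            g.put(4, Arrays.asList(3));
            TColoringPlacement p = new TColoringPlacement(g, new HashSet<>(Arrays.asList(0, 1)));
            System.out.println("Fragments placed on nodes " + p.place(2));
        }
    }

With T = {0, 1}, for example, a stored fragment closes its own node and all direct neighbours, so no two fragments of the same file end up on the same or adjacent nodes.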

4. FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. Three key considerations involved in the feasibility analysis are:

• Economical feasibility
• Technical feasibility
• Operational feasibility

4.1 Economical Feasibility

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

4.2 Technical Feasibility

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, since this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

4.3 Operational Feasibility

This part of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to offer some constructive criticism, which is welcomed, as they are the final users of the system.

5. SYSTEM DESIGN

5.1 System Architecture

UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created by, the Object Management Group.

Fig4: System architecture

5.2 UML Diagrams

The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML. The Unified Modeling Language is a standard language for specifying, visualizing, constructing, and documenting the artifacts of a software system, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object-oriented software and of the software development process. The UML uses mostly graphical notations to express the design of software projects.

5.2 Goals The Primary goals in the design of the UML are as follows:

1. Provide users a ready-to-use, expressive visual modeling Language so that they can develop and exchange meaningful models. 2. Provide extendibility and specialization mechanisms to extend the core concepts. 3. Be independent of particular programming languages and development process. 4. Provide a formal basis for understanding the modeling language. 5. Encourage the growth of OO tools market. 6. Support higher level development concepts such as collaborations, frameworks, patterns and components. 7. Integrate best practices.

5.3 Data Flow Diagram

1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components: the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.
3. The DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
4. A DFD may be used to represent a system at any level of abstraction and may be partitioned into levels that represent increasing information flow and functional detail.

5.4 UML Diagrams

5.4.1 Use Case Diagram A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show what system functions are performed for which actor. Roles of the actors in the system can be depicted.

5.4.2 Class Diagram In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It explains which class contains information.

5.4.3 Sequence Diagram

A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, or timing diagrams.

Life Line: A lifeline represents an individual participant in the interaction.

Activations: A thin rectangle on a lifeline represents the period during which an element is performing an operation. The top and the bottom of the rectangle are aligned with the initiation and the completion time, respectively.

Call Message: A message defines a particular communication between lifelines of an interaction. A call message represents an invocation of an operation of the target lifeline.

Return Message: A return message represents the passing of information back to the caller of a corresponding former message.

Self Message: A self message represents the invocation of a message on the same lifeline.

Note: A note (comment) gives the ability to attach various remarks to elements. A comment carries no semantic force, but may contain information that is useful to the modeler.

5.4.4 Collaboration Diagram A collaboration diagram, also called a communication diagram or interaction diagram, is an illustration of the relationships and interactions among software objects in the Unified Modeling Language (UML). The concept is more than a decade old although it has been refined as modeling paradigms have evolved. UML collaboration diagram symbols Pre-drawn UML collaboration diagram symbols represent object, multi-object, association role, delegation, link to self, constraint and note. These symbols help create accurate diagrams and documentation.

Objects Objects are model elements that represent instances of a class or of classes.

Multi-object represents a set of lifeline instances.

Association role Association role is optional and suppressible.

Delegation Delegation is like inheritance done manually through object composition.

Link to self Link to self is used to link the objects that fulfill more than one role.

Constraint Constraint is an extension mechanism that enables you to refine the semantics of a UML model element.

Note: contains comments or textual information.

(Collaboration diagram: the data owner registers, logs in, uploads files by applying file fragmentation, the T-coloring algorithm, and fragment placement with replication into the cloud nodes, and can view, update, and delete files; the cloud TPA verifies fragment data, modifies files, and forwards file and update alerts; the data user registers, logs in, views the owner's files, sends a file request, receives the message and secret key, decrypts, and downloads the file; each actor finally logs out.)

5.4.4 Activity Diagram Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.

Initial State or Start Point: A small filled circle followed by an arrow represents the initial action state or the start point of an activity diagram. For an activity diagram using swimlanes, make sure the start point is placed in the top left corner of the first column.

Activity or Action State: An action state represents the non-interruptible action of objects. You can draw an action state in SmartDraw using a rectangle with rounded corners.

Action Flow: Action flows, also called edges or paths, illustrate the transitions from one action state to another. They are usually drawn as arrowed lines.

Object Flow: Object flow refers to the creation and modification of objects by activities. An object flow arrow from an action to an object means that the action creates or influences the object. An object flow arrow from an object to an action indicates that the action state uses the object.

Decisions and Branching: A diamond represents a decision with alternate paths. When an activity requires a decision prior to moving on to the next activity, add a diamond between the two activities. The outgoing alternates should be labeled with a condition or guard expression; you can also label one of the paths "else".

Synchronization: A fork node is used to split a single incoming flow into multiple concurrent flows. It is represented as a straight, slightly thicker line in an activity diagram. A join node synchronizes multiple concurrent flows back into a single outgoing flow. A fork and a join used together are often referred to as synchronization.

5.4.5 Component Diagram

Component diagrams fall under the category of implementation diagrams, a kind of diagram that models the implementation and deployment of the system. A component diagram in particular is used to describe the dependencies between various software components, such as the dependency between the executable files and the source files.

(Component diagram: the data owner, cloud TPA, and data user components are connected through register user, login, file upload, file fragmentation, T-coloring algorithm, fragment placement, file replication, update and delete files, view files and file alerts, and file request and download.)

Component Notation A component in UML is shown in the following figure with a name inside. Additional elements can be added wherever required.

CLOUD COMPONENT

5.4.5 Deployment Diagram Deployment diagrams are used to visualize the topology of the physical components of a system, where the software components are deployed.

(Deployment diagram: the Data Owner, Data User, TPA, and Cloud nodes host login, register user, file upload, file fragmentation, file replication, view files and file alerts, file request and download, T-colouring, and fragment placement.)

Node Notation A node in UML is represented by a square box as shown in the following figure with a name. A node represents the physical component of the system.

5.4.6 State chart Diagrams The name of the diagram itself clarifies the purpose of the diagram and other details. It describes different states of a component in a system. The states are specific to a component/object of a system.

Initial State Notation Initial state is defined to show the start of a process. This notation is used in almost all diagrams.

Final State Notation Final state is used to show the end of a process. This notation is also used in almost all diagrams to describe the end.

(State chart diagram: after authentication and login, the data owner, data user, TPA, and cloud pass through the file upload, file fragmentation, file replication, view files and file alerts, file request and download, T-colouring, and fragment placement states before logout.)

6. SYSTEM CODING AND IMPLEMENTATION

6.1 Sample Code

Userlogin.jsp:

    <meta http-equiv="content-type" content="text/html; charset=iso-8859-1" />
    <title>Multi Cloud</title>
    <script type="text/javascript">
    function valid() {
        var a = document.s.uid.value;
        if (a == "") {
            alert("Enter User ID");
            document.s.uid.focus();
            return false;
        }
        var b = document.s.pass.value;
        if (b == "") {
            alert("Enter Password");
            document.s.pass.focus();
            return false;
        }
    }
    </script>
    <meta name="keywords" content="" />
    <meta name="description" content="" />
    <%-- Login form ("User Login"): a text input name="uid", a password input
         name="pass", a select name="utype" for the user type, and a reset button. --%>
    <%
    String message = request.getParameter("message");
    if (message != null && message.equalsIgnoreCase("fail")) {
        out.println("Your username and password is incorrect !");
    }
    %>
Fileupload.jsp:

    <%@ page import="java.sql.*" import="databaseconnection.*" %>
    <meta http-equiv="content-type" content="text/html; charset=iso-8859-1" />
    <title>Multi Cloud</title>
    <script type="text/javascript">
    function valid() {
        var a = document.s.fn.value;
        if (a == "") {
            alert("Enter File Name");
            document.s.fn.focus();
            return false;
        }
        var b = document.s.file.value;
        if (b == "") {
            alert("Browse a File");
            document.s.file.focus();
            return false;
        }
        if (document.s.server.selectedIndex == 0) {
            alert("Select a Cloud Server");
            document.s.server.focus();
            return false;
        }
    }
    </script>
    <meta name="keywords" content="" />
    <meta name="description" content="" />
    <% String name = (String) session.getAttribute("name"); %>
Filerequest.jsp:

    <%@ page import="java.sql.*"
        import="databaseconnection.*,databaseconnection1.*,databaseconnection2.*,databaseconnection3.*" %>
    <meta http-equiv="content-type" content="text/html; charset=iso-8859-1" />
    <title>Multi Cloud</title>
    <meta name="keywords" content="" />
    <meta name="description" content="" />
    <% String name = (String) session.getAttribute("name"); %>
    <%
    String message = request.getParameter("message");
    if (message != null && message.equalsIgnoreCase("success")) {
        out.println("<script>alert('File Request is Sended to Owner !')</script>");
    }
    String message1 = request.getParameter("mes");
    if (message1 != null && message1.equalsIgnoreCase("success")) {
        out.println("<script>alert('Request is Already Existed!')</script>");
    }
    %>
    <%-- "New File Request" form: a select name="fid" for the file ID, a read-only
         field showing the UID, and a select name="arw" for the access rights. --%>
    <%
    String uid = null, uname = null, date = null, fid = null, fname = null, fsize = null, sta = null;
    String u = (String) session.getAttribute("u");
    try {
        Class.forName("com.mysql.jdbc.Driver");
        Connection cona = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/cloudserver1", "root", "root");
        Statement staa = cona.createStatement();
        String cspprivatekey = null;
        String qrya = "select * from filestore";
        ResultSet rsa = staa.executeQuery(qrya);
        while (rsa.next()) {
            cspprivatekey = rsa.getString("fid");
            // an <option> carrying this fid is written into the select here
        }
    } catch (Exception e1) {
        out.println(e1.getMessage());
    }
    %>
    <%
    try {
        Class.forName("com.mysql.jdbc.Driver");
        Connection cona = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/multi-cloud", "root", "root");
        Statement staa = cona.createStatement();
        String cspprivatekey = null, cspprivatekey1 = null, accre = null, ridd = null;
        String qrya = "select * from userrequests where uid='" + u + "'";
        ResultSet rsa = staa.executeQuery(qrya);
        while (rsa.next()) {
            cspprivatekey  = rsa.getString("fid");    // File ID
            cspprivatekey1 = rsa.getString("status"); // request status
            accre          = rsa.getString("ar");     // access rights
            ridd           = rsa.getString("rid");    // request id
            // one table row per request (File ID, Access Rights, Status):
            // "Processing" while the status is "no", otherwise a "View Results"
            // link pointing to results.jsp?<fid>.
        }
    } catch (Exception e1) {
        out.println(e1.getMessage());
    }
    %>
Tpalogin.jsp:

    <meta http-equiv="content-type" content="text/html; charset=iso-8859-1" />
    <title>Multi Cloud</title>
    <script type="text/javascript">
    function valid() {
        var a = document.s.tid.value;
        if (a == "") {
            alert("Enter User ID");
            document.s.tid.focus();
            return false;
        }
        var b = document.s.pass.value;
        if (b == "") {
            alert("Enter Password");
            document.s.pass.focus();
            return false;
        }
    }
    </script>
    <style type="text/css">
    .b:hover {
        border-size: 3px;
        border-color: red;
    }
    .b1 {
        background-color: #color;
        border-bottom: solid;
        border-left: #FFEEEE;
        border-right: solid;
        border-top: #EEEEEE;
        color: brown;
        font-family: Verdana, Arial;
    }
    </style>
    <meta name="keywords" content="" />
    <meta name="description" content="" />
    <%-- Login form ("TPA Login"): a text input name="tid" and a password
         input name="pass". --%>
Serverlog.jsp:

    <meta http-equiv="content-type" content="text/html; charset=iso-8859-1" />
    <title>Multi Cloud</title>
    <script type="text/javascript">
    function valid() {
        var a = document.s.uid.value;
        if (a == "") {
            alert("Enter User ID");
            document.s.uid.focus();
            return false;
        }
        var b = document.s.pass.value;
        if (b == "") {
            alert("Enter Password");
            document.s.pass.focus();
            return false;
        }
        if (document.s.ser.selectedIndex == 0) {
            alert("Select a Server");
            document.s.ser.focus();
            return false;
        }
    }
    </script>
    <style type="text/css">
    .b:hover {
        border-size: 3px;
        border-color: red;
    }
    .b1 {
        background-color: #color;
        border-bottom: solid;
        border-left: #FFEEEE;
        border-right: solid;
        border-top: #EEEEEE;
        color: brown;
        font-family: Verdana, Arial;
    }
    </style>
    <meta name="keywords" content="" />
    <meta name="description" content="" />
    <%-- Login form ("Server Login"): a text input name="uid", a password input
         name="pass", a select name="ser" for the server, and a clear/reset button;
         an "incorrect username and password" message is shown on a failed login. --%>
    <%
    String message = request.getParameter("message");
    if (message != null && message.equalsIgnoreCase("fail")) {
        out.println("You are not valid user !");
    }
    %>
6.2 Programming Language

6.2.1 Java Technology

Java technology is both a programming language and a platform.
The Java Programming Language

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

• Simple
• Architecture neutral
• Object oriented
• Portable
• Distributed
• High performance
• Interpreted
• Multithreaded
• Robust
• Dynamic
• Secure

Java Bytecodes

Java bytecodes are the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java bytecode instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.

Fig5: java byte code

Fig6: Java program execution

The Java Platform

A platform is the hardware or software environment in which a program runs. We have already mentioned some of the most popular platforms, such as Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it is a software-only platform that runs on top of other hardware-based platforms. The Java platform has two components:

• The Java Virtual Machine (Java VM)
• The Java Application Programming Interface (Java API)

Fig7: java platform

6.2.2 What Can Java Technology Do?

The most common types of programs written in the Java programming language are applets and applications. If you have surfed the Web, you are probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser. Every full implementation of the Java platform provides the following features:

The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.

Applets: The set of conventions used by applets.

Networking: URLs, TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.

Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.

Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.

Software components: Known as JavaBeans, which can plug into existing component architectures.

Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).

Java Database Connectivity (JDBC): Provides uniform access to a wide range of relational databases.

Fig8: Java technology

ODBC

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax needed to port their program from one database to another when business needs suddenly change.

JDBC

In an effort to set an independent database standard API for Java, Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of "plug-in" database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows future tool vendors to "generate" JDBC code and to hide many of JDBC's complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC allows any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle nonstandard functionality in a manner that is suitable for its users.

1. JDBC must be implementable on top of common database interfaces: The JDBC SQL API must "sit" on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers through a software interface that translates JDBC calls to ODBC and vice versa.

2. Provide a Java interface that is consistent with the rest of the Java system: Because of Java's acceptance in the user community thus far, the designers felt that they should not stray from the current design of the core Java system.

3. Keep it simple: This goal probably appears in all software design goal listings, and JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism.

4. Keep the common cases simple: More often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs, and UPDATEs.
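As a minimal, self-contained illustration of the JDBC pattern used throughout the project's JSP pages, the sketch below opens a connection, runs a parameterised query, and iterates over the result set. The URL, credentials, and table and column names mirror the sample code above but should be treated as placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Minimal JDBC usage sketch: open a connection, run a parameterised SELECT,
    // and iterate over the result set. URL, credentials, and the table and
    // column names are placeholders taken from the sample pages above.
    public class JdbcExample {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mysql://localhost:3306/cloudserver1";
            try (Connection con = DriverManager.getConnection(url, "root", "root");
                 PreparedStatement ps = con.prepareStatement(
                         "select fid, fname from filestore where uid = ?")) {
                ps.setString(1, "user1");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("fid") + " : "
                                + rs.getString("fname"));
                    }
                }
            }
        }
    }

Using a PreparedStatement with a bound parameter also avoids the SQL injection risk of concatenating request parameters into the query string, as is done in the sample pages.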


6.2.3 Networking

Fig9: TCP/IP stack

IP Datagrams

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header.

UDP

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to provide a client/server model, as described later.

TCP

TCP supplies the logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet Addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32-bit integer which gives the IP address. This encodes a network ID and further addressing. The network ID falls into various classes according to the size of the network address.

Network Address

Class A uses 8 bits for the network address, with 24 bits left over for other addressing. Class B uses 16-bit network addressing. Class C uses 24-bit network addressing, and class D uses all 32.

Subnet Address

Internally, the UNIX network is divided into sub-networks. Building 11 is currently on one sub-network and uses 10-bit addressing, allowing 1,024 different hosts.

Host Address

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Total Address

Fig10: Total address

Port Addresses

A service exists on a host and is identified by its port, which is a 16-bit number. To send a message to a server, you send it to the port for that service on the host that it is running on. This is not location transparency! Certain of these ports are "well known".

Sockets

A socket is a data structure maintained by the system to handle network connections.

JFreeChart

JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart's extensive feature set includes:

1. Map Visualizations: Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this area include sourcing freely redistributable vector outlines for the countries of the world and states/provinces in particular countries (the USA in particular, but also other areas); creating an appropriate dataset interface (plus default implementation) and a renderer, and integrating these with the existing XYPlot class in JFreeChart; and testing, documenting, testing some more, and documenting some more.

2. Time Series Chart Interactivity: Implement a new (to JFreeChart) feature for interactive time series charts: display a separate control that shows a small version of all the time series data, with a sliding "view" rectangle that allows you to select the subset of the time series data to display in the main chart.

3. Dashboards: There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.

4. Property Editors: The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.

J2ME (Java 2 Micro Edition)

Sun Microsystems defines J2ME as "a highly optimized Java run-time environment targeting a wide range of consumer products, including pagers, cellular phones, screen-phones, digital set-top boxes and car navigation systems." Announced in June 1999 at the JavaOne Developer Conference, J2ME brings the cross-platform functionality of the Java language to smaller devices, allowing mobile wireless devices to share applications. With J2ME, Sun has adapted the Java platform for consumer products that incorporate or are based on small computing devices.

1. General J2ME architecture

Fig11: General J2ME architecture

J2ME uses configurations and profiles to customize the Java Runtime Environment (JRE). As a complete JRE, J2ME comprises a configuration, which determines the JVM used, and a profile, which defines the application by adding domain-specific classes.

2. Developing J2ME Applications

Introduction

In this section, we will go over some considerations you need to keep in mind when developing applications for smaller devices. We will take a look at the way the compiler is invoked when using J2SE to compile J2ME applications. Finally, we will explore packaging and deployment and the role preverification plays in this process.

3. Design Considerations for Small Devices

Developing applications for small devices requires you to keep certain strategies in mind during the design phase. It is best to strategically design an application for a small device before you begin coding. Correcting the code because you failed to consider all of the "gotchas" before developing the application can be a painful process. Here are some design strategies to consider:

• Keep it simple. Remove unnecessary features, possibly making those features a separate, secondary application.

• Smaller is better. This consideration should be a "no brainer" for all developers. Smaller applications use less memory on the device and require shorter installation times. Consider packaging your Java applications as compressed Java Archive (jar) files.

• Minimize run-time memory use.

4. Configurations Overview

The configuration defines the basic run-time environment as a set of core classes and a specific JVM that run on specific types of devices. Currently, two configurations exist for J2ME, though others may be defined in the future:

• Connected Limited Device Configuration (CLDC) is used specifically with the KVM for 16-bit or 32-bit devices with limited amounts of memory. This is the configuration (and the virtual machine) used for developing small J2ME applications. Its size limitations make CLDC more interesting and challenging (from a development point of view) than CDC.

• Connected Device Configuration (CDC) is used with the C virtual machine (CVM) and is used for 32-bit architectures requiring more than 2 MB of memory. An example of such a device is a Net TV box.

5. J2ME Profiles

As we mentioned earlier in this tutorial, a profile defines the type of device supported. The Mobile Information Device Profile (MIDP), for example, defines classes for cellular phones. It adds domain-specific classes to the J2ME configuration to define uses for similar devices.

Profile 1: KJava

KJava is Sun's proprietary profile and contains the KJava API. The KJava profile is built on top of the CLDC configuration. The KJava virtual machine, KVM, accepts the same byte codes and class file format as the classic J2SE virtual machine. KJava contains a Sun-specific API that runs on the Palm OS.

Profile 2: MIDP

MIDP is geared toward mobile devices such as cellular phones and pagers. The MIDP, like KJava, is built upon CLDC and provides a standard run-time environment that allows new applications and services to be deployed dynamically on end user devices. MIDP is a common, industry-standard profile for mobile devices that is not dependent on a specific vendor. It is a complete and supported foundation for mobile application development.

6.3 IMPLEMENTATION

A. Data Owner
Different types of data are stored on the cloud; the data is created by the user before the file is uploaded into the cloud. The owner has their own services such as register, login, file upload, view file, select cloud, update file, delete file, and logout.

B. User
A user is any person who uses the cloud. The users have their own services, which are provided by the data owner, such as register, login, view file, select cloud, download file, and logout.

C. Fragmentation
The security of a large-scale system such as the cloud depends on the security of the system as a whole and on the security of individual nodes. A successful intrusion into a single node may have severe consequences, not only for data and applications on the victim node, but also for the other nodes. The data on the victim node may be revealed fully because of the presence of the whole file. A successful intrusion may be the result of some software or administrative vulnerability. The file owner specifies the fragmentation threshold according to which the fragments of the data file are generated. The file owner can specify the fragmentation threshold in terms of either a percentage or the number and size of different fragments. The percentage fragmentation threshold, for example, can dictate that each fragment will be 5% of the total size of the file. Alternatively, the owner can generate a separate file containing information about the fragment numbers and sizes, for instance, fragment 1 of size 4,000 bytes and fragment 2 of size 6,749 bytes. The owner of the file is the best candidate to generate the fragmentation threshold, as he is well aware of the significant information in the file. The owner can best split the file such that each fragment does not contain a significant amount of information. The default percentage fragmentation threshold can be made part of the Service Level Agreement (SLA) if the user does not specify the fragmentation threshold while uploading the data file.
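A minimal sketch of the percentage threshold described above is given below; the Fragmenter class and its method are hypothetical helpers written only to illustrate the idea, not the project's actual code.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Hypothetical sketch: split a file's contents into fragments whose size is
    // a fixed percentage of the total size, mirroring the percentage
    // fragmentation threshold described above (e.g. 5% gives 20 fragments).
    public class Fragmenter {
        static List<byte[]> fragment(byte[] data, double percent) {
            int fragmentSize = Math.max(1, (int) Math.ceil(data.length * percent / 100.0));
            List<byte[]> fragments = new ArrayList<>();
            for (int offset = 0; offset < data.length; offset += fragmentSize) {
                int end = Math.min(offset + fragmentSize, data.length);
                fragments.add(Arrays.copyOfRange(data, offset, end));
            }
            return fragments;
        }

        public static void main(String[] args) {
            byte[] file = new byte[10_000];            // stand-in for an uploaded file
            List<byte[]> parts = fragment(file, 5.0);  // 5% threshold
            System.out.println(parts.size() + " fragments of about "
                    + parts.get(0).length + " bytes each");
        }
    }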

D. Fragment Placement
To provide security while placing the fragments, the concept of T-coloring is used, which was originally developed for the channel assignment problem. The scheme generates a non-negative random number and builds the set T containing the integers from zero up to the generated number. The set T is used to restrict the node selection to those nodes that are at hop distances not belonging to T. For this purpose, colors are assigned to the nodes such that, initially, all of the nodes are given the open color. When a fragment is placed on a node, all of the node's neighboring nodes at a distance belonging to T are assigned the close color. In this process, some of the central nodes that could decrease the retrieval time are lost, but a higher security level is achieved. Even if an intruder compromises a node and obtains a fragment, he cannot determine the locations of the other fragments. The attacker can only keep guessing the locations of the other fragments, because the nodes are separated by T-coloring.

E. Replication
To increase data availability and reliability and to improve data retrieval time, controlled replication is also performed. Each fragment is placed on the node that provides the lowest access cost, with the objective of improving the retrieval time when the fragments are accessed to reconstruct the original file. While replicating a fragment, the separation of fragments enforced by the placement technique through T-coloring is also maintained. In the case of a large number of fragments or a small number of nodes, it is possible that some of the fragments are left without being replicated because of the T-coloring. T-coloring prohibits storing a fragment in the neighborhood of a node already storing a fragment, eliminating a number of nodes from being used for storage. In such a case, for the remaining fragments only, nodes that are not holding any fragment are selected for storage at random.

F. T-Coloring
Suppose we have a graph G = (V, E) and a set T of non-negative integers containing 0. A T-coloring is a mapping function f from the vertices of V to the set of non-negative integers such that |f(x) - f(y)| ∉ T for every edge (x, y) ∈ E. The mapping function f assigns a color to a vertex. In simple words, the difference between the colors of adjacent vertices must not belong to T. For example, with T = {0, 1, 4}, adjacent vertices may be colored 2 and 5 (since |2 - 5| = 3 ∉ T) but not 2 and 6 (since |2 - 6| = 4 ∈ T). Formulated by Hale, the T-coloring problem for channel assignment assigns channels to the nodes such that the channels are separated by a distance large enough to avoid interference.

7. SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.

7.1 Types of Tests

Unit Testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

Integration Testing
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

Functional Testing
Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items:

• Valid input: identified classes of valid input must be accepted.
• Invalid input: identified classes of invalid input must be rejected.
• Functions: identified functions must be exercised.
• Output: identified classes of application outputs must be exercised.
• Systems/procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests are focused on requirements, key functions, or special test cases. In addition, systematic coverage of business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

System Testing
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
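For illustration, the following is a minimal unit test in JUnit 4 style for the hypothetical Fragmenter helper sketched in Section 6.3: with a 5% threshold, a 10,000-byte file should yield 20 fragments whose sizes add up to the original length.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;
    import java.util.List;

    // Hypothetical unit test: a 5% threshold on a 10,000-byte file should
    // produce 20 fragments, and the fragment sizes should sum to the original length.
    public class FragmenterTest {

        @Test
        public void fivePercentThresholdGivesTwentyFragments() {
            byte[] file = new byte[10_000];
            List<byte[]> parts = Fragmenter.fragment(file, 5.0);
            assertEquals(20, parts.size());

            int total = 0;
            for (byte[] p : parts) {
                total += p.length;
            }
            assertEquals(file.length, total);
        }
    }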

White Box Testing
White box testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

Black Box Testing
Black box testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

Unit Testing
Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.

Test Strategy and Approach
Field testing will be performed manually, and functional tests will be written in detail.

Test objectives:
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages, and responses must not be delayed.

Features to be tested:
• Verify that the entries are of the correct format.
• No duplicate entries should be allowed.
• All links should take the user to the correct page.

Integration Testing
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects. The task of the integration test is to check that components or software applications (e.g., components in a software system or, one step up, software applications at the company level) interact without error.

Test Results
All the test cases mentioned above passed successfully. No defects were encountered.

Acceptance Testing
User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

7.2 Test Results

All the test cases mentioned above passed successfully. No defects encountered.

8. RESULTS

The following figure shows the home screen, which is the entry point of our project.

The following figure shows the user login screen.

The following figure shows the file upload screen.

The following figure shows the screen where files are divided into fragments.

The following figure shows the file update screen.

The following figure shows the TPA login page.

The following figure shows the TPA view of all files.

The following figure shows that whenever the TPA clicks the view link, all three fragmented files are shown in encrypted format; the files are then decrypted.

The following figure shows the TPA view after decryption.

The following figure shows the TPA view of all file alerts.

The following figure shows the server login page.

The following figure shows cloud server 1 file verification.

The following figure shows cloud server 2 file verification.

The following figure shows cloud server 3 file verification.

The following figure shows the file request screen.

The following figure shows that when we send a file request, it goes to the data owner.

The following figure shows that after the user logs in, the file request module provides a download option.

9. CONCLUSION

We proposed the DROPS methodology, a cloud storage security scheme that collectively deals with security and with performance in terms of retrieval time. The data file was fragmented and the fragments were dispersed over multiple nodes. The nodes were separated by means of T-coloring. The fragmentation and dispersal ensured that no significant information was obtainable by an adversary in the case of a successful attack. No node in the cloud stored more than a single fragment of the same file. The performance of the DROPS methodology was compared with full-scale replication techniques. The results of the simulations revealed that the simultaneous focus on security and performance resulted in an increased security level of the data accompanied by a slight performance drop. Currently, with the DROPS methodology, a user has to download the file, update the contents, and upload it again. It would be strategic to develop an automatic update mechanism that can identify and update only the required fragments. This future work will save the time and resources utilized in downloading, updating, and uploading the file again. Moreover, the implications of TCP incast over the DROPS methodology, which are relevant to distributed data storage and access, need to be studied.

REFERENCES

[1] K. Bilal, S. U. Khan, L. Zhang, H. Li, K. Hayat, S. A. Madani, N. Min-Allah, L. Wang, D. Chen, M. Iqbal, C. Z. Xu, and A. Y. Zomaya, "Quantitative comparisons of the state of the art data center architectures," Concurrency and Computation: Practice and Experience, Vol. 25, No. 12, 2013, pp. 1771-1783.
[2] K. Bilal, M. Manzano, S. U. Khan, E. Calle, K. Li, and A. Zomaya, "On the characterization of the structural robustness of data center networks," IEEE Transactions on Cloud Computing, Vol. 1, No. 1, 2013, pp. 64-77.
[3] D. Boru, D. Kliazovich, F. Granelli, P. Bouvry, and A. Y. Zomaya, "Energy-efficient data replication in cloud computing datacenters," In IEEE Globecom Workshops, 2013, pp. 446-451.
[4] Y. Deswarte, L. Blain, and J.-C. Fabre, "Intrusion tolerance in distributed computing systems," In Proceedings of the IEEE Computer Society Symposium on Research in Security and Privacy, Oakland, CA, 1991, pp. 110-121.
[5] B. Grobauer, T. Walloschek, and E. Stocker, "Understanding cloud computing vulnerabilities," IEEE Security and Privacy, Vol. 9, No. 2, 2011, pp. 50-57.
[6] W. K. Hale, "Frequency assignment: Theory and applications," Proceedings of the IEEE, Vol. 68, No. 12, 1980, pp. 1497-1514.
[7] K. Hashizume, D. G. Rosado, E. Fernández-Medina, and E. B. Fernandez, "An analysis of security issues for cloud computing," Journal of Internet Services and Applications, Vol. 4, No. 1, 2013, pp. 1-13.
[8] M. Hogan, F. Liu, A. Sokol, and J. Tong, "NIST cloud computing standards roadmap," NIST Special Publication, July 2011.
[9] W. A. Jansen, "Cloud hooks: Security and privacy issues in cloud computing," In 44th Hawaii IEEE International Conference on System Sciences (HICSS), 2011, pp. 1-10.
