Web based travelling journey planner
Abstract: The web based travelling journey planner is a web application through which anyone can book a holiday trip. Its travel planning functionality makes it easy to plan a holiday in a matter of minutes. By pre-planning your dream holiday, you can then proceed to book with ease. The holiday trip planner is an online travel management system. It aims to offer a range of best-value services to ensure that each trip runs smoothly and efficiently, and it offers a complete range of services for the business and individual traveler. The holiday trip planner provides a very smooth facility for anyone who wants to make a holiday trip. The application presents all available trip places with a complete package (a complete package includes all hotel facilities, bus facilities, and total charges).
Eminent Technology Solutions
EMINENT TECHNOLOGY SOLUTIONS is one of the leading information technology companies. Through its Global Network Delivery Model, Innovation Network and Solution Accelerators, ETS focuses on helping global organizations address their business challenges effectively. ETS is an enterprise software company headquartered at Bangalore, with a regional office at Madurai and branch offices all over Tamil Nadu. It possesses not only the latest technology but also the most knowledgeable and experienced hands to offer user-friendly, customized solutions. ETS offers a unique, customer-centric model for delivering software products and services, and has the vision, the ability to execute and the financial resources to be a strong business partner for your enterprise. Our offerings for application delivery, application management, and IT governance help customers maximize the business value of IT by optimizing application quality, performance, and availability, as well as managing IT costs, risks, and compliance. ETS offers the following solutions to the IT industry: Software Development, Web Development, and High-End Training for Students & Professionals.
Who we are
Eminent Technology Solutions is a company providing Software Development, IT Solutions, Academic Projects, Website Development and Professional Services. The company is located in Bangalore, the Silicon City of India. It was founded by vastly experienced global IT professionals to meet the challenges of the present, growing global technological environment in the Information Technology sector. Eminent Technology Solutions meets these global challenges with its pool of highly qualified professionals. The company has competencies in customized software development, out-sourcing of manpower, and consultancy in the areas of information systems analysis, design, development and implementation.
Company's Mission Statement
To develop cost-effective, better-performing, user-friendly computer based business solutions with quality and reliability within the time frame, and in the process create satisfied customers and proud employees.
Company's Values
Trust each other with utmost respect. Continuous skill improvement within a professional work environment.
CHAPTER I INTRODUCTION
In the current era of ubiquitous network connections, a wide array of software has come into existence to help users secure various aspects of their computer use. As users work with these pieces of software, they must constantly make decisions regarding the integrity and authenticity of incoming information and requests for personal data. This software often attempts to help users protect themselves through a variety of user interface elements: icons, dialog boxes, text elements and so forth. Given the current state of affairs with respect to computer security, it is clear that something has gone awry. Our own experience, as well as work done by other researchers, supports the idea that problems often arise when software behaves in a way contrary to the mental model suggested by its user interface. Moreover, when software attempts to help humans make decisions, but is not capable of appropriately modeling how humans make those decisions, it may provide information that is insufficient, irrelevant, or even totally misleading. To bring human mental models and the behavior of systems into closer alignment, and thus avoid these problems, we believe software must be designed to explicitly leverage the people who use it: lay out the goals of the system, identify the tasks involved at which humans excel (and at which computers do not), and then design the system accordingly.
CHAPTER II EXISTING SYSTEM To address email security and privacy concerns, many organizations in the commercial, federal and educational sectors have deployed S/MIME, a secure email standard that leverages X.509 Identity Certificates to provide message integrity and nonrepudiation via digital signatures. In addition, these signatures often contain the sender's Identity Certificate, so all information contained therein is available to the recipient. The signature also contains the sender's public key, which can then be used to validate the signature. Pretty Good Privacy (PGP) is another PKI-based email scheme which provides similar properties, but with more sporadic adoption. In terms of our trust model, S/MIME can do one of two things for the recipient, depending on whether she has experience with the sender. If she knows the sender a priori, S/MIME can enable the recipient to leverage her trust in an institution to assure herself of the sender's identity and thus apply her process-based trust to the incoming message. If she has little or no prior experience with the sender, then S/MIME allows the recipient to extend some measure of institutionally-based trust to the sender. At least, that's the idea. However, both the literature and personal experience show that issues remain. S/MIME leverages X.509 ID certificates, which are minted by a Certification Authority (CA), often local to the user receiving the cert. Standard clients come with a wide variety of trust roots pre-configured but, with many organizations deploying their own PKIs, this set is far from comprehensive. Another interesting issue arises from the fact that standard S/MIME clients treat all installed trust roots as equal. When an S/MIME signature is deemed valid, the client will display the same information to the user regardless of which CA issued the credentials used to sign the message. Worse, an organization's name directory may allow users to choose any nickname, even one close to the actual name of the President. In the case of familiar correspondents, S/MIME is supposed to help users leverage their trust in an institution to allow them to reliably extend their pre-existing process-based trust to each other. However, as we have shown, it is possible for an attacker to play several complex systems off each other and thwart this design. In large organizations, it becomes less likely that a sender and recipient knew each other prior to contact. Thus, an S/MIME signature verifying only the sender's name and email address
would not be enough to help the recipient make a good decision. The signature is not expressive enough to allow humans to specify the right properties for conclusions in human trust settings. The first class of issues arises when users expect that a name, verified by a digital signature, equates to a person. A second class of issues arises when a name, verified by a digital signature, does not tell the user what they need to know. The final class of expressiveness issues shows up when the same property does not mean the same thing in different contexts. In terms of our trust model, in the case of unfamiliar correspondents hierarchical-PKI-based S/MIME is attempting to allow users to build institutional trust between each other. Standard S/MIME implementations can only establish membership in a fairly large subculture. The members of this group are not homogeneous enough to clearly define an area in which all members should be trusted. Despite these issues, S/MIME has provided message integrity and non-repudiation, as well as the sender's public key, provided that the recipient trusts the sender's CA and that the sender's private key has remained private. S/MIME, therefore, is a good starting point, and the public key in particular could provide a way to hook further contextual information about the sender into the message. Role Based Messaging (RBM) is a system that creates role-based mail accounts. Users who have appropriate credentials (where "appropriate" is defined by policy on a per-role basis) can log into those accounts to read mail sent to that role and also to send signed and encrypted mail from that role. Mail may be encrypted to a role, not simply to a specific user. Role membership is controlled by a PERMIS back end, in which X.509 ACs are used to store role membership information. Policies can be added to messages to further control what recipients can do with them. A policy governs who can assign roles, though this could be set up to allow any user to grant roles to others. Also, these "role managers" can create new roles within their organization, but such roles will not be recognized by the system. Recent RBM work has proposed Policy Based Management (PBM), which adds infrastructure to allow organizations to advertise what kinds of policy languages they support. PBM also allows third parties to sign off on particular implementations of particular policy languages and enforcement models, so that an enterprise can prevent (again, via policy) users from sending secure messages to an external organization whose mail system may not respect message permissions set by the sender. Despite these and several other issues, RBM does offer a potential solution for the email trust problem. Users could all be granted roles, and then choose the appropriate role from which to send a message that needed to be trusted. It does not appear that users could claim multiple roles at the same time, but new, combined roles could feasibly be created to handle that. Also, it is unclear whether the nice audit feature provided by the chaining of ACs to create ABUSE attributes is also provided by RBM.
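To make the S/MIME validation step discussed above concrete, the following sketch shows how a recipient's client might verify a detached S/MIME (CMS/PKCS #7) signature using the .NET SignedCms class. This is a minimal illustration, not full client logic: the function name, and the assumptions that the signature is detached and that the sender's CA is already installed as a trust root on the verifying machine, are ours.

Imports System.Security.Cryptography
Imports System.Security.Cryptography.Pkcs

Module SmimeVerificationSketch
    ' Verifies a detached CMS/PKCS #7 signature over a message body.
    ' Returns True only if the signature is cryptographically valid AND
    ' the signer's certificate chains to a trust root on this machine --
    ' the same two checks discussed in the text above.
    Function VerifyDetachedSignature(messageBody As Byte(), signatureBlob As Byte()) As Boolean
        Dim cms As New SignedCms(New ContentInfo(messageBody), detached:=True)
        Try
            cms.Decode(signatureBlob)
            ' False = check the certificate chain as well as the signature itself.
            cms.CheckSignature(verifySignatureOnly:=False)
            Return True
        Catch ex As CryptographicException
            Return False
        End Try
    End Function
End Module

Note that this call reports only "valid" or "invalid", not which CA vouched for the signer, which is exactly the equal-trust-roots limitation criticized above.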
CHAPTER III PROPOSED SYSTEM An Attribute-Based, Usefully Secure Email (ABUSE) system leverages users by enabling them to build a decentralized, non-hierarchical PKI to express their trust relationships with each other, and then use this PKI to manage their trust in people with whom they correspond via secure email. Our design puts humans into the system, to do things that humans are good at but machines are not, at both the creation of credentials and the interpretation of credentials. By doing so, this system can overcome the failings of approaches based on standard PKI. Rather than attempt to automatically make trust decisions for users, the system is designed to help them make more informed trust decisions about email that they receive. Users create useful metadata about each other; a sender can then access the store of data about himself and attach selected attributes to outgoing messages. This information is then presented to recipients in an understandable fashion. Our design goals are to:
1. Enable users to bind appropriate trustworthy assertions about themselves to outgoing email,
2. Enable users to understand trustworthy assertions about senders of incoming email,
3. Avoid push-back from users without ABUSE-savvy clients,
4. Minimize the administrative burden on everyone involved,
5. Avoid the need for an organization-wide "Attribute Administrator",
6. Avoid limiting the attribute space (i.e. avoid predefining a set of attributes and relationships),
7. Leverage existing PKI and S/MIME infrastructure, and
8. Provide some support for attributes belonging to users at outside organizations.
MODULES Creating, Storing, Distributing and Displaying Attributes
ABUSE users create their own attributes, without involving a system administrator. Addressing this portion of the system, therefore, is basically a user-interface design problem, as is the attribute display portion. ABUSE is built into Dartmouth's homegrown email client, known as BlitzMail. The vast majority of email usage at the college occurs through BlitzMail, and its users cross all demographics, from students to faculty to staff. The size and variety of this installed base will provide a good volume of data, and the users' familiarity with the client will allow us to avoid worrying about users being confused by the general email portion of the UI when designing our studies. The benefits of cleaner user studies are worth taking on the challenge of integrating ABUSE with BlitzMail. In comparison to the iterative and subjective process of user interface design, creating an infrastructure for storing and distributing ABUSE attributes should be a much simpler task. Arguments could be made for storing attributes on the client side, but some details of the client platform upon which we are building our prototype dictate that we should instead provide a central attribute store, indexed by users' public keys. A user's client will have to prove knowledge of the user's private key in order to pull attributes out of the central attribute directory. Then, the client will allow the user to choose which of their attributes they wish to attach to a given message, if any. The sender's private key is then used to sign a hash of the chosen attributes and the message, and this signature is included with the rest of the ABUSE content. Forwarding behavior varied, with some clients stripping out headers for encapsulated messages and others maintaining them, but hiding them.
Attribute management
Our goals are to minimize the administrative burden on users and avoid the need for a dedicated administrator. Proxy Certificates (PCs) are structurally similar to X.509 Identity Certificates, except that they tend to be more short-lived and include some extra extensions that allow for the specification of arbitrary information. Chains of PCs can be created just as chains of identity certificates can, and attributes can be verified by recipients in the same way that identity certificate chains are verified. We envision the attribute generation process beginning with a local trust root granting a small set of attributes to some high-level members of the organization. The responsibility for maintaining portions of the attribute space is divided among many people, but each individual is only responsible for a small portion of the overall space. Since attributes all chain back to a single trust root, as long as a user's client is configured to trust that root, the client will be able to verify that the attributes are valid. This is not unreasonable, since we consider an environment inside an institution, where all clients can be configured to use a local Certification Authority (CA) as their trust root.
Attribute content
Some TM languages can only express certain kinds of relationships between principals. We do not believe that we can anticipate every relationship that a user would want to express between herself and another user, so defining a fixed set of legal relations would be undesirable. Other languages are sufficiently flexible, but imposing significant structure on attributes would either make the user interface for creating attributes more complex, or require the system to convert back and forth from an easily human-parsable format. Since we are not trying to automatically reason about attributes in ABUSE, this seems unnecessary.
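As a rough illustration of the signing step described above (the sender's private key signs a hash of the chosen attributes and the message), the sketch below uses RSA with SHA-256 from the .NET class library. The payload encoding and function names are our own assumptions; the actual ABUSE wire format is not specified here.

Imports System.Security.Cryptography
Imports System.Text

Module AbuseAttributeSigningSketch
    ' Binds the chosen attributes to a message by signing a hash of both
    ' with the sender's private key, as the module description outlines.
    Function SignAttributes(attributes As String, message As String, senderKey As RSA) As Byte()
        Dim payload As Byte() = Encoding.UTF8.GetBytes(attributes & vbLf & message)
        Return senderKey.SignData(payload, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1)
    End Function

    ' The recipient recomputes the hash and checks it against the signature
    ' using the sender's public key.
    Function VerifyAttributes(attributes As String, message As String,
                              signature As Byte(), senderPublicKey As RSA) As Boolean
        Dim payload As Byte() = Encoding.UTF8.GetBytes(attributes & vbLf & message)
        Return senderPublicKey.VerifyData(payload, signature,
                                          HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1)
    End Function
End Module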
Figure.1 Single-enterprise ABUSE system
Figure.2 ABUSE system that spans multiple enterprises
Data Flow Diagram
Attribute Management
Creating
Storing
Distributing
Displaying
ER Diagram
CHAPTER IV Feasibility Study The feasibility study is an evaluation of the proposed system regarding its workability, the organization's ability to meet user needs, and the effective use of resources. When a new application is proposed, it should go through a feasibility study before it is approved for development. There are three aspects of the feasibility study: 1. Technical Feasibility 2. Economic Feasibility 3. Operational Feasibility
Technical Feasibility: The considerations normally associated with technical feasibility include where the project is to be developed and implemented. The proposed software should provide security for the data and should be very fast in processing the data efficiently. A basic knowledge of operating a computer is sufficient to handle the system, since the system is designed to provide user-friendly access.
Economic Feasibility: Economic justification is generally the "bottom line" consideration for most systems. It includes a broad range of concerns, including the cost-benefit analysis. The cost-benefit analysis delineates the costs of project development and weighs them against the tangible and intangible benefits of the system. Hence, there are both tangible and intangible benefits from the project's development.
Operational Feasibility: The new system must be accepted by the user. In this system, the administrator is one of the users. As users are responsible for initiating the development of the new system, resistance to it is ruled out. Cost-benefit analysis (CBA) is an analytical tool for assessing the pros and cons of moving forward with a business proposal.
A formal CBA tallies all of the planned project costs, quantifies each of the tangible benefits and calculates key financial performance metrics such as return on investment (ROI), net present value (NPV), internal rate of return (IRR) and payback period. The costs associated with taking action are then subtracted from the benefits that would be gained. As a general rule, the costs should be less than 50 percent of the benefits and the payback period shouldn't exceed 12 months.
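As a purely illustrative sketch of the metrics named above, the code below computes NPV and a simple ROI for hypothetical project cash flows; the figures are invented for illustration and are not drawn from this project.

Module CostBenefitSketch
    ' NPV = sum of cashFlow(t) / (1 + rate)^t, with year 0 undiscounted.
    Function NetPresentValue(rate As Double, cashFlows As Double()) As Double
        Dim npv As Double = 0
        For t As Integer = 0 To cashFlows.Length - 1
            npv += cashFlows(t) / Math.Pow(1 + rate, t)
        Next
        Return npv
    End Function

    Sub Main()
        ' Year 0: development cost of 100,000; years 1-3: net benefits.
        Dim flows As Double() = {-100000, 60000, 50000, 40000}
        Console.WriteLine("NPV at 10%: " & NetPresentValue(0.1, flows).ToString("F0"))
        ' Simple ROI: total benefits divided by total cost.
        Console.WriteLine("ROI: " & ((60000 + 50000 + 40000) / 100000.0).ToString("P0"))
    End Sub
End Module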
A CBA is considered to be a subjective (as opposed to objective) assessment tool because cost and benefit calculations can be influenced by the choice of supporting data and estimation methodologies. Sometimes its most valuable use when assessing the value of a business proposal is to serve as a vehicle for discussion. Cost-benefit analysis is sometimes called benefit-cost analysis (BCA). The Cost Benefit Analysis Method (CBAM) consists of the following steps:
1. Choosing scenarios and architectural strategies
2. Assessing quality attribute benefits
3. Quantifying the benefits of architectural strategies
4. Quantifying the costs and schedule implications of architectural strategies
5. Calculating desirability and making decisions
CHAPTER V System Design System design concentrates on moving from the problem domain to the solution domain. This important phase is composed of several steps. It provides the understanding and procedural details necessary for implementing the system recommended in the feasibility study. Emphasis is on translating the performance requirements into design specifications.
The design of any software involves mapping the software requirements into functional modules. Developing a real-time application or any system utility involves two processes: the first is to design the system, and the second is to construct the executable code.
Software design has evolved from an intuitive art dependent on experience to a science which provides systematic techniques for software definition. Software design is the first step in the development phase of the software life cycle.
Before designing the system, user requirements were identified, and information was gathered to verify the problem and evaluate the existing system. A feasibility study was conducted to review alternative solutions and provide cost and benefit justification. To overcome the limitations of the existing system, the proposed system was recommended. At this point the design phase begins.
The process of design involves conceiving and planning out in the mind and making a drawing. In software design, there are three distinct activities: external design, architectural design and detailed design. Architectural design and detailed design are collectively referred to as internal design. External design of software involves conceiving, planning out and specifying the externally observable characteristics of a software product.
Input Design: Systems design is the process of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. Systems design could be seen as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering. Input design is the process of converting a user-oriented description of the inputs to a computer-based business system into a programmer-oriented specification.
• Input data were found to be available for establishing and maintaining master and transaction files and for creating output records.
• The most suitable types of input media, for either off-line or on-line devices, were selected after a study of alternative data capture techniques.
Input Design Considerations
• The field length must be documented.
• The sequence of fields should match the sequence of the fields on the source document.
• The data format must be identified to the data entry operator.
Design input requirements must be comprehensive. Product complexity and the risk associated with its use dictate the amount of detail.
• Functional requirements specify what the product does, focusing on its operational capabilities and the processing of inputs and resultant outputs.
• Performance requirements specify how much or how well the product must perform, addressing such issues as speed, strength, response times, accuracy, limits of operation, etc.
Output Design: A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be displayed for immediate need, as well as the hard copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system's relationship with the user and supports decision-making.
1. Designing computer output should proceed in an organized, well thought out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy and effective to use. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create documents, reports, or other formats that contain information produced by the system.
The output form of an information system should accomplish one or more of the following objectives:
• Convey information about past activities, current status or projections of the future.
• Signal important events, opportunities, problems, or warnings.
• Trigger an action.
• Confirm an action.
Software Description
The .NET Framework is a programming infrastructure created by Microsoft for building, deploying, and running applications and services that use .NET technologies, such as desktop applications and Web services. The .NET Framework contains three major parts:
• the Common Language Runtime
• the Framework Class Library
• ASP.NET
Microsoft started development of the .NET Framework in the late 1990s, originally under the name Next Generation Windows Services (NGWS). By late 2000 the first beta versions of .NET 1.0 were released. The .NET Framework (pronounced "dot net") is a software framework developed by Microsoft that runs primarily on Microsoft Windows. It includes a large class library and provides language interoperability (each language can use code written in other languages) across several programming languages. Programs written for the .NET Framework execute in a software environment (as contrasted to a hardware environment), known as the Common Language Runtime (CLR), an application virtual machine that provides services such as security, memory management, and exception handling. The class library and the CLR together constitute the .NET Framework.
.NET is an application software platform from Microsoft, introduced in 2002 and commonly called .NET ("dot net"). The .NET platform is similar in purpose to the Java EE platform and, like Java's JVM runtime engine, .NET's runtime engine must be installed on the computer in order to run .NET applications.
.NET Programming Languages
.NET is similar to Java in that it uses an intermediate bytecode language that can be executed on any hardware platform that has a runtime engine. Unlike Java, however, it provides support for multiple programming languages. Microsoft languages are C# (C Sharp), J# (J Sharp), Managed C++, JScript.NET and Visual Basic.NET. Other languages have been re-engineered to run under the ECMA-standardized version of .NET, called the Common Language Infrastructure.
.NET Versions
.NET Framework 1.0 introduced the Common Language Runtime (CLR), and .NET Framework 2.0 added enhancements. .NET Framework 3.0 included the Windows programming interface (API) originally known as "WinFX," which is backward compatible with the Win32 API. .NET Framework 3.0 added the following four subsystems and was installed with Windows, starting with Vista. .NET Framework 3.5 added enhancements and introduced a client-only version (see .NET Framework Client Profile). .NET Framework 4.0 added parallel processing and language enhancements.
The User Interface (WPF): Windows Presentation Foundation (WPF) provides the user interface. It takes advantage of the advanced 3D graphics found in many computers to display a transparent, glass-like appearance.
Messaging (WCF): Windows Communication Foundation (WCF) enables applications to communicate with each other locally and remotely, integrating local messaging with Web services.
Workflow (WWF): Windows Workflow Foundation (WWF) is used to integrate applications and automate tasks. Workflow structures can be defined in the XML Application Markup Language.
User Identity (WCS): Windows CardSpace (WCS) provides an authentication system for logging into a Web site and transferring personal information.
DESIGN FEATURES
Interoperability: Because computer systems commonly require interaction between newer and older applications, the .NET Framework provides means to access functionality implemented in newer and older programs that execute outside the .NET environment. Access to COM components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework; access to other functionality is achieved using the P/Invoke feature.
Common Language Runtime engine: The Common Language Runtime (CLR) serves as the execution engine of the .NET Framework. All .NET programs execute under the supervision of the CLR, guaranteeing certain properties and behaviors in the areas of memory management, security, and exception handling.
Language independence: The .NET Framework introduces a Common Type System, or CTS. The CTS specification defines all possible data types and programming constructs supported by the CLR and how they may or may not interact with each other, conforming to the Common Language Infrastructure (CLI) specification. Because of this feature, the .NET Framework supports the exchange of types and object instances between libraries and applications written using any conforming .NET language.
Base Class Library: The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library of functionality available to all languages using the .NET Framework. The BCL provides classes that encapsulate a number of common functions, including file reading and writing, graphic rendering, database interaction, XML document manipulation, and so on. It consists of classes and interfaces of reusable types that integrate with the CLR (Common Language Runtime).
Simplified deployment: The .NET Framework includes design features and tools which help manage the installation of computer software, to ensure it does not interfere with previously installed software and that it conforms to security requirements.
Security: The design addresses some of the vulnerabilities, such as buffer overflows, which have been exploited by malicious software. Additionally, .NET provides a common security model for all applications.
Portability: While Microsoft has never implemented the full framework on any system except Microsoft Windows, it has engineered the framework to be platform-agnostic, and cross-platform implementations are available for other operating systems (see Silverlight and the Alternative implementations section below). Microsoft submitted the specifications for the Common Language Infrastructure (which includes the core class libraries, Common Type System, and the Common Intermediate Language), the C# language and the C++/CLI language [8] to both ECMA and the ISO, making them available as official standards. This makes it possible for third parties to create compatible implementations of the framework and its languages on other platforms.
ARCHITECTURE:
Overview of the Common Language Infrastructure
Common Language Infrastructure (CLI): The purpose of the Common Language Infrastructure (CLI) is to provide a language-neutral platform for application development and execution, including functions for exception handling, garbage collection, security, and interoperability. By implementing the core aspects of the .NET Framework within the scope of the CLI, this functionality is not tied to a single language but is available across the many languages supported by the framework. Microsoft's implementation of the CLI is called the Common Language Runtime, or CLR.
Security: .NET has its own security mechanism with two general features: Code Access Security (CAS), and validation and verification. Code Access Security is based on evidence that is associated with a specific assembly. Typically the evidence is the source of the assembly (whether it is installed on the local machine or has been downloaded from the intranet or Internet). Code Access Security uses evidence to determine the permissions granted to the code. Other code can demand that calling code be granted a specified permission. The demand causes the CLR to perform a call-stack walk: every assembly of each method in the call stack is checked for the required permission; if any assembly is not granted the permission, a security exception is thrown.
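A small sketch of the stack walk just described, using the .NET Framework's FileIOPermission (CAS applies to the classic .NET Framework only; the folder path and method name are illustrative):

Imports System.Security
Imports System.Security.Permissions

Module CasDemandSketch
    ' Demand() triggers the call-stack walk described above: if any caller
    ' in the stack lacks read permission on the folder, the CLR throws
    ' a SecurityException before any file is touched.
    Sub ReadSensitiveFolder()
        Dim perm As New FileIOPermission(FileIOPermissionAccess.Read, "C:\SensitiveData")
        Try
            perm.Demand()
            ' ... safe to read files here ...
        Catch ex As SecurityException
            Console.WriteLine("Caller lacks FileIO read permission.")
        End Try
    End Sub
End Module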
Class library
Namespaces in the BCL [9] include: System, System.Diagnostics, System.Globalization, System.Resources, System.Text, System.Runtime.Serialization, and System.Data.
The .NET Framework includes a set of standard class libraries. The class library is organized in a hierarchy of namespaces. Most of the built-in APIs are part of either System.* or Microsoft.* namespaces. These class libraries implement a large number of common functions, such as file reading and writing, graphic rendering, database interaction, and XML document manipulation, among others. The .NET class libraries are available to all CLI compliant languages.
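For instance, two of the common functions mentioned above (file I/O and XML manipulation) take only a few lines against the BCL. This sketch uses System.IO and System.Xml; the file name and XML content are invented for illustration.

Imports System.IO
Imports System.Xml

Module BclUsageSketch
    Sub Main()
        ' File reading and writing via System.IO.
        File.WriteAllText("trip.txt", "Madurai to Bangalore")
        Console.WriteLine(File.ReadAllText("trip.txt"))

        ' XML document manipulation via System.Xml.
        Dim doc As New XmlDocument()
        doc.LoadXml("<package><place>Ooty</place></package>")
        Console.WriteLine(doc.SelectSingleNode("/package/place").InnerText)
    End Sub
End Module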
Memory management The .NET Framework CLR frees the developer from the burden of managing memory (allocating and freeing up when done); it handles memory management itself by detecting when memory can be safely freed. Instantiations of .NET types (objects) are allocated from the managed heap;
a pool of memory managed by the CLR. When there is no reference to an object, and it cannot be reached or used, it becomes garbage, eligible for collection. The .NET Framework includes a garbage collector which runs periodically, on a separate thread from the application's thread, that enumerates all the unusable objects and reclaims the memory allocated to them. VB.NET VB.NET uses statements to specify actions. The most common statement is an expression statement, consisting of an expression to be evaluated, on a single line. As part of that evaluation, functions or subroutines may be called and variables may be assigned new values. To modify the normal sequential execution of statements, VB.NET provides several control-flow statements identified by reserved keywords. Structured programming is supported by several constructs, including two conditional execution constructs ( If … Then … Else … End If
and Select Case ... Case ... End Select ) and three iterative execution (loop) constructs
( Do … Loop , For … To , and For Each ) . The For … To statement has separate initialization and testing sections, both of which must be present. (See examples below.) The For Each
statement steps through each value in a list.
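The examples referred to above, gathered into one small program (the values are arbitrary):

Module ControlFlowExamples
    Sub Main()
        Dim score As Integer = 72

        ' If ... Then ... Else ... End If
        If score >= 50 Then
            Console.WriteLine("Pass")
        Else
            Console.WriteLine("Fail")
        End If

        ' Select Case ... Case ... End Select
        Select Case score \ 10
            Case 9, 10
                Console.WriteLine("Grade A")
            Case 7, 8
                Console.WriteLine("Grade B")
            Case Else
                Console.WriteLine("Grade C")
        End Select

        ' For ... To: initialization and test both appear in the header.
        For i As Integer = 1 To 3
            Console.WriteLine("Iteration " & i)
        Next

        ' For Each: steps through each value in a list.
        For Each name As String In New String() {"Ram", "Sita"}
            Console.WriteLine(name)
        Next

        ' Do ... Loop
        Dim n As Integer = 0
        Do While n < 3
            n += 1
        Loop
        Console.WriteLine("n = " & n)
    End Sub
End Module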
In addition, in Visual Basic .NET:
There is no unified way of defining blocks of statements. Instead, certain keywords, such as "If … Then" or "Sub" are interpreted as starters of sub-blocks of code and have matching termination keywords such as "End If" or "End Sub".
Statements are terminated either with a colon (":") or with the end of line. Multiple line statements in Visual Basic .NET are enabled with " _" at the end of each such line. The need for the underscore continuation character was largely removed in version 10 and later versions.[2]
The equals sign ("=") is used both in assigning values to variables and in comparison.
Round brackets (parentheses) are used with arrays, both to declare them and to get a value at a
given index in one of them. Visual Basic .NET uses round brackets to define the parameters of subroutines or functions.
A single quotation mark ('), placed at the beginning of a line or after any number of space or tab characters at the beginning of a line, or after other code on a line, indicates that the (remainder of the) line is a comment.
CHAPTER VI Literature Survey

1. Title: Mail Security Gateway Mechanism for Email Security (2015)
   Methodology: A security gateway is designed for email, to protect email from external attackers.
   Disadvantage: Less convenience when confined in a large network.

2. Title: Enhancing Email Security by Signcryption based on Elliptic Curve (2013)
   Methodology: An elliptic curve based signcryption scheme is introduced to improve the security of electronic mails.
   Disadvantage: Less efficiency.

3. Title: BSPNN: A boosted subspace probabilistic neural network for email security (2011)
   Methodology: A novel Machine Learning algorithm, namely Boosted Subspace Probabilistic Neural Network (BSPNN), is proposed.
   Disadvantage: High computational cost.

4. Title: Enabling Email Confidentiality through the use of Opportunistic Encryption (2003)
   Methodology: A new approach to email security that employs opportunistic encryption and a security proxy to facilitate the opportunistic exchange of keys and encryption of electronic mail is proposed.
   Disadvantage: Less security.

5. Title: CryptoNET: Design and implementation of the Secure Email System (2009)
   Methodology: The design and implementation of a secure, high-assurance and very reliable Email system are described.
   Disadvantage: The proxy server represents a single point of failure.
CHAPTER VII System Testing Software Testing Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects). The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising the software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement. Software testing is the process of evaluating a software item to detect differences between given input and expected output; the features of the software item are also assessed. Testing assesses the quality of the product. Software testing should be done during the development process; in other words, software testing is a verification and validation process. Types of testing There are different levels during the process of testing. Levels of testing include the different methodologies that can be used while conducting software testing. Following are the main levels of software testing:
Functional Testing.
Non-Functional Testing.
Functional Testing
Functional Testing of the software is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. There are five steps that are involved when testing an application for functionality:
Step I: The determination of the functionality that the intended application is meant to perform.
Step II: The creation of test data based on the specifications of the application.
Step III: The output based on the test data and the specifications of the application.
Step IV: The writing of test scenarios and the execution of test cases.
Step V: The comparison of actual and expected results based on the executed test cases.
An effective testing practice will see the above steps applied to the testing policies of every organization and hence it will make sure that the organization maintains the strictest of standards when it comes to software quality. Unit Testing This type of testing is performed by the developers before the setup is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is separate from the test data of the quality assurance team. The goal of unit testing is to isolate each part of the program and show that the individual parts are correct in terms of requirements and functionality.
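As a brief, hypothetical illustration of a developer-written unit test, the sketch below uses the MSTest framework; the FareCalculator class is invented for this example and is not part of the project code.

Imports Microsoft.VisualStudio.TestTools.UnitTesting

' The unit under test: a tiny fare calculator (hypothetical).
Public Class FareCalculator
    Public Function TotalFare(baseFare As Decimal, persons As Integer) As Decimal
        If persons <= 0 Then Throw New ArgumentOutOfRangeException(NameOf(persons))
        Return baseFare * persons
    End Function
End Class

<TestClass>
Public Class FareCalculatorTests
    <TestMethod>
    Public Sub TotalFare_TwoPersons_DoublesBaseFare()
        Dim calc As New FareCalculator()
        Assert.AreEqual(2500D, calc.TotalFare(1250D, 2))
    End Sub
End Class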
Limitations of Unit Testing Testing cannot catch each and every bug in an application. It is impossible to evaluate every execution path in every software application, and the same is the case with unit testing. There is a limit to the number of scenarios and test data that the developer can use to verify the source code. After all options have been exhausted, there is no choice but to stop unit testing and merge the code segment with other units. Integration Testing The testing of combined parts of an application to determine if they function correctly together is Integration testing. There are two methods of Integration Testing:
Bottom-up Integration testing
Top- Down Integration testing
1. Bottom-up integration: This testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.
2. Top-down integration: In this testing, the highest-level modules are tested first and progressively lower-level modules are tested after that.
In a comprehensive software development environment, bottom-up testing is usually done first, followed by top-down testing. The process concludes with multiple tests of the complete application, preferably in scenarios designed to mimic those it will encounter in customers' computers, systems and network.
System Testing This is the next level in the testing and tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets Quality Standards. This type of testing is performed by a specialized testing team. System testing is so important because of the following reasons:
System Testing is the first step in the Software Development Life Cycle, where the application is tested as a whole.
The application is tested thoroughly to verify that it meets the functional and technical specifications.
The application is tested in an environment which is very close to the production environment where the application will be deployed.
System Testing enables us to test, verify and validate both the business requirements as well as the Applications Architecture.
Regression Testing Whenever a change in a software application is made it is quite possible that other areas within the application have been affected by this change. To verify that a fixed bug hasn't resulted in another functionality or business rule violation is Regression testing. The intent of Regression testing is to ensure that a change, such as a bug fix did not result in another fault being uncovered in the application. Regression testing is so important because of the following reasons:
Minimize the gaps in testing when an application with changes made has to be tested.
Testing the new changes to verify that the change made did not affect any other area of the application.
Mitigates Risks when regression testing is performed on the application.
Test coverage is increased without compromising timelines.
Increase speed to market the product.
Acceptance Testing This is arguably the most important type of testing, as it is conducted by the Quality Assurance Team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application. More ideas will be shared about the application and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated. Acceptance tests are intended not only to point out simple spelling mistakes, cosmetic errors or interface gaps, but also to point out any bugs in the application that will result in system crashes or major errors in the application. By performing acceptance tests on an application, the testing team will deduce how the application will perform in production. There are also legal and contractual requirements for acceptance of the system. Alpha Testing This test is the first stage of testing and will be performed amongst the teams (developer and QA teams). Unit testing, integration testing and system testing, when combined, are known as alpha testing. During this phase, the following will be tested in the application:
Spelling Mistakes
Broken Links
Cloudy Directions
The Application will be tested on machines with the lowest specification to test loading times and any latency problems.
Beta Testing This test is performed after Alpha testing has been successfully performed. In beta testing a sample of the intended audience tests the application. Beta testing is also known as pre-release testing. Beta test versions of software are ideally distributed to a wide audience on the Web, partly to give the program a "real-world" test and partly to provide a preview of the next release. In this phase the audience will be testing the following:
Users will install, run the application and send their feedback to the project team.
Typographical errors, confusing application flow, and even crashes.
Getting the feedback, the project team can fix the problems before releasing the software to the actual users.
The more issues you fix that solve real user problems, the higher the quality of your application will be.
Having a higher-quality application when you release to the general public will increase customer satisfaction.
Non-Functional Testing This section is based upon testing the application on its non-functional attributes. Non-functional testing involves testing the software against requirements which are non-functional in nature, but important as well, such as performance, security, user interface, etc. Some of the important and commonly used non-functional testing types are mentioned as follows: Performance Testing It is mostly used to identify any bottlenecks or performance issues rather than finding bugs in the software. There are different causes which contribute to lowering the performance of software:
Network delay.
Client side processing.
Database transaction processing.
Load balancing between servers.
Data rendering.
Performance testing is considered one of the important and mandatory testing types, in terms of the following aspects:
Speed (i.e. Response Time, data rendering and accessing)
Capacity
Stability
Scalability
It can be either qualitative or quantitative testing activity and can be divided into different sub types such as Load testing and Stress testing. Load Testing Load testing is a process of testing the behavior of the Software by applying maximum load in terms of Software accessing and manipulating large input data. It can be done at both normal and peak load conditions. This type of testing identifies the maximum capacity of Software and its behavior at peak time. Most of the time, Load testing is performed with the help of automated tools such as Load Runner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer, Visual Studio Load Test etc. Virtual users (VUsers) are defined in the automated testing tool and the script is executed to verify the Load testing for the Software. The quantity of users can be increased or decreased concurrently or incrementally based upon the requirements.
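Alongside the dedicated tools listed above, the basic idea of load testing can be illustrated in a few lines of code. This crude sketch fires 100 concurrent GET requests at a hypothetical endpoint URL and counts successes; it is in no way a substitute for tools such as JMeter or LoadRunner.

Imports System.Linq
Imports System.Net.Http
Imports System.Threading.Tasks

Module LoadTestSketch
    Sub Main()
        Dim url As String = "http://localhost:8080/holiday-planner/" ' hypothetical endpoint
        Using client As New HttpClient()
            ' Start 100 requests without waiting, then wait for all of them.
            Dim requests = Enumerable.Range(1, 100).
                Select(Function(i) client.GetAsync(url)).ToArray()
            Try
                Task.WaitAll(requests)
            Catch ex As AggregateException
                ' Some requests failed outright (e.g. connection refused).
            End Try
            Dim succeeded = requests.Count(
                Function(t) t.Status = TaskStatus.RanToCompletion AndAlso
                            t.Result.IsSuccessStatusCode)
            Console.WriteLine(succeeded & " of 100 requests succeeded.")
        End Using
    End Sub
End Module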
Stress Testing This testing type includes the testing of Software behavior under abnormal conditions. Taking away the resources, applying load beyond the actual load limit is Stress testing. The main intent is to test the Software by applying the load to the system and taking over the resources used by the Software to identify the breaking point. This testing can be performed by testing different scenarios such as:
Shutdown or restart of Network ports randomly.
Turning the database on or off.
Running different processes that consume resources such as CPU, Memory, server etc.
Usability Testing This section includes different concepts and definitions of Usability testing from Software point of view. It is a black box technique and is used to identify any error(s) and improvements in the Software by observing the users through their usage and operation. According to Nielsen, Usability can be defined in terms of five factors i.e. Efficiency of use, Learn-ability, Memorability, Errors/safety, satisfaction. According to him the usability of the product will be good and the system is usable if it possesses the above factors. Nigel Bevan and Macleod considered that Usability is the quality requirement which can be measured as the outcome of interactions with a computer system. This requirement can be fulfilled and the end user will be satisfied if the intended goals are achieved effectively with the use of proper resources. Molich in 2000 stated that user friendly system should fulfill the following five goals i.e. Easy to Learn, Easy to Remember, Efficient to Use, Satisfactory to Use and Easy to Understand. In addition to different definitions of usability, there are some standards and quality models and methods which define the usability in the form of attributes and sub attributes such as ISO-9126, ISO-9241-11, ISO-13407 and IEEE std.610.12 etc. UI vs Usability Testing
UI testing involves the testing of the Graphical User Interface of the software. This testing ensures that the GUI conforms to the requirements in terms of color, alignment, size and other properties. On the other hand, Usability testing ensures that a good and user-friendly GUI is designed and is easy to use for the end user. UI testing can be considered a sub-part of Usability testing. Security Testing Security testing involves testing the software in order to identify any flaws and gaps from a security and vulnerability point of view. Following are the main aspects which security testing should ensure:
Confidentiality.
Integrity.
Authentication.
Availability.
Authorization.
Non-repudiation.
Portability Testing Portability testing includes testing the software with the intent that it should be re-usable and can be moved from one environment to another. Following are the strategies that can be used for portability testing:
Transferring installed Software from one computer to another.
Building executable (.exe) to run the Software on different platforms.
Portability testing can be considered as one of the sub parts of System testing, as this testing type includes the overall testing of Software with respect to its usage over different environments.
CHAPTER VIII Conclusion This work introduces the ABUSE system, which we are building to exemplify our principle of leveraging humans in the design of secure systems. We chose to address secure email in particular due to several concerns about the expressiveness of S/MIME email technology, including cases in which names lack specificity, cases in which properties (not names) influence trust decisions, and cases in which properties mean different things in different contexts. ABUSE addresses the first two concerns by enabling users to delegate trustworthy attributes to each other, and then bind them to S/MIME messages sent over email. Humans are leveraged at both ends of the process: humans hand out attributes to each other, and humans decide whether the attributes bound to a message are enough to build trust in the displayed content. The third concern is addressed by bridging across distinct PKIs and by mapping foreign attributes into a local context. Through the development and testing of ABUSE, we hope to answer two long-term questions: whether issuing credentials in a distributed way will actually work, and whether users will actually understand these distributed credentials, enabling them to make more accurate trust judgments about incoming messages from unfamiliar senders.
REFERENCES
[1] A. Whitten and J. Tygar, “Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0,” in 8th USENIX Security Symposium, 1999.
[2] A. Whitten, “Making security usable,” Ph.D. dissertation, Carnegie Mellon University School of Computer Science, 2003.
[3] S. Garfinkel, “Design principles and patterns for computer systems that are simultaneously secure and usable,” Ph.D. dissertation, Massachusetts Institute of Technology, 2005.
[4] L. G. Zucker, “Production of trust: Institutional sources of economic structure, 1840–1920,” in Research in Organizational Behavior. JAI Press Inc., 1986, vol. 8, pp. 53–111.
[5] Y.-H. Chu, J. Feigenbaum, B. LaMacchia, P. Resnick, and M. Strauss, “REFEREE: Trust management for Web applications,” Computer Networks and ISDN Systems, vol. 29, no. 8–13, pp. 953–964, 1997.
[6] A. Schutz, “On multiple realities,” in Collected papers 1: the problem of social reality, M. Natanson, Ed. The Hague: Martinus Nijhoff, 1962, pp. 207–259.
[7] H. Garfinkel, “A conception of and experiments with “trust” as a condition of stable concerted actions,” in Motivation and social interaction: Cognitive determinants, O. Harvey, Ed. New York: Ronald Press, 1963, pp. 187–239.
[8] B. Ramsdell, “Secure/Multipurpose Internet Mail Extensions (S/MIME) version 3.1 message specification,” July 2004, RFC 3851.
[9] B. Ramsdell and S. Turner, “Secure/Multipurpose Internet Mail Extensions (S/MIME) version 3.1 certificate handling,” July 2004, RFC 3850.
[10] R. Housley, W. Ford, W. Polk, and D. Solo, “Internet X.509 Public Key Infrastructure Certificate and CRL Profile,” 1999, RFC 2459.
[11] D. R. Kuhn, V. C. Hu, W. T. Polk, and S.-J. Chang, “Introduction to public key technology and the federal PKI infrastructure,” NIST, February 2001, http://www.csrc.nist.gov/publications/nistpubs/800-32/sp800-32.pdf.
[12] R. Nielsen, “Observations from the deployment of a large scale PKI,” in 4th Annual PKI R&D Workshop, C. Neuman, N. E. Hastings, and W. T. Polk, Eds. NIST, August 2005, pp. 159–165.
[13] A. Kapadia, personal communication, Aug. 29, 2006.
[14] S. W. Smith, C. Masone, and S. Sinclair, “Expressing trust in distributed systems: the mismatch between tools and reality,” in Forty-Second Annual Allerton Conference on Privacy, Security and Trust, September 2004, pp. 29–39.
[15] J. Beale, personal communication, Sept. 3, 2006.
[16] N. Li and J. C. Mitchell, “RT: A role-based trust-management framework,” in Proceedings of The Third DARPA Information Survivability Conference and Exposition (DISCEX III). IEEE Computer Society Press, Los Alamitos, California, April 2003, pp. 201–212.
[17] N. Li, B. N. Grosof, and J. Feigenbaum, “Delegation logic: A logic-based approach to distributed authorization,” ACM Transactions on Information and System Security (TISSEC), vol. 6, no. 1, pp. 128–171, February 2003.
[18] N. Li, J. C. Mitchell, and W. H. Winsborough, “Design of a role-based trust management framework,” in Proceedings of the 2002 IEEE Symposium on Security and Privacy. IEEE Computer Society Press, Los Alamitos, California, May 2002.
[19] N. Li, J. C. Mitchell, and W. H. Winsborough, “Beyond proof-of-compliance: Security analysis in trust management,” Journal of the ACM, vol. 52, no. 3, May 2005.
[20] T. Jim, “SD3: A trust management system with certified evaluation,” in SP ’01: Proceedings of the 2001 IEEE Symposium on Security and Privacy. Washington, DC, USA: IEEE Computer Society, 2001, p. 106.
[21] A. Herzberg, Y. Mass, J. Michaeli, D. Naor, and Y. Ravid, “Access control meets public key infrastructure, or: Assigning roles to strangers,” in Proceedings of IEEE Symposium on Security and Privacy, May 2000, pp. 2–14.
[22] M. Blaze, J. Feigenbaum, and J. Lacy, “Decentralized trust management,” in Proceedings of IEEE Symposium on Security and Privacy, May 1996, pp. 164–173.
[23] M. Blaze, J. Feigenbaum, J. Ioannidis, and A. D. Keromytis, “The KeyNote trust-management system version 2,” September 1999, RFC 2704.
[24] H. Cunningham, “Information Extraction, Automatic,” Encyclopedia of Language and Linguistics, 2nd Edition, 2005.
[25] D. A. Norman, The Design of Everyday Things. Basic Books, 1988.
[26] E. Allman, J. Callas, M. Delaney, M. Libbey, J. Fenton, and M. Thomas, “DomainKeys Identified Mail Signatures (DKIM),” April 2006, Internet Draft, http://www.ietf.org/internet-drafts/draft-ietf-dkim-base-01.txt.
[27] R. Rivest and B. Lampson, “SDSI - A Simple Distributed Security Infrastructure,” April 1996, http://theory.lcs.mit.edu/~rivest/sdsi10.html.
[28] C. Ellison, B. Frantz, B. Lampson, R. Rivest, B. Thomas, and T. Ylonen, “SPKI Certificate Theory,” September 1999, RFC 2693.
[29] D. Chadwick, “The PERMIS X.509 role based privilege management infrastructure,” in Proceedings of 7th ACM Symposium on Access Control Models and Technologies (SACMAT 2002), 2002.
[30] N. Goffee, S. Kim, S. Smith, W. Taylor, M. Zhao, and J. Marchesini, “Greenpass: Decentralized, PKI-based Authorization for Wireless LANs,” in Proceedings of 3rd Annual PKI R&D Workshop. NIST/NIH/Internet2, April 2004.
[31] S. Tuecke, V. Welch, D. Engert, L. Pearlman, and M. Thompson, “Internet X.509 Public Key Infrastructure (PKI) Proxy Certificate Profile,” 2004, RFC 3820.
[32] S. Farrell and R. Housley, “An Internet Attribute Certificate Profile for Authorization,” 2002, RFC 3281.
[33] S. Cantor, J. Kemp, R. Philpott, and E. Maler, “Assertions and protocols for the OASIS security assertion markup language (SAML) v2.0,” 2005, http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf.
[34] T. Moses, “eXtensible Access Control Markup Language (XACML) version 2.0,” 2005, http://docs.oasis-open.org/xacml/2.0/access control-xacml-2.0-core-spec-os.pdf.
[35] “XrML frequently asked questions,” http://www.xrml.org/faq.asp, visited on Sept. 30, 2006.
[36] S. Brostoff, M. A. Sasse, D. Chadwick, J. Cunningham, U. Mbanaso, and O. Otenko, “RBAC What? Development of a Role-Based Access Control Policy Writing Tool for E-Scientists,” in Workshop on Grid Security Practice and Experience, Oxford, UK, July 2004, pp. V21–38. [Online]. Available: http://www.cs.kent.ac.uk/pubs/2004/2067
[37] R. Housley and T. Polk, Planning for PKI. Wiley, 2001.
[38] “Educause - Educause major initiatives - higher education bridge certification authority,” http://www.educause.edu/HEBCA/623, visited on Jan. 24, 2007.
[39] “W3C semantic web,” http://www.w3.org/2001/sw/, visited on Sept. 30, 2006.
[40] P. Bouquet, L. Serafini, and A. Zanobini, “Semantic Coordination: A New Approach and an Application,” in 2nd International Semantic Web Conference, October 2003, pp. 20–23.
[41] A. Doan, J. Madhavan, P. Domingos, and A. Halevy, “Learning to Map Between Ontologies on the Semantic Web,” in Proceedings of The Eleventh International WWW Conference, May 2002.
[42] M. Elkins, “MIME security with pretty good privacy (PGP),” October 1996, RFC 2015.
[43] D. Chadwick, G. Lunt, and G. Zhao, “Secure Role-based Messaging,” in Eighth IFIP TC-6 TC-11 Conference on Communications and Multimedia Security (CMS 2004), Windermere, UK, 2004. [Online]. Available: http://www.cs.kent.ac.uk/pubs/2004/2069
[44] G. Zhao and D. Chadwick, “Evolving messaging systems for secure role based messaging,” in 10th IEEE International Conference on Engineering of Complex Computer Systems