Introduction to Testing Web Application Security
by Tim Van Tongeren
Journal of Software Testing Professionals, September/December 2001

At a Glance: Because we now have new opportunities to communicate, we also have new opportunities for misuse. Even though every application is different, there are some general industry-standard risk areas for Web applications. This article addresses Web security testing at the application level.

Unlike most client-only applications, Web-based applications have viewable code, access to the contents of the Web server, and information that can be intercepted. Because of these new opportunities to communicate, there are also new opportunities for misuse. Most of the security testing literature looks at vulnerabilities of the network and Web server, but this article discusses Web security testing at the application level. Much attention is given to the network and Web server, but sometimes the applications they house are not secure. Additionally, when testing a Web application, the tester does not always know the network architecture, operating system, or Web server for each implementation of the application. Finally, the risk areas discussed in this article allow even average users to stumble onto secure information. These risk areas do not require a malicious user to run a Perl script or use network penetration tools; they can be exploited with a browser and a text editor.

The amount of security testing one should perform generally depends on the type of application under test. The level of testing required should be based on the security requirements, which should be defined by a requirements analyst, like other nonfunctional requirements. When there are no requirements, or the requirements are incomplete, the tester can use the risk areas identified in this article to raise concerns. Even though every application is different, there are some general industry-standard risk areas for Web applications. The risk areas we will examine in this article are:

• User authentication
• User authorization
• Security holes in the application
• Data access through the URL
• Altered client code

User Authentication
In order to be identified as a user, and so gain authority to perform tasks, some Web systems require users to register and be verified via a login process. Generally, users log in with a User ID and a password. Before a user can log in to the application, a user profile must be created in the system. For some Web applications, the user can register and create an online profile. For other Web applications, only the system administrator can create user profiles.

The registration process should be tested for potential security risks. For example, if the user sets the password, it should be a required field. If the password is assigned by the system, there should be business rules concerning the generation, delivery, and first use of passwords. Likewise, duplicate registrations for the same user should not be allowed. The verification portion of the application should be tested to ensure that valid user and password combinations allow access and that invalid combinations do not. There may be other security requirements, such as a log file of failed logins or the disabling of a profile after a certain number of failed login attempts. There should also be verification of proper logout processing, including the handling of session timeouts, user-requested logouts, and logouts by navigating away from the site.

Another potential security risk with logins is when cookies are used. A cookie is information stored on the client machine
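The lockout requirement mentioned above can be made concrete with a toy model. This is a minimal sketch, not the article's system; the three-attempt limit and the class and method names are assumptions chosen for illustration:

```python
MAX_FAILURES = 3  # hypothetical requirement: lock the profile after 3 bad logins

class LoginService:
    """Toy login service for exercising a lockout requirement."""

    def __init__(self, users):
        self.users = users      # user_id -> password
        self.failures = {}      # user_id -> consecutive failed attempts

    def login(self, user_id, password):
        # A locked profile stays locked even if the correct password arrives.
        if self.failures.get(user_id, 0) >= MAX_FAILURES:
            return "locked"
        if self.users.get(user_id) == password:
            self.failures[user_id] = 0
            return "ok"
        self.failures[user_id] = self.failures.get(user_id, 0) + 1
        return "denied"

svc = LoginService({"USERNAME1": "secret"})
assert svc.login("USERNAME1", "wrong") == "denied"
assert svc.login("USERNAME1", "wrong") == "denied"
assert svc.login("USERNAME1", "wrong") == "denied"
# Fourth attempt: correct password, but the profile is now locked.
assert svc.login("USERNAME1", "secret") == "locked"
```

A tester can derive positive and negative test cases directly from such a model: each transition (valid login, failed login, login against a locked profile) becomes one scenario to run against the real application.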
with which the Web server can interact (read and write privileges). Cookies can store any information the user sends to the Web application, whether entered by the user or sent automatically by the client browser. Additionally, cookies may store information derived by the server. Generally, cookies store things like names, User IDs, passwords, and browsing trends. By storing the authentication information on the client machine, the user does not need to enter the User ID and password at each login. With these applications, the system should be tested to ensure that cookie processing is secure. The most common way to ensure cookie security is encryption. If cookies are not encrypted, the cookie file could be read or altered. A malicious user who knew the structure of the cookies used by the application could check clients for those cookies and extract additional personal information from them. Likewise, he could create a cookie on his own client to pose as another user and be allowed to perform the tasks permitted for that user.
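To see why unprotected cookies are forgeable, it helps to sketch a countermeasure. The article recommends encryption; a related technique (not from the article) is signing the cookie value so the server can detect tampering. The secret key and the cookie layout below are invented for illustration:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical key; never sent to the client

def make_cookie(user_id):
    """Issue a cookie value of the form 'user|signature'."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}|{sig}"

def verify_cookie(cookie):
    """Return the user ID if the signature checks out, else None."""
    user_id, _, sig = cookie.partition("|")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None

good = make_cookie("USERNAME1")
assert verify_cookie(good) == "USERNAME1"
# A forged cookie reuses USERNAME1's signature with a different name — rejected.
assert verify_cookie("USERNAME2|" + good.split("|")[1]) is None
```

An unsigned, unencrypted cookie has no such check, which is exactly what lets a malicious user write a cookie by hand and pose as another user.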
User Authentication with Cookies
Let's go through a sample test of cookies. The cookie for this application stores the username. There is also a requirement that the most recent successful login shall become the default User ID for that computer. Based on these requirements, here are a few scenarios we can test:

Scenarios
1. Valid login
2. Valid login, then return to the site and use the cookie for authentication
3. Valid login, then a valid login with a different username
4. Invalid login due to an unregistered username
5. Invalid login due to an incorrect password for the username
Using these scenarios, we can consolidate them into test cases. Test Case 1 incorporates Scenarios 1 and 2; Test Case 2 incorporates Scenarios 3 and 2; Test Case 3 incorporates Scenarios 2 and 4; and Test Case 4 incorporates Scenarios 2 and 5.

Test Cases
1. Log in with USERNAME1. Verify that the cookie stores USERNAME1. Go back to the site and verify that the cookie stored USERNAME1 as the default username for that client.
2. Log in with USERNAME1. Log out and verify that the cookie stored USERNAME1. Return to the site and log in with USERNAME2. Verify that the cookie stores USERNAME2.
3. Log in with USERNAME1. Log out and attempt to log in with USERNAME3 (which does not exist). Verify that the system gives a login error. Verify that the cookie stored USERNAME1. Go back to the site and verify that it allows login with USERNAME1.
4. Log in with USERNAME1. Log out and attempt to log in with USERNAME2 and an incorrect password. Verify that the system gives a login error. Verify that the cookie stored USERNAME1. Go back to the site and verify that it allows login with USERNAME1.

To verify that the cookie stores the correct User ID, the tester can take either a black-box or a white-box approach. Using the black-box approach, the tester would use the Web application to display the current User ID; even if the password is not stored, the User ID may populate the login field. The white-box approach would have the tester verify the contents of the cookie file on the client machine. Since cookies are generally encrypted, this may require a separate program that decrypts the contents of the cookie for verification.

A variation on this application might
include a checkbox on the login screen that lets the user tell the system to store the username (e.g., "remember me"). In this case, you will need to make sure that when the box is checked, the cookie stores the username, and that when it is not checked, the cookie does not store the new username but retains the current default username. Also, when testing cookies, don't forget to delete old ones whenever a test case requires the server to treat the client as a new visitor. Between executions of the test cases described in the example above, the cookie should be deleted on the client machine as an initialization step to ensure a clean test.
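The white-box verification step described above can be partially automated. Assuming the application leaves an unencrypted, Netscape-format cookies.txt on the client, and assuming a hypothetical cookie named default_user holds the default User ID, a tester might check it like this:

```python
import os
import tempfile
from http.cookiejar import MozillaCookieJar

def stored_cookie_value(cookie_file, name):
    """Return the value of the named cookie from a Netscape-format cookie file."""
    jar = MozillaCookieJar(cookie_file)
    jar.load(ignore_discard=True, ignore_expires=True)
    for cookie in jar:
        if cookie.name == name:
            return cookie.value
    return None

# Build a sample cookie file as the application might leave it after login.
sample = "\n".join([
    "# Netscape HTTP Cookie File",
    "www.server.com\tFALSE\t/\tFALSE\t2147483647\tdefault_user\tUSERNAME1",
    "",
])
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(sample)
    path = f.name

value = stored_cookie_value(path, "default_user")
assert value == "USERNAME1"
os.unlink(path)
```

If the application encrypts its cookies, as the article recommends, this check would additionally need the decryption step mentioned above before the value could be compared.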
User Authorization
Once granted permission to access the application, the user is allowed to perform tasks. Most Web applications have multiple user groups, or roles, with different levels of authority. These groups may include the general public, registered members, moderators, and the administrator. Members of each group have different privileges within the application. The definition of each group and the access allowed to its members should be specified in the requirements. To verify that these rules are enforced by the system, the tester will need to perform positive and negative testing for each system task with a user profile from each group. For example, let's look at some security requirements for a message board application. (See Table 1.) To perform the tests in Table 1, you should create a user in each of the user groups. Then, for each user, test each of the authorities.

User Group      Read Message   Post Message   Delete Message   Delete User
Public          Allowed        Prohibited     Prohibited       Prohibited
Members         Allowed        Allowed        Prohibited       Prohibited
Moderator       Allowed        Allowed        Allowed          Prohibited
Administrator   Allowed        Allowed        Allowed          Allowed

Table 1: User groups and authorities for a message board application

If the system supports the functionality to change a user's group, then additional testing should be done to ensure that the allowed actions change as well. In the above example, if a user is moved from the Moderator group to the Members group, he should no longer be able to delete a message.
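A matrix like Table 1 lends itself to a small data-driven harness that runs every (group, action) pair and reports mismatches. The role and action names, and the attempt_action callback that would drive the real application, are placeholders; this sketch assumes moderators may delete messages, as in the article's example:

```python
# Authorization matrix in the spirit of Table 1 (True = Allowed).
EXPECTED = {
    "public":        {"read": True,  "post": False, "delete_message": False, "delete_user": False},
    "member":        {"read": True,  "post": True,  "delete_message": False, "delete_user": False},
    "moderator":     {"read": True,  "post": True,  "delete_message": True,  "delete_user": False},
    "administrator": {"read": True,  "post": True,  "delete_message": True,  "delete_user": True},
}

def authorization_failures(attempt_action):
    """Try every (role, action) pair; return the pairs whose outcome is wrong."""
    failures = []
    for role, actions in EXPECTED.items():
        for action, allowed in actions.items():
            if attempt_action(role, action) != allowed:
                failures.append((role, action))
    return failures

# A deliberately buggy stand-in for the application: it lets members delete messages.
def buggy_app(role, action):
    if role == "member" and action == "delete_message":
        return True
    return EXPECTED[role][action]

assert authorization_failures(buggy_app) == [("member", "delete_message")]
```

The same harness covers the group-change requirement: after moving a user between groups, rerun the full matrix for that user and expect the new row's outcomes.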
Security Holes in the Application
During security testing, the tester should verify that the system is safeguarded against a command-language invocation attack. This type of attack attempts to force the server to execute operating system commands by using escape characters, overflowing the buffer, or overriding parameters in the programs. The easiest targets are the default programs included with a system, since their vulnerabilities are well known. For example, several CGI programs come standard with UNIX. Since anyone with UNIX has access to them, malicious users can figure out how to use or break them. With some of these programs, if a bad character is sent as a parameter, the program exits, and the rest of the command, still in the buffer, is executed at the command line.

So, let's say that a backtick is an escape character that exits a specific program named lookup.pl. This program would work fine until it attempted to process a backtick. Upon hitting the backtick character, the program would hit an error condition and fail over to the command line. Everything after the backtick, still in the buffer, would be executed at the command line. Imagine what would happen if you typed in this URL:

www.server.com/cgi-bin/lookup.pl`rm -rf

Other, less destructive attacks might only steal password files or other sensitive data. But in this case, the command "rm -rf" would be sent to the system's command line, and it would start deleting files. The same principle applies to proprietary applications: buffer overflows and escape characters should not give access to the operating system. To find this security hole, test the application with escape characters, and make sure that the system does not roll over to the command line when the application receives bad parameters. In the above example, the application should give a general error message instead.

Data Access Through the URL
Another attack that can be made from the location field of a browser is parameter tampering. A malicious user using this technique modifies the parameters of SQL statements in the URL to try to retrieve or modify data. You may have done something similar to save time. Let's say you are looking up stock prices on a certain site. If you like viewing the five-year chart of a stock's performance, but the default is the one-year chart, you may just go up to the URL and change the year field. If you want to switch tickers, you may just change the ticker in the URL rather than submitting a new request through the Submit button. For a certain application, the URL for the five-year, detailed chart of McDonald's might be:

www.server.com/5yr+DetailView+MCD

When this specific application receives these parameters, it submits a query for a five-year set of data with the detailed view for the ticker "MCD". Each value fills a specific field in a pre-built query that is sent to a database. After retrieving the information from the database, the application builds a Web page. Based on the previous example, we can view the chart for IBM by changing "MCD" to "IBM":

www.server.com/5yr+DetailView+IBM

Now, let's take this data access a step further. Let's say your current URL is this:

www.server.com/pgm.exe&Function=Review&Order=1029343

This URL returned a page that reviewed customer order #1029343. Now, what would happen if the server did not verify commands before invoking them? If you changed the order number, the system would allow you to see other orders, along with their details. What if you changed the Function from Review to Create, Update, or Delete? Some URLs are not that obvious, but unless the URL is encrypted, it doesn't take very long to figure out the syntax. One preventative measure that sites can use is passwords or entry codes that change every second; if the command in the URL doesn't carry the code-of-the-second, the server won't perform the query.

For a stock application like the one discussed above, public users can usually view all of the data. However, with other applications, users should not have access to all data in the database. The server should verify the data before simply processing it. For example, what would the server do with this URL?

www.server.com/pgm.exe&Function=Review&Order=Hello!

After trying this on one Web server, the system presented an error screen that contained a login to administer the database! The data entered caused a variable type mismatch error, which rolled over to a generic error-handling module. That module pointed to a login page for the database, which may have been helpful during development, but not in a production environment.
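The common defense against both the escape-character and the parameter-tampering attacks above is for the server to validate every parameter against a whitelist before building any command or query. A minimal sketch, using the hypothetical Function and Order parameters from the example; the allowed values and formats are assumptions:

```python
import re

ALLOWED_FUNCTIONS = {"Review"}           # whitelist what may be invoked, not a blacklist
ORDER_RE = re.compile(r"^\d{1,10}$")     # order numbers: digits only, bounded length

def validate_request(function, order):
    """Reject anything that is not an expected command with a well-formed ID."""
    if function not in ALLOWED_FUNCTIONS:
        return "error: unsupported function"
    if not ORDER_RE.fullmatch(order):
        return "error: malformed order number"
    return f"ok: {function} order {order}"

assert validate_request("Review", "1029343").startswith("ok")
# Tampered function name: rejected before it reaches any query builder.
assert validate_request("Delete", "1029343").startswith("error")
# Bad characters (Hello!, backticks, etc.) fail the format check with a
# general error message, rather than rolling over to an error screen.
assert validate_request("Review", "Hello!").startswith("error")
assert validate_request("Review", "1`rm -rf").startswith("error")
```

A tester can exercise this logic from the outside by feeding each class of bad input through the URL and confirming the application answers with a generic error page, never a command-line rollover or a database error screen.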
Altered Client Code
Rather than having application parameters in the address, as described above, some applications have parameters in the client code. Anyone viewing the source code could see those parameters and attempt a similar attack, called embedded SQL hijacking. Data access that should be disallowed might be allowed, but there are other problems as well. Parameters such as item prices in the client code can be altered. If the server does not verify these fields, the customer could purchase the item at the wrong price.

To make matters more complicated, verification logic can live inside the client code. In an effort to improve the performance of the Web server, developers may include all of the verification logic with the client rather than the server. Even so, the client code can be altered. Client-based verification logic for things such as field lengths, data formats, and mandatory fields is dangerous. If the field-length edits are changed, a buffer overflow would be allowed and might crash the application. If the data format verifications are altered, the application could write data improperly to the database, causing lost or corrupted data. If the client is changed to allow mandatory fields to be bypassed, a customer request may not be fulfilled.

Each program on the Web server should verify all data it receives: from clients, from other programs on the Web server, and from third-party sources. If a program assumes the data is correct and processes it, there could be a security hole. Client code can be changed, and programs can be written to send malicious transactions to your program. To test this, a tester can save the page to a local drive, then modify the code to attempt the desired query, bypass the client edits, or alter the price of an item. Finally, the tester opens the page in a browser. The page should make its call to the Web server as if it were the real page. If the server does not catch the improper requests from the altered client, there is probably a security concern.
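The server-side check for the altered-price problem can be sketched as follows; the catalog, the SKUs, and the quantity limits are invented for illustration. The key point is that the server prices the order from its own data and treats any client-supplied price only as something to cross-check:

```python
# Hypothetical catalog: the server's own source of truth for prices.
CATALOG = {"SKU-100": 19.99, "SKU-200": 5.49}

def build_order(sku, quantity, client_price):
    """Price an order server-side; never trust fields posted by the client form."""
    real_price = CATALOG.get(sku)
    if real_price is None:
        return "error: unknown item"
    if client_price != real_price:          # altered client code detected
        return "error: price mismatch"
    if not (isinstance(quantity, int) and 1 <= quantity <= 99):
        return "error: bad quantity"        # re-check the client-side edits too
    return round(real_price * quantity, 2)

assert build_order("SKU-100", 2, 19.99) == 39.98
# A page saved locally and edited to post a 1-cent price is caught here.
assert build_order("SKU-100", 2, 0.01) == "error: price mismatch"
assert build_order("SKU-100", 0, 19.99) == "error: bad quantity"
```

The saved-page test described above is exactly how a tester probes for the absence of such checks: if the altered price goes through, the server is trusting the client.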
Testers wanting to learn more about Web application security might also be interested in security risks due to improper error handling. Overly descriptive error messages sometimes include data that can be exploited for unauthorized use. Improper error handling sometimes exposes login screens or other points of access not meant for the public user. Additionally, performance problems, such as simultaneous database updates or system crashes, occasionally cause errors that subsequently offer information about the system or login screens.

After testing the security of the application, the tester may need to validate the security of third-party applications with which the main application interacts, the Web server that hosts the application, and the network on which the Web server resides.

Conclusion
Many opportunities exist in Web applications for security violations. The good news is that you are now aware of some of those issues and can incorporate them into your next test plan. With this knowledge you should be better prepared to test Web applications and to protect your company's software from unauthorized use.

About the Author
Tim Van Tongeren is a senior quality assurance analyst at WorldCom. He has experience across the software development lifecycle, working on government and private projects with several Fortune 500 companies.

Call for Submissions
JSTP is looking for articles reflecting real-life experiences in any of the following areas:
• Testing
• Requirements
• Bug Tracking and Reporting
• Incident Management
• Configuration Management
• Release Management
• Risk Assessment
• Test Process Measurements and Improvements
For guidelines, please visit our Web site: www.testinginstitute.com