Dynamic Prevention of Cross-Site Scripting Attacks
Nowadays most people use web applications, irrespective of their educational status. As the number of users grows, the number and variety of attacks grows with it. It is the responsibility of web application developers to ensure users' security, i.e., that their information remains secure when they access a site.
Different types of attacks target web users, including SQL injection, denial of service, and cross-site scripting (XSS). Over the past few years XSS attacks have become a major security issue, occurring more frequently than other types of attack.
Several approaches exist for preventing XSS attacks, but even when they are applied, attackers retain some room to strike. We therefore develop a new approach, shadow pages, for dynamically preventing XSS attacks. It works by determining the web application's intended response and its actual response, comparing the two, and thereby eliminating any malicious code.
The web application's intent is determined by computing shadow pages alongside the actual response. Both are computed on the server side, before the response is sent to the user, and compared; any malicious code present in the real page is removed and a safe response is sent to the user.
Key words : Cross-site scripting (XSS), Attack Prevention, Web applications, Security

Aim :
To implement a technique for dynamically preventing Cross-Site Scripting attacks.

Objectives :
To prevent unauthorised script content from being output in the response from the server side, and to detect any malicious scriptable content that may pass undetected through any input filtering mechanism present in the web application code.

Scope :
The purpose of this article is the prevention of XSS attacks. We use the concept of shadow pages to prevent XSS attacks dynamically.

Introduction :

What is Cross-site scripting ?

Cross-site scripting (XSS) is a type of computer security vulnerability, typically found in web applications, that allows malicious web users to inject code into the web pages viewed by other users. Examples of such code include HTML and client-side scripts. An exploited cross-site scripting vulnerability can be used by attackers to bypass access controls such as the same-origin policy. Vulnerabilities of this kind have been exploited to craft powerful phishing attacks and browser exploits. Cross-site scripting was originally referred to as CSS, although this usage has been largely discontinued.

Cross-Site Scripting - Types of Attack

Type 0 or "DOM-based Attack"
The malicious code exploits incorrect usage of the page's DOM (Document Object Model) on the provided page.
The vulnerability lies on the client side, not the server.

Type 1 or "Reflected Attack" or "Non-Persistent Attack"
The malicious code is echoed by the server in an immediate response to an HTTP request from the victim.
Users generally assume that clicking on a URL is safe.
Error pages, search functions, and forms are prime XSS targets because they echo user input back to the browser.

Type 2 or "Stored Attack" or "Persistent Attack"
The malicious code is stored by the system, and may later be embedded by the vulnerable system in an HTML page provided to a victim.
All web-based collaboration features have this potential problem.
Email was the first: it allows an attacker to target a victim directly and to serve code from a server the user trusts to a user the server trusts.
Logs target an especially attractive high-value user: the administrator.

Frequency of XSS attacks in the real world :

Problem Definition :
Cross-site scripting poses server application risks that include, but are not limited to, the following:
Users can unknowingly execute malicious scripts when viewing dynamically generated pages based on content provided by an attacker.
An attacker can take over the user session before the user's session cookie expires.
An attacker can connect users to a malicious server of the attacker's choice.
An attacker who can convince a user to access a URL supplied by the attacker can cause script or HTML of the attacker's choice to be executed in the user's browser. Using this technique, the attacker can act with the privileges of the user who accessed the URL, such as issuing queries against the underlying SQL databases, viewing the results, and exploiting known faulty implementations on the target system.

DRAWBACKS OF THE EXISTING APPROACHES :

Filtering
It fails when the user supplies "broken" HTML.
A single script may be assembled from multiple output locations in a web application.
The output of a web application must be analyzed in its entirety to identify script content.
A robust mechanism to identify script content is needed, as there are a myriad of ways to encode unauthorized script content so that it escapes filters yet still executes in the client browser.

Input validation
While effective for most types of input, there are times when an application, by design, must be able to accept special HTML characters such as '<' and '>'. In these situations, HTML entity encoding is the only option.

Cookie security

Cookie security is effective in most situations (if an attacker is only after the cookie), but it breaks down when the attacker sits behind the same NATed IP address or web proxy as the victim. IE (since version 6) and Firefox (since version 2.0.0.5) support an HttpOnly flag, which allows a web server to set a cookie that is unavailable to client-side scripts. While beneficial, the feature neither prevents cookie theft entirely nor blocks attacks within the browser.

Eliminating scripts
The most significant problem with blocking all scripts on all websites by default is the substantial reduction in functionality and responsiveness (client-side scripting can be much faster than server-side scripting because it does not need to contact a remote server and the page or frame does not need to be reloaded).
Another problem with script blocking is that many users do not understand it, and do not know how to properly secure their browsers.
Another drawback is that many sites do not work without client-side scripting, forcing users to disable protection for that site and opening their systems to the threat.

Proposed Approach :
The central theme of XSS injection attacks is to introduce script code that performs malicious operations instead of the operations intended by the web application. A web application is written by a programmer implicitly assuming benign inputs, and it encodes the programmer's intention to output a particular web page on these inputs. The presence of an unauthorized script in the output, which will be executed by the browser, is a deviation from the web application's intentions.
The key idea in our approach is to learn the intention of the web application while it creates the HTTP response page. This is done through shadow pages, which are generated every time an HTTP response page is generated. These pages are similar to the real HTTP responses returned by the web application, with one crucial difference: they retain only the (authorized) scripts that the web application intended to include, and contain no injected scripts.
Fig. 1 :- Example server side application and generated HTML pages
Given the real and shadow pages, one can compare the script content present in the real page with the application-intended content present in the shadow page. Any "difference" detected here indicates a deviation from the web application's intentions and therefore signals an attack. As a running example, consider the code snippet of a simple web application given in Fig. 1 (i). This code embeds the user-specified name and generates Admin-script / Non-Admin-script depending on whether the user is an admin. Notice that the parameter "uName" is vulnerable to injection and can be exploited by supplying malicious values. Fig. 1 (ii) and (iii) show the responses generated for a benign user name uName=Alan and for a malicious user name uName=<script>evil();</script>, respectively. Conceptually, Fig. 1 (ii) is a shadow page (containing only the scripts intended for a non-admin user: f(), Non-Admin-script()) for the response shown in part (iii). The injected attack at line 3 of part (iii) has no equivalent script at line 3 of the shadow page in part (ii), giving an intuitive example of attack detection in our approach.
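Since Fig. 1 itself is not reproduced here, the vulnerable fragment it describes can be sketched as follows. This is an illustrative reconstruction, not the paper's actual code: the names uName, f, Admin-script, and Non-Admin-script come from the text above; the class name, method signature, and page structure are our own assumptions.

```java
// Illustrative sketch of the Fig. 1 (i) style web application fragment:
// a response builder that embeds the user-supplied "uName" parameter
// verbatim, making it vulnerable to script injection.
public class RunningExample {

    // Builds the HTML response for the given user name and role.
    public static String buildResponse(String uName, boolean isAdmin) {
        StringBuilder re = new StringBuilder();
        re.append("<html><body>\n");
        re.append("<script> f(); </script>\n");           // script intended for all users
        re.append("Hello ").append(uName).append("!\n");  // user input embedded unescaped
        if (isAdmin) {
            re.append("<script> Admin-script(); </script>\n");
        } else {
            re.append("<script> Non-Admin-script(); </script>\n");
        }
        re.append("</body></html>");
        return re.toString();
    }
}
```

For uName=Alan the page contains only the intended scripts, while for uName=<script>evil();</script> the response carries a third, injected script block.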
Fig. 2. The XSS-GUARD server side defense approach
Fig. 2 depicts the block-level architecture of our approach. In the pre-deployment view, a web application is retrofitted (step A) through an automated transformation to facilitate the generation of shadow pages, and then deployed (step B) in place of the original application. In the post-deployment view, for any HTTP request received (step 1) by the web application, the instrumented application generates (step 2) a shadow page corresponding to the actual HTTP response (the real page). The real and shadow pages are compared (step 3) for equivalence of script content, and any attacks found in the real page are eliminated. The modified HTTP response page is then sent (step 4) to the client.
The key benefits of the XSS-GUARD approach are:
- Deployment friendly. Our approach does not require any significant level of human involvement in terms of code changes to be applied for XSS defense. It is based on a fully automated program transformation technique that removes the injected scripts.
- Strong resilience. Our approach is highly resilient to some very subtle scenarios that occur in XSS inputs, as illustrated by our comprehensive evaluation.
- Acceptable overheads. Our approach does not impose an undue burden on web application performance.

Shadow Pages: Computing web application intent
A web application is written implicitly assuming benign inputs (with filtering to remove malicious input); it encodes the programmer's intention to output a particular web page on these inputs. The XSS-GUARD approach captures these intentions using shadow pages. Naturally, the shadow page differs according to the input provided to the web application; a shadow page is therefore defined for a particular run of the web application. Formally, a shadow page of a web application P on any input u is the output response of the web application on some benign input v on which P traverses the same path as it traverses on u. Finding such benign inputs v is, in general, undecidable. We avoid this problem by using manifestly benign inputs (such as a string of a's) and forcing the web application to act on these benign inputs along the control path dictated by the real inputs.
Fig. 3. Transformed running example and generated HTML pages (real and shadow)
More specifically, in order to construct the shadow page, we use explicitly benign user inputs: those that do not contain any meta-characters of the scripting language. As these inputs are manifestly benign and contain no script content, the corresponding web application output will be free of injected script content, while retaining content authorized by the web application. Hence, an HTTP request with explicitly benign inputs will result in an exploit-free HTML response from the web application.
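As a minimal sketch, generating an explicitly benign shadow value of the same length as the real input (the "string of a's" mentioned above) can look like this; the class and method names are our own, not from the paper.

```java
public class ShadowInput {
    // Returns an explicitly benign value of the same length as the real
    // input: a run of 'a' characters contains no scripting meta-characters,
    // so it can never contribute script content to the shadow page.
    public static String benignShadow(String realInput) {
        char[] benign = new char[realInput.length()];
        java.util.Arrays.fill(benign, 'a');
        return new String(benign);
    }
}
```

Preserving the length is deliberate: it is what later keeps the real and shadow pages aligned offset-for-offset.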
We automatically transform the original web application to generate the shadow response pages in addition to the real response pages.
For every string variable v in the program, we add a variable vc that denotes its shadow. When v is initialized from the user input, vc is initialized with an explicitly benign value of the same length as v. If v is initialized by the program, vc is also initialized with the same value.
For every program instruction on v, our transformed program performs the same operation on the shadow variable vc. Departure from these mirrored operations comes in handling conditionals, where the shadow computation needs to be forced along the path dictated by the real inputs. Therefore, the logic for path-selection in the program is not transformed and acts on the real inputs.
Each output-generating statement (one that writes output to the client) is replaced by code that appends its arguments to a buffer; this is done for both the real and the shadow values. After the last write operation, the transformation adds an invocation of a method responsible for detecting and disabling XSS attacks.
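The rewriting steps above can be sketched on the running example as follows. This is a hand-written illustration of what the automated transformation would produce, not the paper's actual transformed code; the buffer names re and sh follow the text, and the rest is assumed.

```java
public class TransformedExample {

    // Returns { realPage, shadowPage } for the running example.
    public static String[] buildPages(String uName, boolean isAdmin) {
        // Shadow of the user input: explicitly benign, same length.
        String uNameC = "a".repeat(uName.length());

        StringBuilder re = new StringBuilder(); // real page buffer
        StringBuilder sh = new StringBuilder(); // shadow page buffer

        // Every output statement is mirrored on both buffers: program
        // constants go into the shadow unchanged, user input goes in
        // as its benign shadow.
        re.append("<script> f(); </script>\n");
        sh.append("<script> f(); </script>\n");

        re.append("Hello ").append(uName).append("!\n");
        sh.append("Hello ").append(uNameC).append("!\n");

        // Path selection is NOT transformed: the branch is taken on the
        // real value, forcing the shadow down the same control path.
        if (isAdmin) {
            re.append("<script> Admin-script(); </script>");
            sh.append("<script> Admin-script(); </script>");
        } else {
            re.append("<script> Non-Admin-script(); </script>");
            sh.append("<script> Non-Admin-script(); </script>");
        }
        return new String[] { re.toString(), sh.toString() };
    }
}
```

Because the shadow input has the same length as the real one and all other appends are identical, the two pages come out equal in length, with authorized scripts at identical offsets.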
The transformed web application for the running example is shown in Fig. 3, together with the real and shadow pages it generates. The real and shadow pages are stored in the variables re and sh respectively, following the transformation outlined above. At line 23 of the transformed application, the real and shadow pages are passed to a routine XSS-PREVENT, which identifies and removes all injected attacks and returns a retrofitted page that is then sent to the client.
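A drastically simplified version of such a comparison routine might look like the following. It is a naive substring-based sketch that leans on the equal-length, aligned-offset property described below; a real implementation must identify script content the way a browser would, not just match literal <script> tags.

```java
public class XssPrevent {
    // Scans the real page for <script>...</script> regions and keeps a
    // region only if the shadow page carries the identical script at the
    // same offsets. Authorized scripts appear verbatim in the shadow
    // page; injected ones align with benign 'a' filler and are dropped.
    public static String prevent(String real, String shadow) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < real.length()) {
            int start = real.indexOf("<script>", i);
            if (start < 0) { out.append(real.substring(i)); break; }
            int end = real.indexOf("</script>", start);
            if (end < 0) { out.append(real.substring(i)); break; }
            end += "</script>".length();
            out.append(real, i, start);              // keep non-script content
            String region = real.substring(start, end);
            if (shadow.regionMatches(start, region, 0, region.length())) {
                out.append(region);                  // intended by the application
            }                                        // else: injected, drop it
            i = end;
        }
        return out.toString();
    }
}
```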
The generated shadow pages possess the following properties:
- The set of scripts in the shadow page is precisely that intended for the control path dictated by the real inputs. This is by virtue of a transformation that "mirrors" the computation on manifestly benign values on the same control path dictated by the real inputs. More specifically, when the user input is admin, the shadow page will contain the scripts f and Admin-script (and only those), and for a non-admin user, the shadow page will only contain the scripts f and Non-Admin-script.
- The transformation keeps the length of the shadow page the same as that of the real page. This holds as long as the functions defined in the web application are length preserving, a criterion satisfied by all the functions in the Java Standard library string manipulation suite. As a result, the shadow and real pages have the same length. Moreover, the offsets of script content in the real and shadow pages are identical; e.g., the start and end offsets of Non-Admin-script are the same in both the real and the shadow pages.

Conclusion :
Nowadays most people, irrespective of their educational status, use web applications. The number of attacks is also vast and their impact is growing; XSS attacks are among them. In this article we proposed a technique for dynamically preventing XSS attacks. We used the concept of shadow pages, which leaves no scope for malicious code to be sent in the response pages to the web application's users.