
11 Fallacies of Web Application Security

Written on: May 13, 2013

Introduction

By far, application security testing is one of the best parts of my job. Working one-on-one with application developers, I find that nearly all want to do the right thing when it comes to security, but many hold common misconceptions about security vulnerabilities (and the necessary remediation actions). This is my attempt to address some of those misconceptions surrounding authentication, session management, access control, SQL Injection, XSS, and CSRF vulnerabilities.

If you’re a security professional, some (or all) of these may seem fairly obvious, but as I continue to hear these and similar assertions, it may be an indicator that we need to do a better job communicating with developers and the business units (preferably during design and development, not immediately before or after product release). This is also not meant to be a dig at developers. I think security has traditionally not been a major component of application development (from the classroom to the workplace). Collectively, we need to work together to figure out how to better merge the two disciplines (possibly a topic for a future post).

11 Fallacies of Web Application Security

1) We don’t need to worry about securely storing passwords because our application doesn’t store or process any sensitive data.

Some developers (and business process owners) seem to have the false impression that password protection requirements are directly related to the type of data processed and stored by their application – applications that store sensitive data (financials, etc.) require stricter password storage controls while those that only process publicly accessible or other non-sensitive data do not. In reality, all passwords should be treated equally. In many cases passwords might be considered the most sensitive data handled and stored by the application, and they are often the one thing malicious actors are after. If you’re asking users to register with your application using a personal web mail (or other external) account, they are entrusting you as the custodian of their credentials, and there’s a good chance they’re reusing a password. If your application were breached and all of the user email addresses and passwords were compromised, the headline might read something like “Company X application compromised, 25,000 unprotected passwords stolen”. Do you think that at that moment, the fact that the application doesn’t store sensitive data will matter? If you’re an application developer, do yourself a favor and choose a strong one-way password hashing function (aka an iterative, adaptive, or key derivation function), such as PBKDF2 or scrypt, regardless of the type of data processed by your application, especially if it’s publicly facing.
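
By way of illustration, here is a minimal sketch of salted, iterative password hashing using Python’s standard-library hashlib.pbkdf2_hmac. The iteration count and salt size are placeholder assumptions, not recommendations for any particular environment:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000   # illustrative; tune to your hardware and latency budget
SALT_BYTES = 16

def hash_password(password):
    """Return a (salt, digest) pair for storage; every password gets its own random salt."""
    salt = os.urandom(SALT_BYTES)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)
```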

2) We can’t switch to a more secure key derivation hashing function (such as PBKDF2) because we would have to force all of our current users to change their password and that’s not acceptable.

If you have previously implemented a fast hashing algorithm such as MD5 or SHA-1 and cannot force all of your users to change their password in order to make the switch to a more secure adaptive function, there are other options. You might consider maintaining two columns in your database table, one for the old hash and the other for the new hash. All new users and any existing users that change their password would be switched to the new algorithm and the old hash value would be removed. Eventually, password changes, resets and account cancellations would leave the original hash column blank. Of course, a downside to this approach is that while this attrition period is underway, the remaining users are still left vulnerable should the password data be compromised. You would also have to implement some application logic to distinguish users on the old scheme from those on the new one so the correct hash is generated and checked during authentication.
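
A rough sketch of that login-time check-and-upgrade logic, assuming hypothetical record fields legacy_md5, salt, and new_hash (the names and the dict-based record are mine, not any particular schema):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative

def authenticate_and_upgrade(user, password):
    """user is a hypothetical record holding either a legacy MD5 hash or a new PBKDF2 hash."""
    if user.get("new_hash"):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), user["salt"], ITERATIONS)
        return hmac.compare_digest(candidate, user["new_hash"])

    # Legacy path: check the old MD5 hash, then upgrade the record in place.
    if hmac.compare_digest(hashlib.md5(password.encode()).hexdigest(), user["legacy_md5"]):
        salt = os.urandom(16)
        user["salt"] = salt
        user["new_hash"] = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        user["legacy_md5"] = None  # the old hash is removed once the upgrade succeeds
        return True
    return False
```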

Another method that I’ve helped successfully implement involves running the old hashes through the new key derivation function and replacing them entirely. This can be done in a single action, migrating all of the users at once, without any noticeable impact. [Note: be sure to check and double check that the new hash works before blowing away the old ones!] In this case, when a new user creates an account or an existing user authenticates, the application would run the password through the original hash algorithm (e.g. MD5) and then run that hash through the new hash algorithm (e.g. PBKDF2). The caveat with this method is that you’ll have to execute two hashing algorithms for every authentication event, which is not as efficient (though the extra fast-hash step has not caused any noticeable delays in my experience).
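
A sketch of that wrapping approach, again with illustrative PBKDF2 parameters: the stored MD5 digest is treated as the “password” during the one-time migration, and authentication afterwards chains both algorithms:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative

def wrap_legacy_hash(md5_hex):
    """One-time migration: run the existing MD5 hex digest through PBKDF2 and store the result."""
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", md5_hex.encode(), salt, ITERATIONS)

def verify_wrapped(password, salt, stored_digest):
    """Authentication now chains both algorithms: MD5 first, then PBKDF2 over the MD5 output."""
    md5_hex = hashlib.md5(password.encode()).hexdigest()
    candidate = hashlib.pbkdf2_hmac("sha256", md5_hex.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)
```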

You could also consider a hybrid of the two, where all existing hashes are run through the new iterative function (requiring two hash functions for those users) and all new account creations and password changes use only the new iterative function, so the original hash function can eventually be retired. The point is that there are options if you manage an application that uses a less secure, fast hashing algorithm and have immediate concerns about password storage security but can’t place an undue burden on your users.

3) Use of a strong two-way encryption function is preferred over secure hashing algorithms for password storage.

Even when implemented properly, no one-way password hashing function is impervious to attack — if someone wants to crack a single hash, it’s usually only a matter of time and computing power. However, the main concern with using a reversible encryption algorithm such as AES-256 to protect credentials is that there is usually a single point of failure – the encryption key. Should that key be compromised (from storage, from memory, etc), all passwords are now compromised as well. On the other hand, if one were to use a secure password hashing algorithm (aka adaptive key derivation function) such as PBKDF2 or scrypt, it would take much more effort to compromise the entirety of accounts as each has been uniquely salted and run through an iterative hashing process, significantly increasing the work factor.

Even without direct access to the encryption key, use of a reversible encryption algorithm can lead to compromise of user credentials should the application be vulnerable to other security issues. For example, let’s say I’ve discovered a SQL injection vulnerability that gives me access to the user table and the encrypted passwords. Unfortunately for me, the encryption is performed by the application, so access to the database does not provide direct access to the algorithm or key. Let’s say however that I have the ability to change a user’s password via the application interface and then read that password directly from the database. With control over the original clear text and access to the resulting cipher text, I can either execute an iterative dictionary-style attack (by finding all other users with the same cipher text) or possibly determine the encryption algorithm being used and then determine the key.

4) We only need to protect the login (or other sensitive functions) with HTTPS.

Don’t forget that even though passwords aren’t being transmitted after authentication, session cookies are, and they should be afforded the same protection. Also, any application that has sensitive functions requiring CSRF defense must protect the confidentiality and integrity of those CSRF tokens. See one of my previous posts for an example of how transmitting CSRF tokens over HTTP can cause problems.
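
One concrete way to keep session cookies off plain HTTP is to mark them Secure (and HttpOnly) when they are set. A minimal sketch using Python’s standard http.cookies module; the cookie name and value are placeholders:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "opaque-random-value"  # placeholder; use a strong random token
cookie["session_id"]["secure"] = True         # only sent over HTTPS
cookie["session_id"]["httponly"] = True       # not readable by client-side script
cookie["session_id"]["path"] = "/"

# Emits a Set-Cookie header carrying the Secure and HttpOnly flags.
print(cookie.output())
```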

There’s no question that an application serving mixed HTTP/HTTPS content is much more difficult to manage securely. If your application requires authentication and you expect it to be used over untrusted networks (and most web apps will), do yourself and your users a favor and implement SSL from login to logout. On a related note, using client-side encryption functions to protect authenticators prior to transmission (instead of using SSL) is not secure. If it’s implemented on the client, it’s susceptible to compromise.

5) We initially present administrative functions to all users but we “take them away” immediately so they can’t be accessed.

No matter how often I come across it, presentation layer “security” always confounds me. Another thing that perplexes me is the number of web application developers I talk to who have never used an intercepting proxy or don’t understand that the request-response exchange between client and server can be intercepted and tampered with. (Teachers of application developers, take heed: these need to be mandatory tools in your classrooms!) This one is pretty simple: no matter how quickly functionality is removed from the presentation layer, it’s always too late. Once that access has been presented to the client it can’t be taken back. Access control needs to be enforced on the server.
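
In practice that means a check on every request, on the server, regardless of what the UI did or didn’t show. A framework-agnostic sketch; the request/user objects and the decorator name here are hypothetical:

```python
from functools import wraps

def admin_required(handler):
    """Wrap an admin-only handler with a server-side authorization check."""
    @wraps(handler)
    def wrapper(request, *args, **kwargs):
        # Enforced on every request; hiding the button in the UI is irrelevant.
        if not request.user.is_admin:
            return "Forbidden", 403
        return handler(request, *args, **kwargs)
    return wrapper

@admin_required
def delete_account(request, account_id):
    ...  # privileged operation
```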

6) We’re safe from SQL injection because we use stored procedures.

I suspect part of this misconception stems from overgeneralizations in the security community such as “to prevent SQL injection, be sure to use prepared statements or stored procedures”. Some of my most significant SQL injection finds have come from insecurely constructed stored procedures. Dynamic queries built inside the database are just as insecure as dynamic queries built within the application. Regardless of the chosen approach to implementing and executing database queries, they must be parameterized.
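
The rule is the same whether the query lives in application code or behind a stored procedure: bind user input as parameters rather than concatenating it into the SQL text. A small sketch using Python’s built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

user_supplied = "alice' OR '1'='1"

# Vulnerable pattern (left as a comment): the input becomes part of the SQL text.
#   query = "SELECT id FROM users WHERE name = '" + user_supplied + "'"

# Parameterized: the value is bound separately and never interpreted as SQL.
rows = conn.execute("SELECT id FROM users WHERE name = ?", (user_supplied,)).fetchall()
print(rows)  # [] -- the injection attempt is treated as a literal name
```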

7) We set the HttpOnly flag on our cookies so XSS is not a big deal.

I believe this assertion simply comes from a fundamental misunderstanding of what XSS is and what it is capable of. The “alert(document.cookie)” example is frequently seen in XSS tutorials and admittedly, as testers, we tend to gravitate towards alerts and prompts. The fact, of course, is that XSS allows for the execution of malicious scripts, and the possibilities go well beyond simple cookie stealing. I’ve seen heavily obfuscated injected JavaScript designed to drop malware. One injected refresh and the user has been redirected to an untrusted site.

As another example, consider two web applications hosted in the same domain, both with their cookies scoped to the parent domain (organizationx.xyz). Application A is not vulnerable to XSS but also does not secure its cookies (using the ‘HttpOnly’ flag). Application B does secure its cookies but is vulnerable to XSS. Assuming we can target a user of both systems, we now have the ability to steal the session cookies of Application A using the XSS vulnerability of Application B.

The takeaway from these two examples — XSS is more than just cookie stealing and one application’s XSS vulnerability can be another application’s security compromise.

8) We protect against XSS using input validation.

The fact that XSS is being considered is great; unfortunately, this approach is bound to fail. Trying to come up with a blacklist of every possible XSS injection (across all browsers and versions) is nearly impossible, and more often than not something will be missed. I’ve seen everything from stripping user input of any word starting with the letters “on”, to removing the closing angle bracket (‘>’), equal sign (‘=’) or other potentially malicious characters. Input validation is fine if (and only if) you are also using escaping/output encoding to render benign any user input that does make it through validation. Another note on this one: reliance on out-of-the-box input validation such as ASP.NET request validation is flawed. See my previous post on why this can be a problem.
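
Contextual output encoding is what actually neutralizes whatever slips through. A tiny sketch of HTML-context escaping with Python’s standard html module (in a real application, prefer your template engine’s auto-escaping):

```python
import html

user_input = "<script>alert(1)</script>"

# Escape at output time so the payload renders as text instead of executing.
safe = html.escape(user_input, quote=True)
print(safe)  # &lt;script&gt;alert(1)&lt;/script&gt;
```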

9) We protect against malicious file upload by checking the Content-type headers or file extension

This approach will fall short because Content-Type headers and file extensions can easily be modified. I’m not saying that these checks shouldn’t be done, but successfully protecting against malicious file upload should not rely on any one validation step. It requires multiple controls, which can include file extension whitelists, file name input validation, file header inspection, restricting access to upload directories, implementing CSRF protection, and others. For a good reference point on this subject, check out this OWASP page.
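
As a rough illustration of layering a few of those checks (the extension whitelist, filename pattern, and magic-byte value below are illustrative assumptions, not a complete control set):

```python
import os
import re

ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}  # illustrative whitelist
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def is_upload_acceptable(filename, data):
    """Apply several independent checks; no single one is relied upon."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False
    # Reject path traversal and unexpected characters in the file name.
    if not re.fullmatch(r"[A-Za-z0-9._-]+", os.path.basename(filename)):
        return False
    # Inspect the file's own header instead of trusting the Content-Type header.
    if ext == ".png" and not data.startswith(PNG_MAGIC):
        return False
    return True
```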

10) There’s no point fixing the CSRF bug since we can’t fix the XSS bugs.

I’ve only gotten this one once but I found it very interesting. Let’s ignore the “can’t fix the XSS bugs” part for a moment and focus on the CSRF issue. I believe an assertion like this is due to a fundamental misunderstanding of CSRF. Similar to how some think that XSS attacks only target cookies, others believe that CSRF attacks are only viable when there is also an XSS vulnerability present. At its core, CSRF simply means an unauthorized, untrusted user can take actions on behalf of another, trusted user. While the presence of XSS certainly does raise the risk and associated possibilities of CSRF attacks, if the application performs any sensitive functions whatsoever, CSRF can still be a major issue.

Let’s say, for example, the application allows you to email yourself a copy of your previous financial transactions in a .csv file. The only parameters necessary are the date range and the destination email address. Without a way to validate this transaction (i.e. a unique, session- or transaction-based CSRF token), anyone with knowledge of the required parameters can trick a user into sending this data to an account of their choosing. What if the application in question has a requirement for absolute data integrity? If an attacker can write data to the application using a CSRF attack, that integrity is now compromised. Neither of these scenarios involved an XSS attack, but both could spell big trouble for an application owner.
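
The defense boils down to tying every state-changing request to a secret the attacker cannot know. A minimal per-session token sketch; the session dict and field name are hypothetical:

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a per-session token, store it server-side, and embed it in each form."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def is_csrf_token_valid(session, submitted):
    """Reject any state-changing request whose token doesn't match the session's."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)
```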

Now to address the other, more troubling portion of this statement: “we can’t fix the XSS bugs”. Sometimes, resistance to fixing XSS vulnerabilities is justified with the assertion that the application code vulnerable to XSS is a feature, not a bug; for example, a rich text field that allows a user to insert formatted input, which is then stored and presented to other users. The reality is that XSS is preventable even in rich text fields, and I’ve yet to come across an XSS vulnerability that cannot be prevented.

11) All [fill in the blank] vulnerabilities are always [Low/Medium/High] risk

As much as I love people getting fired up about fixing vulnerabilities, not all vulnerabilities can be considered high risk all the time. Similarly, what might be considered low risk to one application might pose significantly more risk to another. There needs to be some risk management and prioritization associated with remediation, especially when you’re dealing with more than one vulnerability and more than one application (i.e. most environments). Take, for example, a stored/persistent XSS vulnerability affecting a restricted portion of an application that has two trusted users, properly protects its cookies, and implements CSRF protections. In this case, the likelihood of a successful exploit may be considered low. Contrast that with a stored XSS vulnerability on the public forum of a banking site that interfaces with both authenticated and unauthenticated users and does not protect its session cookies in any way. Are these XSS vulnerabilities both of equal risk? Maybe not. Application testing requires critical thinking when assigning risk to findings.

This assertion that vulnerabilities of a particular type always carry the same risk is sometimes the result of over-reliance on automated scanning tools. Most scanners assign a standard risk rating to vulnerabilities but that often leads to the trap of thinking “Well, Scanner X says this XSS risk is rated 9/10 so it must be High”. The problem is, Scanner X probably isn’t so good at determining mitigating controls, the criticality of the system and the data it processes, the likelihood of the vulnerability being exploited and other factors.

A quick note on vulnerability scanners — I think regular vulnerability scanning is a necessary component of an application security program but for many applications, it shouldn’t replace a manual penetration test. In addition to the above issue of properly assigning risk to identified vulnerabilities, scanners (even the large commercial ones) often present false positives (vulnerabilities that aren’t really vulnerabilities) and false negatives (missing vulnerabilities completely). Through manual testing and application behavior analysis I’ve found SQL injection and XSS vulnerabilities that scanners have missed and conversely combed through reported vulnerabilities that ended up being false positives. That being said, I almost always scan a web application with a vulnerability scanner as part of testing just to be thorough. In addition, regular vulnerability scanning is a great way to maintain continued security oversight.

This is by no means meant to be an exhaustive list — these are just 11 of the ones I tend to hear the most. I’m sure I missed some good ones so be sure to leave me a comment if you think of more.
