Over the last couple of years, bug bounties seem to have gained increased attention and are being embraced by more organizations — and for good reason. For companies, they can provide a cost-effective means of obtaining professional security testing, and for the security community, any form of sanctioned testing against a live/production application is a good thing :). In May of this year I decided to undertake a little experiment where I would try my hand at multiple bounty programs in a relatively short amount of time in order to compare the various approaches and see what might make one program better (or worse) than another.
Scoping the Effort
To determine which bounty programs I would participate in, I turned to a relatively new start-up called Bugcrowd (more on it later), which hosts a comprehensive list: https://bugcrowd.com/list-of-bug-bounty-programs. At that time, the list was considerably smaller than it is today, which may be a testament to how quickly bounty programs have grown in popularity over the last six months. My criteria for choosing the programs to participate in were as follows:
First, I didn’t want to choose a program that had gotten too much attention from security testers and media alike. Given the short amount of time I would be allocating to this little endeavor, I figured it might be more difficult to discover a bug in an over-exposed program which would prevent me from evaluating the associated submission and remediation process. As a result, I eliminated Google, Facebook and Mozilla as candidates. I did however, include PayPal. Although it had been getting quite some press at the time, most of it seemed to be negative and I wanted to see if there was merit to these claims.
While I tried to eliminate what I considered to be over-saturated programs, I also didn't want to limit myself to small companies or newly created bounties, so my second criterion was to ensure I had a representation of both large and small companies as well as new and established bug bounty programs.
The third criterion was to choose programs with varying rewards, ranging from no reward to a monetary payout.
To get somewhat of a respectable sample size, I planned to select at least 20 bounty programs in which to participate with a goal of submitting a bug to at least 10. If any of my submissions did not qualify as valid bugs, I would make an effort to discover and submit another bug so that I could see how the entire bug bounty process worked for that organization. I gave myself two months to complete this little experiment, since I would be doing all of this outside of my work day, during my personal time.
In the end I did look at approximately 20 bounty programs, and ended up submitting a total of 19 bugs to 14 of these programs (which I consider to be the resulting sample size). 11 of the 19 submitted bugs were deemed to be valid submissions under the rules of the respective program and qualified for the resulting reward(s) (if any). I completed all of my submissions within a 6 week period from 4 May to 11 June. Here’s a synopsis of the results:
| Bounty Program | # Submitted Bugs | # Qualifying Bugs | Resulting Rewards |
| --- | --- | --- | --- |
| Ebay | 2 | 2 | Wall of Fame |
| PuppetLabs | 1 | 1 | Wall of Fame |
| Microsoft | 2 | 1 | Wall of Fame |
| Scorpion Software | 1 | 1 | Wall of Fame |
| Etsy | 1 | 1 | $, T-shirt, Wall of Fame |
| PayPal | 3 | 1** | $, Wall of Fame |
| Adobe | 1 | 1 | Wall of Fame |
| Apple | 2 | 1 | Wall of Fame |
* Never received response | ** Some/all bugs already discovered by another researcher | *** Risk accepted, not deemed a qualifying bug
Based on my experiences with these 14 programs, I noted things that worked well and others that didn’t and came up with a list of recommendations that would benefit both the hosting organization as well as the participating security researchers. Here they are…
Keep the Submission Process Simple
The process for submitting bugs ranged from a simple email, to online forms, to creating web portal accounts. Personally I prefer the ability to submit my findings to a published email address and companies like Etsy do just that, making the submission process extremely easy.
Some companies, like PayPal, request that you PGP encrypt your submission due to the potentially sensitive nature of the vulnerabilities, which is a completely reasonable request. I found that although it has its nuances that take some getting used to, Mailvelope works well with Gmail and Chrome. After initial submission, PayPal conducts all follow-up communication via a secure email portal. The upside is that all of the messages are contained in one location. The downside is that it requires you to create yet another account (in addition to the email account used for the original submission and the valid PayPal account that is required to receive any resulting bug bounty payments).
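For researchers who prefer the command line to a browser plugin like Mailvelope, encrypting a report with GnuPG is straightforward. Here's a minimal sketch; the recipient address and key are stand-ins generated for the demo, not any program's actual published key (in practice you would `gpg --import` the key the program publishes):

```shell
# Work in a throwaway keyring so this demo doesn't touch your real keys
export GNUPGHOME="$(mktemp -d)"

# Stand-in for the program's published public key; in real use you would run:
#   gpg --import program_security_pubkey.asc
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo <security@example.com>" default default never

# Write up the finding, then encrypt it to the recipient's key.
# --armor produces ASCII output you can paste straight into an email body.
echo "Steps to reproduce: ..." > report.txt
gpg --armor --encrypt --trust-model always \
    -r security@example.com -o report.txt.asc report.txt

head -1 report.txt.asc
```

The `--trust-model always` flag skips the ownertrust prompt for a freshly imported key; for a real submission you should instead verify the key's fingerprint against the one the program publishes.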
Other companies, like Ebay, have an online form to facilitate the submission of various information necessary for it to be considered a valid entry. The problem with Ebay’s form is that it requests submission of the detailed steps to reproduce the vulnerability as well as accompanying screenshots but provides only a text box and no method to upload files. Instead, it requires the researcher to host that information elsewhere and provide a link within the submission form. This requires the submitter to either host it somewhere without authentication (since the identity of the eventual reviewer is unknown) or to host it somewhere with authentication and provide credentials via this web form. I wasn’t going to do either, so I simply submitted my entry, received a follow-on email alerting me that it was incomplete and submitted the remainder of the details via an email attachment. Pretty cumbersome.
My recommendation is to either support the use of traditional email for submission and all subsequent communications or, if you're going to require use of a form for the initial submission, provide a means for submitting attachments/evidence. When it comes to requiring use of a special portal account, I'm not a huge fan. I'm always wary of creating additional accounts, especially on sites in which I've found security vulnerabilities, which means I usually have to use a throw-away email address and single-purpose credentials.
Even organizations that don’t have a formal bug bounty program should have a simple means of reporting discovered vulnerabilities. Outside of this bug bounty experiment, I’ve reported many software and website vulnerabilities to organizations and far too many either don’t have a relevant published email address or require the submitter to become a subscribed member of their site/service and/or provide all sorts of personal information (address, phone number, etc). Making reporting too cumbersome can definitely deter responsible disclosure.
Ensure Open Communications and Process Transparency
It’s pretty frustrating not knowing the status of a submitted bug. Was it received? Is it valid? If there’s the possibility for a reward, when will it be issued?
Once again, Etsy really shined here. Once the bug was submitted there was immediate (same day) feedback on the validity of the bug, the expected timeline for remediation, and the next steps for bounty payment. Yahoo and Zynga were two other companies that did a great job with communication. Contrast that with EngineYard, which did immediately confirm receipt of the submitted bug but after that, never sent any subsequent communications and failed to respond to status requests in the following weeks.
One of the factors that influences bounty payments and the ability to discuss discovered vulnerabilities is the time-to-fix. It might be assumed that the more complicated the infrastructure or web application, the longer it will take to test and implement a fix for the discovered bug, and researchers should understand that. On the other hand, Yahoo and Amazon are examples of large companies with complicated infrastructures that implemented fixes within one business day. Of course, this also depends upon the nature of the bug and the required remediation action. Etsy, which had a slightly more complicated bug, could not implement a fix quite so quickly due to scope and testing requirements, but still did a great job of keeping me updated on the status of the expected fix timeline. Other companies, like PayPal and Adobe, took a relatively long time to fix, with little-to-no communication between the initial report and eventual remediation. Since PayPal's program involves a monetary payout which isn't fully paid until the bug is fixed, this might be even more frustrating for researchers. I originally submitted my qualifying bug to PayPal on 5 June and did not get confirmation of a fix and resulting payment until 13 December — and that was only after I sent an email requesting status.
Bottom line, organizations should have a clearly communicated process that starts with a timely confirmation of bug submission and continues throughout the process to ensure the submitter understands timelines, reward payout, and expectations for public disclosure as early as possible.
Clearly Identify Valid vs. Invalid Submissions
Another point of frustration is finding out a security flaw does not meet the criteria of a valid submission. I experienced this with my first submission to Microsoft as well as my submission to Piwik. Addressing this issue is pretty simple…if there are certain types of bugs that don’t qualify (either because the organization is willing to accept the risk or because of inherent product design flaws that cannot be fixed) they should be clearly identified on the submission page. Again, Etsy is a good example of how to do it right: http://www.etsy.com/help/article/2463
Credit Duplicate Submissions
When participating in a bounty program, it can take considerable time and effort to discover and document valid, exploitable bugs. On several occasions I was informed after submission that my bug didn't qualify, not because it didn't meet the criteria of a valid bug, but rather because someone else had already found it. I received this response from PayPal for my first two submissions, which isn't surprising based on the six-month time-to-fix that I eventually experienced with my third, valid submission. This was a common complaint I had heard about PayPal prior to participating in its bounty program, and my experience definitely validated it.
I can certainly see where it’s important to have a “first-to-discover” criteria when it comes to monetary payout and I wouldn’t expect payment if I wasn’t the first to find a bug; however, I think a valid duplicate bug submission should at least result in researcher recognition. No less effort went into finding the bug for subsequent researchers and they shouldn’t be penalized for a lengthy time-to-fix process. From what I can tell, Apple seems to follow this model as several other researchers, in addition to myself, were credited with finding a single bug on one of its sites.
Reward Participants Appropriately
I think the reward a company is willing to bestow upon its bounty participants directly reflects the value that organization places on the program. Personally, I wasn't participating in these particular bounties for any financial incentives (though they were a nice bonus!) and most programs (aside from Facebook and more recently, Microsoft) don't have large monetary payouts. I'm willing to bet this is a deterrent for some researchers and security professionals who might otherwise participate in these programs. It's similar to the conundrum we're facing in the software exploit world — why give it away for free to the "good guys" when the "bad guys" are willing to pay so much more? Organizations that have bounty programs should realize that they are getting participants' labor and expertise pretty cheap — much cheaper than if those participants were hired as consultants. Yet, at the time, only two of these 14 companies actually paid for valid submissions. I found it odd that a company as large as Amazon is willing to support a bug bounty program but not willing to at least have a recognition page. If I were to participate in a bounty again, I would go back to a company like Etsy, which recognizes the value of the contributions from the security community by providing a financial award, swag, and public recognition. Although they weren't in the scope of this exercise, Google and Facebook are prime examples of rewarding researchers with significant monetary payouts for qualifying bugs.
A Promising Alternative: Centralized, Crowd-Sourced Bug Bounties
As I mentioned earlier, when I started this little experiment I obtained my list of bounty programs from a site called Bugcrowd. It was a relatively new startup earlier this year, and has since established quite an impressive model for crowd-sourced bug bounty programs. The way it works is that organizations run their bounty programs through Bugcrowd, which manages all bug submissions and eventual payouts. The submission process all happens through a centralized, authenticated web portal using standardized forms and communication channels. Each bounty has its own clearly defined scope, list of valid and invalid bugs, and rules of engagement. Participants can sign up for bounties as well as track all of their open and prior bug submissions. This centralized, standardized approach to managing bug bounty programs greatly streamlines the process and implements several of my earlier recommendations. It's also beneficial for the participating companies, especially those that may not have the resources to manage a bounty program full-time.
Around the same time that I was participating in these various decentralized bounty programs, I was also participating in the beta launch of Bugcrowd. Since then, Bugcrowd always seems to be hosting several bounty programs, some of which pay pretty substantial monetary rewards. Again, I haven't participated in these bounties for the money, but one of the great things about Bugcrowd is that it hosts "Charity" bounties where you can volunteer your services for non-profit organizations. Valid bugs result in documented CPEs, which benefit people like me who (begrudgingly) have to document the continuing education requirements associated with professional certifications such as the CISSP. Over the last several months I've participated in several of these charity bounties when I have the time, and I can certainly say that Bugcrowd has matured very rapidly and made the process very easy and worthwhile.
Aside from being one of many participating security professionals, I’m not affiliated with Bugcrowd in any way so please don’t take my endorsement as having ulterior motives — I’m just a fan of their approach. That being said, I encourage you to check them out and sign up as a participating security professional if you’re so inclined: https://bugcrowd.com/.
If it's not clear already, if I had to choose the best program of the 14 I participated in, it would be Etsy, due to its painless submission process, open lines of communication, process transparency, and assortment of bounty rewards. Even though it offered no monetary reward, Yahoo was another example of a well-run program (at least in my limited experience). Although I earned the most money from PayPal, its lengthy process would deter me from participating in its program again. Similarly, EngineYard's failure to respond, Ebay's cumbersome submission process, and Amazon's lack of recognition or incentives would probably strike them from my list in the future as well.
To wrap up, my recommendations for a well-run bounty program include:
- Simple submission process
- Open lines of communication and process transparency
- Clear submission guidelines and criteria
- Appropriate recognition for valid bug entries
- Appropriate rewards for bounty program participants
I believe a centrally managed, crowd-sourced bug bounty initiative such as Bugcrowd will help to standardize and improve the bug bounty process to everyone’s benefit. Companies that are considering implementing some form of bounty program but don’t have the resources to manage it full-time may be well served to check them out. I definitely plan on at least continuing to participate in their charity bug bounties when I have the time.