“You have to be testing your own code before you can start [a bug bounty program],” said Moussouris. Otherwise, the company winds up paying out for “low-hanging fruit,” issues its own developers likely could have uncovered on their own.
Bug hunting begins at home
It all boils down to the organization’s security maturity. Regular testing uncovers basic issues and shows organizations what to fix so that the same mistakes aren’t introduced over and over again.
Teams need to know how to perform root cause analysis so that they can understand the scope and extent of a vulnerability, gauge the risk it poses, and assign a priority so the issue can be fixed. They also need the time and expertise in-house to process reports coming in from researchers, or they will wind up paying for basic vulnerabilities and duplicate issues.
It’s also important to be transparent about how bug reports are processed. Perhaps the company’s investigation is taking longer than expected, or the bug was previously reported by someone else (or discovered internally). Transparency clarifies why a company decided a report is not really a bug, or why a researcher won’t get a payout because the report duplicated an earlier one.
Smaller companies with a simpler product lineup and infrastructure can skip this first step and use bug bounties to jump-start their security programs, but having an application security plan beforehand is a must for large companies. “You can’t just jump in,” Moussouris said.
Setting a bug baseline
A consensus is emerging that companies must offer bug bounties if they want to have secure products.
Oracle’s CSO was rebuked by researchers for calling bug bounties “the new boy band” in early August. “Many companies are screaming, fainting, and throwing underwear at security researchers to find problems in their code and insisting that This Is the Way, Walk In It: if you are not doing bug bounties, your code isn’t secure,” Mary Ann Davidson wrote in the blog post, which has since been removed.
Moussouris noted there are many ways to work with security researchers; the public bug bounty program, in which researchers submit reports and then get paid for their efforts, is just one model. The important detail is timing: the program should be aligned so that researchers submit reports when the findings are most useful to developers. Fixing code after it is released is expensive, so asking researchers to test the code during the beta period is far cheaper and more effective. This model may appeal to some companies more than a traditional bug bounty program.
For example, Microsoft invited researchers (under Moussouris’ watch) to submit reports for the last version of Internet Explorer before it was production ready. Another approach is to offer bounties not for existing vulnerabilities, but for defensive techniques the company can use in future products. Microsoft, again with Moussouris, launched the BlueHat Prize, which rewarded researchers for novel defenses against memory-based attacks.