Powerful Expectations: Effective Communications for Bug Bounty Programs

Q&A with Reginaldo Silva, security researcher and former security engineer at Facebook/Instagram 

Clear, effective communication is essential for successful bug bounty programs for several reasons. It helps create a collaborative and transparent environment between security researchers and the organization running the program. When programs communicate well, researchers can easily understand the program's guidelines, scope, and expectations, leading to more focused and relevant submissions. Additionally, timely and precise communication can help prevent misunderstandings and conflicts, ensuring that both parties are on the same page throughout the process and protecting triage teams from burnout. Effective communication also fosters trust and builds a positive reputation for the organization, encouraging more researchers to participate and contribute to the program's success.  

Years ago, I had the pleasure of working closely with Reginaldo Silva on bug bounty communications at Facebook/Instagram (before it hallucinated into Meta). We still keep in touch, trading notes on communication blunders across bug bounty programs and on how researchers might do better.

I’m honored and excited to share a slice of the wisdom he’s gained from two decades of independent bug hunting and managing bug bounty programs. 

As a researcher, what do you expect from bug bounty programs? 

Reginaldo: When I submit a report, my goal is to have the issue acknowledged, properly assessed for impact, and fixed as soon as possible, especially when there’s a reward that depends on the issue being fixed, but even when there isn't any reward. A pending report feels like a to-do item you never get rid of. The worst case is when there is a long hiatus in communication on a complex report and the company asks for more information months later. By that time, both sides may have lost the context needed to understand what was originally reported.

Melanie: Yes, a lack of documentation makes it more difficult to have a productive conversation. That means not only documentation of technical investigations but also of how and why decisions are made along the way, from the initial acknowledgment through payout and disclosure. Documentation is critical for both sides, especially when they disagree on the outcome.

What are bug bounty programs expecting from researchers?

Reginaldo: When managing a program, there are several things to balance. The goal of a bug bounty program is to reduce risk to the company. Ironically, some of the risks the company faces might come from the program itself, specifically when it comes to communications. Of course, the risk of not interacting with the research community is far greater, which is why bug bounty programs became so popular.

When running a program, our ideal state is for high- and critical-risk issues to be fixed right away and lower-risk issues to be dealt with accordingly. For example, a program might require that at least 90% of all high/critical issues be closed within a week of receiving the report and that 90% of all issues be closed within 30 days of being triaged.
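To make those targets concrete, here is a minimal sketch, in Python, of how a program might measure them. The report records, field names, and thresholds below are hypothetical, not any particular program's tracker.

```python
from datetime import datetime, timedelta

# Hypothetical report records: severity, when received, when triaged, when closed.
reports = [
    {"severity": "critical", "received": datetime(2024, 1, 2),
     "triaged": datetime(2024, 1, 3), "closed": datetime(2024, 1, 6)},
    {"severity": "low", "received": datetime(2024, 1, 5),
     "triaged": datetime(2024, 1, 9), "closed": datetime(2024, 2, 1)},
]

def pct_meeting(reports, sla_predicate):
    """Fraction of reports that satisfy the given SLA predicate."""
    if not reports:
        return 1.0
    return sum(1 for r in reports if sla_predicate(r)) / len(reports)

# Target 1: 90% of high/critical issues closed within 7 days of receiving the report.
high_crit = [r for r in reports if r["severity"] in ("high", "critical")]
fast_close = pct_meeting(high_crit, lambda r: r["closed"] - r["received"] <= timedelta(days=7))

# Target 2: 90% of all issues closed within 30 days of being triaged.
monthly_close = pct_meeting(reports, lambda r: r["closed"] - r["triaged"] <= timedelta(days=30))

print(f"High/critical closed within 7 days: {fast_close:.0%} (target 90%)")
print(f"All issues closed within 30 days of triage: {monthly_close:.0%} (target 90%)")
```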

Melanie: Yeah, the irony about communication risk is very familiar. Years ago, several bug bounty programs launched as PR stunts, but the companies weren’t truly ready to manage a program well, and the resulting backlash from the research community created negative press attention. Security, legal, communications, and so on are each a single source of risk, and companies need to balance them all; they all do it differently based on their business, culture, and tolerance for different kinds of risk. We advise clients to treat bug bounty programs first and foremost as a way to reduce security risk, which means a program has to be operationally honest, sustainable, and effective. The communication piece supports that goal by facilitating accurate information sharing and mutual understanding.

Frequent or high-profile misunderstandings are one of the primary reasons bug bounty programs reach out to Discernible for communications support. What are some of the most common misunderstandings you’ve seen?

Reginaldo: Expectations around urgency create a lot of confusion. For example, a lot of programs say they will respond in a “timely” manner, but the definition of “timely” varies by experience, culture, and the perceived severity of the issue. This is a primary source of frustration for researchers. If there’s any consolation, it’s one of the big sources of frustration for the team running the bug bounty program, too. At the other end, there’s an engineering team that will be implementing the fix (which may or may not be the security team itself; it usually isn’t). That engineering team has different priorities and incentives than the researcher who submitted the issue. That’s not necessarily bad. For example, some issues need to be addressed at a framework level, and those take longer to implement; for lower-risk issues, the engineering team will not necessarily want to ship “one-off” fixes.

This creates the potential for a tense dynamic: the security and engineering teams are aware of the issue and of what they consider to be its root cause. The researcher has confirmation that the report was valid and has an incentive to find similar issues. If the researcher (or a different researcher) finds new issues with the same root cause, the company might want to treat them as duplicates, even though the final fix is not yet ready. There are lots of opportunities for miscommunication and frustrated expectations in situations like that.

Especially in newer relationships between researchers and bug bounty programs, there are communication challenges because trust has not yet been established. The researcher might not send everything they know at once, and the company cannot be completely transparent about its internal processes and always needs to be extra careful with its words, since the interaction might become public at any time without notice. Once trust has been established, things flow more naturally.

Melanie: I’ve seen that duplicate scenario so many times, and in my experience, a company’s overall brand reputation often has as much impact as interactions with the bug bounty team on how much researchers trust a program in the early stages of a new relationship. Fair or not, even the best security teams are judged by the decisions of the businesses they work for.

Your comment about being careful with your words as a member of a bug bounty team is also something that comes up a lot in our work at Discernible. You need to understand the real and perceived power dynamics between companies and researchers. On the company side, you’re always expected to take the high road, so treating all your correspondence as something that could become public is a good approach for keeping your cool and demonstrating respect. Communicating with the expectation that everything will become public also helps when advocating for more disclosure of bug bounty reports. I am a big supporter of disclosing as much as you safely can to better position your communications strategy for incident response. However, I’ve seen that exposing the level of professionalism employees demonstrate in their correspondence can be even scarier than disclosing vulnerability details.   

What does a successful bug bounty relationship look like?

Reginaldo: One of the most important things I learned from running a program is that the incentives for the company and the researcher are typically aligned. A well-run bug bounty program works as a feedback mechanism within a larger, well-structured security program. So the company views the bug bounty program as one of the many activities its security teams do, along with, for instance, internal reviews, external reviews, vulnerability management, red teaming, and detection.

A successful bounty relationship happens, then, when the company perceives the researcher as someone who will help the company take less risk, both by pointing out vulnerabilities to be fixed and by not being a risk themselves. Success looks about the same from both sides: the researcher might get some direct access to the engineering team, for instance, or might be one of the first to be able to test a certain area. Some companies have NDAs researchers can sign to get access to pre-release features.

The best scenario includes mutual respect and admiration between the people who work at the company and the researcher. I’ve been part of this kind of relationship: I was hired as a researcher, and then went on to hire more researchers myself.

Melanie: I love this! Relationships are so important in every aspect of our lives because productive relationships enable us to move forward in business and personal endeavors, while unproductive or toxic relationships hold us back or create more obstacles in our path. I’m starting to see more bug bounty hunters and programs adopt this way of thinking and come to the understanding that if we focus on the long-term outcomes we want to achieve, trying to “win” individual conversations feels trite. Once you’re committed to the relationship and establish trust, you have a lot more room for creative problem-solving. 

What are some effective communication techniques bug bounty teams need to use? 

Reginaldo: The most basic one is having documentation to help manage researcher expectations: for instance, telling researchers how long it typically takes to triage an issue, how long it takes to close one, which kinds of low, medium, high, and critical impact issues the company expects to receive, and which issues it’s already aware of and/or considers out of scope.
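As a rough illustration of what that expectation-setting documentation might cover, here is a hypothetical outline expressed as a small Python structure. Every timeline, severity example, and exclusion below is a placeholder a real program would replace with its own.

```python
# Hypothetical skeleton of researcher-facing program documentation.
program_expectations = {
    "response_times": {
        "first_response": "2 business days",
        "triage_decision": "5 business days",
        "typical_time_to_close": "30 days after triage",
    },
    "severity_examples": {
        "critical": "remote code execution, full account takeover",
        "high": "stored XSS affecting other users, auth bypass on sensitive data",
        "medium": "CSRF on state-changing but low-impact actions",
        "low": "verbose error messages, missing security headers",
    },
    "known_and_out_of_scope": [
        "self-XSS requiring the victim to paste code into the console",
        "clickjacking on pages with no sensitive actions",
        "issues already tracked internally (closed as duplicates)",
    ],
}

# A program would publish this alongside its scope so researchers know what to expect.
print(program_expectations["response_times"]["triage_decision"])
```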

A must-have, in my opinion, is training the people who interact with researchers: training on how to communicate effectively, on how to evaluate the quality of the interactions themselves, and on how to remediate communication problems, such as de-escalating tense exchanges, dealing with language barriers, and giving as much information as possible when requesting something from the researcher. This has to be run as a process, not a one-time thing, and it has to evolve with the program.

Melanie: Completely agree. :)                                                                                                                       

Can you share a few examples of common communication errors researchers make? 

Reginaldo: The most common one is not being aware of the kinds of issues the company expects to receive and of what it perceives to be part of its threat model. A company that has to adhere to regulations, HIPAA for instance, will have a very different threat model from an IoT device manufacturer.

The second is misunderstanding how a bug bounty program works. Generally, the people working on large programs who are the first point of contact with researchers deal with a large scope and will not know everything about the product(s) referenced in every report. They might even be new to the program themselves.

The researcher’s first goal is to help that person decide where to send their report. The person on triage has two questions they are trying to answer: 

  1. Is this a privacy or security issue? It might be fake, spam (a popular form right now is LLM output), expected behavior, or a bug that has no security or privacy implications.

  2. Where should it go next? Should it go to the engineering team, move to a second level of triage, go back to the researcher with a request for more information, or be closed as a duplicate or known false positive?

Researchers should write their reports with the humans who will handle them in mind, including how the report will be handled, by whom, and in what order.
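To make those two triage questions concrete, here is a minimal sketch of the routing decision they imply, written in Python. The field names and outcomes are illustrative assumptions, not any real program's workflow.

```python
from enum import Enum

class Outcome(Enum):
    CLOSE_INVALID = "close: not a security or privacy issue"
    CLOSE_DUPLICATE = "close: duplicate or known false positive"
    ASK_RESEARCHER = "request more information from the researcher"
    ESCALATE = "send to second-level triage / engineering team"

def route(report: dict) -> Outcome:
    """First-pass triage: answer the two questions in order."""
    # Question 1: is this a privacy or security issue at all?
    # (It might be spam, expected behavior, or a bug with no security impact.)
    if not report.get("has_security_or_privacy_impact"):
        return Outcome.CLOSE_INVALID

    # Question 2: where should it go next?
    if report.get("is_duplicate_or_known_false_positive"):
        return Outcome.CLOSE_DUPLICATE
    if not report.get("has_enough_detail_to_reproduce"):
        return Outcome.ASK_RESEARCHER
    return Outcome.ESCALATE

# Example: a clear, reproducible, novel report goes straight to the next level.
print(route({"has_security_or_privacy_impact": True,
             "has_enough_detail_to_reproduce": True}))
```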

Some mistakes signal a researcher is not acting in good faith, such as: 

  • Overstating the risk without backing it up in the report, in the hopes of getting a larger reward

  • Threatening, using extortion language, or making veiled threats

  • Behaving unprofessionally, using offensive language, or being sexist (really!) 

Other mistakes simply reveal a lack of experience:

  • Failing to be explicit about the report’s impact 

  • Being too succinct or too verbose 

  • Sending long video reports that meander around the point

Melanie: How someone communicates can reveal a lot about them, and that’s easy to forget as both the sender and the receiver of messages. This is why it’s so important to understand upfront what your communication goals are. Setting program communication priorities like “demonstrate respect” or “preserve trust” enables bug bounty team members to make smarter decisions about their interactions with researchers. For example, if maintaining a reputation for fairness is important to your team, it should be clear in the way you communicate with all researchers that you will prioritize fairness. The same is true for researchers because, in addition to your reputation score on the platforms, bug bounty teams talk to each other. If you want to be known as both competent and professional, it’s a good idea to keep your emotions in check when things don’t go your way. There is absolutely a time, place, and manner for lodging complaints that have a real influence on programs; tantrums on social media are not it. It’s so easy for programs to dismiss someone behaving immaturely.

Reginaldo: Researchers should also remember that more than one person will typically handle the report and, while everyone will have read it, not everyone will have the same context. Usually, the people who handle the report later know more about the system it describes, so it’s OK to go deeper technically, but always maintain common courtesy and professionalism, and remember that you are talking to a person on the other side, not a company.

Storytime!

Can you share an example of when poor communication led to serious misunderstanding?

Reginaldo: First, I want to note that how the general public sees privacy and security issues is a matter of perception as much as one of being technically correct. A common source of miscommunication is when the security team tells a researcher that "this is not an issue" when one of two things has happened: 1) the security team didn't fully understand the report (and here, language barriers might play a role), or 2) what is being reported is an issue, but it's not privacy or security related.

Security teams handling bug bounty reports are usually aware that some reports carry the risk of "exploding" into a public story, many times for the wrong reasons. It was important to me to have a communications expert to turn to when that happened. It not only saved the company a few times, it also turned some potentially negative stories into positive ones.

Things have improved since I ran a bug bounty program, but even just a few years ago someone could send a non-issue to a security program, especially one at a widely recognized brand, collect some interesting responses from the security team, go to a journalist, and have a story published that would affect the general public's perception of the company, even if the security team was technically correct.

Melanie: This relates to my earlier comments about how the parent company’s reputation and brand trust affect how the bug bounty team should communicate, especially if you don’t have a dedicated communications advisor for your security organization. When a bug bounty situation explodes, it’s really difficult for the average corporate communications team to get up to speed fast enough to respond in a way that acknowledges the nuance and context of the research community.

Reginaldo: An example where miscommunication almost sent things awry but turned into one of the best bug bounty stories I have to share was when a researcher sent a report about an important and subtle issue that led to comments being deleted on Instagram. The initial report was difficult to understand, and the researcher seemed to take a very direct, "in your face" approach to their communications. As a result, the triage team struggled to reproduce the findings in the report.

I got the report, reproduced it, and was able to establish a rapport with the researcher through subsequent correspondence. We rewarded him with a $10,000 bounty and only then did we learn that we were communicating with a 10-year-old child in Finland! Knowing that information changed the whole way we perceived the interaction and it became clear that there was no malicious intent. It was a misunderstanding on our part.

Melanie: I remember this report and it turned out to be a wonderful opportunity for the young hacker to publicly show off his achievement. This example illustrates why it’s so important to question our assumptions when speaking with people for the first time. A lot of the emotional labor involved in managing a bug bounty program comes from the stories we tell ourselves about what the other person is or isn’t trying to do instead of learning how to ask the right questions to clarify their intent and build mutual trust. 

I’ve talked to a lot of very upset security teams and researchers who simply forgot to focus on their long-term goals, which made every situation feel like an argument they had to win. Instead, we can communicate that a relationship is important to us by giving that person the benefit of the doubt and trying to meet them where they are. If we don’t, it’s common for the other person (and any subsequent journalist or third party) to interpret our behavior as apathetic. 

What is something you learned working with me at Facebook that you still apply to your communications today? 

Reginaldo: Oh, I learned so much from you! Being succinct, on point, transparent and thinking of the audience. Running away from jargon and clichés like, "security is very important to us." Instead, being inclusive and making sure that there's a factual message with meaning. 

Even though we started working together many years ago, I still go through the mental exercise you taught me when we first met, thinking about what you would advise me to do, and I still ask for your advice often. But the most important thing I learned is how to evaluate whether what I'm saying is aligned with what I want to portray and will be perceived as such by everyone who eventually reads what I wrote.

Melanie: Love it! Thank you for sharing your insight with us!


Connect with Reginaldo on Twitter or LinkedIn.


To learn more about Discernible’s bug bounty communication training and workshops, please contact us at discernibleinc.com/contact
