Kaymera Blog

Real Talk: The Risks of Vulnerability Disclosure

Written by Maryna Gaidak | Apr 26, 2021 5:00:00 AM

Vulnerability disclosure is a hotly debated topic in the cybersecurity community. 

What is the concept all about? Vulnerability disclosure is the practice of reporting security flaws (bugs in code) in computer hardware, software, and business processes. Security testers disclose vulnerabilities directly to system owners or other involved parties, including in-house developers or the third-party developers who design and implement the affected systems. 

By and large, system vendors keep disclosed vulnerabilities quiet and only make them public after a patch or other mitigation is available. Ideally, affected organizations act quickly to fix the flaws before hackers exploit them. 

It is also essential to understand the concept of zero-days. A zero-day vulnerability is one that is known to a potential attacker but not yet disclosed to or fixed by the system's vendor. Because no fix exists yet, there is effectively no patch-based defense against an exploit developed for this kind of vulnerability. 

The term zero-day refers to the window between discovery and patching: the vendor has had zero days to fix the flaw before the first attack can occur.

 

Disclosures 

As noted above, the vulnerability disclosure process brings information about flaws in systems, networks, hardware, applications, firmware, and business processes to the parties who can fix them, and ultimately into the public domain. This allows system users to mitigate the same flaws before the bad guys find and exploit them. 

Certainly, vulnerability disclosure is essential to keeping systems safe. But what happens if the disclosure process itself is also vulnerable? Enterprises need to understand the approaches researchers use when sharing vulnerabilities. 

Responsible, full, and coordinated disclosure are the three most widespread approaches. 

1. Responsible Disclosure

Ask an Ethicist defines responsible disclosure as a process in which the individual or group reporting the vulnerability contacts the party responsible for the affected software. Many organizations now run programs for responsible vulnerability reporting; some, such as Google's Vulnerability Reward Program and Microsoft's bug bounty programs, offer financial rewards. 

In responsible disclosure, the vulnerability reporter agrees to keep the information secret for some time to give the vendor a chance to confirm the flaw and develop, test, and deliver a patch. In other words, public disclosure, if it happens at all, comes only after the system owner has confirmed and fixed the bug. 
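To make the embargo idea concrete, here is a minimal sketch (in Python) of how a reporting team might track its disclosure deadline. The 90-day embargo and the class and field names are assumptions for illustration; real embargo periods are negotiated between reporter and vendor.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Assumed embargo period; 90 days is common, but the exact window
# is negotiated between the reporter and the vendor.
DEFAULT_EMBARGO = timedelta(days=90)

@dataclass
class VulnerabilityReport:
    """Tracks one responsibly disclosed flaw (illustrative only)."""
    identifier: str                      # internal tracking id, not a CVE
    reported_to_vendor: date             # day the vendor was privately notified
    vendor_fix_released: Optional[date] = None
    embargo: timedelta = DEFAULT_EMBARGO

    @property
    def planned_public_disclosure(self) -> date:
        """Earliest date the reporter intends to go public."""
        return self.reported_to_vendor + self.embargo

    def may_publish(self, today: date) -> bool:
        """Publish once the vendor has shipped a fix or the embargo lapses."""
        return self.vendor_fix_released is not None or today >= self.planned_public_disclosure

# Example: a flaw reported privately on 1 February 2021 with no fix yet.
report = VulnerabilityReport("REPORT-001", date(2021, 2, 1))
print(report.planned_public_disclosure)      # 2021-05-02
print(report.may_publish(date(2021, 4, 1)))  # False: embargo still running
```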

However, opponents of responsible disclosure argue that it gives system owners too much freedom to conceal or ignore real issues: they can quietly sweep a bug under the rug if they find it costly or inconvenient to fix. The critics' concerns have merit and evidence behind them, as concealing vulnerabilities has been a common practice in the past. This brings us to the second approach, full disclosure.   

2. Full Disclosure 

Unlike responsible disclosure, where the security researcher reveals the bug directly to the system vendor, full disclosure involves announcing it publicly without giving the owner prior notice, possibly including a proof-of-concept (PoC) exploit to demonstrate the vulnerability. In effect, the public disclosure puts pressure on the company to fix the problem without delay in order to prevent attacks.

Adherents of full disclosure believe it is the surest way to get vendors to fix any discovered bug as quickly as possible. Fixing bugs can be time-consuming and costly for vendors, and acknowledging that they have sold flawed software can damage their reputation. At the same time, many vendors assume that as long as the discoverer does not disclose a vulnerability, their products remain secure, so they drag their feet on fixing it.    

However, if one researcher can find a vulnerability, cybercriminals can discover and exploit it at any time. The optimal outcome in that situation is for the vendor to fix the flaw quickly, and full disclosure is an effective way to trigger that action.  

On the flip side, full disclosure also informs the bad guys about the vulnerability, turning the situation into a race between the system owner and the attackers: one party races to fix the bug while the other races to exploit it. 

3. Coordinated Disclosure 


Another, less common approach is coordinated disclosure, a variant of responsible disclosure involving several parties. Under the principle of coordinated vulnerability disclosure, researchers report newly discovered bugs in IT assets directly to product vendors and other relevant entities, such as a national CERT or other coordinators, who then collaborate with the system owner. 

Coordinated disclosure gives the vendor the opportunity to diagnose the flaw and offer tested updates and other corrective measures before any party releases a detailed bug report to the public. Affected entities collaborate with the discoverer and other parties throughout the vulnerability investigation and provide updates on the case's progress. The aim is to deliver timely, consistent patches and guidance to users and so prevent attacks.  
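The coordination rule itself is simple to state: nobody publishes the detailed report until a tested fix exists and every coordinating party has agreed to go public. Here is a minimal sketch of that rule, with hypothetical party names and a unanimous sign-off assumption (real programs vary):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Party:
    """A participant in a coordinated disclosure (names are hypothetical)."""
    name: str
    signed_off: bool = False

def advisory_can_be_published(fix_available: bool, parties: List[Party]) -> bool:
    """Release the detailed report only when a tested fix exists
    and every coordinating party has agreed to go public."""
    return fix_available and all(p.signed_off for p in parties)

parties = [
    Party("reporter", signed_off=True),
    Party("vendor"),                        # still testing the patch
    Party("national CERT", signed_off=True),
]
print(advisory_can_be_published(fix_available=False, parties=parties))  # False
```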

 

The Patching Challenge

After any disclosure, the vendor must fix the vulnerability and release a patch, and users must then install it to update their products. However, users are typically poor at updating software, which creates a significant security challenge.

While patching a system often takes only a few minutes, past incidents have shown that failing to apply the latest security updates can prove far more costly. The WannaCry ransomware attack is a prime illustration of an exploit against unpatched systems. The Shadow Brokers group stole and leaked the underlying exploit code, and Microsoft published a bulletin announcing that patches were available for the affected flaw. Organizations that had not installed the updates were among the first hit by an incident with estimated losses of US $4-8 billion. 

In another incident, hackers stole the information of more than 160 million individuals after accessing Equifax's online dispute portal, exposing images and documents uploaded by customers. The attackers exploited a security flaw tracked as CVE-2017-5638 in Apache Struts, an open-source framework for building enterprise Java applications. The flaw had been disclosed and patched two months before the incident occurred; however, Equifax had failed to install the update on all of its servers.     
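Much of the patching challenge comes down to knowing which systems are still running a vulnerable version. Below is a minimal sketch of that check, using the Struts version ranges publicly reported as affected by CVE-2017-5638 (2.3.5 through 2.3.31 and 2.5 through 2.5.10); the inventory contents are made up for illustration.

```python
# Flag hosts in a software inventory that still run an Apache Struts build
# reported as affected by CVE-2017-5638. The ranges follow the public
# advisories; the inventory itself is hypothetical.

AFFECTED_RANGES = [
    ((2, 3, 5), (2, 3, 31)),   # Struts 2.3.5 - 2.3.31
    ((2, 5, 0), (2, 5, 10)),   # Struts 2.5   - 2.5.10
]

def parse_version(text: str) -> tuple:
    """Turn '2.3.31' into (2, 3, 31) so versions compare numerically."""
    return tuple(int(part) for part in text.split("."))

def is_affected(version: str) -> bool:
    v = parse_version(version)
    v = v + (0,) * (3 - len(v))    # pad so '2.5' compares like (2, 5, 0)
    return any(low <= v <= high for low, high in AFFECTED_RANGES)

inventory = {                      # hypothetical host -> installed version
    "portal-01": "2.3.30",
    "portal-02": "2.3.32",         # patched release line
    "reports-01": "2.5.10",
}

for host, version in inventory.items():
    status = "VULNERABLE" if is_affected(version) else "ok"
    print(f"{host}: struts {version} -> {status}")
```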

 

Government Zero-Day Stockpiles 

Government intelligence agencies operate an "equities process" to decide whether to disclose a newly found vulnerability to the vendor or to retain it, which can result in stockpiles of undisclosed zero-days held for offensive intelligence operations. For instance, the infamous Stuxnet attack against the Iranian nuclear program leveraged several zero-day exploits. 

It follows that a researcher who discovers a new vulnerability cannot tell whether that bug is already sitting in an adversary's zero-day stockpile awaiting exploitation. The situation fuels the debate over how quickly a researcher should disclose a vulnerability and which disclosure approach will produce the desired outcome. 

 

Summary

Vulnerability disclosure is frankly in a muddle today, forcing security researchers to choose the least bad option among the disclosure approaches. Choosing responsible disclosure allows product vendors to fix bugs before the discoverer makes the information public, but it pays no heed to flaws that hackers already know about or are actively exploiting. Nor does responsible disclosure offer a solution to the possibility that the bug is part of an adversarial nation's stockpile. And on top of all this, users are unquestionably awful at installing vendor patches. 

In the end, vulnerability disclosures that should lead to safe fixes can instead leave us at greater risk than ever.