According to research published by the Cortex® Xpanse™ team, threat actors are faster at finding vulnerable assets to attack than defenders are at finding those same assets to secure.
The researchers monitored attacker activity from January to March 2021 to better understand how quickly threat actors start scanning for a vulnerability after it is announced, and compared that speed against defenders' "mean time to inventory" (MTTI), the benchmark for how long organizations take to identify their own exposed assets.
Most adversary scans observed between January and March began 15 to 60 minutes after announcements of Common Vulnerabilities and Exposures (CVEs). In some cases, however, attackers were much faster: on March 2, threat actors started scanning for vulnerable Exchange Server systems within just five minutes of Microsoft's disclosure of three zero-days.
The team monitored 50 million IP addresses associated with 50 global enterprises, some of them Fortune 500 companies. They found that companies take an average of 12 hours to find new serious exposures, including insecure remote access (RDP, Telnet, SNMP, VNC, etc.), exposed database servers, and zero-day vulnerabilities in products such as Microsoft Exchange Server and F5 load balancers.
Threat actors, by contrast, scan to inventory vulnerable internet assets as often as once per hour, and even more frequently (every 15 minutes or less) after CVE disclosures.
Nearly a third of all identified issues involved the Remote Desktop Protocol (RDP), a common target for ransomware actors because it can grant admin access to servers.
The Cortex® Xpanse™ team identified three reasons why organizations tend to be slow to inventory their assets (a high MTTI):
- The attack surface is growing with a rapid transition to the cloud, supporting the recent addition of remote workers during the COVID-19 pandemic.
- Vulnerability scanners query only known assets and depend on timely CVE database updates.
- Inventory checks such as pen tests or red-team exercises are often performed only quarterly and may not be comprehensive; apart from red teaming, these approaches focus only on known assets.
Xpanse research found 79% of observed exposures occurred in the cloud. The cloud is inherently connected to the internet and it’s surprisingly easy for new publicly accessible cloud deployments to spin up outside of normal IT processes, which means they often use insufficient default security settings and may even be forgotten.
An attack surface comprises the digital and physical vulnerabilities in your hardware and software environment: the total set of entry points an unauthorized user can potentially exploit to access and steal data.
Organizations need a system of record for every asset, system, and service they expose on the public internet, including those hosted across all major cloud service providers and in dynamically leased ISP space. According to the researchers, this inventory should be comprehensive and attribute the full and correct set of internet-facing systems and services back to a specific organization.
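Such a system of record can be sketched as a simple data structure that ties each internet-facing service back to an owner, so that anything discovered externally but missing from the record stands out as a candidate shadow-IT or forgotten asset. The schema, field names, and sample data below are illustrative assumptions, not part of the Xpanse research:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    """One internet-facing service attributed to an owner (illustrative schema)."""
    ip: str
    port: int
    service: str
    owner: str  # team or business unit responsible for the asset

def find_unattributed(discovered, inventory):
    """Return discovered (ip, port) endpoints absent from the system of record.

    Anything reachable on the public internet but missing from the
    inventory is exactly the kind of unknown asset the research warns about.
    """
    known = {(a.ip, a.port) for a in inventory}
    return sorted(ep for ep in discovered if ep not in known)

# Example: one endpoint found by external scanning is not in the record.
inventory = [
    Asset("203.0.113.10", 443, "https", "web-team"),
    Asset("203.0.113.11", 22, "ssh", "ops"),
]
discovered = [("203.0.113.10", 443), ("203.0.113.11", 22), ("203.0.113.99", 3389)]
print(find_unattributed(discovered, inventory))  # only the RDP endpoint is unknown
```

In practice the "discovered" side would be fed by continuous external scanning rather than a static list, but the comparison against the record works the same way.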
The Cortex® Xpanse™ team recommends security teams look at the following list of services and systems to limit the attack surface.
- Remote access services (e.g., RDP, VNC, TeamViewer);
- Insecure file-sharing/exchange services (e.g., SMB, NetBIOS);
- Unpatched systems vulnerable to public exploit and end-of-life (EOL) systems;
- IT admin system portals;
- Sensitive business operation applications (e.g., Jenkins, Grafana, Tableau);
- Unencrypted logins and text protocols (e.g., Telnet, SMTP, FTP);
- Directly exposed Internet of Things (IoT) devices;
- Weak and insecure/deprecated crypto;
- Exposed development infrastructure;
- Insecure or abandoned marketing portals (which tend to run on Adobe Flash).
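As a first pass at auditing the TCP-based categories above, a security team could probe its own address space for the default ports of the riskiest services. The port-to-category mapping and the connect check below are a minimal sketch, not a complete scanner (and should only ever be run against hosts you are authorized to test):

```python
import socket

# Risky services from the checklist, keyed by default TCP port (illustrative subset).
RISKY_PORTS = {
    3389: "remote access (RDP)",
    5900: "remote access (VNC)",
    23:   "unencrypted login (Telnet)",
    21:   "unencrypted transfer (FTP)",
    445:  "file sharing (SMB)",
    139:  "file sharing (NetBIOS)",
}

def classify_port(port):
    """Map a TCP port to the exposure category it represents, or None."""
    return RISKY_PORTS.get(port)

def is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(host):
    """List the checklist services that answer on the given host."""
    return [(port, name) for port, name in RISKY_PORTS.items() if is_open(host, port)]
```

A production tool would also need to cover the UDP-based services the list implies (e.g., SNMP), verify certificates and crypto strength, and check software versions against CVE data, none of which a bare TCP connect can see.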
By keeping the attack surface as small as possible, you can maintain a strong security posture and limit or eliminate the damage an attacker can inflict.