How to Reduce Cyber Risk Through Astute Spending

“If we cannot make sense of everything we see on the battlefield, we need a better way of doing things.”

This quote from General Sir Richard Lawson Barrons, former Commander of the UK’s Joint Forces Command, sums up a key security spending and operational pain point: information flow. 

Collecting and responding to data is at the core of how security functions. To stop threats, security teams need to be able to understand and act on data from across networks, endpoints, and critical servers. 

In our recent Reducing Risk through Astute Spending webinar, SenseOn’s CEO, Dave Atkinson, explores the information challenges facing security teams and why spending astutely to reduce risk is one of the highest-value moves a security team can make in 2023.

Dave also gives an example of how a 4,000-person company saved hundreds of thousands of pounds through astute spending. 

If you missed the webinar, you can still watch it on our website.

Here are a few of the core points Dave covered in the webinar. 

Why Well-Funded Security Programs Fail to Stop Threats

On the defensive side of the cyber front line, security teams use cyber risk management to quantify and assess cybersecurity risks based on the potential cost they would create for a business.

Security teams use standards, specifications and protocols to reduce the likelihood that a business will suffer a data breach or some other type of cyber incident and, consequently, the potential costs tied to these incidents. 

The problem is that threat actors are sophisticated enough to get through these barriers.

Attackers don’t stop once they meet a novel security measure or an organisation with a supposedly mature security posture. Instead, they adapt their attacks to overcome defences and figure out ways to bypass risk reduction strategies.

For example, when companies began to invest in backups to mitigate the risk of ransomware, threat actors started using sensitive data exposure as leverage for ransom payments, aka “double extortion.” 

Or consider that, despite spending billions on cybersecurity, the US Department of Defense was still compromised via the SolarWinds supply chain attack (Forrester predicted that in 2022, 60% of incidents would stem from third-party risk). Or how attackers are bypassing anti-phishing filters with novel social engineering tactics like hyper-personalised phishing emails.

This is where the adage “defenders have to be lucky every time; attackers only have to be lucky once” starts to bite. 

It is almost impossible to reliably counter dynamic, cunning adversaries in a world of insider threats, zero-day vulnerabilities, and AI-powered malware with static information security tool kits and models. Against fast-moving cybersecurity threats and ever-expanding attack surfaces, no amount of castle walls will stand. 

The fact that cybercriminals don’t play by our rules is bad enough, but what’s worse is that the tool stacks security teams use are not helping with risk mitigation either.

The Downside of Best of Breed 

If resilience to cyber attacks were a function of having a certain number of security controls and solutions, most defenders would be in a much better place than they are today.

Sadly, this is not the case.

Although a typical medium-sized organisation now has between 50 and 60 security tools operating within its environment (a figure that grows annually), the global cost of cybercrime (which includes data loss, disaster recovery, reputational damage, regulatory fines, business disruption, and more) is increasing by at least 15% annually.

In other words, simply deploying more tools will not keep your organisation safe.

To see how we got here, it helps to look at the two eras of security tooling evolution.

  • 1990s to 2010: Rules and Signatures. The first generation of enterprise security solutions were platforms and general-use tools (e.g., antivirus) that worked by matching observed activity against predefined signatures of known threats (a sketch of this matching model follows the list below). 

  • 2010 to 2020s: Best of Breed. As the volume and capability of threats soared in the 2010s, security solutions niched down. We saw the emergence of black-box, siloed tools that gave security teams deep capability within defined areas, e.g., next-generation antivirus (NGAV), endpoint detection and response (EDR), and user and entity behaviour analytics (UEBA). We also saw the emergence of automation technologies such as security orchestration, automation, and response (SOAR).
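
To make the first era concrete, here is a minimal sketch of the signature-matching model those early tools relied on: flag only what matches a predefined list of known-bad indicators. The single entry below is the commonly cited SHA-256 of the harmless EICAR test file; real products shipped far larger databases, but the blind spot is the same.

```python
import hashlib

# Static signature database: SHA-256 hashes of known-bad files.
# First-generation tools shipped lists like this and simply matched against them.
KNOWN_BAD_HASHES = {
    # Commonly cited SHA-256 of the EICAR antivirus test file
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def is_known_threat(file_bytes: bytes) -> bool:
    """Flag a file only if its hash matches a predefined signature.

    Anything not already on the list, however malicious, passes silently:
    the core weakness of a purely signature-based model.
    """
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# A never-before-seen payload evades detection entirely.
print(is_known_threat(b"brand new malware variant"))  # False
```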

The evolution of best-of-breed tools gave security teams confidence that they could mitigate advanced threats in particular areas. 

However, the recent proliferation of narrow-scope tools also comes with a downside.

As the average number of security solutions rose throughout the 2010s, the number of false and unnecessary alerts facing security teams soared. So did the volume of data that security tools needed to process.

Today’s security operations centres (SOCs) are paying the price for this information overload. 

Companies locked into three-year contracts with network detection and response (NDR) vendors can find themselves needing to re-negotiate their arrangement within 12 months due to the overwhelming volume of data.

Processing the immense volume of real and false alarms these tools generate is becoming an almost comically challenging problem.

One example Dave mentions in the webinar is a SOC at a financial institution that receives 25 million monthly alerts. If the SOC tried to address these alerts, the cost would be $1.5 billion in salary alone. 
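
The arithmetic behind that salary figure is easy to reproduce. The sketch below uses illustrative assumptions (triage time per alert, analyst salary, annual working hours) that are not taken from the webinar, but they recover a figure of the same order:

```python
# Back-of-envelope check on the $1.5 billion figure. Per-alert triage time,
# salary, and annual hours are illustrative assumptions, not webinar figures.
alerts_per_month = 25_000_000
minutes_per_alert = 10           # assumed manual triage time per alert
analyst_salary_usd = 60_000      # assumed fully loaded annual salary
hours_per_analyst_year = 2_000   # roughly 50 weeks x 40 hours

alerts_per_year = alerts_per_month * 12
analyst_hours = alerts_per_year * minutes_per_alert / 60
analysts_needed = analyst_hours / hours_per_analyst_year
salary_bill = analysts_needed * analyst_salary_usd

print(f"{analysts_needed:,.0f} analysts, ${salary_bill / 1e9:.1f}B in salary")
# Output: 25,000 analysts, $1.5B in salary
```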

Where to Look for Hope

For Dave, the light at the end of the tunnel comes from two interlinked developments: 

  • The rise of the MITRE ATT&CK framework.

  • The development of “best of suite” automated platforms.

MITRE ATT&CK, the first source of hope, emerged in 2013. A dynamic, freely available knowledge base of threat actor behaviours and techniques, it unlocked a new generation of openness and transparency in the cybersecurity community. 

Security teams can use ATT&CK to map out real-world scenarios and answer questions like, “If a particular threat group attacks us, where does the capability of my EDR solution begin and end?”

Critically, ATT&CK is machine-readable and scalable. Combining its data with security automation is a promising pathway out of the information malaise security teams find themselves in.
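
To make that concrete, here is a minimal Python sketch of automating a coverage question against the machine-readable STIX bundle MITRE publishes on GitHub. It lists the techniques attributed to a named group that a vendor's claimed detection coverage does not include. The EDR_COVERED set is a hypothetical stand-in for your EDR vendor's coverage claims, and for brevity the sketch ignores sub-technique granularity and revoked objects.

```python
import json
import urllib.request

# Public STIX bundle for ATT&CK Enterprise, published by MITRE on GitHub.
ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/"
              "master/enterprise-attack/enterprise-attack.json")

# Hypothetical: technique IDs your EDR vendor claims detection coverage for.
EDR_COVERED = {"T1059", "T1566", "T1047", "T1105"}

with urllib.request.urlopen(ATTACK_URL) as resp:
    objects = json.load(resp)["objects"]

by_id = {o["id"]: o for o in objects}

# Find the intrusion set (threat group) of interest, e.g. APT29.
group = next(o for o in objects
             if o["type"] == "intrusion-set" and o.get("name") == "APT29")

# Follow "uses" relationships from the group to attack-pattern objects
# and collect the ATT&CK technique IDs (e.g. "T1059") they point at.
used = set()
for rel in objects:
    if (rel["type"] == "relationship"
            and rel.get("relationship_type") == "uses"
            and rel["source_ref"] == group["id"]):
        target = by_id.get(rel["target_ref"], {})
        if target.get("type") == "attack-pattern":
            for ref in target.get("external_references", []):
                if ref.get("source_name") == "mitre-attack":
                    used.add(ref["external_id"])

print("Techniques used but not covered:", sorted(used - EDR_COVERED))
```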

However, getting away from high volumes of inaccurate alerts depends on another key ingredient: data. The data an automation platform uses needs to be high quality, consistent, and, most of all, trustworthy. 

How to Reduce Cyber Risk and Spending with SenseOn

In the webinar, Dave discusses how SenseOn connects identity with full deep packet inspection and telemetry to collect high-quality, in-depth, and timely data through its Universal Sensor. 

Dave also explains how SenseOn automatically ties observations to the ATT&CK framework, uses a cloud-based security information and event management (SIEM)/SOAR platform that can be deployed in any AWS region, and leverages hyper-automated SOC services.

But SenseOn does much more than just advance the theoretical side of cybersecurity.

To demonstrate, Dave gave a real-world example of how SenseOn helped a 4,000-person construction company reduce its alert volume by over 95%. The company, which used Microsoft’s Sentinel SIEM, was able to integrate with SenseOn through the Microsoft Graph API.

SenseOn could then take the vast volume of data created by Sentinel, analyse it, and automatically separate suspicious behaviour from false alarms. 
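
For a sense of what such an integration involves on the data-collection side, here is a minimal sketch that pulls alerts through the Microsoft Graph security alerts endpoint and applies a toy triage rule. Acquiring the OAuth token is assumed to happen elsewhere, and the worth_investigating rule is an illustrative stand-in only; SenseOn’s actual analysis is proprietary and not shown here.

```python
import requests

# Microsoft Graph security alerts endpoint (a documented Microsoft API).
GRAPH_ALERTS = "https://graph.microsoft.com/v1.0/security/alerts"

def fetch_alerts(token: str) -> list[dict]:
    """Page through security alerts exposed by Microsoft Graph."""
    headers = {"Authorization": f"Bearer {token}"}
    alerts, url = [], GRAPH_ALERTS
    while url:
        page = requests.get(url, headers=headers).json()
        alerts.extend(page.get("value", []))
        url = page.get("@odata.nextLink")  # follow pagination, if present
    return alerts

def worth_investigating(alert: dict) -> bool:
    """Toy triage rule: keep only high-severity, unresolved alerts."""
    return alert.get("severity") == "high" and alert.get("status") != "resolved"

# Usage (token acquisition not shown):
# suspicious = [a for a in fetch_alerts(token) if worth_investigating(a)]
```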

This delivered over £200,000 in direct savings alone and enabled the company to scale back its hiring requirements.
