APRA, test before you trust

For years, the cyber security industry has been warning about the inevitability of cyber breaches, and for years we’ve been accused of stoking fears to push product.

Finally, it seems, our warnings have been heeded. Last month, the Australian Prudential Regulation Authority (APRA) released a draft prudential standard for information security. The standard aims to shore up the cyber defences – detection and response capabilities in particular – of Australia’s financial industry.

Submissions are currently being sought and, once finalised, the standard will be legally binding – compelling Australia’s financial services industry to keep its systems secure against the latest attacks.

While the move is an admirable first step, we need to ensure the standard is worth the paper it’s written on and doesn’t become just another regulatory box-ticking, back-slapping exercise.

The old saying goes, ‘give an inch, take a mile’. In this case, any ambiguity and room for interpretation has to be kept to a minimum. For example, the current draft states: “An APRA-regulated entity must have robust mechanisms in place to detect and respond to information security incidents in a timely manner”.

Define ‘robust’. Define ‘timely’. In fact, define ‘detect’ – should the technology be able to actually detect incidents, or is it good enough that the vendor promised it would be able to do so?

For example, in recent discussions with security analysts at three of Australia’s big four banks, each and every one of them lamented their inability to detect advanced security threats. The common thread through all of these discussions was that large investments had been made in technologies that promised detection capabilities, yet the deliverables were rudimentary at best. They spoke of the significant amounts of money wasted as they struggled to operationalise technology investments, and of how it was near impossible to recruit experienced security staff on a 24/7 basis.

Despite this history of non-performance, money was still being thrown at sandboxes, EDRs, SIEMs and, now, at vendors promising the world through machine learning – in the hope that the detection capabilities promised could be achieved by burning cash in a prayer pyre.

This is where APRA could bring clarity to matters. Technologies that claim to be able to detect threats should be tested, and must pass a defined benchmark before they are approved for use as a detection mechanism.

The same principle should apply to Security Operations Centres and their detection and response capabilities. I’m not just talking about availability SLAs – these mean nothing if the service provider is incapable. It’s like having a security guard who’s always asleep on the job: sure, they’re ‘there’, but what’s the point? SOCs need to be thoroughly tested to ensure they work.

Let’s take fire alarms as an example. If you want to sell a fire alarm in Australia, you must first prove your product is able to detect a fire. It must be submitted for testing by CSIRO and conform to Australian standards – if your product doesn’t pass these evaluations, it’s not a fire alarm and can’t be sold as such.

Every state in Australia mandates that dwellings must have fire alarms installed and, in some cases – as in Queensland – that those alarms must be replaced if they were manufactured more than 10 years ago.

Even though the fundamental nature of fire hasn’t changed in millions of years, a product that once passed evaluation of its ability to detect a fire must eventually be replaced, because it is no longer as reliable as newer models.

This is a premise that could, and should, be applied to information security. The fundamental nature of cyber attacks changes every day – attack methods are continuously being honed and refined. Despite this, technologies that are more than a decade old are still being relied upon to do what they were never able to do very well in the first place.

Ask anyone who has used an IPS, an EDR tool or a SIEM. Alert fatigue is one of the most critical issues in cyber security today. Products that are sold as a means of detection either inundate analysts with so many alerts and false positives that it is impossible to identify the real threats, or let new attack techniques slip by completely undetected – the noise becomes the threat, while the advanced threat hides within the noise.

After recent red-team testing projects, technology leaders at multiple Australian firms could not believe that the controls they had implemented did not pick up a single thing, despite huge investments in both SIEM technology and MSSPs. And yet, under APRA’s current draft Prudential Standard, financial firms with these controls would technically have “robust mechanisms in place to detect…”.

Of course, this is a problem that goes far beyond Australia’s financial services sector. This is a discussion that vendors and service providers across the globe must be willing to have. Testing is one thing, but what is the point of testing if failing that test has no consequences?

APRA has a chance here to draw a line in the sand. It has a chance to lead the world, and hold vendors and service providers accountable to their promises. And so my submission to APRA will call for testing of products, services and MSSPs to ensure a minimum level of capability.

After all, our warnings are finally being taken seriously. It’s only right that we take the services we offer just as seriously. 
