Craig Rowland, CEO of Sandfly Security, read this paper and shared similar observations, plus a few of his own:
"The desire to get the widest possible attack coverage is driven by marketing, not by usefulness. My opinion for many years has been that false positives are much worse than false negatives. Yet every industry evaluation focuses on the opposite.
Humans would much rather deal with some missed attacks than 500K false alarms. Genuine attackers rarely trip just a single alarm; they will trigger many alerts as they traverse a network. You're OK missing a few things if it means what you do see is accurate and reliable.
A product that scores 100% detection in an isolated test environment turns into a smoke alarm mounted above your stove while you cook bacon in the real world. False alarms drive burnout and make it easier for attackers to hide. This has been known forever.
The goal of IDS design is to reduce your signature set to a very accurate and quiet core. The best compliment we get is when customers say that, after initial setup, our product is sometimes too quiet. That's the point. We don't want to do the below.
Black-box alerting is a problem, and we don't do it. Clear explanations and the ability to see how a signature works are mandatory in IDS products. Sandfly tells you in plain language what is going on, and all of our signatures can be viewed and modified.
The comment on ambiguity also rang true. Alerts should describe what is going on so clearly that there is no way to misinterpret them. Don't make the analyst's life harder by forcing them to dig into raw data unless they need to do an in-depth investigation.
I don't agree with their assessment that AI/ML will fix everything. These systems are only as good as their input data, and I wouldn't expect any magic if they are flooded with false alarms on the input side. Overall, a good paper to read, with valid insight into SOC teams and IDS."
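The alert-design points above can be illustrated with a minimal sketch. The field names and rendering below are hypothetical, not Sandfly's actual schema; the idea is simply that an alert should carry a plain-language summary, the specific evidence that triggered it, and a reference to the signature that fired, so an analyst can interpret it without reverse-engineering an opaque rule.

```python
from dataclasses import dataclass

# Hypothetical alert record: every field an analyst needs to understand the
# detection without opening the raw event data. Field names are illustrative
# only, not taken from any real product.
@dataclass
class Alert:
    signature_id: str    # which signature fired (itself viewable/modifiable)
    summary: str         # plain-language statement of what happened
    evidence: dict       # the specific facts that triggered the match
    why_it_matters: str  # what an attacker gains from this behavior

def render_alert(alert: Alert) -> str:
    """Render an alert as self-explanatory text, so the analyst does not
    have to dig into raw data just to interpret it."""
    lines = [
        f"[{alert.signature_id}] {alert.summary}",
        f"  Why it matters: {alert.why_it_matters}",
        "  Evidence:",
    ]
    lines += [f"    {key}: {value}" for key, value in sorted(alert.evidence.items())]
    return "\n".join(lines)

alert = Alert(
    signature_id="proc_hidden_from_ps",
    summary="Process PID 4242 is running but hidden from process listings.",
    evidence={"pid": 4242, "exe": "/tmp/.x/kworker", "visible_in_ps": False},
    why_it_matters="Hiding a running process is classic rootkit behavior.",
)
print(render_alert(alert))
```

The contrast with black-box alerting is that nothing here is a bare score or opaque rule ID: the summary and evidence stand on their own, and the signature reference lets the analyst inspect or tune the rule that produced it.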
You can also follow this thread on Twitter: https://twitter.com/CraigHRowland/status/1486426865807945728