sharkbot 3 days ago | next |

I’ve been thinking about this topic through the lens of moral philosophy lately.

A lot of the “big lists of controls” security approaches correspond to duty ethics: following and upholding rules is the path to ethical behaviour. IT applies this control, manages exceptions, tracks compliance, and enforces adherence. Why? It’s the rule.

Contrast that with consequentialism (the outcome is key) or virtue ethics (exercising and aligning with virtuous characteristics), where rule-following isn’t the main focus. I’ve been part of (heck, I’ve started) lots of debates about the value of some arbitrary control that seemed out of touch with reality, but I framed my perspective around virtues (efficiency, convenience) or outcomes (a faster launch, lower overhead). That disconnect in ethical perspectives made most of those discussions a waste of time.

A lot of security debates are specific instances of general ethical situations; threat models instead of trolley problems.

jiggawatts 3 days ago | root | parent | next |

I work at medium to large government orgs as a consultant, and it’s entertaining watching newcomers from small private-sector companies use - as you put it - consequentialism and virtue ethics to fight against an enterprise that admits only duty ethics: checklists, approvals, and exemptions.

My current favourite is the mandatory use of Web Application Firewalls (WAFs). They’re digital snake oil sold to organisations that have had “Must use WAF” on their checklists for two decades and will never take it off the list.

Most WAFs I’ve seen or deployed are doing nothing other than burning money to heat the data centre air, because they’re generally left in “audit only mode”, sending logs to a destination accessed by no-one. This is because if a WAF enforces its rules it’ll break most web apps outright, and it’s an expensive exercise to tune them… and to maintain that tuning to avoid 403 errors after every software update or new feature. So no-one volunteers for the responsibility, which would be virtuous ethical behaviour in an org where that isn’t rewarded.

This means that recently I spun up a tiny web server that costs $200/mo with a $500/mo WAF in front of it that does nothing, just so a checkbox could be ticked.

sebazzz 2 days ago | root | parent | next |

Oh man, web application firewalls, and especially Azure Application Gateway, are the bane of my existence. Where I work they literally slap an Azure Application Gateway instance on every app service, with all rules enabled (even the ones Microsoft recommends not to enable) in block mode, directly when provisioning the stuff in Azure. The app is never observed in audit mode.

The result is that random stuff in the application doesn’t work, for any user or only for some users, because some obscure rule in Azure Application Gateway triggers. The SQL injection rules in particular seem to misfire very often. A true pain to debug, and then a true pain to go through the process to get the particular rule disabled.
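
To make the misfires concrete, here's a minimal sketch in TypeScript, with a toy pattern in the spirit of the managed SQLi rule sets (not the actual Azure rule), showing how a generic rule blocks legitimate traffic:

    // Toy stand-in for a managed SQL injection rule. Real rule sets are far
    // more elaborate, but the failure mode is the same: the pattern inspects
    // every field with no knowledge of the application's schema.
    const sqliRule = /\b(union\s+(all\s+)?select|select\s+.+\s+from|drop\s+table)\b/i;

    // Perfectly legitimate user input, e.g. a support-ticket comment:
    const requestBody = JSON.stringify({
      comment: "Select the rows you want from the export screen,",
      note: "then use Union All Select in your own reporting tool.",
    });

    // The WAF matches and returns 403 before the app ever sees the request.
    console.log(sqliRule.test(requestBody)); // true -> blocked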

And that’s before we even get to the monthly costs. Often Azure Application Gateway itself is more expensive than the App Service + SQL Database + Blob Storage + optional App Insights combined. I really think someone in the company got offered a private island by Microsoft for making Azure Application Gateway a mandatory piece of the infrastructure of every app.

Yes, most of our security has been outsourced to cheap workers in developing countries like India, who are of course rated on maintaining the standard and not on thinking, understanding what you want, and putting things in context, and who probably also work 60-70 hours per week at ungodly times, so you can hardly blame them. It is truly the process that is broken.

vrighter a day ago | root | parent |

Well, what if they were intelligent and could actually understand the data and its schema before deciding whether to allow or reject the request... wait... that's just the application itself.

jiggawatts 18 hours ago | root | parent |

It all boils down to trust. Management don’t trust the developers to do the right thing because they outsourced development to the lowest bidder. They futilely compensate for this by spending a mere $500/mo on a WAF.

MBA thinking at its finest…

lll-o-lll 3 days ago | root | parent | prev | next |

So, WAFs. Bad? I don’t know enough about them. If it’s just a way to inject custom rules that need to be written and maintained, the value seems low or negative. I had assumed you got a bunch of packaged rules that protected against (or at least detected) common classes of attacks. Or at least tools to react to an attack?

ozim 3 days ago | root | parent |

Just slapping a WAF in front of your services without configuring and maintaining rules is bad.

Without someone dedicated to maintaining the WAF, it is just a waste. And not many companies want to pay for someone to babysit a WAF, even though it can be a full-time job if there are enough changes in the layers behind it.

dataflow 3 days ago | root | parent | prev | next |

> they’re generally left in “audit only mode”, sending logs to a destination accessed by no-one.

Aren't these still useful for figuring out what happened if you're hacked?

lmm 3 days ago | root | parent |

Maybe, if the attacker didn't bother to hack the WAF itself (generally a softer target than whatever's behind it), and if you bothered keeping and understanding the logs (extremely unlikely to be a good use of resources).

dataflow 3 days ago | root | parent |

You don't need to understand the logs at the time you gather them for this, you just need to keep them long enough to cover the breach, and to be able to understand them after the fact. Hardly seems like an obvious waste to me, and well worth $500/mo.

lmm 3 days ago | root | parent |

> you just need to keep them long enough to cover the breach, and to be able to understand them after the fact

And avoid leaking customer information/passwords/etc. through them until then, which is the hard part.

djbusby 3 days ago | root | parent |

Yep. I've seen a WAF in "audit mode" and it's got loads of client API keys in there, among other fun things.

Checks the box for WAF, but adds a new risk.
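
If you're stuck shipping those audit logs somewhere anyway, even a crude scrub pass before storage reduces the blast radius. A minimal sketch (the header names are illustrative; real WAF log pipelines vary by product):

    // Redact well-known credential-bearing headers from a log entry before it
    // is written anywhere. This is a sketch, not a complete secret scrubber.
    const SENSITIVE_HEADERS = ["authorization", "x-api-key", "cookie"];

    function scrubLogEntry(entry: Record<string, string>): Record<string, string> {
      const scrubbed: Record<string, string> = {};
      for (const [key, value] of Object.entries(entry)) {
        scrubbed[key] = SENSITIVE_HEADERS.includes(key.toLowerCase())
          ? "[REDACTED]"
          : value;
      }
      return scrubbed;
    }

    console.log(scrubLogEntry({ Authorization: "Bearer abc123", path: "/login" }));
    // { Authorization: '[REDACTED]', path: '/login' }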

tryauuum 3 days ago | root | parent | prev |

can it even be considered a firewall if it's running in an "audit only mode"?

TeMPOraL 3 days ago | root | parent |

(Corporate IT sec answer): it says "firewall" in the name, so yes.

jiggawatts 3 days ago | root | parent |

This is the correct answer!

Every corporation over a certain size has a rule that everything needs a firewall in front of it… even if that something is a cloud service that only listens on port 443.

treflop 3 days ago | root | parent | prev | next |

I don't think this is limited to security.

I have friends who are very scary drivers but insist on backseat driving and lecturing you about best driving practices, and coworkers who insist on implementing excessive procedures at work but are constantly the ones breaking things.

I think following rules gives some people a sense of peace in a chaotic and unpredictable world. And I can't stand them.

CSSer 3 days ago | root | parent | next |

Do you mean the rules or the people? I don’t mean to sound facetious.

treflop 2 days ago | root | parent | prev |

A little of both. I understand getting a warm fuzzy feeling that you did the right things, but if you don't achieve your goal, what's the point?

But let me clarify -- OP mentioned a contrast between consequentialism and virtue ethics, and I think you can be "too consequentialist" too. I wouldn't call myself a rule follower, but I also follow rules 99% of the time. It does create a sense of order and predictability, and I value that.

There is a right balance where you follow rules but also know when to break them. What I really can't stand are rigid people -- diehard rule followers or diehard "no one can tell me what to do" types. I find working with rigid people hard because you have to work around their "buttons."

awesome_dude 2 days ago | root | parent | prev |

Often the reason they know all of these rules is that they are constantly being bitten or yelled at for breaking them.

Speaking as someone who is constantly trying to keep good procedures in the team because of all the footguns I have collected over time

xxpor 3 days ago | root | parent | prev | next |

Securities laws are written in terms of duty ethics ("fiduciary duty", "duty of due care", etc). That's all anyone at the top would care about.

immibis 3 days ago | root | parent |

It quickly turns into: what can I get away with, while claiming I performed the duty?

xxpor 2 days ago | root | parent |

Agreed!

immibis a day ago | root | parent |

It gets worse than that: it rewards people who try to break the law as much as possible without getting caught, while people who follow it are punished.

That's true of most laws, but usually the system punishes lawbreakers enough that following the law is the better deal overall. When the law is vague and subjective, the people who get the most reward are the ones willing to see how far they can push it.

danjl 3 days ago | prev | next |

The vast majority of the security "industry" is about useless compliance, rather than actual security. The chimps have put their fears into large enterprise compliance documents. This teaches the junior security people at enterprise companies that these useless fears are necessary, and they pass them along to their friends. Why? Not just because of chimps and fear, but also $$. There is a ton of money to be made off of silly chimps.

goalieca 3 days ago | root | parent | next |

I’m an engineer who now works in security. Very few of us come from an engineering background. Most lack the technical skill to do much more than apply controls and run tooling. Some try to do design work, but imagine a junior dev with 2-3 years of experience trying to write a service.

Those of us who are architects and coders don’t often get to do that anymore, because we’re not working on single projects or solutions… so we become people who swoop in on a project for a month at a time to make sure there are no major smells before moving on. Our understanding of your system is shallow as a result.

sebazzz 2 days ago | root | parent |

> I’m an engineer who now works in security. Very few of us come from an engineering background. Most lack the technical skill to do much more than apply controls and run tooling.

I think you probably hit the nail on the head there. Often the people in infosec I work with are not interested in putting things in context or thinking about the actual impact of a control not being met. Instead, a bunch of controls are just thrown out without any regard for actual security.

Now I have to say, most of our security has been outsourced to cheap workers in developing countries like India, who are of course rated on maintaining the standard and not on thinking and understanding what you want, and who probably also work 60-70 hours per week at ungodly times, so you can hardly blame them.

Spivak 3 days ago | root | parent | prev | next |

Compliance is useful, just not for security.

* You get a cool industry certification that you can put on your website to justify the vague "we take your security seriously" platitudes we spew.

* It lets you stop putting money and effort into security once you've renewed your certs this year.

* You don't need to hire a dedicated security person, any sysadmin can check boxes.

* You can say you followed industry best practices and "did all you could" when you get breached.

It's the answer to "how do we not care about security?" across an entire industry that stands to make billions from said lack of care. In a depressing way, the company with useless performative security certs will fare better after a breach than the one without them that actually tried.

My less cynical take about this is that if you need to actually care about security because you'll be up against sophisticated targeted attacks then you probably already know that. For everyone else there's checkboxes to stop companies from getting owned by drive-by attacks.

NoPicklez 16 hours ago | root | parent |

Your first sentence isn't necessarily true.

Compliance is everywhere, and it often means complying with larger industry "requirements" or with controls considered best practice.

If you're starting a business from scratch: I don't know of any company that has developed its own controls library without complying with some sort of framework or baseline control set.

The frameworks and control sets you comply with exist and are there for a reason, but your mileage may vary depending on how you choose to use them.

NoPicklez 16 hours ago | root | parent | prev |

Well, one of the big problems is that businesses don't do root-cause analysis on incidents and learn which controls failed, or which controls should have been in place that might have prevented the incident.

Additionally, there's actually testing whether the controls work. I work in testing controls, and I find that a lot of controls may be well designed but simply aren't being performed due to resource constraints.

habitue 3 days ago | prev | next |

The ironic thing about the chimp story is that chimps are probably immune to the problem, and humans are the only species that would fall for it. It takes chimps a long time to learn to copy others. I doubt they could sustain a superstition like this for long, even if you managed to induce it through great effort.

It's humans who copy each other without a second thought. It's a great heuristic on average. These kinds of fables are correctives against our first instinct to replicate others' behaviors, but if we actually tried to reason through everything from first principles we'd never get anything done.

Copying is the plain pieces in the Lucky Charms; thinking things through is the marshmallows.

brandall10 3 days ago | prev | next |

I just read the book The Phoenix Project. It's over a decade old, so some of the principles are obvious or quaint at this point, or perhaps not quite as applicable.

That said, one of the things that caught me off guard is the dressing-down of the head of security by a member of the board. More or less, they were told that what they did was clog the flow of useful work. The message conveyed is similar to this post's.

TeMPOraL 3 days ago | root | parent | next |

> More or less, they were told what they did was clog the flow of useful work.

That sounds like a very valid complaint, too rarely heard these days.

People seem to forget that security always comes at a cost, so security decisions are always trade-offs. The only perfectly secure system is the one that does absolutely nothing at all.

Does forcing everyone's machine to run real-time scans on all file I/O improve our security more than it costs us in crippling all software devs? Maybe. Being on the receiving end of such policies, including this particular one, I sometimes doubt the question was even asked, much less that someone bothered to estimate the expected loss on both sides of the equation. Ignoring the risks doesn't make them go away, but neither do the costs go away when you pretend they don't exist.

too_pricey 3 days ago | root | parent | prev | next |

The Phoenix Project has been very influential on me in my security career, at least partially because I share the name of the ineffectual CISO and want so desperately to avoid the link.

I think the book is still very applicable, and every security practitioner needs to be hit over the head with it (or at least The DevOps Handbook or Accelerate). Security generally is decades behind engineering operations, even though security is basically just a more paranoid lens for doing engineering ops; the ideas from Phoenix are still depressingly revolutionary in my field.

alsetmusic 3 days ago | root | parent | prev |

> More or less, they were told what they did was clog the flow of useful work.

Similar to seeing IT as a cost rather than a benefit.

colek42 3 days ago | prev | next |

I've been thinking about this a lot. First, the author should replace "security" with "compliance"; currently they are two different things. There is a huge divide between compliance teams and developers; they speak completely different languages. I'm writing an entire series about it. I do think we can fix the problem, but it is going to be a lot more work than it was to get development and operations on the same page.

https://productgovernance.substack.com/publish/posts/detail/...

pphysch 3 days ago | prev | next |

This is quite a simplification. There are a lot of useless or dubious controls out there, but the problem is really the contradiction between security pragmatism and compliance regimes.

####

Government: I need a service.

Contractor: I can provide that.

Government: Does it comply with NIST 123.456?

Contractor: Well not completely, because control XYZ is ackshually useless and doesn't contribute--

Government: hangs up

deathanatos 3 days ago | root | parent | next |

I think it's fine to implement a useless control to get a customer.

Just don't pretend that you're doing it because it is a useful control; be clear that you're doing it because jumping through that hoop gets you the customer, and "we're a smaller fish than the government". Especially with the government (especially if it's the USA…) there are going to be utterly pointless hoops. I can pragmatically smile & jump, … but that doesn't make it useful.

JoshTriplett 3 days ago | root | parent | next |

Exactly. There is absolutely a threshold of money that will get me to implement FIPS. There is no threshold of money that will get me to say it's a good idea that has any value other than getting the (singular) customer that demands FIPS.

SAI_Peregrinus 3 days ago | root | parent |

The core idea of FIPS doesn't seem terrible at first glance: a validation program to ensure known attacks are protected against.

The obvious issue is that known attacks have progressed significantly faster than FIPS has been updated, so in practice it doesn't defend against actual attackers. Compliance-based security almost always falls into this trap, and it is often even worse, because compliance with the standard is treated as the maximum that can be done instead of the minimum that must be done. FIPS' fatal flaw is that in many cases it mandates a maximum security level that is now outdated.

It's a lot like building or electrical codes: if they're treated as the minimum, as intended, things stay safe; but if they're only just barely complied with, buildings tend to fall down and/or catch fire.

encomiast 3 days ago | root | parent | prev |

I guess as a company I would agree that it's fine to implement a useless control to get a customer. As a taxpayer... not so much. We spend so much money (at least in the U.S.) on garbage.

not2b 3 days ago | root | parent | prev | next |

Note, though, that "the government" (NIST to be specific) says that requiring passwords to be changed every 90 days is counterproductive and shouldn't be done, yet many corporations (including my employer) still mandate it. Corporate bureaucracy can be as backward and counterproductive as government bureaucracy.

count 3 days ago | root | parent | prev |

"We have an alternate implementation / mitigation" gets you passed the hangup, for folks who need the magic words for 'thats dumb. we do it right'.

EE84M3i 3 days ago | prev | next |

Other than the myriad password problems that NIST has killed in competent circles, what are some other "useless controls"?

commandar 3 days ago | root | parent | next |

>that NIST has killed in competent circles

Just because this is my favorite soapbox - anyone that has to deal with passwords should go read NIST SP800-63B:

https://pages.nist.gov/800-63-3/sp800-63b.html

I was kind of shocked by just how gosh-darned reasonable it is when it came out a couple of years ago. It's my absolute favorite thing to cite during audits.

"Are you requiring password resets every 90 days?"

"No. We follow the federal government's NIST SP800-63B guidelines which explicitly states that passwords should not be arbitrarily reset."

I've been pleasantly surprised that I haven't really had an auditor push back so far. I'm sure I eventually will, but it's been incredibly effective ammunition so far.

Jedd 3 days ago | root | parent | prev |

Alas, in Australia one of the more popular frameworks in gov agencies is Essential Eight, and they are a few years away from publishing an update with this radical idea.

NoPicklez a day ago | root | parent |

My understanding is that Essential Eight doesn't require password rotation

Jedd a day ago | root | parent |

If so, then I'll be doubly frustrated - I've been assured by our domain experts that this is a requirement of the model.

Did it use to be a requirement that was since retracted? I suppose it may be a local or state-based 'implementation augmentation'.

I've just now trawled through the Signals Directorate site and can find plenty of references to passwords, but nothing specifically covering this.

encomiast 3 days ago | root | parent | prev | next |

I bumped into controls mandating security scans where the people running the scans didn't need to know anything about the results. One example prevented us from serving public data using Google Web Services because the front-end still included 3DES among the offered ciphers. This raised alerts because of the possibility of a Sweet32 vulnerability, which is completely impractical to exploit with website-scale data sizes and short-lived sessions (and modern browsers generally don't opt to use 3DES). Still, it was a hard 'no', but nobody could explain the risk beyond the risk of non-compliance and the red 'severe' on the report.
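
For what it's worth, findings like that are cheap to verify yourself rather than taking the scanner's word for it. A rough sketch using Node's tls module (the hostname is a placeholder, and depending on your Node/OpenSSL build the 3DES suites may not even be available client-side anymore):

    import * as tls from "node:tls";

    // Offer only 3DES suites (the ones Sweet32 targets) and see whether the
    // server accepts. TLS 1.3 uses a separate cipher list, so cap at 1.2,
    // which is where the 3DES suites live.
    const socket = tls.connect(
      {
        host: "example.com", // placeholder target
        port: 443,
        maxVersion: "TLSv1.2",
        ciphers: "DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA",
        rejectUnauthorized: false, // probing cipher negotiation, not identity
      },
      () => {
        console.log("server still offers 3DES:", socket.getCipher().name);
        socket.end();
      }
    );
    socket.on("error", () => console.log("3DES refused; the finding is moot"));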

We also had scans report GPL licenses in our dependencies, which for us was a total non-issue, but security dug in - not because of legal risk, but because of compliance with the scans.

mmsc 3 days ago | root | parent | next |

"Why do we have to do X? Because we have to do X and have always had to do X" is a human problem coming from lack of expertise and lack of confidence to question authority.

It's a shame, your story isn't unique at all.

TeMPOraL 3 days ago | root | parent |

Not just lack of expertise and confidence, but also lack of trust, and possibly also the real overhead of running a large org.

Like, IT sec does not trust employees. This burns an absurd amount of money day in, day out, due to broadly applied security policies that interfere with work.

Like, there's a lot of talk about how almost no one has any business having local admin rights on their work machine. You let people have it, and then someone will quickly install a malicious Outlook extension or some shit. So limits are applied, real-time scans are introduced too, and sure, this inconveniences almost everyone, but maybe it's the right tradeoff for most of the org's moderately paid office workers.

But then, it's a global policy, so it also hits all the org's absurdly highly paid tech workers, and hits them much worse than everyone else. Since IT (or the people giving them orders) doesn't trust anyone, you now have all those devs eating the productivity loss, or worse, playing cat-and-mouse with corporate IT by inventing clever workarounds, some of which could actually compromise company security.

In places I've seen, by my guesstimate, that lack of trust, and of the ability to issue and monitor exceptions to security policies[0], could easily cost as much as doubling the salary of all affected tech teams.

As much as big orgs crave legibility, they sure love to inflict illegible costs on themselves (don't get me started about the general trend of phasing out specialist jobs and distributing workload equally on everyone...).

--

[0] - Real exceptions, as in "sure, whatever, have local admin (you're still surveilled anyway)", instead of "spend 5 minutes filling out this form, on a page that's down half the time, to get temporary local admin for a couple of hours; no, that still doesn't mean you can add folders to the exclusion list for the real-time scanner".

foobarchu 2 days ago | root | parent | next |

Another of my favorite examples is companies deciding "everyone needs cyber security training" and applying a single test to their entire global staff with no "test out" option. I watched a former employer with a few hundred thousand employees in the US alone mandate a multi-hour course on the most basic things, which could have been skipped with some short knowledge surveys.

The same employer also mandated a yearly multi-hour ethics guidelines course that was 90% oriented towards corporate salespeople, and once demanded everyone take what I believe was a 16-hour training set on their particular cloud computing offerings. That one alone must have cost them millions in wasted hours.

dataflow 3 days ago | root | parent | prev | next |

> nobody could explain the risk beyond the risk of non-compliance and the red 'severe' on the report.

Isn't it just a burden on the security team & the organization as a whole, if nothing else? If every team gets to exempt itself from a ban just because it uses the thing responsibly, then suddenly the answer to the question "are we at risk of X, which relies on banned thing Y?" can become a massive investigation you have to redo after every event, rather than a simple "no".

I don't know the details of your situation obviously, maybe there's something silly about it, but it doesn't seem silly to me. More generally, "you can only make an exemption-free rule if 100% of its violations are dangerous" is not how the world works.

NoPicklez a day ago | root | parent | prev |

This is often the result of poor risk management or a lack of risk-management understanding.

Compliance assessments, at least the ones I have worked with, take a risk-based approach and allow for risk-based decisions and exemptions.

If you have a vulnerability management process that takes what the scanning solution says at face value, and your process therefore assumes ALL vulnerabilities are to be patched, then you're setting yourself up for failure.

too_pricey 3 days ago | root | parent | prev | next |

I actually wrote blog posts about two of my (least) favorites: [VPNs](https://securityis.substack.com/p/security-is-not-a-vpn-prob...) and [Encryption](https://securityis.substack.com/p/security-is-not-an-encrypt...). Thank you for pointing out that I don't link to them in the original post.

Password resets are definitely one; every single day I still have to tell prospects and customers that I can't both comply with NIST 800-63 and periodically rotate my passwords. Other ones I often counter include other aggressive login requirements, WAFs, database isolation, weird single-tenancy or multitenancy asks, or asks for anti-virus in places where it doesn't need to be.

convolvatron 3 days ago | root | parent | prev |

in the spirit of this article, can anyone explain why the Linux host-level firewall is a useful control?

deathanatos 3 days ago | root | parent | next |

I think it depends a bit on circumstance, but I'd start with "way too much software binds to 0.0.0.0 by default", "way too much software lacks decent authn/z out of the box, possibly has no authn/z out of the box", and "developers are too lazy to change the defaults".

So it ends up on the network, unprotected.
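
A concrete example of that default, sketched with Node's standard http module (the same pattern shows up in most languages and frameworks):

    import * as http from "node:http";

    const app = http.createServer((_req, res) => res.end("internal dashboard\n"));

    // The default is the dangerous one: with no host argument, Node binds to
    // all interfaces, so the service is reachable from the whole network.
    app.listen(8080);

    // Binding explicitly to loopback keeps it off the network entirely;
    // anything else needs real authn/z in front of it, firewall or no firewall.
    // app.listen(8080, "127.0.0.1");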

lmm 3 days ago | root | parent | prev | next |

Do you mean "why is running a firewall on an individual host useful"? Single-application hosts are quite common, and sadly some applications do not have adequate authentication built-in.

Do you mean "why does Linux allow firewalling based on the source host"? Linux has a flexible routing policy system that can be used to implement useful controls, host is just one of the available fields, it's not meant to be used for trusting on a per-host basis.

l0b0 3 days ago | root | parent | prev | next |

It's a catch-all in case any single service is badly configured. This often happens while people are fiddling around trying to configure a new service, which means it happens when they are at their most vulnerable.

dogman144 3 days ago | root | parent | prev |

There's always an edge case; you've gotta know the various sec controls well enough to slice toward the target risk outcome, vs. treating the target outcome as equal to one specific implementation. The security hires who are challenging employees are the latter type.

Edge case, and your answer in spirit: a public-facing server that can't have a hardware firewall in-line, can't do ACLs for some reason, can't have EDR on it... at least put a Linux host-level firewall on it and hope for the best.

mozzieman 3 days ago | prev | next |

For security, a control might be "useless", but it is not useless if compliance enables your company to ship products and earn revenue.

ISO27Auditor 2 days ago | root | parent |

Agreed. As an ISO 27001 auditor I see growing demand for security compliance certifications and attestations (ISO 27001, SOC 2), and it's client-driven 95% of the time. So, in the end, it's often worth it to go ahead and do it.

ISO 27001 is more affordable (2k-3k for the audit, and an additional 1k-3k for an external provider to manage everything for you); SOC 2 will set you back at least 10k.

NoPicklez a day ago | root | parent |

100%

Third party cyber risk management is a hot topic in cyber security at the moment. If you want people to buy your solution, you need to be able to demonstrate you have appropriate information security controls. A good way to do that is ISO 27001, all the way up to SOC reports.

thomastjeffery 3 days ago | prev | next |

The chimps in a cage metaphor is a great introduction to a problem that exists in all software development. I call it the Walls of Assumptions.

When we write software, we answer three questions: "What?", "How?", and "Why?".

We write out the answers to "What?" and "How?" explicitly, as data and source code. The last answer, alas, can never be written; at least, not explicitly. When we are good programmers, we do our best to write the answer to Why implicitly. We write documentation, tutorials, examples, etc. These construct a picture whose negative space looks similar enough to live in Why's place.

No matter what, the question "Why?" is always answered. How can this be, if that answer is never written? It is encoded into the entropy of the very act of writing. When we write software, we must make decisions. There are many ways a problem could be solved: we choose only one solution. A chosen solution is what I call an "Assumption". It is assumed that the solution you chose will be the best fit for your program: that it is the answer your users need, or at least that it will be good enough for them to accomplish what they want.

Inevitably, our Assumptions will be wrong. Users will bring unique problems that your Assumption isn't compatible with. While you hoped your Assumption would be a bridge, it is instead a Wall.

The Walls of Assumptions in every program define a unique maze that every software user must traverse to meet their goals. Monolithic design cultivates a walled garden, where an efficient maze may fail entirely to lead the user to their goal. Modular design cultivates an ecosystem of compatible mazes that, while less efficient, can be restructured to reach more goals.

---

The eternal hype around Natural Language Processing and Artificial Intelligence is readily explained with this metaphor. The most powerful feature of Natural Language is Ambiguity. Ambiguity allows us to encode more than one answer into data, which means we actually can write the answer to Why; we just can't read it computationally. Artificial Intelligence hinges on the ability for decisions to be encoded into software. I'm not talking about logical branches here: I'm talking about the ability to fully postpone the answering of Why from time-of-writing to runtime.

---

For the last year or two, I've been chewing on a potential solution to this problem that I call the Story Empathizer. So far, the idea is too abstract; but I still think it has potential.

erulabs 3 days ago | prev | next |

Security is having a bit of a heyday as everyone fights to build a moat against smart kids and AI. SOC2 and friends are a pain in the ass, but they're more of a moat than most things these days. Security theater? The answer is at least "mostly", but it's a moat nonetheless. You can feel the power swinging back into the hands of the customer.

When all software is trivial, the salesman and the customer will reign again. Not that I’m hoping for that day, but that day may be coming.

zzyzxd 3 days ago | prev | next |

I think the "chimps in a cage" needs some followup experiments to tell the whole story -- replacing the banana with a much higher value reward, or placing another water hose which fires if chimps stopped trying to reach the reward ;)

Most likely, useless controls exist because the company thinks they are good enough for the business and there's no incentive to improve or replace them.

satisfice 3 days ago | root | parent |

The chimps story is made up. There was a study that tried to test something like it, but in only one case, out of many trials, was a chimp discouraged from doing something by another chimp, due to the second chimp's fear.

too_pricey 3 days ago | prev | next |

I wrote this! I'm excited to see this get attention here. I'll be responding to folks' comments where I feel like I have something to add, but please let me know if you have any questions or feedback!

MattPalmer1086 2 days ago | root | parent |

There's certainly a lot of cargo cult security controls out there. One of the big issues is simply that it is very hard to change established practices. It takes a lot of effort, and senior people who are not security experts have to sign off on the "risk" of not doing what all their peers are doing.

There is one word I would change in your post title. Security has a useless controls problem, not security is a useless controls problem.

UltraSane 3 days ago | prev | next |

If money were no object, I would just hire continuous pen testers to test your infra, and every time they were able to do something they shouldn't be able to do, fix how they did it, then repeat endlessly. I think it's analogous to immersing a tire in water and looking for bubbles to find leaks, then patching them.

nshkrdotcom 3 days ago | prev | next |

DITE and CSPM are indeed important problems to reflect on, security-wise.

But, reflecting on XSS: what a shame that we can't evolve our standards, protocols, software, and hardware to fix such issues fundamentally.

boveus 3 days ago | prev | next |

> Cross-site scripting (XSS) safe front-end frameworks like React are good because they prevent XSS. XSS is bad because it allows an attacker to take over your active web session and do horrible things

What? React is not "Cross-site scripting safe"

Many security controls do require more than a two-to-three-sentence explanation. Trying to condense your response in that way strips out nuance, such as examples of how React can be susceptible to XSS. Security is a subset of engineering, and security decisions often require trade-offs. React does protect against some classes of attacks, but it also exposes applications to new ones.
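
To make that concrete, a couple of the standard escape hatches, sketched in TSX (illustrative, not exhaustive; newer React versions at least warn about javascript: URLs):

    import React from "react";

    // Safe by default: React escapes interpolated values, so an attacker's
    // "<script>" payload renders as inert text.
    const Comment = ({ body }: { body: string }) => <p>{body}</p>;

    // Escape hatch 1: opting out of escaping entirely.
    const RichComment = ({ html }: { html: string }) => (
      <p dangerouslySetInnerHTML={{ __html: html }} /> // XSS if html is user-controlled
    );

    // Escape hatch 2: attacker-controlled URLs. React does not sanitize href
    // values such as "javascript:alert(document.cookie)".
    const ProfileLink = ({ url }: { url: string }) => <a href={url}>profile</a>;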