Security awareness is not a control: Rethinking human risk in enterprise security

Organizations have responded to phishing, business email compromise, and credential theft in essentially the same manner for over a decade: invest in awareness training, run phishing simulations, and require employees to complete annual security modules. The reasoning behind these efforts is straightforward: if people can better spot malicious emails and recognize suspicious activity, incidents will decrease.

Yet, the amount of money lost because of business email compromise keeps rising. Credential harvesting is still successful. Conventional multi-factor authentication is frequently circumvented by adversary-in-the-middle phishing kits. Under duress, senior executives, including seasoned finance leaders, continue to approve fraudulent payments.

This persistence points to a deeper misclassification in enterprise security strategy. Awareness has been treated as a control, when it is in fact an educational measure that shapes culture rather than enforcing outcomes. The distinction has important ramifications for how businesses evaluate and manage risk.

The core misunderstanding

A true security control prevents, detects, or limits an outcome regardless of what an individual does or knows. Conditional access rules, for instance, do not depend on an employee having a good day, and network segmentation does not depend on an employee remembering a policy. Likewise, segregation of duties in finance exists precisely to ensure that no single individual can independently authorize high-risk transactions. These mechanisms are engineered to constrain risk structurally rather than depend on behavioral perfection.

Security awareness serves a different purpose: it aims to improve human judgment in situations marked by time pressure and incomplete information. These initiatives can reduce the likelihood of poor decisions, but they cannot guarantee consistent outcomes across a varied workforce operating in diverse environments. Human performance is inherently variable, and training does not eliminate that variability.

When organizations describe security awareness as a “layer of defense,” they implicitly place it alongside technical and procedural safeguards, which can distort how risk is understood and assigned. Responsibility for incidents then shifts subtly toward individuals, especially when an employee clicks a malicious link or authorizes a fraudulent request. The resulting narrative often emphasizes human error rather than examining whether the surrounding system allowed a single, foreseeable mistake to create material impact.

A more constructive line of inquiry is to examine whether the organization’s controls were designed to anticipate foreseeable human mistakes and limit their effects before they cause enterprise-level harm.

The predictability of human error

Human error is sometimes treated as an exception in security incident conversations, as if a breach occurred because someone made a mistake that should never have happened. In reality, human error is a constant in complex systems, especially in large organizations where everyday operations are shaped by scale, pace, and competing priorities. The pertinent question is not whether mistakes will occur, but whether the surrounding environment has been built with their inevitability in mind.

Modern social engineering campaigns reflect a sophisticated understanding of how organizations function. Attackers study reporting lines, financial processes, vendor relationships, and executive communication styles, sometimes gleaned from previously compromised accounts in similar industries. They time their messages to coincide with legitimate business activity, aligning them with travel schedules, invoice payments, and quarter-end reporting pressure. Many business email compromise cases involve no malware and no technical exploit in the traditional sense; the attacks succeed because they exploit the trust ingrained in regular operations and blend seamlessly into established routines.

Under such conditions, expecting flawless human performance is unrealistic. Employees manage high volumes of communication while juggling deadlines and performance expectations. Senior leaders frequently make decisions with incomplete information, balancing urgency against risk to keep the business running. When a request appears consistent with organizational norms and past experience, even highly skilled individuals may misread it. These mistakes are a natural result of cognitive load, environmental cues, and institutional dynamics, not necessarily proof of carelessness.

High-risk industries such as aviation and healthcare acknowledge this reality and build multi-layered protections to stop a single error from turning into a disaster. Checklists, redundancy, and cross-verification are embedded in operational workflows to ensure that systems remain safe even when individuals are imperfect. Enterprise cybersecurity has not always applied the same discipline. In many settings, a single compromised credential or a single configuration error, as the CrowdStrike outage illustrated, can still result in serious operational or financial harm. When that degree of fragility is present, the system’s distribution of authority and capacity to absorb error matter more than individual behavior.

Awareness cannot function as a primary safeguard

There are structural limitations that prevent awareness from serving as a dependable control. First, cognitive load and decision fatigue are unavoidable in complex organizations. Awareness training may increase general suspicion, but it cannot change the reality that individuals must constantly triage information under time pressure, and occasional lapses in judgment are statistically inevitable as a result.

Second, organizational dynamics complicate the picture, particularly in cultures where hierarchy is strongly upheld. Many successful business email compromise incidents exploit perceived authority: requests that seem to come from senior executives carry implicit urgency and significance. Employees are frequently conditioned to support executive instructions rather than impede them, particularly on urgent financial matters where questioning a request could slow down the business.

Lastly, the widespread adoption of multi-factor authentication has contributed to an inflated sense of security. While MFA greatly improves security over password-only settings, not all implementations are resistant to modern attack methods. Push fatigue attacks take advantage of routine approval patterns, adversary-in-the-middle frameworks can steal and replay session tokens, and device code or OAuth consent phishing can provide persistent access without conventional credential theft. In these cases, employees can comply with established security procedures and still be compromised, because the architecture allows it.
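Push fatigue in particular leaves a detectable signature in sign-in logs: a burst of MFA prompts for one user in a short window, often with repeated denials before a final approval. A minimal sketch of that heuristic (the event format, window, and threshold are illustrative assumptions, not drawn from any specific product):

```python
from datetime import datetime, timedelta

def is_push_fatigue(events, window=timedelta(minutes=10), threshold=5):
    """Flag a burst of MFA push prompts for a single user.

    `events` is a list of (timestamp, result) tuples, where result is
    "approved" or "denied". Returns True if `threshold` or more prompts
    fall within any `window`-sized interval. Thresholds are illustrative.
    """
    events = sorted(events)
    for i, (start, _) in enumerate(events):
        burst = [e for e in events[i:] if e[0] - start <= window]
        if len(burst) >= threshold:
            return True
    return False

# Four denials followed by a worn-down approval, all within ten minutes
prompts = [
    (datetime(2024, 5, 1, 2, 0) + timedelta(minutes=m), "denied")
    for m in range(4)
] + [(datetime(2024, 5, 1, 2, 5), "approved")]
print(is_push_fatigue(prompts))  # -> True
```

Number-matching prompts and phishing-resistant credentials address the root cause; detection like this is a compensating signal, not a fix.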

Taken together, these factors show why awareness cannot serve as a reliable primary safeguard. It can reinforce good practices and reduce risk at the margins, but it cannot compensate for weak identity architecture, brittle finance procedures, or inadequate monitoring.

Treating human risk as a design constraint

A more grounded approach reframes human risk as an engineering consideration rather than a behavioral flaw. Instead of asking how to train staff to recognize every potential phishing variant, security leaders should assess which decisions carry a disproportionate amount of risk when carried out in isolation.

Salient questions in this regard include:

Should a single email request ever be sufficient to initiate a high-value transfer?

Are payment instruction changes subject to enforced out-of-band verification?

Does identity infrastructure continuously validate session integrity?

Are anomalous financial behaviors detected in real time?

This shift moves the focus from persuading individuals to behave perfectly toward building systems that remain resilient when they do not.

What structural controls should look like

An enterprise strategy that effectively tackles human-centric threats includes defenses that function without constant monitoring. Device-bound passkeys and hardware-backed credentials are examples of phishing-resistant authentication techniques that lessen vulnerability to push-based manipulation and token interception. Compared to static MFA prompts alone, conditional access policies that also assess device health and onboarding status, geolocation anomalies, and behavioral risk signals offer greater assurance.
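To make the conditional access idea concrete, here is a toy policy evaluator that combines the signals mentioned above. The signal names, thresholds, and decision values are hypothetical illustrations of the pattern, not any vendor's actual policy engine:

```python
from dataclasses import dataclass

@dataclass
class SignIn:
    device_compliant: bool        # device health / management status
    location_familiar: bool       # no geolocation anomaly
    risk_score: float             # behavioral risk signal, 0.0 (low) to 1.0 (high)
    phishing_resistant_mfa: bool  # e.g. a device-bound passkey was used

def evaluate(sign_in: SignIn) -> str:
    """Toy conditional-access decision (illustrative thresholds).

    Returns "allow", "step_up" (require a phishing-resistant factor),
    or "block". The point: the outcome is enforced by policy, not by
    the user noticing anything.
    """
    if sign_in.risk_score >= 0.8:
        return "block"  # high behavioral risk: deny regardless of MFA
    if not sign_in.device_compliant or not sign_in.location_familiar:
        # anomalous context: allow only with phishing-resistant credentials
        return "allow" if sign_in.phishing_resistant_mfa else "step_up"
    return "allow"

print(evaluate(SignIn(True, True, 0.1, False)))   # -> allow
print(evaluate(SignIn(True, False, 0.3, False)))  # -> step_up
```

The structural property is that a stolen password, or even a replayed session from an unfamiliar device, never reaches the plain "allow" path on its own.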

In the same vein, financial workflows should embed separation of duties and enforced verification. Secondary validation should be required through separate channels for high-value transactions, vendor banking changes, and urgent payment requests. Systems for tracking transactions should also be able to spot anomalous payment amounts or departures from historical trends.
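The payment-workflow controls above can be sketched as a release gate. The threshold, field names, and messages are illustrative assumptions; real implementations live in ERP or treasury systems:

```python
def release_payment(request, approvals, verified_out_of_band):
    """Dual-control gate for high-value transfers (illustrative threshold).

    A payment is released only when (a) amounts above the threshold carry
    a second approver distinct from the requester, and (b) any change to
    beneficiary banking details has been confirmed via a separate channel,
    such as a call-back to a number already on file.
    """
    HIGH_VALUE = 10_000
    if request["amount"] >= HIGH_VALUE:
        independent = [a for a in approvals if a != request["requested_by"]]
        if not independent:
            return "held: independent second approval required"
    if request["bank_details_changed"] and not verified_out_of_band:
        return "held: out-of-band verification of banking change required"
    return "released"

req = {"amount": 50_000, "requested_by": "cfo", "bank_details_changed": True}
# Even a request from the CFO cannot self-approve past the gate
print(release_payment(req, approvals=["cfo"], verified_out_of_band=False))
print(release_payment(req, approvals=["cfo", "controller"], verified_out_of_band=True))
```

Note that the gate holds even when the requester is a senior executive, which is exactly the scenario business email compromise impersonates.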

Particular consideration should also be given to identity telemetry. Monitoring mailbox rules, impossible-travel sign-ins, OAuth grants, privileged role assignments, and session anomalies can surface the persistence tactics frequently employed in business email compromise campaigns. Using Privileged Identity Management solutions, privileged access should be time-bound and approval-based to reduce the blast radius of credential misuse. Although human error cannot be eliminated, these precautions greatly lessen the chance that a single error will result in severe material loss.
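Mailbox-rule monitoring is one of the cheapest of these telemetry wins, because attackers routinely add rules that forward mail externally or hide replies from the victim. A minimal sketch of the heuristic; the rule schema, domain, and folder names are hypothetical, not a real mail API:

```python
def suspicious_mailbox_rules(rules, internal_domain="corp.example"):
    """Flag inbox rules that commonly indicate BEC persistence.

    Heuristics (illustrative, not exhaustive): rules that auto-forward
    mail outside the organization, or that hide messages by deleting
    them or moving them to rarely checked folders.
    """
    flagged = []
    for rule in rules:
        forwards_external = any(
            not addr.endswith("@" + internal_domain)
            for addr in rule.get("forward_to", [])
        )
        hides_mail = rule.get("delete", False) or rule.get(
            "move_to_folder", ""
        ).lower() in {"rss subscriptions", "archive", "conversation history"}
        if forwards_external or hides_mail:
            flagged.append(rule["name"])
    return flagged

rules = [
    {"name": "invoices-out", "forward_to": ["attacker@mail.example"]},
    {"name": "newsletter", "move_to_folder": "Reading"},
    {"name": "hide-replies", "delete": True},
]
print(suspicious_mailbox_rules(rules))  # -> ['invoices-out', 'hide-replies']
```

In production this check would run against audit logs on every rule creation, alerting before forwarded invoices leave the tenant.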

From blame game to architecture

It is understandable that organizations gravitate toward awareness campaigns. They are visible, quantifiable, and often inexpensive when bundled with existing security tooling. Architectural redesign, identity modernization, and workflow reorganization, by contrast, require more funding and cross-functional cooperation.

Threat actors, however, are becoming more adept at exploiting the predictable patterns of human behavior embedded in current corporate procedures. They understand the urgency, trust, and operational complexity of high-pressure, time-sensitive work, and the complacency it can breed. They only require people to be human.

The rational response is to assume imperfection and build accordingly.

A more honest assessment of systemic risk

Security awareness remains an important component of organizational culture. Employees should understand common attack patterns and feel empowered to report suspicious activity. However, awareness should be viewed as a supporting measure rather than a primary safeguard.

When a single decision can still trigger substantial financial or operational damage, the organization’s exposure is rooted in design. Resilient enterprises acknowledge that human error is inevitable and ensure that their identity architecture, financial controls, and monitoring capabilities are robust enough to absorb it.

Reframing awareness in this way does not diminish its value. It places it in the correct category and forces a more honest assessment of systemic risk. Until that shift occurs, many organizations will continue to invest heavily in training while leaving structural weaknesses intact, and attackers will continue to exploit the gap between education and engineering.

This article is published as part of the Foundry Expert Contributor Network.