
Social Engineering & Physical Security

Social engineering attacks target the human, not the computer. No set of technical controls fully protects against a well-executed social engineering attempt - which is why user awareness is treated as its own security layer.


Social Engineering Attacks
├── Digital (remote)
│ ├── Phishing - mass email lures
│ ├── Spear Phishing - targeted, personalised email lures
│ ├── Whaling - spear phishing targeting executives
│ ├── Vishing - voice/VoIP calls
│ ├── Smishing - SMS-based lures
│ └── Spoofing - forged sender identity (email/caller ID)
├── Physical / In-person
│ ├── Tailgating - following an authorised person through a door
│ ├── Impersonation - pretending to be IT support, a vendor, etc.
│ ├── Shoulder Surfing - observing credentials being entered
│ ├── Dumpster Diving - recovering data from discarded materials
│ └── Baiting - dropping infected USB drives / physical media
└── Hybrid
└── Pretexting - constructing a fabricated scenario to extract info

Phishing

The attacker sends a mass email that appears to come from a legitimate source (bank, IT support, delivery service). The goal is to:

  • Harvest credentials via a fake login page
  • Deliver malware via attachment or link
  • Trick the user into performing an action (wire transfer, password reset)

Anatomy of a phishing email:

From: security@paypa1.com ← typosquatted domain
Subject: Urgent: Your account has been suspended
Dear Customer,
We have detected suspicious activity on your account.
Please verify your identity immediately to avoid suspension:
[Click here to verify] → https://paypa1.com/login.php
This link expires in 24 hours. ← artificial urgency

Red flags to teach users:

| Indicator | Example |
|---|---|
| Sender domain mismatch | paypa1.com vs paypal.com |
| Mismatched link URL | Display text says one URL, the href goes to another |
| Artificial urgency | “Act within 24 hours or your account is deleted” |
| Generic greeting | “Dear Customer” instead of your name |
| Requests for credentials | Legitimate services never ask for your password via email |
| Grammar/spelling errors | Still common, though AI-generated phishing is increasingly fluent |
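Some of these indicators can be checked mechanically. The sketch below is a heuristic toy, not a real mail filter - the function name, phrase list, and sample message are all invented - but it shows the idea of comparing the sender's domain against link domains and scanning for urgency language:

```python
import re

# Invented phrase list - real filters use far richer signals
URGENCY_PHRASES = ("immediately", "within 24 hours", "suspended", "verify your identity")

def phishing_red_flags(from_addr: str, body: str) -> list[str]:
    """Return a list of heuristic red flags found in an email (illustrative only)."""
    flags = []
    sender_domain = from_addr.rsplit("@", 1)[-1].lower()
    # Link domains that differ from the sender's domain are suspicious
    for link_domain in re.findall(r"https?://([^/\s]+)", body):
        if link_domain.lower() != sender_domain:
            flags.append(f"link domain {link_domain} does not match sender {sender_domain}")
    if any(p in body.lower() for p in URGENCY_PHRASES):
        flags.append("urgency language")
    if "dear customer" in body.lower():
        flags.append("generic greeting")
    return flags

flags = phishing_red_flags(
    "security@paypa1.com",
    "Dear Customer, verify your identity immediately: https://paypa1.com/login.php",
)
print(flags)  # ['urgency language', 'generic greeting']
```

Note the limits of such heuristics: here the link domain matches the (typosquatted) sender domain, so only the urgency and greeting checks fire - which is exactly why layered controls and trained users both matter.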

Spear Phishing

A targeted variant: the attacker researches the victim (LinkedIn, company website, social media) to craft a convincing, personalised message.

From: jason@c0mpany.com ← domain typosquat (company.com → c0mpany.com)
Subject: Wire transfer for Project Atlas acquisition
Hey Sarah,
As discussed in Monday's board meeting, please process the
$187,000 wire to our acquisition partner. Details attached.
I'm in a meeting, call me after.
- Jason

This is also called Business Email Compromise (BEC) when it targets financial transactions. The FBI’s IC3 attributes over $43 billion in global exposed losses to BEC between June 2016 and December 2021.


Whaling

Spear phishing targeting C-suite executives (CEO, CFO, CISO). Attackers invest significant reconnaissance time. The payoff is large: executive credentials grant broad system access, and wire-transfer requests from “the CEO” often bypass normal approval checks.


Vishing

The attacker calls the victim pretending to be:

  • IT support (“We’ve detected a breach, I need your credentials to secure your account”)
  • A bank fraud department (“We’re blocking a suspicious transaction - verify your card number”)
  • A government official (IRS, HMRC, etc.)

VoIP makes caller ID spoofing trivial. The call appears to come from a real company’s phone number.

Defence: Always call back on a number you look up independently. No legitimate IT team ever asks for your password.


Smishing

SMS-based phishing. Common lures:

  • Fake delivery notifications with a tracking link
  • Bank fraud alerts
  • “Your Apple ID has been compromised”

Short URLs in SMS make it difficult to inspect the destination before clicking.


Spoofing

Forging the From field of an email to impersonate a trusted sender.

Envelope From (Return-Path): [email protected] ← actual sender
Header From (visible): [email protected] ← forged display name
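A basic spoofing check is to compare the domain in the visible From header with the domain in Return-Path (which the receiving mail server records from the envelope). A minimal sketch using Python's stdlib email parser - the sample message is invented:

```python
from email import message_from_string
from email.utils import parseaddr

# Invented example: Return-Path is normally added by the receiving MTA
raw = """\
Return-Path: <attacker@evil.com>
From: "CEO" <ceo@company.com>
Subject: Quick favour

Are you at your desk?
"""

msg = message_from_string(raw)

def domain(header_value: str) -> str:
    # parseaddr splits "Name <addr>" into (name, addr); take the part after '@'
    return parseaddr(header_value)[1].rsplit("@", 1)[-1].lower()

envelope = domain(msg["Return-Path"])   # evil.com
header = domain(msg["From"])            # company.com
if envelope != header:
    print(f"Possible spoof: envelope {envelope} != header {header}")
```

In practice this mismatch is legitimate for mailing lists and bulk senders, which is why SPF/DKIM/DMARC (below) exist rather than a simple string comparison.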

Email authentication protocols that prevent spoofing:

| Protocol | What it does |
|---|---|
| SPF (Sender Policy Framework) | DNS record listing which IPs are allowed to send mail for a domain |
| DKIM (DomainKeys Identified Mail) | Cryptographic signature on outgoing email, verified by the receiver using DNS |
| DMARC (Domain-based Message Authentication, Reporting & Conformance) | Policy that tells receivers what to do when SPF/DKIM fail (quarantine, reject) |
# Check SPF record for a domain
dig TXT google.com | grep spf
# Check DKIM record
dig TXT google._domainkey.google.com
# Check DMARC policy
dig TXT _dmarc.google.com
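The TXT records returned by these queries are just semicolon-separated tag=value strings. A small parser - illustrative only, not a full RFC 7489 implementation - shows how a receiver reads the policy out of a DMARC record:

```python
def parse_dmarc(record: str) -> dict[str, str]:
    """Parse a DMARC TXT record like 'v=DMARC1; p=reject; rua=mailto:...'."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Invented example record
policy = parse_dmarc("v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc@example.com")
print(policy["p"])  # 'quarantine' - the action receivers apply when checks fail
```

The `p=` tag is the one that matters for spoofing defence: `none` only monitors, while `quarantine` and `reject` actually act on failing mail.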

Pretexting

The attacker constructs a fictional scenario (the pretext) to manipulate the victim into providing information or taking an action.

Example: Attacker calls HR pretending to be a payroll vendor:

“Hi, I’m calling from ADP. We’re updating our direct deposit records and need to verify the last four digits of your payroll routing number.”

Unlike phishing (which is opportunistic), pretexting involves extensive scripting and may span multiple contacts over time to build trust.


Tailgating

An unauthorised person physically follows an authorised person through a secured door without presenting credentials. Tailgaters often use social pressure (“Could you hold that? My hands are full”) or social cover (dressing as a delivery person or maintenance worker).

Mitigation: Mantraps / access control vestibules (two interlocking doors - only one opens at a time), security guards, badge reader enforcement, security culture training.
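The interlock behind a mantrap can be sketched as a tiny state machine - purely illustrative, with invented class and method names: a door may open only while the other door is closed.

```python
class Mantrap:
    """Two interlocking doors: at most one may be open at any time."""

    def __init__(self) -> None:
        self.open_door = None  # None, "outer", or "inner"

    def request_open(self, door: str) -> bool:
        # Deny the request if the *other* door is currently open
        if self.open_door is not None and self.open_door != door:
            return False
        self.open_door = door
        return True

    def close(self, door: str) -> None:
        if self.open_door == door:
            self.open_door = None

m = Mantrap()
assert m.request_open("outer")      # person enters the vestibule
assert not m.request_open("inner")  # inner door stays locked while outer is open
m.close("outer")
assert m.request_open("inner")      # only now may the inner door open
```

The point of the design: a tailgater who slips through the outer door is still trapped in the vestibule until identity is verified.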


Impersonation

The attacker claims to be IT support, a vendor, a new employee, or an inspector. They’re given access because people assume authority from context (uniform, ID badge, confident demeanour).

This exploits Robert Cialdini’s principle of authority: people comply with perceived authority figures without challenging them.


Shoulder Surfing

Observing credentials as they are entered in public spaces (coffee shops, airports, open offices) - either in person or via hidden cameras.

Mitigation: Privacy screens on laptops, awareness of surroundings when entering PINs/passwords, physical keyboard concealment at ATMs.


Dumpster Diving

Recovering sensitive printed documents, storage media, or other physical artefacts from waste bins. Credential lists, org charts, financial documents, and hardware containing data can all be recovered this way.

Mitigation: Shred all documents before disposal (cross-cut, not strip-cut). Degauss / physically destroy storage media. Secure destruction certificates for bulk disposal.


Baiting

The attacker leaves USB drives in car parks, lobbies, or conference rooms with enticing labels (“HR Salaries Q4”, “Executive Bonus Plan”). A curious employee plugs one in and the drive delivers malware - via autorun on older systems, or a disguised malicious file or keystroke-injection payload on modern ones.

Mitigation: Disable USB autorun (via Group Policy / udev rules). Block USB storage at endpoint level. Train users: never plug in a found device.

# Linux: block the USB mass-storage kernel module (persistent)
echo "blacklist usb-storage" | sudo tee /etc/modprobe.d/usb-storage.conf
# Windows: disable the USB storage driver via the registry (run PowerShell as Administrator)
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\USBSTOR" -Name "Start" -Value 4
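On Linux, a udev rule is an alternative to blacklisting the module: it de-authorizes USB storage devices as they are plugged in. The file name below is an assumption (any name under `/etc/udev/rules.d/` works); this is a commonly cited hardening pattern - test it on your distribution before relying on it.

```
# /etc/udev/rules.d/99-block-usb-storage.rules
# Reload with: udevadm control --reload
ACTION=="add", SUBSYSTEMS=="usb", DRIVERS=="usb-storage", ATTR{authorized}="0"
```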

Physical Security Controls

Physical security protects hardware and access points that software controls can’t reach.

| Control | Purpose |
|---|---|
| Security guards | Monitor access points, challenge unknown visitors |
| Badge / RFID readers | Enforce identity-based access to areas |
| Access control vestibule (mantrap) | Prevent tailgating - only one door open at a time |
| CCTV / video surveillance | Deterrent plus forensic evidence |
| Motion sensors | Trigger alerts or cameras in restricted areas |
| Equipment locks | Anchor servers and kiosks to prevent physical theft |
| Bollards / fences | Stop vehicle-based attacks; perimeter control |
| Alarm systems | Detect and alert on unauthorised physical access |

Security Awareness Training

Technical defences fail if users aren’t trained. Security culture is a process, not a one-time event.

| Component | Frequency | Purpose |
|---|---|---|
| Onboarding security training | Once, for all new hires | Establish baseline awareness |
| Annual refresher | Yearly | Reinforce and update for new threats |
| Phishing simulations | Quarterly | Test effectiveness, identify vulnerable users |
| Incident-triggered training | After a near-miss | Immediate reinforcement |
| Role-specific training | As needed | Developers, finance, and execs face tailored threats |

Best practices for phishing simulations:
  1. Run simulations without warning - notified tests don’t reflect real behaviour
  2. Immediate feedback - when a user clicks, show them right away what they missed
  3. Track improvement over time, not just click rates - reward progress
  4. Never punish or shame - fear discourages reporting real incidents
  5. Make reporting easy - a “Report Phishing” button in the email client removes friction
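Point 3 - tracking improvement over time rather than a single click rate - can be as simple as computing per-round click and report rates. A sketch with invented numbers (each tuple is one quarterly simulation round):

```python
# Each round: (emails sent, clicks, user reports) - invented sample data
rounds = [(200, 38, 12), (200, 30, 25), (200, 19, 41), (200, 11, 66)]

for i, (sent, clicks, reports) in enumerate(rounds, start=1):
    print(f"Round {i}: click rate {clicks / sent:.1%}, report rate {reports / sent:.1%}")

# A healthy trend: clicks fall while reports rise - reward this progress
improving = rounds[-1][1] < rounds[0][1] and rounds[-1][2] > rounds[0][2]
print("improving" if improving else "needs attention")
```

Reporting the two rates together matters: a falling click rate with a flat report rate may just mean users are ignoring the simulations, not recognising them.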

Establish a dedicated security contact - a real email address or Slack channel where employees can ask “is this legitimate?” without fear of looking naive. Security teams that are approachable get advance warning of attacks before they succeed.


Defence Summary

| Attack | Primary Technical Control | Primary Human Control |
|---|---|---|
| Phishing | Email gateway (SPF/DKIM/DMARC), URL filtering | Training to recognise lures; report-phishing button |
| Spear Phishing / BEC | DMARC reject policy, payment process controls | Out-of-band verification for wire transfers |
| Vishing | Caller ID awareness | Policy: never give credentials over the phone; call back |
| Tailgating | Mantrap, badge readers | Challenge strangers; enforce a “no piggybacking” culture |
| USB Baiting | Disable USB storage via policy | Train: don’t plug in unknown devices |
| Dumpster Diving | Secure shredding bins | Data handling policy; certificate of destruction |
| Pretexting | Need-to-know access controls | Verification procedures before disclosing info |