📖 Training Guide
How roles and modes shape your training experience
Role × Training Mode Matrix
| Property | 📖 Learning | 🎯 Practice | 📋 Assessment |
|---|---|---|---|
| Objective | Understand threats | Identify real vs. fake | Prove competency |
| Threat Pool | Real threats only | Real + distractors | Real + distractors |
| Hints | Full details visible | STRIDE + MITRE + Tactic + hover clue | ✖ None |
| Scoring | No score | 3-step scored | 3-step scored |
| Timer | None | None | 30 minutes — auto-submit |

| Role | 📖 Learning | 🎯 Practice | 📋 Assessment |
|---|---|---|---|
| ⌨️ Developer | Threats framed as code vulnerabilities | Identify threats + pick Code Change controls | Same, no hints |
| 🔍 Analyst | Threats framed as architecture gaps | Identify threats + pick Design Decision controls | Same, no hints |
| 🏗️ Architect | Threats framed as systemic risks | Identify threats + pick Security Pattern controls | Same, no hints |
What Changes by Role
| Dimension | ⌨️ Developer | 🔍 Analyst | 🏗️ Architect |
|---|---|---|---|
| Lens banner | Code-level framing | Architecture framing | System-wide framing |
| Control labels | Code Change · Code Practice · Crypto Code | Design Decision · Audit Policy · Architecture Rule | Security Pattern · Defense-in-Depth · Zero-Trust |
| Learning focus | Where in code does this vulnerability live? | Which data flow or trust boundary does this cross? | How does this enable lateral movement at scale? |
| Practice focus | Can I recognise the threat a developer would introduce? | Can I recognise the threat an architect would overlook? | Can I map the full attack chain across components? |
| Assessment focus | No hints — pure code-level threat recognition | No hints — pure architecture-level recognition | No hints — pure systemic threat recognition |
How Scoring Works (Practice & Assessment)
| Step | What You Do | Weight | How Scored |
|---|---|---|---|
| ① Threat ID | Select which items in the pool are real threats | 50% | % correct found, minus 3 pts per false positive |
| ② Risk Rating | Rate each threat using OWASP 8-factor (Likelihood × Impact) | 25% | Accuracy vs. expert benchmark; within 1 = 85%, within 2 = 65% |
| ③ Controls | Select effective mitigations from the role-labelled list | 25% | % correct chosen, minus 15 pts per wrong selection |

Pass threshold: ≥ 75% total score. Grades: A ≥ 90 · B ≥ 75 · C ≥ 60 · D < 60.
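The weighting and penalty rules in the table can be sketched in Python. This is a minimal illustration, not the TMS implementation: function names are invented, and the assumption that a rating more than 2 off the benchmark scores 0 is ours.

```python
def threat_id_score(found, false_positives, total_real):
    # Step 1 (50%): % of real threats found, minus 3 pts per false positive
    return max(0.0, 100.0 * found / total_real - 3 * false_positives)

def risk_rating_score(ratings, benchmarks):
    # Step 2 (25%): exact match = 100, within 1 = 85, within 2 = 65;
    # anything further off scoring 0 is an assumption
    def one(delta):
        return {0: 100.0, 1: 85.0, 2: 65.0}.get(delta, 0.0)
    return sum(one(abs(r - b)) for r, b in zip(ratings, benchmarks)) / len(ratings)

def controls_score(correct_chosen, wrong_chosen, total_correct):
    # Step 3 (25%): % correct controls chosen, minus 15 pts per wrong pick
    return max(0.0, 100.0 * correct_chosen / total_correct - 15 * wrong_chosen)

def total_score(s1, s2, s3):
    # Weights from the scoring table: 50% / 25% / 25%
    return 0.50 * s1 + 0.25 * s2 + 0.25 * s3

def grade(total):
    # Letter grades from the table; pass requires >= 75
    for cutoff, letter in ((90, "A"), (75, "B"), (60, "C")):
        if total >= cutoff:
            return letter
    return "D"
```

For example, finding 9 of 10 real threats with one false positive scores 87 on Step 1; combined with 80 on each of Steps 2 and 3 this gives a total of 83.5, a passing B.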
The Key Insight
Role changes your perspective. Mode changes your difficulty.
A Developer and an Architect looking at the same SQL Injection threat will:
  • See the same threat in the pool (SAME)
  • Get scored identically (SAME)
  • But the Developer is asked "what code fix addresses this?" (DIFFERENT)
  • While the Architect is asked "what security pattern prevents this class of threat?" (DIFFERENT)

Recommended progression for a new joiner:

Start with Learning mode on Web Application to build foundational vocabulary → then Practice mode on the same scenario to test recall → finally Assessment mode under time pressure to measure readiness.

For SOC team leads: run Assessment mode on Internal Corporate Network and Authentication System — these scenarios map directly to the NTLM relay, DCSync, and credential-stuffing threats most likely to appear in real incidents.

Recommended Path by Role
⌨️ Developer
Start: Web Application (Learning) → Then: Mobile App (Practice) → Challenge: CI/CD Pipeline (Assessment)
Focus on injection, hardcoded secrets, insecure libraries, and authentication bugs.

🔍 Analyst
Start: Authentication System (Learning) → Then: Cloud Infrastructure (Practice) → Challenge: Microservices (Assessment)
Focus on trust boundaries, data flow analysis, and IAM misconfiguration.

🏗️ Architect
Start: Internal Corporate Network (Learning) → Then: IoT Device + Cloud Gateway (Practice) → Challenge: Any scenario (Assessment, all 8)
Focus on attack chains, lateral movement paths, and systemic control gaps.
PASTA — Process for Attack Simulation and Threat Analysis

PASTA is a seven-stage, risk-centric threat modelling methodology. Unlike STRIDE (which asks "what can go wrong?" per component), PASTA asks "what is the attacker trying to achieve, and what is the business impact if they succeed?" — making it well-suited to risk prioritisation and executive reporting.

STAGE 1: Define Objectives
Establish business context: what assets matter, what regulations apply, what would a breach cost the organisation.
e.g. "A credential breach on our payment portal would trigger PCI-DSS breach notification and estimated $2M in fines."
STAGE 2: Define Technical Scope
Inventory the technical environment: infrastructure, OS, middleware, APIs, third-party dependencies, and network boundaries.
e.g. "The application runs on AWS ECS behind an ALB; it uses RDS PostgreSQL and calls three third-party payment APIs."
STAGE 3: Application Decomposition
Produce a Data Flow Diagram (DFD): identify components, data flows, trust boundaries, entry points, and exit points.
e.g. Browser → WAF → App Server → DB. Trust boundaries: internet/DMZ/internal. Entry points: login, file upload, API.
→ TMS DFD per scenario
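Stage 3's key output — flows that cross a trust boundary — can be computed mechanically once the DFD is data. A minimal sketch, with illustrative zones and components loosely based on the Browser → WAF → App Server → DB example above:

```python
# Stage 3 sketch: components live in trust zones; any data flow whose
# endpoints sit in different zones crosses a trust boundary and is a
# candidate entry point worth threat-modelling. Names are illustrative.
ZONES = {"browser": "internet", "waf": "dmz", "app": "internal", "db": "internal"}

FLOWS = [  # (source, destination, data carried)
    ("browser", "waf", "HTTP request"),
    ("waf", "app", "filtered request"),
    ("app", "db", "SQL query"),
]

def boundary_crossings(flows, zones):
    """Return the flows whose endpoints are in different trust zones."""
    return [(src, dst, data) for src, dst, data in flows
            if zones[src] != zones[dst]]
```

Here the internet→DMZ and DMZ→internal hops are flagged, while the app→db flow stays inside the internal zone — which is exactly why the DFD, not the component list, drives decomposition.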
STAGE 4: Threat Analysis
Enumerate threat actors, their motivations, capabilities, and likely TTPs using threat intelligence and MITRE ATT&CK.
e.g. "Financially motivated external actor using credential stuffing (T1110.004) and SSRF (T1190) targeting the login endpoint."
→ TMS Step 1: Threat Identification
STAGE 5: Vulnerability & Weakness Analysis
Map known CVEs, design flaws, and OWASP weaknesses to the components identified in Stage 3 that threat actors could exploit.
e.g. "The login endpoint lacks rate limiting (OWASP A7), enabling the T1110.004 credential stuffing threat identified in Stage 4."
STAGE 6: Attack Modelling & Simulation
Build attack trees and simulate realistic attack scenarios end-to-end, tracing paths from initial access to target impact.
e.g. Phishing → credential theft → VPN access → NTLM relay → file server → DCSync → full domain compromise.
→ TMS Attack Chain per scenario
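An attack tree like the one in Stage 6 can be represented as nested AND/OR nodes: an AND node succeeds only if every child step succeeds, an OR node if any one path does. A minimal sketch — the tree shape echoes the example chain above, but the feasibility flags are invented for illustration:

```python
# Stage 6 sketch: AND nodes require every child step; OR nodes require
# any one path. Leaf feasibility flags are illustrative, not assessments.
def feasible(node):
    kind = node.get("kind", "leaf")
    if kind == "leaf":
        return node["feasible"]
    children = [feasible(child) for child in node["children"]]
    return all(children) if kind == "AND" else any(children)

domain_compromise = {
    "kind": "AND", "goal": "full domain compromise",
    "children": [
        {"feasible": True, "goal": "phishing -> credential theft"},
        {"feasible": True, "goal": "VPN access"},
        {"kind": "OR", "goal": "privilege escalation",
         "children": [
             {"feasible": False, "goal": "NTLM relay (SMB signing enforced)"},
             {"feasible": True, "goal": "DCSync via over-privileged account"},
         ]},
    ],
}
```

Tracing the tree shows why defenders only need to break one link in an AND chain (e.g. block VPN access) but must close *every* branch of an OR node to stop the escalation step.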
STAGE 7: Risk & Impact Analysis
Rate residual risk using likelihood × impact; map findings to business impact; prioritise mitigations by risk reduction value.
e.g. OWASP 8-factor: Likelihood 7 × Impact 8 = CRITICAL. Recommended control: enforce MFA (reduces likelihood from 7 to 2).
→ TMS Step 2: Risk Rating
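The Stage 7 rating can be sketched along the lines of the standard OWASP Risk Rating Methodology: average the likelihood factors and the impact factors on a 0–9 scale, bucket each average (LOW < 3, MEDIUM < 6, else HIGH), and combine the two levels in a severity matrix. A simplified illustration, not the exact TMS benchmark logic:

```python
def level(score):
    # Standard OWASP Risk Rating buckets on a 0-9 scale
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

# Severity matrix: overall severity = likelihood level x impact level
SEVERITY = {
    ("HIGH", "HIGH"): "CRITICAL",
    ("HIGH", "MEDIUM"): "HIGH", ("MEDIUM", "HIGH"): "HIGH",
    ("MEDIUM", "MEDIUM"): "MEDIUM",
    ("HIGH", "LOW"): "MEDIUM", ("LOW", "HIGH"): "MEDIUM",
    ("MEDIUM", "LOW"): "LOW", ("LOW", "MEDIUM"): "LOW",
    ("LOW", "LOW"): "NOTE",
}

def risk(likelihood_factors, impact_factors):
    """Average each factor group, bucket, then look up overall severity."""
    l = sum(likelihood_factors) / len(likelihood_factors)
    i = sum(impact_factors) / len(impact_factors)
    return SEVERITY[(level(l), level(i))]
```

With a likelihood average of 7 and an impact average of 8 this returns CRITICAL, matching the Stage 7 example; enforcing MFA drops the likelihood average to 2, pulling the same threat down to MEDIUM.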
How TMS Exercises Each PASTA Stage

Each training session in TMS deliberately exercises Stages 3, 4, 6, and 7. Stages 1, 2, and 5 are pre-loaded into the scenario design — giving you the context without requiring a full real-world scoping exercise.

| PASTA Stage | What TMS Provides | TMS Activity | Exercised in |
|---|---|---|---|
| S1 Define Objectives | Scenario description + difficulty sets the business context | Pre-loaded — read at session start | Learning |
| S2 Technical Scope | 5-component architecture + trust zone layout per scenario | Pre-loaded — shown in DFD | Learning |
| S3 Decomposition | DFD with External / DMZ / Internal trust zones + component types | Inspect DFD; select which component each threat belongs to | Practice · Assessment |
| S4 Threat Analysis | Mixed pool of 15+ real threats + distractors; full MITRE ATT&CK mapping | Step 1 — Threat Identification: identify real threats, reject distractors | Practice · Assessment |
| S5 Vulnerability Analysis | OWASP category, MITRE technique, and distractor explanations | Review OWASP tags and technique IDs shown in Practice hints | Learning · Practice |
| S6 Attack Modelling | Attack chain per scenario: step-order lateral movement path | Review attack chain on Results page; MITRE tactic chain with coverage gaps | Results page |
| S7 Risk & Impact | OWASP 8-factor expert benchmarks per threat (L × I = risk matrix) | Step 2 — Risk Rating: rate all 8 factors; scored against expert benchmark | Practice · Assessment |
Controls Selection → Where does it fit?
TMS Step 3 (Control Selection) sits beyond the 7 PASTA stages — it is the treatment decision that follows risk analysis. In a real PASTA engagement this is captured in the final risk register as recommended mitigations with residual risk scores. In TMS it is role-differentiated: a Developer selects a Code Change, an Analyst a Design Decision, and an Architect a Security Pattern — all addressing the same threat at different layers of the architecture.
PASTA vs STRIDE — When to Use Which
| Dimension | STRIDE | PASTA |
|---|---|---|
| Primary question | "What can go wrong with this component?" | "What is the attacker's goal and what is the business impact?" |
| Starting point | DFD components and data flows | Business objectives and threat intelligence |
| Depth | ✔ Fast — exhaustive per component in hours | ◑ Thorough — full engagement takes days |
| Risk scoring | ◑ Supported — DREAD or OWASP can augment | ✔ Native — risk rating built into Stage 7 |
| Attack simulation | ✖ Not native — MITRE ATT&CK mapping needed | ✔ Native — Stage 6 is attack tree / simulation |
| Executive output | ✖ Weak — produces threat lists, not risk-ranked findings | ✔ Strong — produces business-impact risk register |
| Developer-friendly | ✔ Yes — intuitive per-component categories | ◑ Less so — requires threat intel and risk expertise |
| Best for | Sprint-level threat modelling; developer security training; architecture reviews | Pre-launch risk assessments; red team scoping; CISO-level reporting; compliance |

Used together — recommended: use STRIDE (Stages 3–4) to enumerate threats exhaustively, then PASTA (Stages 5–7) to prioritise by business risk and simulate attack paths. TMS trains exactly this hybrid workflow.
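The hybrid workflow — STRIDE to enumerate, PASTA to prioritise — can be sketched as two composable steps. The per-element applicability map below is a common heuristic (e.g. external entities are typically exposed to Spoofing and Repudiation only), not a strict rule, and the risk function is a stand-in for the Stage 7 rating:

```python
# Hybrid-workflow sketch: STRIDE enumerates candidate threats per DFD
# element, then a PASTA-style risk score orders them for treatment.
STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information Disclosure",
          "Denial of Service", "Elevation of Privilege"]

APPLIES = {  # heuristic: which categories typically apply per element type
    "process": set(STRIDE),
    "data_store": {"Tampering", "Repudiation", "Information Disclosure",
                   "Denial of Service"},
    "data_flow": {"Tampering", "Information Disclosure", "Denial of Service"},
    "external_entity": {"Spoofing", "Repudiation"},
}

def enumerate_threats(components):
    """components: list of (name, element_type) -> STRIDE candidates."""
    return [(name, category) for name, element_type in components
            for category in STRIDE if category in APPLIES[element_type]]

def prioritise(threats, risk_of):
    """Order STRIDE output by a PASTA-style likelihood x impact score."""
    return sorted(threats, key=risk_of, reverse=True)
```

Enumeration stays exhaustive and mechanical (STRIDE's strength), while the ordering step is where threat intelligence and business impact enter (PASTA's strength).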