Most organisations still sound confident when they talk about cyber readiness. Policies are signed off, tools are deployed, and annual tabletop exercises give the impression of coordination. Yet when those same teams are tested in realistic drills, decision accuracy drops sharply and containment often stretches into days, not hours.

For IT leaders and SOC managers, that gap between confidence and reality is the real risk. The problem is simple: tabletop exercises (TTX) and technical testing (TTP replay, purple teaming, red teaming) usually run in parallel, with different owners, different objectives, and no shared evidence. That leaves everyone relying on assumptions when incidents need to be handled against hard deadlines from NIS2, GDPR, contracts, and customers.

Damovo’s Offensive Cybersecurity Advisory Team (Lares) has developed a 6‑step Adversarial Integration Methodology that deliberately fuses TTX and live‑fire TTP replay into a single, closed loop. The aim is straightforward: replace “we think we’d catch it” with evidence that proves how your organisation really detects, escalates, and reports critical attacks.

The readiness gap: when assumptions meet reality

On paper, most organisations believe they can detect, contain and recover from major incidents. In independent benchmarking, however, decision‑making accuracy in realistic drills falls to a fraction of what leaders expect, and median containment times routinely exceed 24 hours. That is a serious problem in a European environment where incident reporting clocks now start within hours, not days.

Classic tabletops are part of the issue. They expose gaps in people and process – confused escalation paths, unclear ownership, missing regulatory steps – but they run on technical assumptions that are never verified. Participants confidently state that “the firewall will block that”, “EDR will definitely alert”, or “the SOC would notice this within ten minutes”, and the exercise moves on.

On the other side, many purple team and detection‑engineering programmes still chase framework coverage. Teams play “MITRE ATT&CK bingo”, working through as many techniques as possible without tying them to specific business risks, decision points or reporting obligations.

The result is a readiness theatre where neither track answers the only question that matters to IT and SOC leaders: “for this attack, in this environment, can we see it, act on it, and demonstrate that we did the right thing in time?”

Why TTX alone is not enough for IT and SOC leaders

From an operational viewpoint, traditional tabletop programmes have three major blind spots.

  • No telemetry: Tabletops generate notes and action items, not log files, alerts or timing data that can feed detection engineering.
  • Unverified assumptions: Engineers are forced to guess how EDR, SIEM, identity, OT and cloud controls would behave because nobody is actually executing the attack.
  • No link to tooling roadmap: Without concrete evidence, it is difficult to prioritise logging changes, rule tuning or architecture work that would materially improve response.

This is made more acute by regulatory timelines. NIS2 incident reporting can require early warnings within 24 hours and more detailed updates within 72 hours, while GDPR imposes its own 72‑hour notification rule for personal data breaches. If TTX outputs are not tied to what your tools can actually see and how fast they see it, leaders are effectively gambling with those deadlines.
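To make those clocks concrete, here is a minimal sketch of how the deadlines above could be derived from the moment an incident is classified as reportable. The function name and the fixed offsets are illustrative only; actual obligations depend on jurisdiction, sector, and the nature of the incident, and legal counsel should confirm the applicable rules.

```python
from datetime import datetime, timedelta, timezone

def reporting_deadlines(classified_at: datetime) -> dict:
    """Hypothetical helper: derive the regulatory clocks discussed above.

    Offsets reflect the commonly cited NIS2 early warning (24h),
    NIS2 incident notification (72h), and GDPR breach notification (72h)
    windows; they are a simplification, not legal advice.
    """
    return {
        "nis2_early_warning": classified_at + timedelta(hours=24),
        "nis2_incident_notification": classified_at + timedelta(hours=72),
        "gdpr_breach_notification": classified_at + timedelta(hours=72),
    }

classified = datetime(2024, 5, 1, 9, 30, tzinfo=timezone.utc)
for name, due in reporting_deadlines(classified).items():
    print(f"{name}: {due.isoformat()}")
```

Even this trivial calculation makes the point of the section: if the tooling cannot surface the incident quickly enough, the deadlines are already spent before anyone starts drafting a notification.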

Introducing Damovo’s 6‑step Adversarial Integration Methodology

To close this gap, Damovo’s Offensive Cybersecurity Advisory Team (Lares) uses a 6‑step Adversarial Integration Methodology that starts with a single, realistic threat scenario and carries it all the way from tabletop discussion to live‑fire TTP replay and retesting.

At a high level, the six steps are:

  1. Start with a threat your business actually cares about.
  2. Run a tabletop that captures every assumption – and every clock.
  3. Translate the story into an adversarial TTP playbook.
  4. Replay the TTPs in your environment.
  5. Reconcile telemetry with tabletop assumptions.
  6. Fix, tune, and prove improvement.

Each step is designed so IT leaders, SOC managers, risk, legal and communications are all working from the same story and the same evidence, instead of separate exercises and slide decks.

Step 1: Start with a threat your business actually cares about

Everything begins with a believable scenario that could genuinely disrupt your organisation – not just an abstract “ransomware somewhere in the network”. The scenario is built from:

  • Internal cyber threat intelligence and incident history.
  • Sector‑specific information sharing (for example, ISACs) and current threat reports.
  • Input from legal, risk, and procurement on critical suppliers and third‑party exposure.


Examples include compromise of a key SaaS provider leading to cloud privilege escalation in a revenue‑critical business unit, targeted phishing that delivers identity compromise across your hybrid AD and cloud tenants, or data exfiltration via sanctioned collaboration tools used by customer‑facing teams.

If your own engineering leads or application owners do not believe the scenario could happen, they will disengage; this first step makes sure they see their own systems and responsibilities in the story.

Step 2: Run a tabletop that captures every assumption

With the scenario in place, the next move is a targeted tabletop exercise for IT, security, operations, legal and communications. The aim is to stress the organisation while several regulatory and contractual clocks are running in parallel, not just to walk through a generic playbook.

This includes NIS2 early‑warning and notification deadlines, GDPR’s 72‑hour breach reporting requirement, and any sector‑specific or contractual notification clauses. Facilitators pin these onto the scenario and ask, at each stage, who is responsible for classification, who talks to whom, and when notifications start, even if root cause is still unclear.

Most importantly for IT and SOC leaders, every technical statement becomes an explicit assumption: “we would know within an hour”, “the SIEM would correlate these events”, “IAM analytics would flag the behaviour”, or “we could isolate the affected systems before notifying authorities”. Those assumptions are written down with timestamps; they become hypotheses that Step 3 and Step 4 will test.
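One way to picture that assumption log is as a simple structured record: every technical claim becomes a timestamped hypothesis that the TTP replay in Step 4 can confirm or refute. The field names below are an illustrative sketch, not part of Damovo's methodology.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Assumption:
    """A tabletop claim recorded as a testable hypothesis (illustrative)."""
    statement: str                                  # e.g. "SIEM correlates these events"
    owner: str                                      # who made the claim
    expected_detection_minutes: Optional[int] = None
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    verified: Optional[bool] = None                 # filled in during Step 5

assumption_log = [
    Assumption("EDR will alert on credential dumping", "SOC lead", 10),
    Assumption("IAM analytics flags anomalous token use", "IAM owner", 30),
    Assumption("We can isolate affected systems before notifying", "IT ops"),
]
```

Keeping the log in this shape, rather than as free-text meeting notes, is what allows Step 5 to reconcile each claim against telemetry one by one.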

Step 3: Translate the story into an adversarial TTP playbook

Once the tabletop is complete, Damovo’s offensive engineers translate the narrative into a concrete, hands‑on‑keyboard TTP playbook. The scope is tightly aligned to the scenario instead of trying to exercise every technique in a framework.

For a cloud identity compromise scenario, for example, the playbook may include:

  • Specific credential theft and token reuse paths against your cloud identity provider or SaaS platform.
  • “Living off the land” techniques using existing admin tools, scripting capabilities, and approved services.
  • Data exfiltration routes via your sanctioned cloud storage, email systems or collaboration apps.


Throughout this step, IT architecture and SOC teams stay involved so the playbook reflects actual control sets, logging capabilities, and change‑management constraints. That prevents tests from focusing on attacks your environment is not exposed to, or on controls you do not actually run in production.

Step 4: Replay the TTPs in your environment

Next comes a controlled purple team engagement using the engineered playbook in your production or production‑like environment. This is where assumptions from the tabletop are replaced by observable behaviour from your tools and teams.

During this step, Damovo’s Offensive Cybersecurity Advisory Team (Lares) works with your SOC and IT teams to capture three key outputs:

  • Execution reality: which attack steps succeeded, which were blocked, and which were only partially detected.
  • Telemetry footprint: events and alerts from EDR/XDR, SIEM, identity platforms, cloud logs and network controls, plus visibility gaps where expected data is missing or delayed.
  • Timings: actual Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR) across teams and layers.


Lares describes this as testing the nervous system of your organisation: everything that sits between an attacker’s keystrokes and your dashboards. For SOC managers, this is the moment where detection rules and playbooks are finally exercised against the exact behaviours the business discussed in Step 2.
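The timing output above reduces to straightforward arithmetic over the timestamps captured during the replay. A minimal sketch, with hypothetical timestamps standing in for real telemetry:

```python
from datetime import datetime
from statistics import mean

def mean_minutes(starts, ends):
    """Mean elapsed time in minutes between paired timestamps."""
    return mean((e - s).total_seconds() / 60 for s, e in zip(starts, ends))

# Illustrative per-technique timestamps from a two-step replay:
executed  = [datetime(2024, 5, 1, 10, 0),  datetime(2024, 5, 1, 11, 0)]
detected  = [datetime(2024, 5, 1, 10, 45), datetime(2024, 5, 1, 12, 30)]
contained = [datetime(2024, 5, 1, 11, 30), datetime(2024, 5, 1, 13, 0)]

mttd = mean_minutes(executed, detected)   # mean time to detect: 67.5 min
mttr = mean_minutes(detected, contained)  # mean time to respond: 37.5 min
```

Computing these per technique, rather than as one environment-wide average, is what lets Step 5 pinpoint exactly which behaviours blew past the tabletop's expectations.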

Step 5: Reconcile telemetry with tabletop assumptions

The fifth step is where the methodology exposes the real readiness gap. The timeline and decisions from the tabletop are laid side by side with the evidence from the TTP replay.

Typical findings include:

  • Leadership assumed “we will detect this within 30 minutes”; telemetry shows the first meaningful alert arrived after 90 minutes because cloud logs were delayed or incomplete.
  • The tabletop stated “the SIEM will correlate these events”; in reality, individual logs were present but correlation rules never fired, so the activity blended into background noise.
  • The plan expected IAM or UEBA to highlight unusual token usage; in the replay, no alert was generated because analytics were not enabled for that provider or tenant.


From this, Damovo helps quantify metrics such as Assumption Accuracy Rate (what percentage of TTX technical assumptions proved correct), Detection Fidelity (signal‑to‑noise performance for the tested behaviours), and Remediation Validation (how many identified gaps are successfully closed in retests). These metrics give IT leaders and SOC managers something far more concrete to report than generic “maturity” scores.
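The three metrics are simple ratios once the reconciled data exists. The sketch below shows one plausible way to compute them; the function names and example inputs are assumptions for illustration, not Damovo's internal definitions.

```python
def assumption_accuracy_rate(results):
    """Share of tabletop technical assumptions confirmed by the replay."""
    return sum(results) / len(results)

def detection_fidelity(true_alerts, total_alerts):
    """Signal-to-noise for the tested behaviours: useful alerts / all alerts."""
    return true_alerts / total_alerts

def remediation_validation(gaps_closed, gaps_found):
    """Share of identified gaps confirmed closed in retests."""
    return gaps_closed / gaps_found

# Hypothetical engagement: 2 of 4 assumptions held, 12 of 48 alerts
# were relevant, and 7 of 10 gaps were closed and re-verified.
print(assumption_accuracy_rate([True, False, False, True]))  # 0.5
print(detection_fidelity(true_alerts=12, total_alerts=48))   # 0.25
print(remediation_validation(gaps_closed=7, gaps_found=10))  # 0.7
```

Because each ratio is tied to a specific scenario and retest, it can be tracked release over release, which is precisely what makes it more useful to a board than a generic maturity score.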

Step 6: Fix, tune, and prove improvement

The final step turns all of this insight into measurable progress. Damovo works with your teams to prioritise remediation based on risk, regulatory exposure, and implementation effort, then validates changes through targeted retesting.

Common improvements include:

  • Closing logging gaps and normalising telemetry across cloud, SaaS, network and identity platforms.
  • Tuning or creating SIEM correlation rules, EDR/XDR policies and IAM analytics that specifically detect the replayed behaviours.
  • Adjusting incident classification, escalation and communication runbooks to remove bottlenecks exposed in the tabletop.


Relevant parts of the TTP playbook are then rerun to verify that MTTD and MTTR have improved and that previously missed activity is now reliably detected. For IT leadership, this provides a before‑and‑after story for each scenario: here is what we assumed, what we observed, what we changed, and how much faster and more reliably we can now respond.

What this looks like in practice: cloud identity compromise

Consider a scenario many organisations now worry about: compromise of a critical SaaS provider leading to cloud privilege escalation.

  • Step 1, threat intelligence shows active supply‑chain risk against your identity provider.
  • Step 2, the tabletop assumes IAM analytics will flag anomalous token use and the SOC will contain the identity within fifteen minutes, while NIS2 early‑warning and GDPR reporting clocks are already running.
  • Step 3, engineers build a TTP playbook focused on cloud credential theft and token reuse paths that match your environment.
  • Step 4, the purple team executes the token theft. EDR never sees the cloud‑only movement, and SIEM rules fail to trigger because cloud logs are misconfigured.
  • Step 5, it becomes clear that the fifteen‑minute containment assumption was completely unrealistic given the existing telemetry; no alert was generated at all.
  • Step 6, logging is corrected, IAM correlation searches are deployed, and the scenario is rerun. This time, the SOC detects and contains the identity in twelve minutes, turning an assumption into a proven capability.

This is the kind of concrete, evidence‑backed narrative that makes it easier to justify detection‑engineering work, logging investments, and process changes to both internal stakeholders and regulators.

How Damovo can help

Through Damovo Security Services powered by Lares, organisations can bring this 6‑step Adversarial Integration Methodology into their own environments as part of an ongoing validation programme, not just a one‑off engagement.

Working alongside your IT and SOC teams, Damovo can help you:

  • Design realistic, threat‑informed scenarios that reflect your environment and regulatory exposure.
  • Run tabletops that produce testable assumptions and decision logs rather than generic action lists.
  • Execute targeted TTP replay to validate detection and response for those scenarios.
  • Quantify the readiness gap with metrics that boards, auditors and regulators can understand.
  • Build a repeatable cycle of remediation and retesting that improves performance over time.

Turn your next tabletop into evidence

If you are planning your next tabletop exercise, this is the right moment to turn it into more than a workshop. By combining TTX with TTP replay in a structured, 6‑step loop, you can move from assumptions and slideware to telemetry, timings, and proven improvement.

Next step: talk to your Damovo account team or contact the Lares team directly to explore how we can help you implement the 6‑step Adversarial Integration Methodology for your organisation.