
From Debriefing to Topological Learning: Rethinking Crisis-Inducing Surprise after October 7


This analysis is a condensed version of an article published in Volume 2, Issue 2 of the Elrom Center’s journal, Aerospace & Defense, in December 2025.

 

By Professor Eviatar Matania[1]

 

It is widely acknowledged that on October 7, 2023, the Israeli Air Force failed to translate its capabilities into effective action and to thwart, or at least significantly curtail, Hamas’s surprise attack and its subsequent infiltration into Israel’s western Negev. According to the IAF’s initial debriefing, the force’s ineffectiveness during the initial hours stemmed from a combination of complete surprise and an absence of preparedness for the scenario that ultimately unfolded. As the Commander of the IAF acknowledged: “Whatever we might have done, without intelligence and advance preparation, we could not have prevented the disaster but only reduced the damage.”

 

The Air Force has since entered a process of learning and adaptation in response to this failure. For example, the Participation and Helicopters Group has been restructured and redefined as the Participation Border Defense Command, reflecting a shift toward preparing the Air Force for potential ground invasions, including combat within Israeli territory. At the same time, the Air Force is seeking to expand and modernize its attack helicopter fleet, procure reconnaissance and defense aircraft, and reinforce the protection of its bases against infiltration and takeover threats.

 

Debriefing and extracting lessons aimed at preventing the recurrence of invasion scenarios similar to that of October 7 constitute a significant component of the learning process emerging from this disaster, yet they represent only an initial step. This paper seeks to move beyond this stage by proposing a foundation for a more comprehensive and holistic approach to learning from what we term a “crisis-inducing surprise”.

We define a crisis-inducing surprise as an event that generates a functional breakdown. This may occur either because the surprised party’s perception of reality collapses—when the scenario falls outside the spectrum of anticipated threats—or because the underlying intelligence picture is fundamentally flawed. Learning, as distinct from debriefing, is oriented towards addressing future scenarios that may differ substantially in their execution from the events of October 7 yet share their underlying logic: a surprise attack that temporarily prevents the IAF from translating its capabilities into effective operational action.

The proposed learning process is structured around a two-dimensional analytical space defined by two axes. The first axis captures the origin of the surprise attack, understood in terms of the operational domain rather than the identity of the adversary. At one end of the spectrum are attacks originating exclusively in the air domain, the primary responsibility of the Israeli Air Force. The spectrum continues through attacks centered on the ground domain, the principal arena for territorial seizure in wars that threaten state sovereignty and survival, as exemplified by the October 7 attack. At the far end of the axis are multi-domain surprise scenarios, involving the simultaneous integration of several domains (e.g., land and cyber, air and cyber, or sea and land).

The second axis represents the target and scope of the attack. It ranges from surprise scenarios directed primarily at the Israeli Air Force (its operations, assets, and command-and-control systems), through attacks generating localized effects at the state level, affecting a specific geographical region or sector. The events of October 7 constitute a representative case of such a regional-level impact, which may, but need not, include significant degradation of the Air Force itself. At the far end of the spectrum are crisis-inducing surprise scenarios involving widespread and systemic damage to the state, including large-scale disruption across multiple sectors or the outbreak of all-out war.

Accordingly, the following matrix maps crisis-inducing surprise scenarios along the two axes described above.



Table: The crisis-inducing surprise scenario space along two axes. Mapping this space enables understanding and response planning for potential extreme scenarios in each cell, and consequently, through combination, for the entire infinite scenario space. The table provides examples of possible scenarios for each category.

 

This framework provides a comprehensive perspective on crisis-inducing surprise scenarios, with a particular focus on the Air Force’s preparedness. The central cell—representing a ground attack confined to a specific region of the country—corresponds to the October 7 invasion. The remaining cells capture a wide range of potential crisis scenarios across different domains and levels of impact.

 

For example, the top-left cell (representing multi-domain attacks directed primarily at the Air Force) encompasses several possible scenarios, whether occurring independently or in combination. These include: the disruption of the Air Force’s command-and-control systems through a cyberattack; a ground incursion by elite units from Hamas, Hezbollah, or other actors, supported by UAVs, aimed at damaging and disabling Air Force bases; infiltration into an Air Force installation under the cover of a civilian disturbance or incited mob, followed by attacks by organized cells targeting infrastructure and aircraft; and large-scale precision missile strikes against critical Air Force assets.

 

The learning process is advanced through the use of extreme-case scenarios. By structuring the space of possibilities into distinct categories, the framework generates not a single reference scenario, but a matrix of nine analytically differentiated configurations of surprise. Each cell thus represents a category of potential disruption rather than a specific event. 

 

Selecting an extreme-case scenario within each category—and preparing for it—creates a form of robustness by design. By a fortiori logic, preparedness for the most severe manifestation within a given category extends to a broader range of less extreme, unstated scenarios that share the same underlying configuration. What matters is not the particular scenario chosen, but the deliberate construction of an extreme-case benchmark within each category, ensuring that no configuration of crisis-inducing surprise remains unexamined.

 

How does the proposed topological framework enable coping with a crisis-inducing surprise, which, true to its name, triggers a destabilization of the perception of reality and a crisis of psychological readiness?

The answer rests on the three-stage logic of the proposed topology-based learning process for deploying the space of crisis-inducing surprise scenarios. First, the space is deployed through a complete, principled analysis along the two axes of possible crisis-inducing surprise. This logical space is constructed abstractly: it is not sensitive to intelligence, to prior conceptions about the enemy (its capabilities and readiness), or to any designated threat. As such, it inherently encompasses the full range of possible scenarios, or at least comes very close to capturing every type of surprise.

Second, force buildup, readiness, and training across all scenario categories should create the capacity to respond to crisis situations in a nearly automatic manner, since almost every crisis-inducing surprise that actually occurs will fall within a cell of the matrix or constitute a combination of several cells. A near-automatic response significantly reduces the time required to react to a crisis. In other words, even if the force and its commanders are conceptually in a state of crisis, the semi-automatic response should limit the consequences and enable rapid, near-full operational recovery. The Commander's statement, that "whatever we might have done, without intelligence and advance preparation...", would no longer hold, because appropriate preparedness would now be in place.

 

 


[1] Eviatar Matania is a full professor at the School of Political Science, Government and International Affairs at Tel Aviv University, where he heads the Master's Programs in Security Studies and in Politics, Cyber and Government. He also serves as Head of the Elrom Center for Air and Space Studies and Editor-in-Chief of its journal, Aerospace & Defense. https://elrom.sites.tau.ac.il/khvqrym/eviatar-matania
