arXiv:2011.03263v2 [cond-mat.stat-mech] 13 May 2021

Hybrid-type synchronization transitions: where incipient oscillations, scale-free avalanches, and bistability live together

Victor Buendía,1,2,3 Pablo Villegas,4 Raffaella Burioni,2,3 and Miguel A. Muñoz1

1Departamento de Electromagnetismo y Física de la Materia e Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, E-18071 Granada, Spain
2Dipartimento di Matematica, Fisica e Informatica, Università di Parma, via G.P. Usberti, 7/A - 43124, Parma, Italy
3INFN, Gruppo Collegato di Parma, via G.P. Usberti, 7/A - 43124, Parma, Italy
4Networks Unit, IMT School for Advanced Studies, Lucca, Italy & Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, E-18071 Granada, Spain
(Dated: May 14, 2021)

The human cortex is never at rest but in a state of sparse and noisy neural activity that can be detected at broadly diverse resolution scales. It has been conjectured that such a state is best described as a critical dynamical process —whose nature is still not fully understood— where scale-free avalanches of activity emerge at the edge of a phase transition. In particular, some works suggest that this is most likely a synchronization transition, separating synchronous from asynchronous phases. Here, by analyzing a simplified model of coupled excitable oscillators describing the cortex dynamics at a mesoscopic level, we investigate the possible nature of such a synchronization phase transition. Within our modelling approach, we conclude that —in order to reproduce all key empirical observations, such as scale-free avalanches and bistability, on which fundamental functional advantages rely— the transition to collective oscillatory behavior needs to be of an unconventional hybrid type, with mixed features of type-I and type-II excitability, opening the possibility for a particularly rich dynamical repertoire.

I. INTRODUCTION

Neurons in the cerebral cortex fire in a rather irregular and sparse way, even in the absence of external stimuli or tasks [1–3]. This activity is also manifested at large scales in the so-called resting-state networks [4, 5]. Understanding the origin and functionality of such an energetically-costly fluctuating "ground state" is a fundamental question in neuroscience, essential to shed light on how the cortex processes and transmits information [6–10].

Two twin sides of spontaneous neuronal activity are synchronization and avalanches. Depending mostly on cortical regions and functional states, diverse synchronization levels, spanning a continuous spectrum, are observed. Both synchronous and asynchronous states are regarded as essential for diverse aspects of information processing; e.g., neuronal synchronization is at the root of collective oscillatory rhythms, a crucial aspect for information transmission between distant areas [11–13], while asynchronous states also play key roles in information coding [14]. Accumulating evidence –including results from the Human Brain Project [15]– suggests that the ground state of spontaneous activity of a healthy cortex lies close to the edge of a synchronization phase transition, neither exceedingly synchronous nor overly incoherent, allowing for transient and flexible levels of neural coherence, as well as a very rich dynamical repertoire [10, 15, 16]. Indeed, abnormalities in the synchronization level are linked to pathologies such as Parkinson's disease (excess) and autism (deficit) [17–19].

On the other hand, neuronal activity is also observed to propagate in the form of irregular outbursts, termed neuronal avalanches [20–22]. These are cascades of activations clustered in time and interspersed by periods of relative quiescence, which are robustly observed across cortical areas, species, and resolution scales [10, 23–25]. Neural avalanche sizes (S) and durations (T) are empirically observed to be scale-free, i.e., their associated probability distributions exhibit power-law tails, $P(S) \sim S^{-\tau}$, $P(T) \sim T^{-\alpha}$, and the mean avalanche size obeys $\langle S(T) \rangle \sim T^{\gamma}$, with γ fulfilling the scaling relation $\gamma = (\alpha - 1)/(\tau - 1)$, which is a fingerprint of criticality [26, 27]. In other words, neuronal avalanches are scale-free, exhibiting signatures of dynamical criticality [20], with critical exponents close to those of a critical branching process, describing marginal propagation of activity [20–22, 28].
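For reference (a standard result quoted here for orientation, not stated explicitly in the text), the mean-field critical branching process mentioned above has

\tau = \frac{3}{2}, \qquad \alpha = 2, \qquad \gamma = \frac{\alpha - 1}{\tau - 1} = \frac{2 - 1}{3/2 - 1} = 2, \quad \text{i.e.} \quad \langle S(T) \rangle \sim T^{2},

so that measured exponents can be compared directly against these values.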

These empirical observations triggered the development of the "criticality hypothesis", conjecturing that the cortex might extract crucial functional advantages –e.g., large sensitivity and optimal computational capabilities– by operating close to a critical point [20, 23, 24, 29–32]. However, there is still no consensus on what type of phase transition is required for such a critical behavior; a critical branching process, as mentioned above, is the most common and straightforward interpretation, but alternative scenarios have been put forward [27, 33–36]. Among these, a Landau-Ginzburg theory has been recently proposed to describe cortical dynamics at a mesoscopic level, suggesting that the cortex might operate in a regime close to the edge of a synchronization phase transition at which, quite remarkably, scale-free avalanches emerge in concomitance with incipient oscillations [37]. This scenario had been previously suggested in the literature [35, 37–42] and is supported by empirical evidence [15, 43–45].

In spite of these advances, a minimal model —simpler than the analytically intractable Landau-Ginzburg theory— capturing the gist of such a synchronization phase transition and allowing for in-depth theoretical understanding is still missing.

Here we pose the following questions: can scale-free avalanches possibly occur at the critical point of the canonical model for phase synchronization? If not, which is the minimal model for synchronization able to accommodate them? What type of phase transition does it exhibit? Answering these questions will pave the road for the more ambitious goal of constructing a statistical mechanics of cortical networks, shedding light on the collective states –with crucial functional roles– that they sustain, as well as advancing our general understanding of phase transitions.

II. LACK OF SCALE-FREE AVALANCHES IN THE ANNEALED KURAMOTO MODEL

We start by analyzing the canonical model for phase synchronization, customarily used in neuroscience [46–49] and other fields [50]: the Kuramoto model [50–53]. However, given that —as revealed by the Landau-Ginzburg model [37]— node heterogeneity is not an essential ingredient to generate avalanches, and that, on the other hand, noise is inherent to neural dynamics, we consider here the annealed version of the Kuramoto model with homogeneous frequencies [49, 50], i.e.

\dot{\varphi}_j(t) = \omega + \frac{J}{N} \sum_{i=1}^{N} \sin\!\left(\varphi_i(t) - \varphi_j(t)\right) + \sigma \eta_j(t) \qquad (1)

where the phase ϕj(t) describes the dynamical state of the j-th oscillatory unit, with j ∈ [1, N], ω is a common intrinsic frequency, ηj(t) is a zero-mean unit-variance Gaussian white noise with amplitude σ, and J is the strength of the coupling with all the neighbours on, e.g., a fully-connected network [50–54]. As is well known, Eq.(1) exhibits a synchronization phase transition where the synchronization (Kuramoto) order parameter, Z = 〈e^{iϕ}〉, experiences a (supercritical) Hopf bifurcation from a fixed point to a limit cycle, revealing the emergence of collective oscillations [51, 53].
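For concreteness, a minimal Euler–Maruyama integration of Eq. (1) on a fully connected network can be sketched as follows (our own illustration, not the authors' code; function names and parameter values are only indicative):

```python
import numpy as np

def simulate_annealed_kuramoto(N=500, J=1.0, omega=1.0, sigma=0.9,
                               dt=0.01, steps=20_000, seed=0):
    """Euler-Maruyama integration of Eq. (1) on a fully connected network.

    Returns the time series of the Kuramoto order parameter R(t) = |<exp(i*phi)>|.
    """
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=N)        # random initial phases
    R = np.empty(steps)
    for t in range(steps):
        Z = np.mean(np.exp(1j * phi))                   # Z = <e^{i phi}>
        # mean-field identity: (J/N) sum_i sin(phi_i - phi_j) = J |Z| sin(arg(Z) - phi_j)
        drift = omega + J * np.abs(Z) * np.sin(np.angle(Z) - phi)
        phi += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
        R[t] = np.abs(Z)
    return R

R = simulate_annealed_kuramoto()
print("time-averaged R:", R[len(R) // 2:].mean())       # large for sigma well below 1, small above
```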

In what follows, we set out to analyze whether this model (and related models) can exhibit scale-free outbursts of activity. For this purpose, we define and measure avalanches following the standard protocol in neuroscience [20, 23], which relies on the identification of individual-unit "spikes" (in the case of models consisting of oscillators, "spikes" can be identified in an effective way as crossings over a given phase value) as well as on the definition of a time-discretization window, needed to cluster close-in-time spikes together [20] (details of the protocol are carefully explained in Appendix A; see also [55]). Extensive computational simulations (reported in detail in Appendix B) reveal that neither at the critical point of Eq.(1) nor around it can scale-free avalanches be found; i.e. P(S) and P(T) always show exponential decays, even if other standard quantities are known to exhibit scaling for different versions of the Kuramoto model [56–58].

III. HYBRID-TYPE SYNCHRONIZATION EMERGES FROM EXCITABLE OSCILLATORS

To search for a better-suited minimal model for synchronization with scale-free avalanches, we scrutinize the Landau-Ginzburg theory (LG) in [37], known to exhibit these two features in concomitance. In a nutshell, the LG model consists of a set of diffusively coupled units, each of which represents a mesoscopic region of the cortex and is described by a set of two dynamical equations for the local density of (i) neuronal activity and (ii) available synaptic resources, respectively (see Appendix C for a more detailed presentation of the model). A salient feature is that, as the control parameter (which can be taken to be the maximum possible level of synaptic resources) is increased, each isolated individual unit can experience an infinite-period bifurcation from a low-activity fixed point to a limit cycle —with zero frequency and fixed amplitude at the bifurcation point— where both activity and synaptic resources oscillate in a "spike-like" manner, i.e., with a phase-dependent angular velocity, in contrast with the sinusoidal oscillations in the Kuramoto model of Eq. (1), suggesting that a different minimal model is needed to describe individual units. Moreover, isolated units can produce spikes even when they are slightly below threshold owing to the effect of noise; in other words, they behave as type-I excitable units [59, 60] (see Appendix D for a brief account of excitability types). Importantly, for sufficiently large values of the control parameter, the coupled oscillatory units become synchronized and, at the edge of the synchronization phase transition, scale-free avalanches emerge [37]. Finally, let us recall that a similar phenomenology is obtained in a variant of the LG model including inhibitory neural populations [37], a well-recognized key player in the generation of neural rhythms [61].

Guided by these observations, we consider a set of coupled oscillatory units, each of them represented by the canonical form of type-I excitable units (see Appendix D), or "active rotors", as defined by

\dot{\varphi} = \omega + a \sin\varphi, \qquad (2)

where ω and a are parameters [62, 63]. For a > ω the deterministic dynamics of each isolated unit exhibits a stable fixed point at ϕ* = −arcsin(ω/a), as well as a saddle point with the opposite sign. The addition of a stochastic term ση(t) can induce fluctuations beyond the saddle, thus generating large excursions of the phase before relaxing back to its equilibrium. On the other hand, for a < ω the system oscillates with phase-dependent angular velocity and, as the "saddle-node into invariant circle" (SNIC) bifurcation point a_c = ω is approached, the frequency of the oscillations vanishes, implying that the period becomes infinite, while the amplitude remains constant, as in the LG model (let us recall that type-I excitability can rely either on a "saddle-node into invariant circle" (SNIC) bifurcation, as in the equation above, or on a homoclinic bifurcation, as in the LG model [37]: both types share the common relevant feature of generating spike-like infinite-period oscillations at the bifurcation point [59]).

FIG. 1. (a) Illustrative diagram sketching in a qualitative way the phase diagram —underlining the main bifurcations and phases— as analytically obtained. It reveals the existence of synchronous and asynchronous states, and a collectively excitable regime, separated by diverse types of bifurcations for the collective order parameter. The central triangular-shaped region (whose size has been enlarged for clarity) describes a bistability regime; its vertices correspond to codimension-2 bifurcations (see main text). Finally, there is also a homoclinic line (Hom) linking the SNL and the BT points. (b) Sketch of the different phases/regimes represented in terms of their corresponding complex Kuramoto order parameter, Z; a point represents each regime in the unit circle (since |Z| ≤ 1), with filled circles describing fixed points, open circles standing for unstable fixed points with associated limit cycles, and mixed-color circles describing saddles. Bifurcations between different regimes are indicated as arrows.
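Returning to the single-unit dynamics of Eq. (2): the following sketch (our own, with indicative parameter values) integrates one noisy active rotor and counts noise-induced spikes, illustrating type-I excitability slightly below threshold (a > ω):

```python
import numpy as np

def active_rotor_spike_count(a=1.05, omega=1.0, sigma=0.3,
                             dt=0.01, steps=200_000, seed=1):
    """Single noisy active rotor: dphi = (omega + a*sin(phi))*dt + sigma*dW (Eq. (2) plus noise).

    For a > omega the deterministic unit rests at a stable fixed point; noise can push
    the phase past the nearby saddle, triggering a full 2*pi excursion ("spike").
    The number of completed rotations is used here as a crude spike count.
    """
    rng = np.random.default_rng(seed)
    phi = 0.0                                            # unwrapped phase
    for _ in range(steps):
        phi += (omega + a * np.sin(phi)) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return int(phi // (2.0 * np.pi))

print("with noise   :", active_rotor_spike_count(sigma=0.3))   # sparse, noise-induced spikes
print("without noise:", active_rotor_spike_count(sigma=0.0))   # 0: the unit stays at rest
```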

Thus, finally, the full model reads

\dot{\varphi}_j = \omega + a \sin\varphi_j + \frac{J}{M_j} \sum_{i \in \mathrm{n.n.}(j)} \sin\!\left(\varphi_i - \varphi_j\right) + \sigma \eta_j(t), \qquad (3)

where the sum runs over the Mj nearest neighbors of unit j ∈ {1, 2, . . . , N} in a given network. We consider versions of the model embedded on fully-connected (FC) networks (Mj = N, ∀j), which are useful for analytical approaches, and on two-dimensional (2D) lattices (as, at large scales, the cortex can be treated as a 2D sheet [10, 37]). Let us remark that Eq. (3) is sometimes called the Shinomoto-Kuramoto model or Winfree's ring model [64, 65] and has been analyzed in diverse contexts [64, 66–68]. Its collective state can be quantified by the Kuramoto-Daido parameters:

Z_k = \langle e^{ik\varphi} \rangle \equiv \frac{1}{N} \sum_{j=1}^{N} e^{ik\varphi_j} \equiv R_k e^{i\psi_k}, \qquad (4)

where k is any positive integer number; for k = 1 this coincides with the usual Kuramoto parameter. Analytical progress is achieved in a mean-field approximation (which, as usual, becomes exact for infinitely large FC networks), which consists in: (i) writing down a continuity (Fokker-Planck) equation for the probability distribution of phase values, P(ϕ, t) [53, 64, 66–68], (ii) expanding P(ϕ, t) in a Fourier series, and (iii) writing an infinite hierarchy of coupled equations for its coefficients (which coincide with the Zk's; see Appendix E and [64, 67]):

\dot{Z}_k = Z_k\left(i\omega k - \frac{k^2\sigma^2}{2}\right) + \frac{ak}{2}\left(Z_{k+1} - Z_{k-1}\right) + \frac{Jk}{2}\left(Z_1 Z_{k-1} - \bar{Z}_1 Z_{k+1}\right), \qquad (5)

where the bar stands for complex conjugation. The associated phase diagram has been scrutinized in the literature by using different low-dimensional closures for this infinite hierarchy of coupled equations. For instance, using the Ott-Antonsen (OA) ansatz [69] or other more refined closures [67, 68], one can obtain the phase diagram summarized in Figure 2 (a careful discussion can be found in Appendices E and F). Inspection of the phase diagram reveals that there are two main types of collective dynamical regimes: oscillations (synchronous states) and stable fixed points (corresponding to either high-activity asynchronous states in the upper part of the diagram or low-activity states in the lower/right part). These are separated by different types of bifurcation lines. In particular, for low noise amplitudes σ, as the control parameter a is increased there is a collective SNIC bifurcation from the oscillatory regime to a phase characterized by a stable fixed point with very low spiking activity, but able to collectively react to external inputs, called the collective-excitability phase. In analogy with the classification of excitability types, we refer to this as a type-I synchronization transition. Conversely, for small values of a, by increasing σ one encounters a collective Hopf bifurcation, a type-II synchronization transition, to a high-activity asynchronous state (see Appendix D for a discussion of excitability types). Thus, there are two main lines of bifurcations to a collectively oscillatory phase in the two-dimensional phase diagram: one of type I and one of type II.

Remarkably, the above two bifurcation lines cannot possibly intersect each other owing to topological reasons [59], so there is no such thing as a "tricritical" point. Instead, in the region where they come close to each other, there exists a triangular-shaped region of bistability (green area in Figure 1a and b) [66, 70], delimited by three bifurcation lines and three codimension-2 bifurcations (which are signaled by red circles in Figure 1b).

FIG. 2. Phase diagram and bifurcations of Eq. (3) on a fully-connected network. (a) Phase diagram computed using direct simulations on a fully-connected network with N = 5000 oscillators (ω = 1, J = 1). Collective oscillations are computationally detected with the Shinomoto-Kuramoto order parameter (see Appendix G) [64]. The location of the bistability region was established by numerically solving Eq. (5) for the first k = 50 harmonics (imposing Z_{51} = 0). The inset shows a zoom of the bistability region, where the codimension-2 points are marked; a star indicates a point with scale-free avalanches. The three segments (red, green and blue) indicate three possible types of transition to synchronization.

For the sake of completeness and illustration, Figure 1b presents a sketch illustrating the (complex) order parameter in the different phases and the different bifurcation lines. In particular, there is a Bogdanov-Takens (BT) point, where the Hopf-bifurcation line finishes, colliding tangentially with a line of saddle-node bifurcations; a saddle-node-loop (SNL) point, where the line of SNIC bifurcations ends, becoming a standard saddle-node line; and a cusp, where two saddle-node bifurcation lines collide. Observe that the bistability region is divided into two halves by the Hopf-bifurcation line, so the regime of collective excitability coexists either with oscillations below the Hopf line or with the high-activity asynchronous state above the Hopf line. In other words, in this region the Hopf bifurcation occurs in one of the branches of two coexisting solutions, i.e., in concomitance with bistability.

In order to obtain a highly accurate phase diagram, needed for the forthcoming analyses, we complemented the above analytical approximations (i.e., closures) with extensive computational analyses of the complete stochastic system (size N = 10^4) as well as a direct numerical integration of Eq. (5) truncated at sufficiently large values of k (k = 50). The results of this combined approach are summarized in Figure 1a, which shows that the overall shape of the phase diagram is qualitatively identical to the one predicted by low-dimensional closures, but with a slightly smaller bistability region. Remarkably, a very similar phase diagram, with the same types of phases and phase transitions, is found in 2D lattices (see Appendix H).

The reader can gain insight into the dynamics in each phase and around the different transition points by inspecting Figure 3, as well as the Supplementary Material [71], which contains a set of videos for diverse parameter choices.

The moral of these findings is that, to analyze general synchronization transitions, it is necessary to consider not only the standard type-I and type-II cases, but also more complex scenarios, including the case in which the transition to synchrony occurs in concomitance with bistability, i.e., when incipient oscillations coexist with low-activity asynchronous states; we call these "hybrid-type (HT) synchronization transitions". This is illustrated in Figure 3 (obtained from simulations on FC networks), which shows representative raster plots around each of these possible transition types (the colored segments in Figure 1a identify these three examples in the phase diagram): type-I (SNIC), type-II (Hopf), and HT synchronization transitions, respectively. In particular, it shows results slightly within the synchronous phase (left), very close to criticality (center), and in the asynchronous phase (right). Surprisingly, even naked-eye inspection already reveals that raster plots near the HT transition exhibit a much larger dynamical richness (see below for a more quantitative analysis).

IV. SCALE-FREE AVALANCHES NEAR THE HYBRID-TYPE SYNCHRONIZATION TRANSITION

Following the protocol for avalanche detection (Appendix A) we determined the statistics of avalanche sizes and durations in these three scenarios. As shown in Figure 4 for FC networks, power-law distributed avalanches emerge neither at the type-II (Hopf) transition (red curves) nor at the type-I (SNIC) one (green curves). For instance, for the Hopf-bifurcation case the results resemble, not surprisingly, those for the annealed Kuramoto model, which also exhibits a type-II collective bifurcation, while for the type-I case there are large collective events recruiting most of the units in the system into anomalously large avalanches, separated by extremely long periods of quiescence (as illustrated in the raster plot in Figure 3). In any case, we observe no signature whatsoever of scale-free avalanches at any point along the line of SNIC bifurcations (green dots in Figure 4; see also Appendix G).

FIG. 3. Raster plots in a fully-connected network of size N = 5000 for each of the three considered cases (as indicated in Figure 2) —Hopf (type-II) bifurcation, hybrid-type (HT) synchronization, and SNIC (type-I) bifurcation (from top to bottom)— in the synchronous phase (left column), right at the transition point or very slightly within the synchronous/oscillatory phase (central column), and in the asynchronous phase (right column). [Panel parameters, as recovered from the figure labels: type II (Hopf): a = 1.0 with σ = 0.55, 0.67, 0.70; hybrid type: a = 1.07 with σ = 0.48, 0.496, 0.52; type I (SNIC): σ = 0.3 with a = 1.02, 1.024, 1.03. Vertical axis: oscillator index (0–5000); horizontal axis: time t.]

On the other hand, when entering the synchronous phase through the HT transition, clean scale-invariant avalanche distributions are observed at criticality. These avalanches span many decades and obey finite-size scaling (see Figure 4c and Figure 4d, as well as Appendix G). More specifically, one can fit exponent values τ ≈ 2.1(1), α ≈ 2.5(1) and γ⁻¹ ≈ 0.75(5), respectively [72, 73]. Observe also that there is always a peak, corresponding to anomalous system-size-spanning avalanches, but these "bumps" scale with system size in a scale-invariant way (much as in self-organized bistability [74]). Analogous results are found for 2D lattices, even if with slightly different exponent values: τ ≈ 1.7(1), α ≈ 1.9(1) and γ⁻¹ ≈ 0.75(5), which are compatible with the values reported in experimental works [34]. In both cases, the scaling relation holds and the corresponding values of γ coincide with its seemingly robust value reported in [34]. Thus far we do not have a proper analytical understanding of these numerical values and their possible universality.
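As a simple consistency check (our own arithmetic, using the fitted values quoted above), the measured exponents indeed satisfy the scaling relation within the quoted uncertainties:

\frac{\alpha - 1}{\tau - 1} = \frac{2.5 - 1}{2.1 - 1} \approx 1.36 \;\;(\mathrm{FC}), \qquad \frac{1.9 - 1}{1.7 - 1} \approx 1.29 \;\;(\mathrm{2D}), \qquad \text{to be compared with } \gamma \approx 1/0.75 \approx 1.33.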

As a complementary measure of complexity at the different types of transition points, we also computed the probability distribution of the inter-spike intervals (ISI) and its associated coefficient of variation (CV). Only around the HT transition, within the bistability region, did we find broad ISI distributions with significant coefficients of variation, CV > 1 (see Appendix G). Thus, even if type-I and type-II synchronization transitions are well known to exhibit signatures of criticality such as finite-size scaling (see, e.g., [75] and [76], respectively), they are not able —unlike HT transitions— to generate the higher levels of complexity required for scale-free avalanches and large dynamical repertoires as observed in the cortex.
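The ISI and CV computations reduce to a few lines; a minimal sketch (assuming the spike times have already been extracted as in Appendix A; not the authors' code) is:

```python
import numpy as np

def isi_statistics(spike_times):
    """Inter-spike intervals (ISI) and coefficient of variation (CV) of a single unit.

    CV = std(ISI)/mean(ISI): CV = 1 for a Poisson process, while CV > 1 signals
    burstier-than-Poisson spiking, as found here only around the HT transition.
    """
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi, isi.std() / isi.mean()

# Toy usage with hypothetical spike times: a perfectly regular train gives CV = 0.
_, cv = isi_statistics(np.arange(0.0, 100.0, 1.0))
print("CV of a regular spike train:", cv)
```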

V. DISCUSSION: ON HYBRID-TYPE SYNCHRONIZATION, CRITICALITY, AND UNIVERSALITY

One could wonder whether the rich phenomenology emerging around the HT transition stems from any of the three special codimension-2 bifurcation points that appear in the phase diagram (Figure 2). Remarkably, the saddle-node-loop (SNL) bifurcation has been previously argued to be necessary for the generation of high variability and dynamical richness in neural networks [77, 78], and it has also been established that the crucial features of dynamically rich neural networks stem from a phase diagram organized around a Bogdanov-Takens bifurcation [79, 80]. As illustrated in the actual phase diagram (see Figure 2 and its 2D counterpart in Appendix H), the bistability region is rather small in parameter space, so that all the discussed codimension-2 points are very close to each other. Thus, providing a clear-cut computational answer to the question above is a hard problem. Numerically, we can conclude that scale-free avalanches appear when entering the synchronous phase within the bistability region, in the close vicinity of such codimension-2 bifurcations.

FIG. 4. Avalanche distributions. (a,b) Avalanche-size and duration distributions for three different types of synchronization transitions (right at the transition points): type-I (SNIC) transition (green lines, a = 1.024, σ = 0.3); type-II (Hopf) transition (red lines, a = 1.04, σ = 0.575), and HT transition (blue lines, a = 1.07, σ = 0.499) in FC networks (N = 5000). Only the last one exhibits clear-cut power-law behavior, both for size and duration distributions. Insets: As in the main figure but for simulations in a 2D lattice (size 64²): type-I transition (a = 0.99, σ = 0.05), type-II transition (a = 0.60, σ = 0.64), and HT synchronization transition (a = 0.98, σ = 0.185). (c,d) Finite-size scaling analysis of P(S) and P(T) in FC networks of different sizes (as specified in the legend) in the HT regime. Insets: As in the main figures but for 2D lattices of sizes N = 16², 32², 64². (e) Averaged avalanche size as a function of the duration for different system sizes. Inset: As in the main figure, but for simulations in a 2D lattice for different system sizes. (f) Distribution of avalanche sizes multiplied by S^τ to obtain an asymptotically flat curve (to ease visual inspection of scaling) and rescaled as a function of the system size to collapse the different curves. As expected from true scale invariance, the distributions become flat and collapse for different system sizes after rescaling. Parameters at the HT transition (a, σ): for fully connected networks, N = 500: (1.07, 0.520), N = 1000: (1.07, 0.505), N = 5000: (1.07, 0.496); for 2D networks, L = 16: (0.995, 0.192), L = 32: (0.982, 0.190), L = 64: (0.98, 0.185); J = ω = 1. Dashed lines are guides to the eye showing the value of the fitted exponents.

An aspect that needs further analysis is the relationship between criticality and bistability: these two features are usually opposed to each other, as they correspond to either continuous or discontinuous phase transitions, respectively (nevertheless, we should recall that scale-invariant avalanches can also appear at discontinuous phase transitions [74, 81]). However, the onset of synchronization at the HT transition occurs within a region of bistability. The intertwining between criticality and bistability could well be at the basis of the very rich dynamical repertoires in this case. It is very plausible that the above-described LG theory, and possibly other models [41], exhibit scale-free avalanches at the edge of synchronization because some kind of bistability is also present around the synchronization transition.

Remarkably, the reported complex triangular-shaped structure —with its bifurcation lines and codimension-2 points— is rather universal and emerges in other models exhibiting both type-I and type-II transitions (see e.g. [59, 82–84]); in particular, it appears in the paradigmatic and broadly used Wilson-Cowan model of excitatory-inhibitory networks [85]. Thus, the phenomenology discussed here is rather universal and not model-specific. It is noteworthy that other scenarios have been described to connect lines of type-I and type-II transitions –e.g., subcritical Hopf bifurcations followed by a fold of limit cycles– which also involve a regime of bistability (see [40]). Note that the phenomenology underlying these alternative scenarios is, at its core, similar to the one displayed at the HT synchronization transition, including collective excitability and some codimension-2 bifurcation points. The exciting possibility that scale-free avalanches can also emerge in such cases will be investigated elsewhere.

An aspect of our model that might need to be modified to better reproduce the results of the Landau-Ginzburg theory [37] is that here there are no true absorbing states, i.e. states from which the system cannot exit (neither as a consequence of the deterministic dynamics nor as the result of stochasticity). In particular, the noise amplitude in [37] vanishes in the absence of activity, while in the present model such an ingredient cannot be easily implemented: large fluctuations can always excite individual nodes owing to the absence of any bona fide absorbing state. On the other hand, it is well established that absorbing states are needed to generate branching-process exponents (see e.g. [28]). Thus, further work is still needed to elucidate what happens in modelling approaches such as ours if absorbing states are included. This problem will be tackled elsewhere.

Even if the minimal model studied here is far too simple to be a realistic model of the cortex, it can provide us with insight into the basic dynamical mechanisms needed to generate its complex dynamical features. Furthermore, it is well established that diverse control or self-organization ("homeostatic") mechanisms are able to regulate a network to lie around some target regime or point [81, 86–88]. Thus, a cortical neural network with dynamics akin to that of the present simple model could be self-organized to the region near the HT synchronization transition [74, 81, 89], and by doing so it could rapidly shift its behavior from synchronous to asynchronous, to collective excitability, or to up-and-down transitions in a dynamical way, allowing for an extremely rich and flexible dynamical repertoire derived from operating at such an "edge of the edge". We hope this work opens the door to novel research lines, including renormalization-group analyses [90], paving the way to the long-term goal of constructing a statistical mechanics of the cortex.

ACKNOWLEDGMENTS

MAM acknowledges the Spanish Ministry and Agencia Estatal de Investigación (AEI) through grant FIS2017-84256-P (European Regional Development Fund), as well as the Consejería de Conocimiento, Investigación y Universidad, Junta de Andalucía and European Regional Development Fund, Ref. A-FQM-175-UGR18 and SOMM17/6105/UGR, for financial support. V.B. and R.B. acknowledge funding from the INFN BIOPHYS project. We also thank Cariparma for their support through the TEACH IN PARMA project. We thank S. di Santo, G. Barrios, D. Pazó and J. Zierenberg for very valuable discussions and comments.

Appendix A: Definition and measurement of avalanches

The protocol to measure neuronal avalanches is based on the one first proposed by Beggs and Plenz [20], which has been widely employed in analyzing both experimental and theoretical data (e.g. [23, 34, 37, 80, 91]). This protocol allows one to study the structure of the spatiotemporal clusters of neuronal activity, e.g., spikes in individual neurons or peaks of the negative local field potentials. Here, we discuss how the protocol is adapted to the case of phase oscillators considered in this paper, as illustrated in Fig. 5.


FIG. 5. Construction of raster plots. The individual noisy oscillators can display spike-like events; these can be detected by translating their phases into "activities" via the transformation y(t) = 1 + sin ϕ(t); then, by thresholding the activity variable y(t), one can define "spikes" as done, e.g., in local-field-potential measurements. The top panel shows the activity of 3 randomly selected oscillators and the bottom one their corresponding spiking events as a function of time. The symbol size of each event accounts for the total time-integrated activity over the threshold. The left and right cases correspond to the synchronous phase and the bistability region, respectively.

1. For each unit j, one needs to define its "activity". Once the activity is defined, a "spike" of a given oscillator j is a transient event occurring whenever its phase ϕj(t) crosses a given threshold value corresponding to significant activity. In particular, it is possible to define the activity as yj = 1 + sin ϕj, which takes large values (close to 2) when the phase crosses π/2 and small values (close to 0) when it crosses −π/2.

2. A threshold yth is defined such that whenever yj crosses it, a "spiking event" starts to be tracked; the event finishes when yj goes below threshold again. The total integral of yj − yth over such a large-activity time window is the size s_j^k of the local event at the initiation time t_j^k, where k = 1, 2, . . . labels the sequence of spikes of unit j.

3. The complete set of spikes for all units j (with times t_j^k and sizes s_j^k) defines a raster plot employed for the subsequent analyses.

4. The inter-event intervals (IEIs) between all pairs of consecutive spikes (regardless of which units generated them) are computed, together with their probability distribution P(IEI) and average value 〈IEI〉. A time scale ∆t = 〈IEI〉 is used to discretize the raster plot into time bins.

5. An avalanche is defined as a series of spikes in between two empty bins (with no spikes) such that all intermediate consecutive time bins include some activity. The sum of all s_j^k in between these two empty bins is the avalanche size S, while the avalanche duration T is defined as the time elapsed between the two limiting empty bins.

6. The probability distribution functions (histograms) of avalanche sizes and durations are then computed (a minimal code sketch of steps 4–5 is given right below).
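The binning and clustering steps can be summarized as follows (a minimal sketch under the assumption that the spike times and sizes have already been extracted as in steps 1–3; not the authors' actual implementation):

```python
import numpy as np

def detect_avalanches(spike_times, spike_sizes):
    """Avalanche sizes and durations from a pooled spike list (steps 4-5 above).

    spike_times, spike_sizes: initiation times t_j^k and sizes s_j^k of all spiking
    events, pooled over units (steps 1-3). The time bin is the average inter-event
    interval <IEI>; an avalanche is a maximal run of consecutive non-empty bins.
    """
    order = np.argsort(spike_times)
    t = np.asarray(spike_times, dtype=float)[order]
    s = np.asarray(spike_sizes, dtype=float)[order]
    dt_bin = np.mean(np.diff(t))                          # Delta t = <IEI>
    bins = np.floor((t - t[0]) / dt_bin).astype(int)      # time-bin index of each spike
    sizes, durations = [], []
    start = 0
    for i in range(1, len(t) + 1):
        # a gap of more than one bin index means at least one empty bin in between
        if i == len(t) or bins[i] > bins[i - 1] + 1:
            sizes.append(s[start:i].sum())                              # avalanche size S
            durations.append((bins[i - 1] - bins[start] + 1) * dt_bin)  # duration T
            start = i
    return np.array(sizes), np.array(durations)
```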

The empirical detection of avalanches in noisy data and/or in continuous time series is often exposed to some potential pitfalls that are important to underline.

• First of all, it relies on an arbitrary choice of a threshold for detecting "spikes" of activity. In our case, selecting a threshold yth (e.g. yth = 1.6) ensures that background noise around the fixed point is filtered out and only significant excursions in the phase value are considered.

• It also relies on the choice of the time-bin size as the average inter-event interval; this choice has been made in accordance with the usual one in neuroscience analyses [20, 23, 34, 92].

• Let us also remark that, instead of using the activity yj with its corresponding threshold, we could have alternatively defined a "spike" each time an oscillator hits a predefined phase value, such as π/2. This method does not allow one to assign an integrated "size" to each event, but otherwise makes no difference with respect to the previous one. However, in our particular case, we seek to mimic the dynamics of [37] and actual LFP recordings, where each event has a size. Therefore, we stick to the threshold yth = 1.6 for simulations.

• Let us finally stress that computational model analyses, like the ones reported here, do not suffer from the severe subsampling effects that may strongly impair empirical measurements of avalanches [93–95].

Appendix B: Avalanches in the annealed Kuramoto model

Without loss of generality, we fix J = 1 and ω = 1 —the value of ω is not relevant, as a change of variables to a co-rotating reference frame can be used to set ω = 0— leaving σ as the only free parameter. Then, σc = 1 indicates the critical point for infinitely large systems. Due to the finite size N of the considered networks, the precise location of the critical point needs to be computationally estimated; in particular, as usually done in finite-size scaling analyses, σc is estimated as the value of σ at which the variance of the order parameter R is maximal [37, 53]. As shown in Figure 6a and Figure 6b, which display the results of our computer simulations for a system of N = 500 units, the critical point is located at σc ≈ 0.98 (and shifts progressively towards 1 as N is increased). Observe that at the estimated critical point, owing to finite-size effects, the level of synchronization is R ≃ 0.2. This is illustrated in Figure 6c, which shows three characteristic raster plots within the synchronous phase (σ = 0.9), at the critical point (σc(N = 500) ≈ 0.98), and in the asynchronous phase (σ = 1.1), respectively.
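In practice, the finite-size estimate of σc amounts to scanning σ and locating the maximum of the variance of R; a minimal sketch (assuming a simulator like the one shown after Eq. (1); names and values are illustrative) is:

```python
import numpy as np

def estimate_sigma_c(sigmas, simulate, discard=10_000):
    """Finite-size estimate of the critical noise amplitude.

    For each sigma, `simulate(sigma)` is assumed to return a time series of the
    Kuramoto order parameter R(t); sigma_c is taken as the value maximizing the
    variance of R (the susceptibility-like quantity chi), as described above.
    """
    chi = np.array([np.asarray(simulate(s))[discard:].var() for s in sigmas])
    return sigmas[int(np.argmax(chi))], chi

# Example usage with the simulate_annealed_kuramoto sketch shown after Eq. (1):
# sigmas = np.linspace(0.8, 1.2, 21)
# sigma_c, chi = estimate_sigma_c(sigmas, lambda s: simulate_annealed_kuramoto(sigma=s, steps=40_000))
```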

As an important technical remark, we should emphasize that in numerical simulations of the annealed Kuramoto model, for any finite size N, the integration step δt needs to be small enough to provide sufficient time resolution, i.e. δt < ∆t = 〈IEI〉, so that avalanches can be measured. Observe that this might depend on N; in particular, in the asynchronous phase, as the number of neurons N grows, 〈IEI〉 decreases, and thus one needs progressively smaller integration steps to measure avalanches. In the limit N → ∞, the asynchronous raster plot has an average inter-event interval 〈IEI〉 → 0 and, consequently, one could say that avalanches are not well defined if a fixed time bin were considered. On the other hand, by considering sufficiently small δt for each case, our computational analyses —summarized in Figure 6, panels (d) and (e)— show that the avalanche statistics do not exhibit heavy tails.

Finally, a careful mathematical analysis of this model and its avalanches in the thermodynamic limit will be addressed elsewhere.

FIG. 6. Avalanche statistics in the annealed Kuramoto model on fully connected networks. (a) Kuramoto order parameter (R) as a function of the noise intensity σ for a finite network size N = 500. (b) The Kuramoto critical point is defined as the point of maximum variance of the order parameter, χ, which occurs at σc = 0.98 (vertical dashed line). (c) Raster plots of the annealed Kuramoto model in the synchronous phase (σ = 0.95, left plot), at the critical point (σc = 0.98, central plot) and in the asynchronous phase (σ = 1.1, right plot). (d) Distributions of event sizes in the three phases for the same representative values of σ as above. (e) Distributions of event durations in the three phases for the same parameter values. Let us underline the lack of power-law distributions at criticality, i.e. when the system undergoes a collective Hopf bifurcation. Parameter values: ω = 1 and J = 1.

Appendix C: Bifurcations in the single unit of the Landau-Ginzburg model

The Landau-Ginzburg model [37] was the first to propose that scale-free avalanches occur at the edge of a synchronization phase transition. It relies on a model for the dynamics of individual mesoscopic regions of the cortex. Each such region (or "unit") is characterized by two dynamical variables: its level of (mesoscopic) neuronal activity ρ(t) and the amount of available synaptic resources R(t). The dynamics (at a deterministic level, i.e. excluding fluctuations) is described by

\begin{cases} \dot{\rho}(t) = \left(R(t) - a\right)\rho + b\rho^2 - \rho^3 + h \\ \dot{R}(t) = \frac{1}{\tau_R}\left(\xi - R\right) - \frac{1}{\tau_D} R\,\rho \end{cases} \qquad (C1)

where a, b > 0 are constants, h is a very small external driving (that can be set to 0 in the absence of external stimuli), η(t) is a zero-mean Gaussian white noise (entering the stochastic version of the model), and ξ is the maximum amount of available synaptic resources, which serves as a control parameter regulating the system state. In the second equation, τR and τD represent the time scales of recovery and depletion of synaptic resources, respectively.

Observe that the equation for the activity ρ –assuming the amount of synaptic resources R is fixed– is the minimal form of a first-order phase transition with hysteresis (or saddle-node bifurcation). It displays a quiescent (or "down") state ρ = 0 when R ≤ a, and an active or "up" state for R > a. On the other hand, the second equation accounts for the dynamics of the level of synaptic resources and includes a slow charge/recovery term (dominating when activity is low) and a fast discharge/consumption term, which dominates the dynamics in the presence of activity, ρ ≠ 0.

FIG. 7. Nullclines of the single mesoscopic unit of the Landau-Ginzburg model, Eq. (C1). Characteristic flow diagrams and nullclines of the Landau-Ginzburg mesoscopic unit for three different values of ξ. The nullclines for the activity ρ (black lines) can display two types of solutions depending on the value of the available resources R: up (ρ ≠ 0 for large values of R), down (ρ = 0 for small values of R), and bistability between these two for intermediate values of R. Black dashed lines represent unstable fixed points ρ*. The Ṙ = 0 nullclines are represented by red dashed lines. (a) The only stable fixed point for low values of the control parameter ξ is the absorbing state ρ* = 0 (black point). (b) When ξ is increased, the R-nullcline intersects the unstable branch of the ρ-nullcline, giving rise to a limit cycle (red solid line). (c) When ξ is large enough, the up-state fixed point becomes the only stable solution.

A simple analysis of Eqs.(C1) shows that the system behavior depends on the value of the maximum allowed synaptic resources, ξ (see Figure 8). If ξ < a, the only fixed point is a quiescent state of low activity. For larger values of ξ, the two nullclines of Eq.(C1) intersect at an unstable fixed point, giving rise to a limit cycle, i.e. to relaxation oscillations in which both ρ(t) and R(t) oscillate. Finally, if ξ is large enough, an up state (a fixed point with non-vanishing activity) emerges. For more details we refer to [37] and [89].
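This behavior can be checked directly by integrating Eqs. (C1); the following minimal Euler sketch (our own illustration) uses the parameter values quoted in the caption of Fig. 8, except for a tiny h > 0 that we assume here purely for numerical convenience:

```python
import numpy as np

def integrate_lg_unit(xi, a=1.0, b=1.5, tau_R=1.0e3, tau_D=1.0e2, h=1.0e-3,
                      dt=0.05, steps=400_000, rho0=0.01, R0=0.0):
    """Plain Euler integration of the deterministic single-unit Eqs. (C1).

    Parameters follow the caption of Fig. 8, except for the tiny h > 0 assumed here
    so that the quiescent state can be left numerically. Returns the rho(t) series.
    """
    rho, R = rho0, R0
    trace = np.empty(steps)
    for t in range(steps):
        drho = (R - a) * rho + b * rho**2 - rho**3 + h
        dR = (xi - R) / tau_R - R * rho / tau_D
        rho, R = rho + drho * dt, R + dR * dt
        trace[t] = rho
    return trace

# Below xi ~ a the activity stays quiescent; above it, rho(t) shows relaxation
# oscillations (spikes) whose period grows as xi approaches the bifurcation.
for xi in (0.9, 1.1, 1.5):
    rho = integrate_lg_unit(xi)
    print(f"xi = {xi}: max rho in the second half = {rho[len(rho) // 2:].max():.3f}")
```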

Figure 8 illustrates the bifurcation diagram of Eq.(C1) as the control parameter ξ is varied. In agreement with what was just described, values ξ < a lead to a "down" steady state with vanishing activity and non-depleted synaptic resources, i.e. R = ξ. At ξc = a there is an infinite-period homoclinic bifurcation into a limit cycle. To determine whether this bifurcation is homoclinic or rather a saddle-node-into-invariant-circle (SNIC) one, we have explicitly measured the average period between oscillations and plotted it against the control parameter. The result displays a logarithmic decay of the period (see Figure 8b), a typical feature of homoclinic bifurcations [62]. Finally, as the control parameter is further increased, one encounters another homoclinic bifurcation at which the limit cycle disappears, giving rise to an "up" fixed point.

FIG. 8. Bifurcations of the single-unit dynamics of the Landau-Ginzburg model. Stable fixed points of Eq.(C1) are represented as continuous lines, while unstable ones correspond to dashed lines. Parameters: a = 1.0, b = 1.5, τR = 10^3, τD = 10^2, h = 0. (a) As the control parameter ξ is increased, the down-state fixed point loses its stability via a homoclinic bifurcation. (b) Period of oscillations as a function of the control parameter ξ. The set of equations (C1) was integrated for a fixed long time to compute the period as the average time between spikes (jumps to the up branch). Points marked in red indicate that the number of spikes used to compute the average period was relatively low (because of costly statistics). Note that as ξ → ξc, longer simulations are required (increasing the integration error). The data are much better fitted by a logarithm (characteristic of homoclinic bifurcations) than by a square root (characteristic of SNIC bifurcations) [62]. All numerical solutions were found using Wolfram Mathematica.

Appendix D: A note on different types of excitability and bifurcations

A system is defined as excitable when it presents a single, stable equilibrium, but a sufficiently strong input can drive the system into a large excursion in phase space before it returns to the stable fixed point [59, 60, 96]. Many physical and biological systems exhibit excitability [60]. Excitability is a concept of particular importance in the context of neuroscience, where neurons are at rest and a super-threshold signal is able to evoke a significant response (e.g., a spike), with the neuron returning at the end to the resting state. Different types of neurons may respond differently to the same input, leading to the so-called "excitability classes" [59, 96]. The most usual classes are excitability classes I and II [59]. Type-I excitability is characterized by a continuous growth of the spiking rate when the input current is continuously increased, while type-II excitability involves a sudden jump in the spiking rate under the same circumstances. In bifurcation theory, type-I excitability corresponds to situations where the corresponding limit cycle appears with a vanishing frequency, i.e. infinite-period bifurcations, while in type-II excitability limit cycles emerge with a finite, non-vanishing frequency [59].

Two representative examples of bifurcations corresponding to classes I and II are the SNIC and the Hopf bifurcation, respectively, as sketched in Figure 9. The homoclinic bifurcation is common in neuronal models [78] and belongs to class I of neuronal excitability, as does the SNIC bifurcation; the two are differentiated only by the critical exponents of the firing rate. A comprehensive summary of the relationship between excitability classes and bifurcations can be found, for example, in [97].

FIG. 9. Excitability and bifurcations. Side-by-side comparative sketch of infinite-period (SNIC) and Hopf bifurcations, which are representative examples of type-I and type-II excitability, respectively. Note that the firing rate is directly related to the frequency of the oscillations: vanishingly small firing rates correspond to vanishingly small oscillation frequencies (i.e., arbitrarily long periods).

It should be noted that the mean-field phase diagram described in the main text includes these two main types of excitability near the bistability region, where the system behaves as collectively excitable. The bistability region is surrounded by other types of bifurcations, such as saddle-nodes, and by codimension-2 bifurcations, where lines of standard (codimension-1) bifurcations intersect. Codimension-2 points, which in our case include Bogdanov-Takens, saddle-node-loop, and cusp bifurcations, can display richer dynamics [59, 77–79].

We have classified synchronization transitions using a nomenclature that resembles that of excitability classes. In type-I synchronization, oscillations emerge at the transition point with zero frequency (infinite period) and finite amplitude, while in type-II synchronization, oscillations are born with a fixed non-vanishing frequency. The phenomenology becomes richer at "hybrid-type" synchronization transitions, where codimension-2 bifurcations and bistability are present.

Appendix E: Mathematical analysis of globally coupled oscillators

To assess the collective behavior of the system of coupled oscillators, here we reproduce in detail the derivation of the equations for the order parameters and discuss different closure methods. Most of these calculations can be found in the literature, but we reproduce them here in a self-contained way for the sake of clarity. Our model is described by the set of stochastic equations

\dot{\varphi}_j = \omega + a \sin\varphi_j + \frac{J}{M_j} \sum_{i \in \mathrm{n.n.}(j)} \sin\!\left(\varphi_i - \varphi_j\right) + \sigma \eta_j(t). \qquad (E1)

It is convenient to remove one parameter by fixing, e.g., ω = 1. Here, we also set J = 1, leaving a and σ as the only free parameters. For completeness, we verify a posteriori that the results are robust to changes in J (Appendix F).

1. Order-parameter equations

Equation (E1) is very similar to the previously discussed annealed Kuramoto model, except for the additional term a sin(ϕ), which induces an inhomogeneity in the angular velocity of each oscillator across the unit circle. As discussed in Appendix G (in particular, in Section G 1), such an inhomogeneity makes the Kuramoto parameter inadequate to characterize the phase diagram of the present model since, for a ≠ 0, there is a particular phase value around which each oscillator tends to spend most of the time. For uncoupled oscillators, this occurs at ϕ = ±arcsin(ω/a), where the angular velocity is minimal; this heterogeneity in angular velocity leads to a non-vanishing value of the Kuramoto order parameter even when the oscillators are uncoupled or, more generally, when they are asynchronous.

In order to circumvent this problem analytically, it is possible to consider the hierarchy of higher-order moments of the variable e^{iϕ}, i.e., the so-called Kuramoto-Daido parameters, Zk:

Z_k = \langle e^{ik\varphi} \rangle \equiv \frac{1}{N} \sum_{j=1}^{N} e^{ik\varphi_j} \equiv R_k e^{i\psi_k}, \qquad (E2)

where k = 1, 2, . . . , ∞, and of which the Kuramoto order parameter Z1 is a particular case. For convenience, we will use either the complex-valued Kuramoto-Daido parameters Zk or the equivalent notation in terms of amplitudes and phases (Rk and ψk). Using standard trigonometric relations, Eq. (E1) can be rewritten as a function of Z1(t) = R1(t) e^{iψ1(t)}, leading to the following set of Langevin equations,

\dot{\varphi}_j(t) = \omega + a \sin\varphi_j(t) + J R_1(t)\, \sin\!\left(\psi_1(t) - \varphi_j(t)\right) + \sigma \eta_j(t), \qquad (E3)

where the mean-field nature of the coupling is evident. In order to solve these equations, we employ a standard procedure to deal with coupled oscillators [50, 64, 67, 98, 99]. The first step is to consider a large number of oscillators, N → +∞, so that the system can be described in the continuum limit using the probability density to find an oscillator around any given phase value ϕ, i.e. P(ϕ)dϕ. The following Fokker-Planck equation gives the evolution of such a probability density,

\partial_t P(\varphi,t) = \frac{\sigma^2}{2}\, \partial_\varphi^2 P(\varphi,t) - \partial_\varphi\!\left[\left(\omega + a\sin\varphi + J\,\frac{Z_1 e^{-i\varphi} - \mathrm{c.c.}}{2i}\right) P(\varphi,t)\right] \qquad (E4)

where the identity R1 sin(ψ1 − ϕ) = (Z1 e^{−iϕ} − Z̄1 e^{iϕ})/(2i) has been used to simplify the forthcoming algebra. As the density P(ϕ, t) is periodic in the angle variable, it can be expanded in a Fourier series:

P(\varphi,t) = \frac{1}{2\pi} \sum_{k=-\infty}^{+\infty} p_k(t)\, e^{ik\varphi}, \qquad (E5)

with pk = p̄−k, where the bar stands for complex conjugation. It turns out that the Kuramoto-Daido parameters coincide with these coefficients:

Z_k = \int_0^{2\pi} P(\varphi,t)\, e^{ik\varphi}\, d\varphi = p_{-k}. \qquad (E6)

Plugging the series expansion (E5) into the Fokker-Planck equation, one obtains an infinite set of differential equations, one for each of the Kuramoto-Daido parameters Zk (observe that, as Z−k = Z̄k, it suffices to analyze the order parameters with k ≥ 0). To obtain the differential equation for each Zk, note that, after performing the derivatives and doing some algebra, all terms can be written as (2π)^{-1} Σ_k f(Zk, Zk+1, Zk−1, . . .) e^{ikϕ} for some function f. Then, since the exponentials e^{ikϕ} are the Fourier-basis elements, we can identify the coefficients mode by mode, leading to an equation for the evolution of each Kuramoto-Daido order parameter [67, 68]. The resulting set of equations,

\dot{Z}_k = Z_k\left(i\omega k - \frac{k^2\sigma^2}{2}\right) + \frac{ak}{2}\left(Z_{k+1} - Z_{k-1}\right) + \frac{Jk}{2}\left(Z_1 Z_{k-1} - \bar{Z}_1 Z_{k+1}\right), \qquad (E7)

constitutes an exact description of the system. As reported below, we solved Eq.(E7) including up to k = 50 harmonics (i.e. imposing Z_{51} = 0) and monitored the order parameters (see Section G 1). We found an excellent agreement with direct simulations, including a small bistable phase [67, 98], as shown in Figure 2C of the main text. However, even if the bistable phase exists in the thermodynamic limit, its exact location is affected by finite-size effects in direct simulations.
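As a sketch of this procedure (our own illustration, with indicative parameter values; not the authors' code), the truncated hierarchy can be integrated directly, e.g.:

```python
import numpy as np

def integrate_hierarchy(a, sigma, J=1.0, omega=1.0, K=50,
                        dt=5e-4, steps=400_000, Z_init=0.5):
    """Euler integration of the truncated hierarchy Eq. (E7), imposing Z_{K+1} = 0.

    Z[0] is Z_0 = <1> = 1 and never changes; Z[1..K] evolve according to Eq. (E7).
    Returns the final values of the Kuramoto-Daido parameters Z_1..Z_K.
    """
    Z = np.zeros(K + 2, dtype=complex)
    Z[0] = 1.0
    Z[1:K + 1] = Z_init ** np.arange(1, K + 1)       # OA-like initial condition Z_k = Z^k
    k = np.arange(1, K + 1)
    for _ in range(steps):
        dZ = (Z[1:K + 1] * (1j * omega * k - 0.5 * k**2 * sigma**2)
              + 0.5 * a * k * (Z[2:K + 2] - Z[0:K])
              + 0.5 * J * k * (Z[1] * Z[0:K] - np.conj(Z[1]) * Z[2:K + 2]))
        Z[1:K + 1] += dZ * dt                        # Z[K+1] stays 0 (the closure)
    return Z[1:K + 1]

# Scanning (a, sigma) and checking whether Z_1 settles onto a fixed complex value or
# keeps rotating reproduces the structure of the phase diagram (illustrative use only).
Zk = integrate_hierarchy(a=1.0, sigma=0.9)
print("|Z_1| =", abs(Zk[0]))
```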

In order to proceed analytically, given that all equations are coupled, one needs to find a sound dimensional reduction or "closure" to truncate the infinite hierarchy [99, 100]. Different closures have been proposed in the literature on coupled oscillators. In what follows, we discuss three of them.

2. Approximate solutions or closures

In deterministic, noise-free systems, an exact solution of equation (E7) is provided by the Ott-Antonsen ansatz [69], which consists in writing Zk(t) = [Z(t)]^k, so that the first moment already contains all the relevant information. On the other hand, the situation is more complicated for stochastic systems, for which the Ott-Antonsen ansatz does not provide an exact solution [68, 99]. In particular, using the Ott-Antonsen ansatz and writing Z(t) = R(t) e^{iψ(t)} leads to the system of equations

\dot{R} = \frac{1}{2} R\left[J(1-R^2) - \sigma^2\right] - \frac{a}{2}\,(1-R^2)\cos\psi,
\dot{\psi} = \omega + \frac{a(1+R^2)\sin\psi}{2R}.    (E8)

Remarkably, these equations are the same as those obtained by Childs and Strogatz for deterministic oscillators with heterogeneous (quenched) frequencies distributed as a Lorentzian [70]. However, only the deterministic system is exactly solved by the Ott-Antonsen ansatz; in the stochastic case it gives an approximation of order O(σ²) [68].
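As an illustration, the reduced system (E8) is straightforward to integrate numerically. The sketch below is a minimal example, assuming illustrative parameter values and a simple forward-Euler scheme; the clipping of R away from zero is only a numerical convenience for the 1/R term.

import numpy as np

# Minimal sketch: forward-Euler integration of the Ott-Antonsen reduced
# equations (E8). Parameter values and the initial condition are illustrative.
def integrate_oa(omega=1.0, a=1.0, J=1.0, sigma=0.5,
                 R0=0.5, psi0=0.0, dt=1e-3, t_max=500.0):
    R, psi = R0, psi0
    for _ in range(int(t_max / dt)):
        dR = 0.5 * R * (J * (1.0 - R**2) - sigma**2) \
             - 0.5 * a * (1.0 - R**2) * np.cos(psi)
        dpsi = omega + a * (1.0 + R**2) * np.sin(psi) / (2.0 * R)
        R, psi = R + dt * dR, psi + dt * dpsi
        R = min(max(R, 1e-6), 1.0)   # keep R in (0, 1] to avoid 1/R blow-ups
    return R, psi

if __name__ == "__main__":
    for a in (0.5, 1.0, 1.5):
        print("a =", a, "-> final R =", round(integrate_oa(a=a)[0], 3))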

In order to increase the precision of the closure, one can assume that the global phase is distributed as a (wrapped) Gaussian with some mean ψ(t) and variance ∆(t) [67]:

P(\phi, t) = \frac{1}{\sqrt{2\pi\Delta}} \sum_{k=-\infty}^{+\infty} \exp\!\left[-\frac{(\psi - \phi + 2\pi k)^2}{2\Delta}\right].    (E9)

The Kuramoto-Daido parameters can be explicitly computed via direct integration,

Z_k(t) = \int d\phi\, P(\phi, t)\, e^{ik\phi} = e^{-\frac{1}{2} k^2 \Delta(t)}\, e^{ik\psi(t)}.    (E10)

Plugging this ansatz into Eq.(E7) leads to

\dot{\psi}(t) = \omega + a\, e^{-\Delta/2} \cosh\Delta\, \sin\psi,    (E11)
\dot{\Delta}(t) = \sigma^2 + 2\sinh\Delta\left(a\, e^{-\Delta/2}\cos\psi - J e^{-\Delta}\right).    (E12)

Observe that Eq. (E10) allows one to write Z_k as a function of only the first mode [68], Z, giving a functional form very similar to the Ott-Antonsen ansatz:

Z_k = |Z|^{k^2 - k}\, Z^k.    (E13)

Let us remark that the Ott-Antonsen ansatz is equivalent to the assumption of a Lorentzian distribution for the angles: changing Eq. (E9) to a Lorentzian distribution and following the same procedure, one recovers the Ott-Antonsen ansatz. The wrapped-Gaussian approximation is slightly superior to the Ott-Antonsen one but, as we will see shortly, it is not good enough to generate a precise phase diagram (see Figure 11 below for a comparison).
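A quick numerical consistency check of Eq. (E13) can be done directly from Eq. (E10); the values of Δ and ψ below are arbitrary, illustrative choices.

import numpy as np

# Minimal check of Eq. (E13): under the wrapped-Gaussian ansatz (E9)-(E10),
# every Kuramoto-Daido parameter follows from the first one, Z = e^{-Delta/2} e^{i psi}.
Delta, psi = 0.3, 1.1          # illustrative variance and mean phase
Z = np.exp(-Delta / 2) * np.exp(1j * psi)
for k in range(1, 6):
    direct = np.exp(-0.5 * k**2 * Delta) * np.exp(1j * k * psi)   # Eq. (E10)
    from_Z = abs(Z)**(k**2 - k) * Z**k                            # Eq. (E13)
    print(k, np.allclose(direct, from_Z))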

A way to go beyond the Ott-Antonsen and the Gaussian ansatzes is to consider more harmonics in the expansion. Tyulkina et al. [68, 99] proposed to use the circular cumulants of the order parameters to generate a better closure. The advantage of the cumulant expansion is that all cumulants except χ_1 = Z vanish when the Ott-Antonsen ansatz is selected, so allowing additional non-zero cumulants gives rise to systematic corrections to the Ott-Antonsen solution, order by order in such an expansion. In particular, the first three cumulants are given by χ_1 = Z, χ_2 = Z_2 − Z^2, and χ_3 = (Z_3 − 3 Z Z_2 + 2 Z^3)/2, where Z ≡ Z_1. Selecting Z and χ_2 to be different from 0 but fixing χ_3 = 0 gives Z_3 = Z^3 + 3 Z χ_2, effectively closing the infinite system [101]. Substituting this last expression into the equations for the first two harmonics, the resulting system reads:

\dot{Z} = \frac{1}{2}\left[\left(J - \sigma^2 + 2i\omega\right) Z + a\left(Z^2 - 1 + \chi_2\right) - J\left(Z|Z|^2 + \chi_2 \bar{Z}\right)\right],
\dot{\chi}_2 = 2\chi_2\left(i\omega + a Z\right) - \sigma^2\left(2\chi_2 + Z^2\right) - 2 J \chi_2 |Z|^2.    (E14)

Note that imposing χ_2 = 0 leads back to the Ott-Antonsen ansatz, as expected. This closure provides a noticeable quantitative improvement on the location of the Hopf and SNIC bifurcations, which coincide very well with the results of numerical integrations. However, simple parameter inspection did not reveal any bistability using the reduction by Tyulkina et al. Given the small size of the bistable region in the phase diagram, finding the bistability probably requires a more systematic study, such as the one we performed by solving the system with k = 50 harmonics, where bistability is clear, as shown in Fig. 2. Indeed, the loss of bistability in analytical approaches relying on closures is a well-known problem in stochastic processes [100].
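For completeness, the two-cumulant system (E14) can be integrated just as easily as the Ott-Antonsen one. The sketch below uses a forward-Euler scheme with illustrative parameters; freezing χ_2 at zero recovers the Ott-Antonsen dynamics.

import numpy as np

# Minimal sketch: forward-Euler integration of the two-cumulant closure (E14).
# Setting chi2 = 0 and freezing it recovers the Ott-Antonsen dynamics.
def integrate_cumulants(omega=1.0, a=1.0, J=1.0, sigma=0.5,
                        Z0=0.5 + 0.0j, chi20=0.0 + 0.0j,
                        dt=1e-3, t_max=500.0):
    Z, chi2 = Z0, chi20
    for _ in range(int(t_max / dt)):
        dZ = 0.5 * ((J - sigma**2 + 2j * omega) * Z
                    + a * (Z**2 - 1.0 + chi2)
                    - J * (Z * abs(Z)**2 + chi2 * np.conj(Z)))
        dchi2 = (2.0 * chi2 * (1j * omega + a * Z)
                 - sigma**2 * (2.0 * chi2 + Z**2)
                 - 2.0 * J * chi2 * abs(Z)**2)
        Z, chi2 = Z + dt * dZ, chi2 + dt * dchi2
    return Z, chi2

if __name__ == "__main__":
    Z, chi2 = integrate_cumulants()
    print("R =", round(abs(Z), 3), "|chi2| =", round(abs(chi2), 3))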

Appendix F: Bifurcation analysis of the Ott-Antonsen equations

To gain analytical insight into the structure and topological organization of the phase diagram, here we analyze the bifurcation diagram of the approximation provided by the simpler Ott-Antonsen closure, Eqs. (E8), i.e.:

\dot{R} = \frac{1}{2} R\left[J(1-R^2) - \sigma^2\right] - \frac{a}{2}\,(1-R^2)\cos\psi,
\dot{\psi} = \omega + \frac{a(1+R^2)\sin\psi}{2R}.    (F1)

Observe first that, for a fixed value of R, the equation for the collective phase ψ is the normal form of a saddle-node on an invariant circle (SNIC) bifurcation. On the other hand, the equation for R is almost the same as in the annealed Kuramoto model, but with an added perturbation proportional to the "excitability parameter" a. The above set of equations is difficult to study analytically, but its bifurcations can be obtained following the same procedure as Childs and Strogatz, who studied this system for a fixed value of σ = √2 [70].

The main idea is as follows: in the annealed Kuramoto limit (a = 0) the system undergoes a Hopf bifurcation; on the other hand, individual (uncoupled) oscillators, i.e., for J = 0, exhibit a SNIC bifurcation. Hence, by continuity of solutions, we expect two branches of these two types of bifurcations to be present in Eq. (E8).

Calling Q the Jacobian at a fixed point, at a Hopf bifurcation tr Q = 0, while at a saddle-node det Q = 0 [62]. Thus, imposing one of these conditions, together with the fixed-point equations \dot{R} = \dot{\psi} = 0, leads to a set of equations for the parameters of the system as functions of the fixed-point values R* and ψ*. Since such values are bounded, one can use these equations as parametric equations of the bifurcation curves, without computing explicitly the values of the fixed points and their stability.
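These conditions are easy to inspect numerically. The following sketch (illustrative parameter values; not the analytical route followed below) locates a fixed point of Eq. (F1) with a root finder and evaluates the trace and determinant of its Jacobian by finite differences.

import numpy as np
from scipy.optimize import fsolve

# Minimal sketch: locate a fixed point of the Ott-Antonsen equations (F1)
# and evaluate tr Q and det Q of its Jacobian by central finite differences.
# Parameter values and the initial guess are illustrative.
omega, J, a, sigma = 1.0, 1.0, 0.5, 0.5

def flow(x):
    R, psi = x
    dR = 0.5 * R * (J * (1 - R**2) - sigma**2) - 0.5 * a * (1 - R**2) * np.cos(psi)
    dpsi = omega + a * (1 + R**2) * np.sin(psi) / (2 * R)
    return np.array([dR, dpsi])

def jacobian(x, eps=1e-6):
    Q = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = eps
        Q[:, j] = (flow(x + dx) - flow(x - dx)) / (2 * eps)
    return Q

x_star = fsolve(flow, x0=np.array([0.3, -1.2]))
Q = jacobian(x_star)
print("fixed point (R*, psi*):", x_star)
print("tr Q =", np.trace(Q), " det Q =", np.linalg.det(Q))
# tr Q = 0 (with det Q > 0) signals a Hopf bifurcation; det Q = 0 a saddle-node.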

Let us start with the Hopf bifurcation. Between parameters and fixed points, there are 6 unknowns: R*, ψ*, ω, a, J and σ. Remember that in the simulations we fixed ω = J = 1 to leave a and σ as the only free parameters. In what follows, we obtain equations for the bifurcations, written in parametric form, a = a(R*, ψ*, ω, J) and σ = σ(R*, ψ*, ω, J). After some algebra, it turns out that not all the dependences are necessary. Solving \dot{R} = \dot{\psi} = tr Q = 0, there are 3 remaining parameters. We could choose any of the parameters to solve for, but it turns out (as shown in [70]) that solving for R, cos ψ and sin ψ is highly convenient. Note that these amount to only two independent unknowns, since the sine and cosine are not independent functions, as sin²ψ + cos²ψ = 1. Of course, one could have tried to solve directly for ψ and a, but it is easier to obtain expressions for the trigonometric functions and then extract a:

a_H = \sqrt{\frac{J - \sigma^2}{J + \sigma^2}}\; \frac{\sqrt{4\omega^2 (J + \sigma^2)^2 + J^2 (J - \sigma^2)^2}}{2J}.    (F2)

Note that, in this case, we obtained a parametric curve for the Hopf bifurcation, a_H = a_H(ω, J, σ), without requiring specific knowledge of the location of the fixed points. In particular, as J is kept fixed, it is possible to derive a curve a_H = a_H(ω, σ).

Regarding saddle-node bifurcations, the calculation is a bit more involved, since solving for the same 3 variables gives high-degree polynomials in R that cannot be solved explicitly. For this reason, we choose to solve for ω, cos ψ and sin ψ, since in this way the parameters do not depend on ψ. After solving and simplifying the resulting equations, one obtains:

\omega_S = \frac{(1+R^2)^{3/2}}{2(1-R^2)^2}\, \sqrt{J(1-R^2)\left(2\sigma^2 - J(1-R^2)^2\right) - \sigma^4 (1+R^2)},    (F3)

a_S = \frac{\sqrt{2}\, R^2}{(1-R^2)^2}\, \sqrt{\left(J(1-R^2) - \sigma^2\right)\left(2\sigma^2 - J(1-R^2)^2\right)}.    (F4)

Since 0 ≤ R ≤ 1, and one needs to explore all the possible fixed points, there are two free parameters to choose. Given that J is kept fixed in our simulations, we dismiss it and obtain ω and a as functions of σ. The saddle-node lines enclose a "collectively excitable" phase, which is type-I excitable. This can be easily seen by realizing, as discussed above, that the phase dynamics is again of the form \dot{\psi} = ω + c sin ψ for a fixed Kuramoto order parameter R, with the angular speed taking the role of the external input usually employed to classify excitability classes.
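A minimal numerical sketch of how these parametric expressions can be turned into bifurcation lines in the (a, σ) plane is given below; the grids, tolerance and parameter values are illustrative assumptions, and a brute-force scan is used for the saddle-node branches instead of a proper continuation method.

import numpy as np

# Minimal sketch: evaluate the parametric bifurcation curves (F2)-(F4) in the
# (sigma, a) plane for fixed omega = J = 1, as in Figure 10. Illustrative only.
omega, J = 1.0, 1.0

def a_hopf(sigma):
    # Hopf branch, Eq. (F2); real only for sigma^2 < J.
    return (np.sqrt((J - sigma**2) / (J + sigma**2))
            * np.sqrt(4 * omega**2 * (J + sigma**2)**2
                      + J**2 * (J - sigma**2)**2) / (2 * J))

def saddle_node_points(sigmas, Rs, tol=5e-3):
    # Saddle-node branches, Eqs. (F3)-(F4): scan (R, sigma) and keep the points
    # whose parametric omega_S matches the chosen omega (grid/tol are illustrative).
    pts = []
    for s in sigmas:
        for R in Rs:
            arg_w = J * (1 - R**2) * (2 * s**2 - J * (1 - R**2)**2) - s**4 * (1 + R**2)
            arg_a = (J * (1 - R**2) - s**2) * (2 * s**2 - J * (1 - R**2)**2)
            if arg_w < 0 or arg_a < 0:
                continue
            w_S = (1 + R**2)**1.5 / (2 * (1 - R**2)**2) * np.sqrt(arg_w)
            a_S = np.sqrt(2) * R**2 / (1 - R**2)**2 * np.sqrt(arg_a)
            if abs(w_S - omega) < tol:
                pts.append((s, a_S))
    return np.array(pts)

sig = np.linspace(0.3, 0.99, 200)
print("Hopf branch samples:", list(zip(np.round(sig[::50], 2), np.round(a_hopf(sig[::50]), 3))))
sn = saddle_node_points(sig, np.linspace(0.01, 0.99, 400))
print("number of saddle-node points found:", len(sn))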

[Figure 10: nine zoomed panels in the (a, σ) plane, one for each coupling J = 0.500, 0.688, 0.875, 1.062, 1.250, 1.438, 1.625, 1.812, 2.000.]

FIG. 10. Bifurcations in the Ott-Antonsen approximation. Representation of the solutions of equations (F2) and (F4) for different values of the coupling constant J. Blue lines describe branches of Hopf bifurcations, while red lines correspond to saddle-node bifurcations. All the graphs, for different values of J, are zooms made to underline the existence of a region surrounded by the Hopf bifurcation (blue line) on one side and saddle-node bifurcations (red lines) on the others. Such a region is crucial, as it describes a regime of bistability: all the selected values of J display a small bistable region, whose size decreases with J.

The manifolds of Hopf and SNIC bifurcations could be drawn together in a three-dimensional space, using ω, a and σ as coordinates. An alternative, easier way to visualize them is to make projections onto the (a, σ) two-dimensional space for different values of J (see Figure 10). Since we set ω = 1, and in each such projection J is fixed, the Hopf bifurcation is obtained as a_H = a_H(σ), while the saddle-node lines are obtained by solving Eqs. (F3)-(F4) as a parametric curve depending on R and σ. Figure 10 shows that there is a bistability region delimited by a Hopf line and two saddle-node lines for different values of J. This region decreases in size as the coupling decreases, until it disappears for sufficiently low values of J.

Appendix G: Computational analyses and results

1. Phase diagram of the full model

As we have seen, deriving all the phases and the transitions between them analytically is a difficult task, and thus one needs to resort to computational analyses. First of all, an adequate order parameter needs to be defined. As discussed above, the usual Kuramoto order parameter, R = |Z|, is not a good choice for inhomogeneous oscillators, because |⟨Z⟩_t| ≠ 0 even when they are uncoupled or asynchronous, due to the different amount of time they spend at diverse phase values.

We employed the so-called Shinomoto-Kuramoto (SK) parameter, which solves this problem, being able to discriminate between synchronous and asynchronous regimes [64]. In particular, we consider the –computationally more efficient– variant of such a parameter employed by Lima and Copelli [40]:

S = \sqrt{\langle |Z|^2 \rangle_t - |\langle Z \rangle_t|^2},    (G1)

to distinguish the synchronized phase (S ≠ 0) from asynchronous or excitable states (S = 0).
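A minimal sketch of how S can be measured in practice is given below: an Euler-Maruyama integration of the mean-field Langevin dynamics (E3) followed by Eq. (G1). The network size, time step, transient and parameter values are illustrative assumptions, not the exact choices used for the figures.

import numpy as np

# Minimal sketch: Euler-Maruyama simulation of the coupled active rotators
# (Eq. (E3)) on a fully connected network, followed by the Shinomoto-Kuramoto
# parameter of Eq. (G1). Parameter values are illustrative.
def shinomoto_kuramoto(omega=1.0, a=1.0, J=1.0, sigma=0.5, N=1000,
                       dt=1e-2, t_max=200.0, t_transient=50.0, seed=0):
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0, 2 * np.pi, N)
    Z_samples = []
    for step in range(int(t_max / dt)):
        Z = np.mean(np.exp(1j * phi))                 # Kuramoto-Daido Z_1
        # J * R_1 * sin(psi_1 - phi_j) written as Im(Z e^{-i phi_j})
        drift = omega + a * np.sin(phi) + J * np.imag(Z * np.exp(-1j * phi))
        phi = phi + dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(N)
        if step * dt > t_transient:
            Z_samples.append(Z)
    Z_samples = np.array(Z_samples)
    # Eq. (G1): time fluctuations of Z around its temporal mean
    return np.sqrt(np.mean(np.abs(Z_samples)**2) - np.abs(Z_samples.mean())**2)

if __name__ == "__main__":
    print("S =", round(shinomoto_kuramoto(), 3))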

The main result of our computational analyses is the phase diagram reported in Figure 2 of the main text. The different phases are identified computationally as follows:

• A non-vanishing value of S characterizes the synchronous region.

• The asynchronous and "collective excitability" regions are both characterized by S = 0. To detect the second regime, one needs to perturb the system and analyze its collective response (or lack thereof).

• The regime of bistability is challenging to study numerically, since in finite networks fluctuations can drive the system to jump between the two coexisting states. To determine this region, the analytical solution was computed by solving for the first 50 terms of the series expansion (E7). It was then computationally verified that, for large enough network sizes, two alternative stable states exist within such a region.

2. Accuracy of different closures

For completeness, we checked the accuracy of the different closures considered by comparing them with the results of computational analyses. In order to do so, the different ansatzes or closures, namely (i) the Ott-Antonsen ansatz (Eqs. (E8)), (ii) the Gaussian closure (Eqs. (E11)-(E12)), and (iii) the equations proposed by Tyulkina et al. (Eqs. (E14)), were solved near the Hopf and SNIC bifurcation branches, and the results –shown in Figure 11– were compared with those of direct simulations of Eq. (E1), as reported above. We would like to remark that, to the best of our knowledge, this is the first time that the accuracy of the different closures in locating different kinds of bifurcations has been formally checked.

[Figure 11: two panels showing S as a function of a and of σ; curves for the Ott-Antonsen, Gaussian, and Tyulkina closures, and direct simulations with N = 10³.]

FIG. 11. Comparison between the results obtained from analytical closures and direct simulations. Shinomoto-Kuramoto parameter S across the Hopf and SNIC bifurcations, as measured both in simulations and in numerical solutions of the different closures (as specified in the legend). Parameters: ω = J = 1; Hopf, a = 0.5; SNIC, σ = 0.275. Direct simulations are performed on a fully connected network of size N = 10³ (see Section G 3 for computational details).

The conclusion is that the ansatz by Tyulkina et al. [99] fits both the Hopf and SNIC branches more accurately than the other ones. The Gaussian ansatz captures very well the phenomenology near the SNIC bifurcation, but not near the Hopf one. On the other hand, the Ott-Antonsen solution does not predict the transition accurately at either of the bifurcations. However, the ansatz by Tyulkina et al. [99] is not able to predict the existence of the bistable phase, while the other two do. Solving the complete system of equations for the Kuramoto-Daido parameters, at least 5 harmonics are needed in order to find the bistability region.

3. Avalanches at different types of bifurcations

Here, we further investigate whether scale-free avalanches emerge when the system is moved away from the bistability region. Results at the different types of bifurcations are reported in Figure 2 of the main text, which displays raster plots computed either at a (type-II) Hopf bifurcation point (upper panels) or at a (type-I) SNIC bifurcation point (lower panels), as well as within the synchronous and asynchronous phases surrounding them. Here, we show results for values slightly below, at, and above the bifurcation. As discussed at length in the main text, scale-free avalanches (with power-law distributed sizes and durations) emerge only in the vicinity of the hybrid type synchronization transition; when the system is moved away from it, scale-free behavior breaks down (see Figure 12), while near the bistability region scale-free distributed avalanches remain (see Figure 13).
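For reference, a minimal sketch of the standard avalanche-extraction procedure is given below: spikes from all units are pooled, binned with a small time bin, and an avalanche is defined as a maximal sequence of consecutive non-empty bins. The bin width and the input format (a flat array of spike times) are illustrative assumptions.

import numpy as np

# Minimal sketch of the usual avalanche definition: an avalanche is a maximal
# run of non-empty time bins; its size is the total spike count and its
# duration the number of bins times dt_bin. Choices here are illustrative.
def avalanches(spike_times, dt_bin):
    t_max = spike_times.max()
    counts, _ = np.histogram(spike_times, bins=np.arange(0.0, t_max + dt_bin, dt_bin))
    sizes, durations, s, d = [], [], 0, 0
    for c in counts:
        if c > 0:
            s += c
            d += 1
        elif d > 0:                      # an empty bin closes the avalanche
            sizes.append(s)
            durations.append(d * dt_bin)
            s, d = 0, 0
    if d > 0:
        sizes.append(s); durations.append(d * dt_bin)
    return np.array(sizes), np.array(durations)

# Example with synthetic, uniformly scattered spike times (illustrative only):
rng = np.random.default_rng(1)
times = np.sort(rng.uniform(0, 100.0, 5000))
S_av, D_av = avalanches(times, dt_bin=0.01)
print(len(S_av), "avalanches; mean size", S_av.mean())

The quality of power-law fits to the resulting size and duration distributions can then be assessed, e.g., with the powerlaw package of Ref. [72].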

FIG. 12. Avalanches at and away from the hybrid type synchronization transition. Avalanche-size and avalanche-duration distributions for a network of size N = 5000 evaluated at the hybrid type synchronization transition (a = 1.07, σ_c = 0.496) and at two other nearby points, slightly away from it. The figure includes: (a) avalanche-size and avalanche-duration distributions for several values of the noise intensity σ (see legend) with fixed a_c = 1.07, and (b) avalanche-size and avalanche-duration distributions for several values of the excitability a (see legend) keeping σ_c = 0.496 fixed. Power-law behavior is observed only at the hybrid type transition.

4. Dynamical variability

As a complementary measure of complexity at the different transition points, we have also computed the probability distribution of the inter-spike intervals (ISI), along with its associated coefficient of variation (CV),

CV = \langle \sigma_{ISI} / \mu_{ISI} \rangle,    (G2)

where µ_ISI and σ_ISI are the mean and the standard deviation of the inter-spike interval of each oscillator, respectively, and ⟨·⟩ indicates an average over units.

FIG. 13. Avalanches close to the hybrid type synchronization transition. Avalanche-size and avalanche-duration distributions for a network of size N = 5000 evaluated at the hybrid type synchronization transition (a = 1.07, σ_c = 0.496) and at two other nearby points, slightly to the left of it. Power-law behavior is observed only in the bistable region close to the hybrid type synchronization transition.

It is important to distinguish between the inter-spike interval of each single unit and the interval between any two consecutive spikes in the network. When looking at avalanches, the second option is preferred, since it provides a small timescale able to resolve the internal structure of avalanches. To measure the CV, on the other hand, one focuses on the inter-spike intervals of individual oscillators.

For a Poisson process, one expects an exponential distribution of ISIs along with CV = 1; as a rule of thumb, CV > 1 is the fingerprint of irregular spiking activity. Figure 14 shows the probability distribution of the ISIs and the CVs for the different phases and transitions, as determined in the computational analyses. Only two cases exhibit CV > 1: the hybrid type synchronization transition, as well as a small neighborhood around it in the bistable regime; they are also the only two cases characterized by a broad distribution of ISI values. Thus, a high level of variability –similar to that observed in the cortex– is only found in the region around the hybrid type synchronization transition, but not in the neighborhood of either standard Hopf or SNIC bifurcations.
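A minimal sketch of the CV computation of Eq. (G2) is given below; the assumed input format (one array of spike times per oscillator) and the Poisson example are illustrative.

import numpy as np

# Minimal sketch of Eq. (G2): per-unit inter-spike intervals, their coefficient
# of variation, and the average over units. spike_trains is assumed to be a
# list of arrays of spike times, one per oscillator (illustrative format).
def mean_cv(spike_trains):
    cvs = []
    for t in spike_trains:
        isi = np.diff(np.sort(t))
        if len(isi) > 1 and isi.mean() > 0:
            cvs.append(isi.std() / isi.mean())
    return np.mean(cvs)

# Example: Poisson spiking should give CV close to 1 (illustrative check).
rng = np.random.default_rng(2)
trains = [np.cumsum(rng.exponential(1.0, 500)) for _ in range(100)]
print("CV =", round(mean_cv(trains), 2))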

Appendix H: Phase diagram for the model on two-dimensional lattices

Computational analyses in 2D systems reveal a phase diagram very similar to the mean-field one, but with a richer phenomenology (as graphically illustrated in Figure 15). In a nutshell, there are three main phases: a synchronous regime, an asynchronous one, and a collectively excitable phase, much as in the mean-field case. In the asynchronous regime, clusters of activity appear, propagate, and vanish dynamically, with a roughly constant average network activation level (smaller/larger close to the SNIC/Hopf transition, respectively) but with no overall synchronization.


[Figure 14: p(ISI) versus ISI on a logarithmic scale; legend: HT, CV = 1.20; Bistability, CV = 1.21; Hopf, CV = 0.50; Synchronous, CV = 0.16; SNIC, CV = 0.33.]

FIG. 14. ISI distributions and CVs for fully connected networks. Probability distributions of inter-spike intervals (ISI) for different phases and bifurcations, as shown in the legend. The coefficients of variation for each case are also indicated in the legend. Parameter values: synchronous regime, σ = 0.5, a = 0.5; Hopf bifurcation, a = 0.5, σ = 0.92; collectively excitable phase, σ = 0.5, a = 1.12; hybrid type synchronization transition, σ = 0.5, a = 1.07; bistable regime, σ = 0.5, a = 1.072. Network size N = 5000.

On the other hand, within the synchronous phase, activity waxes and wanes: periods of overall quiescence are followed by bursts of overall activity that spread quickly from different foci. Such a collective propagation requires some level of synchronization (let us recall that a perfectly phase-synchronous state does not exist in 2D [98], as rotational symmetry cannot be broken in low-dimensional systems; there are always some "topological defects" preventing the system from exhibiting perfect synchronization, as happens in well-known models of equilibrium statistical mechanics [102] and also in the Kuramoto model [53]). Finally, the excitable state has all units in the "down" regime, with almost no activity, but is susceptible to respond to external perturbations or inputs.

Let us discuss the observed phenomenology at the different bifurcation lines separating these phases:

(i) Near the type-I synchronization transition, at the SNIC bifurcation (slightly within the synchronous phase), oscillating activity appears in the form of traveling waves. Visual inspection reveals, e.g., the presence of spiral patterns typical of two-dimensional excitable systems (see Supplementary Material 1 [71] for videos, and Figure 15, upper row).

(ii) Near the type-II synchronization transition, at the Hopf bifurcation (slightly within the synchronous phase), the overall level of activity oscillates in time (i.e., the system "breathes") and is spatially distributed in fragmented clusters (see Figure 15, central row).

(iii) Near the hybrid type synchronization transition, the dynamical behavior is much more complex: somehow in between the overall oscillatory behavior along the Hopf line and the emergence of wavefronts at the SNIC line (see Figure 15, bottom row).


FIG. 15. Spatio-temporal dynamics on two-dimensional systems. The figure shows three rows, each one with six different frames corresponding to six different times of a running simulation, for the following cases: upper row, near a SNIC bifurcation (a = 1, σ = 0.08); central row, near a Hopf bifurcation (a = 0.5, σ = 0.65); and lower row, near the hybrid type synchronization transition (a = 0.98, σ = 0.185). Blue color indicates lack of activity, while red color stands for maximum activity levels (identified as 1 + sin ϕ_j). Near the SNIC transition, noise fluctuations generate wavefronts that propagate through the system. At the Hopf transition, there is some background activity whose level grows and shrinks periodically. At the hybrid type synchronization transition, the spatio-temporal patterns are more complex, being a mixture of the two aforementioned types; in particular, the system sometimes falls into the excitable state. Simulations performed for N = 64² with periodic boundary conditions. Videos showing the evolution for these cases and the other phases can be found in the Supplementary Material 1 [71].


Thus, even if we are not attempting to quantify spatio-temporal complexity here, it is clear that more complex dynamics emerge around the HT synchronization transition. As shown in Figure 16, the computationally-obtained phase diagram in the case of two-dimensional lattices has a structure qualitatively identical to the mean-field one, including the same phases.
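To illustrate how such lattice simulations can be set up, the sketch below integrates a nearest-neighbour version of the dynamics on an L × L lattice with periodic boundaries. The replacement of the mean-field coupling by an average over the four nearest neighbours, the J/4 normalization, and the parameter values (taken from the hybrid type point of Figure 15) are assumptions made for illustration, not necessarily the exact choices of the paper.

import numpy as np

# Minimal sketch of the lattice version of the dynamics: the mean-field coupling
# of Eq. (E3) is replaced by an average over the four nearest neighbors on an
# L x L lattice with periodic boundaries. Normalization and parameters are
# assumptions for illustration only.
def step_lattice(phi, omega, a, J, sigma, dt, rng):
    coupling = np.zeros_like(phi)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        coupling += np.sin(np.roll(phi, shift, axis=axis) - phi)
    drift = omega + a * np.sin(phi) + (J / 4.0) * coupling
    return phi + dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(phi.shape)

if __name__ == "__main__":
    L, dt = 64, 1e-2
    rng = np.random.default_rng(3)
    phi = rng.uniform(0, 2 * np.pi, (L, L))
    for _ in range(20000):
        phi = step_lattice(phi, omega=1.0, a=0.98, J=1.0, sigma=0.185, dt=dt, rng=rng)
    activity = 1.0 + np.sin(phi)        # same local activity measure as in Fig. 15
    print("mean activity:", round(activity.mean(), 3))

Sweeping a and σ in an outer loop and computing the SK parameter of Eq. (G1) on the resulting time series is then enough to build a phase diagram of the type shown in Figure 16.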

FIG. 16. Phase diagram for a 2D lattice of coupled excitable oscillators, for different values of σ and a, using the Shinomoto-Kuramoto (SK) order parameter (color-coded). Reddish colors stand for asynchronous states, while blueish colors correspond to partially synchronized ones. As in the FC-network case, the system can de-synchronize either through a supercritical Hopf bifurcation or through a SNIC bifurcation. In an intermediate region it is possible to find a hybrid type synchronization transition (yellow star), characterized as the point where power-law distributed avalanches obeying finite-size scaling appear. Symbols describe the coordinates at which the videos of Supplementary Material 1 [71] were recorded: "Hopf" bifurcation (black triangles), "SNIC" bifurcation (green squares), and the HT transition point (white circles and yellow star).

[1] W. R. Softky and C. Koch, The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs, J. Neurosci. 13, 334 (1993).
[2] A. Arieli, A. Sterkin, A. Grinvald, and A. Aertsen, Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses, Science 273, 1868 (1996).

[3] M. Abeles, Corticonics: Neural Circuits of the Cerebral Cortex (Cambridge University Press, Cambridge, 1991).
[4] M. D. Fox and M. E. Raichle, Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging, Nat. Rev. Neurosci. 8, 700 (2007).
[5] J. S. Damoiseaux, S. Rombouts, F. Barkhof, P. Scheltens, C. J. Stam, S. M. Smith, and C. F. Beckmann, Consistent resting-state networks across healthy subjects, Proc. Natl. Acad. Sci. U.S.A. 103, 13848 (2006).
[6] P. E. Latham, B. J. Richmond, P. G. Nelson, and S. Nirenberg, Intrinsic dynamics in neuronal networks. I. Theory, J. Neurophysiol. 83, 808 (2000).
[7] M. Mattia and M. V. Sanchez-Vives, Exploring the spectrum of dynamical regimes and timescales in spontaneous cortical activity, Cogn. Neurodynamics 6, 239 (2012).
[8] Y. Ahmadian and K. D. Miller, What is the dynamical regime of cerebral cortex?, arXiv preprint arXiv:1908.10101 (2019).
[9] G. Deco, V. K. Jirsa, P. A. Robinson, M. Breakspear, and K. Friston, The dynamic brain: from spiking neurons to neural masses and cortical fields, PLoS Comput. Biol. 4 (2008).
[10] M. Breakspear, Dynamic models of large-scale brain activity, Nat. Neurosci. 20, 340 (2017).
[11] G. Buzsaki, Rhythms of the Brain (Oxford Univ. Press, Oxford, 2009).
[12] L. Muller, F. Chavane, J. Reynolds, and T. J. Sejnowski, Cortical travelling waves: mechanisms and computational principles, Nat. Rev. Neurosci. 19, 255 (2018).
[13] A. Buehlmann and G. Deco, Optimal information transfer in the cortex through synchronization, PLoS Comput. Biol. 6 (2010).
[14] A. Renart, J. De La Rocha, P. Bartho, L. Hollender, N. Parga, A. Reyes, and K. D. Harris, The asynchronous state in cortical circuits, Science 327, 587 (2010).
[15] H. Markram, E. Muller, S. Ramaswamy, M. W. Reimann, M. Abdellah, C. A. Sanchez, A. Ailamaki, L. Alonso-Nanclares, N. Antille, S. Arsever, et al., Reconstruction and simulation of neocortical microcircuitry, Cell 163, 456 (2015).
[16] J. Cabral, M. L. Kringelbach, and G. Deco, Functional connectivity dynamically evolves on multiple time scales over a static structural connectome: Models and mechanisms, NeuroImage 160, 84 (2017).
[17] C. Hammond, H. Bergman, and P. Brown, Pathological synchronization in Parkinson's disease: networks, models and treatments, Trends Neurosci. 30, 357 (2007).
[18] I. Dinstein, K. Pierce, L. Eyler, S. Solso, R. Malach, M. Behrmann, and E. Courchesne, Disrupted neural synchronization in toddlers with autism, Neuron 70, 1218 (2011).
[19] E. R. Kandel, J. H. Schwartz, T. M. Jessell, S. A. Siegelbaum, and A. J. Hudspeth, Principles of Neural Science, Vol. 4 (McGraw-Hill, New York, 2000).
[20] J. M. Beggs and D. Plenz, Neuronal avalanches in neocortical circuits, J. Neurosci. 23, 11167 (2003).
[21] T. Petermann, T. C. Thiagarajan, M. A. Lebedev, M. A. Nicolelis, D. R. Chialvo, and D. Plenz, Spontaneous cortical activity in awake monkeys composed of neuronal avalanches, Proc. Natl. Acad. Sci. USA 106, 15921 (2009).
[22] T. Bellay, A. Klaus, S. Seshadri, and D. Plenz, Irregular spiking of pyramidal neurons organizes as scale-invariant neuronal avalanches in the awake state, Elife 4, e07224 (2015).
[23] D. Plenz and E. Niebur, Criticality in Neural Systems (John Wiley & Sons, New York, 2014).
[24] D. R. Chialvo, Emergent complex neural dynamics, Nat. Phys. 6, 744 (2010).
[25] D. Plenz, T. L. Ribeiro, S. R. Miller, P. A. Kells, A. Vakili, and E. L. Capek, Self-organized criticality in the brain, arXiv preprint arXiv:2102.09124 (2021).
[26] M. A. Munoz, R. Dickman, A. Vespignani, and S. Zapperi, Avalanche and spreading exponents in systems with absorbing states, Phys. Rev. E 59, 6175 (1999).
[27] N. Friedman, S. Ito, B. A. W. Brinkman, M. Shimono, R. E. L. DeVille, K. A. Dahmen, J. M. Beggs, and T. C. Butler, Universal critical dynamics in high resolution neuronal avalanche data, Phys. Rev. Lett. 108, 208102 (2012).
[28] S. di Santo, P. Villegas, R. Burioni, and M. A. Munoz, Simple unified view of branching process statistics: Random walks in balanced logarithmic potentials, Phys. Rev. E 95, 032115 (2017).
[29] T. Mora and W. Bialek, Are biological systems poised at criticality?, J. Stat. Phys. 144, 268 (2011).
[30] M. A. Munoz, Colloquium: Criticality and dynamical scaling in living systems, Rev. Mod. Phys. 90, 031001 (2018).
[31] J. Zierenberg, J. Wilting, and V. Priesemann, Homeostatic plasticity and external input shape neural network dynamics, Phys. Rev. X 8, 031018 (2018).
[32] J. Wilting, J. Dehning, J. Pinheiro Neto, L. Rudelt, M. Wibral, J. Zierenberg, and V. Priesemann, Operating in a reverberating regime enables rapid tuning of network states to task requirements, Front. Syst. Neurosci. 12, 55 (2018).
[33] A. Ponce-Alvarez, A. Jouary, M. Privat, G. Deco, and G. Sumbre, Whole-brain neuronal activity displays crackling noise dynamics, Neuron 100, 1446 (2018).
[34] A. J. Fontenele, N. A. de Vasconcelos, T. Feliciano, L. A. Aguiar, C. Soares-Cunha, B. Coimbra, L. Dalla Porta, S. Ribeiro, A. J. Rodrigues, N. Sousa, P. V. Carelli, and M. Copelli, Criticality between cortical states, Phys. Rev. Lett. 122, 208101 (2019).
[35] L. Dalla Porta and M. Copelli, Modeling neuronal avalanches and long-range temporal correlations at the emergence of collective oscillations: Continuously varying exponents mimic M/EEG results, PLoS Comput. Biol. 15, e1006924 (2019).
[36] M. Yaghoubi, T. de Graaf, J. G. Orlandi, F. Girotto, M. A. Colicos, and J. Davidsen, Neuronal avalanche dynamics indicates different universality classes in neuronal cultures, Sci. Rep. 8, 3417 (2018).
[37] S. di Santo, P. Villegas, R. Burioni, and M. A. Munoz, Landau-Ginzburg theory of cortex dynamics: Scale-free avalanches emerge at the edge of synchronization, Proc. Natl. Acad. Sci. U.S.A. 115, E1356 (2018).
[38] S.-S. Poil, R. Hardstone, H. D. Mansvelder, and K. Linkenkaer-Hansen, Critical-state dynamics of avalanches and oscillations jointly emerge from balanced excitation/inhibition in neuronal networks, J. Neurosci. 32, 9817 (2012).
[39] D.-P. Yang, H.-J. Zhou, and C. Zhou, Co-emergence of multi-scale cortical activities of irregular firing, oscillations and avalanches achieves cost-efficient information capacity, PLoS Comput. Biol. 13, e1005384 (2017).
[40] I. I. Lima Dias Pinto and M. Copelli, Oscillations and collective excitability in a model of stochastic neurons under excitatory and inhibitory coupling, Phys. Rev. E 100, 062416 (2019).
[41] J. Liang, T. Zhou, and C. Zhou, Hopf bifurcation in mean field explains critical avalanches in excitation-inhibition balanced neuronal networks: A mechanism for multiscale variability, Front. Syst. Neurosci. 14, 87 (2020).
[42] F. Pittorino, M. Ibanez Berganza, M. di Volo, A. Vezzani, and R. Burioni, Chaos and correlated avalanches in excitatory neural networks with synaptic plasticity, Phys. Rev. Lett. 118, 098102 (2017).
[43] E. D. Gireesh and D. Plenz, Neuronal avalanches organize as nested theta- and beta/gamma-oscillations during development of cortical layer 2/3, Proc. Natl. Acad. Sci. USA 105, 7576 (2008).
[44] H. Yang, W. L. Shew, R. Roy, and D. Plenz, Maximal variability of phase synchrony in cortical networks with neuronal avalanches, J. Neurosci. 32, 1061 (2012).
[45] S. R. Miller, S. Yu, and D. Plenz, The scale-invariant, temporal profile of neuronal avalanches in relation to cortical γ-oscillations, Sci. Rep. 9, 1 (2019).
[46] M. Breakspear, S. Heitmann, and A. Daffertshofer, Generative models of cortical oscillations: Neurobiological implications of the Kuramoto model, Front. Human Neurosci. 4, 190 (2010).
[47] J. Cabral, E. Hugues, O. Sporns, and G. Deco, Role of local network oscillations in resting-state functional connectivity, NeuroImage 57, 130 (2011).
[48] F. A. Ferrari, R. L. Viana, S. R. Lopes, and R. Stoop, Phase synchronization of coupled bursting neurons and the generalized Kuramoto model, Neural Networks 66, 107 (2015).
[49] P. Villegas, P. Moretti, and M. A. Munoz, Frustrated hierarchical synchronization and emergent complexity in the human connectome network, Sci. Rep. 4, 5990 (2014).
[50] A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences, Vol. 12 (Cambridge University Press, Cambridge, 2003).
[51] Y. Kuramoto, Self-entrainment of a population of coupled nonlinear oscillators, Lecture Notes in Physics 39, 420 (1975).
[52] Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence (Courier Corporation, New York, 2003).
[53] J. A. Acebron, L. L. Bonilla, C. J. Perez Vicente, F. Ritort, and R. Spigler, The Kuramoto model: A simple paradigm for synchronization phenomena, Rev. Mod. Phys. 77, 137 (2005).
[54] S. H. Strogatz, From Kuramoto to Crawford: Exploring the onset of synchronization in populations of coupled oscillators, Physica D 143, 1 (2000).
[55] M. Martinello, J. Hidalgo, A. Maritan, S. di Santo, D. Plenz, and M. A. Munoz, Neutral theory and scale-free neural dynamics, Phys. Rev. X 7, 041071 (2017).
[56] H. Hong, H. Chate, L.-H. Tang, and H. Park, Finite-size scaling, dynamic fluctuations, and hyperscaling relation in the Kuramoto model, Phys. Rev. E 92, 022122 (2015).
[57] R. Juhasz, J. Kelling, and G. Odor, Critical dynamics of the Kuramoto model on sparse random networks, J. Stat. Mech. Theory Exp. 2019, 053403 (2019).
[58] G. Odor and J. Kelling, Critical synchronization dynamics of the Kuramoto model on connectome and small world graphs, Sci. Rep. 9, 1 (2019).
[59] E. M. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting (The MIT Press, Cambridge, 2006).
[60] B. Lindner, J. Garcia-Ojalvo, A. Neiman, and L. Schimansky-Geier, Effects of noise in excitable systems, Phys. Rep. 392, 321 (2004).
[61] N. Kopell, G. Ermentrout, M. Whittington, and R. Traub, Gamma rhythms and beta rhythms have different synchronization properties, Proc. Natl. Acad. Sci. U.S.A. 97, 1867 (2000).
[62] S. H. Strogatz, Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering, Studies in Nonlinearity (Addison-Wesley Pub., Reading, Mass., 1994).
[63] R. Adler, A study of locking phenomena in oscillators, Proc. IEEE 34, 351 (1946).
[64] S. Shinomoto and Y. Kuramoto, Phase transitions in active rotator systems, Prog. Theor. Phys. 75, 1105 (1986).
[65] A. T. Winfree, The Geometry of Biological Time, Vol. 12 (Springer, Berlin, 2001).
[66] H. Sakaguchi, S. Shinomoto, and Y. Kuramoto, Local and global self-entrainments in oscillator lattices, Prog. Theor. Phys. 77, 1005 (1987).
[67] M. A. Zaks, A. B. Neiman, S. Feistel, and L. Schimansky-Geier, Noise-controlled oscillations and their bifurcations in coupled phase oscillators, Phys. Rev. E 68 (2003).
[68] I. V. Tyulkina, D. S. Goldobin, L. S. Klimenko, and A. Pikovsky, Dynamics of noisy oscillator populations beyond the Ott-Antonsen ansatz, Phys. Rev. Lett. 120 (2018).
[69] E. Ott and T. M. Antonsen, Long time evolution of phase oscillator systems, Chaos 19, 023117 (2009).
[70] L. M. Childs and S. H. Strogatz, Stability diagram for the forced Kuramoto model, Chaos 18, 043128 (2008).
[71] See Supplementary Videos at [] for visualizations of the different phases on the 2D lattice.
[72] J. Alstott, E. Bullmore, and D. Plenz, powerlaw: a Python package for analysis of heavy-tailed distributions, PLoS ONE 9 (2014).
[73] A. Clauset, C. R. Shalizi, and M. E. Newman, Power-law distributions in empirical data, SIAM Review 51, 661 (2009).
[74] S. di Santo, R. Burioni, A. Vezzani, and M. A. Munoz, Self-organized bistability associated with first-order phase transitions, Phys. Rev. Lett. 116, 240601 (2016).
[75] H. Ohta and S. I. Sasa, Critical phenomena in globally coupled excitable elements, Phys. Rev. E 78, 065101 (2008).
[76] H. Hong, H. Chate, L.-H. Tang, and H. Park, Finite-size scaling, dynamic fluctuations, and hyperscaling relation in the Kuramoto model, Phys. Rev. E 92 (2015).
[77] J. Hesse, J.-H. Schleimer, and S. Schreiber, Qualitative changes in phase-response curve and synchronization at the saddle-node-loop bifurcation, Phys. Rev. E 95 (2017).
[78] J.-H. Schleimer, J. Hesse, and S. Schreiber, Spike statistics of stochastic homoclinic neuron models in the bistable region, arXiv:1902.00951 (2019).
[79] J. D. Cowan, J. Neuman, and W. van Drongelen, Wilson-Cowan equations for neocortical dynamics, J. Math. Neurosci. 6, 1 (2016).
[80] M. Benayoun, J. D. Cowan, W. van Drongelen, and E. Wallace, Avalanches in a stochastic model of spiking neurons, PLoS Comput. Biol. 6, e1000846 (2010).
[81] V. Buendia, S. di Santo, J. A. Bonachela, and M. A. Munoz, Feedback mechanisms for self-organization to the edge of a phase transition, Frontiers in Physics 8 (2020).
[82] K. Tsumoto, H. Kitajima, T. Yoshinaga, K. Aihara, and H. Kawakami, Bifurcations in Morris-Lecar neuron model, Neurocomputing 69, 293 (2006).
[83] C. R. Laing, The dynamics of networks of identical theta neurons, J. Math. Neurosci. 8, 4 (2018).
[84] E. Montbrio, D. Pazo, and A. Roxin, Macroscopic description for networks of spiking neurons, Phys. Rev. X 5 (2015).
[85] R. M. Borisyuk and A. B. Kirillov, Bifurcation analysis of a neural network model, Biol. Cybern. 66, 319 (1992).
[86] J. Bonachela, S. de Franciscis, J. Torres, and M. A. Munoz, Self-organization without conservation: Are neuronal avalanches generically critical?, J. Stat. Mech.: Theory Exp. 2010 (02), P02015.
[87] J. A. Bonachela and M. A. Munoz, Self-organization without conservation: True or just apparent scale-invariance?, J. Stat. Mech.: Theory Exp., P09009.
[88] M. Girardi-Schappo, L. Brochini, A. A. Costa, T. T. A. Carvalho, and O. Kinouchi, Synaptic balance due to homeostatically self-organized quasicritical dynamics, Phys. Rev. Res. 2, 012042 (2020).
[89] V. Buendia, S. di Santo, P. Villegas, R. Burioni, and M. A. Munoz, Self-organized bistability and its possible relevance for brain dynamics, Phys. Rev. Res. 2, 013318 (2020).
[90] L. Meshulam, J. L. Gauthier, C. D. Brody, D. W. Tank, and W. Bialek, Coarse graining, fixed points, and scaling in a large population of neurons, Phys. Rev. Lett. 123, 178103 (2019).
[91] G. Hahn, T. Petermann, M. N. Havenith, S. Yu, W. Singer, D. Plenz, and D. Nikolic, Neuronal avalanches in spontaneous activity in vivo, J. Neurophysiol. 104, 3312 (2010).
[92] S. Yu, A. Klaus, H. Yang, and D. Plenz, Scale-invariant neuronal avalanche dynamics and the cut-off in size distributions, PLoS ONE 9, e99761 (2014).
[93] V. Priesemann, M. H. Munk, and M. Wibral, Subsampling effects in neuronal avalanche distributions recorded in vivo, BMC Neurosci. 10, 1 (2009).
[94] A. Levina and V. Priesemann, Subsampling scaling, Nat. Commun. 8, 1 (2017).
[95] J. Wilting and V. Priesemann, Inferring collective dynamical states from widely unobserved systems, Nat. Commun. 9, 2325 (2018).
[96] S. A. Prescott, Excitability: Types I, II, and III, in Encyclopedia of Computational Neuroscience, edited by D. Jaeger and R. Jung (Springer, New York, 2014) pp. 1-7.
[97] Z. Zhao and H. Gu, Transitions between classes of neuronal excitability and bifurcations induced by autapse, Sci. Rep. 7, 1 (2017).
[98] S. Shinomoto and Y. Kuramoto, Cooperative phenomena in two-dimensional active rotator systems, Prog. Theor. Phys. 75, 1319 (1986).
[99] D. S. Goldobin, I. V. Tyulkina, L. S. Klimenko, and A. Pikovsky, Collective mode reductions for populations of coupled noisy oscillators, Chaos 28, 101101 (2018).
[100] H. Makarem, H. N. Pishkenari, and G. R. Vossoughi, A modified Gaussian moment closure method for non-linear stochastic differential equations, Nonlinear Dyn. 89, 2609 (2017).
[101] In stochastic processes, the closure ⟨x³⟩ ≃ 3⟨x⟩⟨x²⟩ − 2⟨x⟩³ is also called a Gaussian closure, because it assumes that the variable x follows Gaussian statistics. Note that, in the case under study, assuming that Z obeys Gaussian statistics and considering that ψ is Gaussian distributed are two different approximations.
[102] N. D. Mermin and H. Wagner, Absence of ferromagnetism or antiferromagnetism in one- or two-dimensional isotropic Heisenberg models, Phys. Rev. Lett. 17, 1133 (1966).