
SYMBOLIC DYNAMICS

KARL PETERSEN

Mathematics 261, Spring 1998, University of North Carolina at Chapel Hill

Copyright © 1998 Karl Petersen

NOTETAKERS

Guillaume Bonnet, Suzanne Buchta, David Duncan, Jesse Frey, Kimberly Johnson, Lorelei Koss, Xavier Mela, Corwyn Newman, Kimberly Noonan, Robert Pratt, Kennan Shelton (Coordinating Editor), Sujin Shin, Dan Spitzner, Paul Strack

Contents

1. Introduction
2. January 13 (Notes by KJ)
2.1. Background and Resources
2.2. General Remarks
2.2.1. Quantization
2.2.2. Systems
2.2.3. Mappings between systems
2.3. Plan of the course
2.4. Basic Properties of subshifts
3. January 15 (Notes by KJ)
3.1. Basic Properties of Subshifts
3.1.1. Languages
3.1.2. Finite Codes
4. January 20 (Notes by PS)
4.1. Topological Dynamical Systems and Maps Between Them
4.2. Dynamical Properties of Topological Dynamical Systems
4.3. Minimality
5. January 22 (Notes by PS)
5.1. Minimality in (Σa, σ)
5.2. Ergodicity of (Σa, σ)
5.3. Examples of Non-Cyclic Minimal Systems
6. January 27 (Notes by SB)
6.1. Applications of the PTM Sequence
6.2. Some Other Ways to Generate the PTM Sequence
6.3. Generalizing Properties of the PTM Sequence
6.3.1. Substitutions
6.3.2. Topological Strong Mixing
6.3.3. Topological Entropy, h_top
7. January 29 (Notes by SB)
7.1. Invariant Measures
7.1.1. Some easy examples of invariant measures
8. February 3 (Notes by SS)
8.1. More Examples of Invariant Measures
8.2. Unique Ergodicity
9. February 5 (Notes by SS)
9.1. Unique Ergodicity
9.1.1. Interpretation of the unique ergodicity criterion in (2) for subshifts
9.1.2. Connection between minimality and unique ergodicity
10. February 10 (Notes by KN)
10.1. Expansive Systems
10.2. Equicontinuous Systems
10.2.1. Some examples of equicontinuous systems
10.2.2. Equicontinuous subshifts
10.3. Distal Systems
10.4. The structure of the Morse System
10.4.1. The group rotation
11. February 12 (Notes by KN)
11.1. The Odometer
11.1.1. The spectrum of the odometer in Σ_2^+
11.2. Relationship between the Toeplitz subshift and orbit closure of the odometer
12. February 17 (Notes by RP)
12.1. Toeplitz System
12.2. The Odometer as a Map on the Interval: Cutting and Stacking
13. February 19 (Notes by RP)
13.1. Action-Commuting Property
13.2. Odometer as Cutting and Stacking
14. February 24 (Notes by JF)
15. February 26 (Notes by JF)
15.1. History
15.2. Another Theorem
16. March 3 (Notes by XM)
16.1. Application to the Morse System
17. March 5 (Notes by XM)
18. March 17 (Notes by GB)
18.1. The Spectrum of the Morse System
19. March 19 (Notes by GB)
19.1. Sturmian Systems
20. March 24 (Notes by DJS)
20.1. Subshifts of Finite Type
20.2. Graph Representations of Subshifts of Finite Type
20.3. General Edge and Vertex Shifts
21. March 26 (Notes by DJS)
22. March 31 (Notes by DD)
23. April 2 (Notes by DD)
24. April 7 (Notes by LK)
24.1. Generalization to Equilibrium States
24.2. Coding between Subshifts
24.3. State Splitting and Amalgamation
25. April 9 (Notes by LK)
25.1. Matrices for In-splittings
25.2. Topological Conjugacy of SFT's and Strong Shift Equivalence
26. April 14 (Notes by CN)
26.1. Shift Equivalence
26.2. Williams Conjecture
26.2.1. Some Positive Evidence
26.3. Invariants of Shift Equivalence
27. April 16 (Notes by CN)
27.1. Invariants of Shift Equivalence
27.2. Embeddings and Factors
27.2.1. What about factors?
28. Sofic Systems
28.1. Shannon's Message Generators
29. April 21 (Notes by KN)
29.1. Sofic Systems
29.1.1. Characterizations of Sofic Systems
30. April 23 (Notes by PS)
30.1. 3 ⇒ 1
30.2. 1 ⇒ 3
30.3. 2 ⇔ 3
30.4. 1 ⇒ 6
30.5. 6 ⇒ 1
30.6. 3 ⇔ 7
30.7. 3 ⇒ 4
30.8. 4 ⇒ 3
31. April 28 (Notes by SB)
32. April 30 (Notes by SS)
32.1. Shannon Theory
32.1.1. Source coding
32.1.2. Shannon-McMillan-Breiman Theorem
33. May 5 (Notes by KJ and RP)
33.1. Connecting the Source to the Channel
33.2. Mutual Information and Capacity
33.3. Shannon's Channel Coding Theorem
33.4. Good Channels
33.5. Sliding Block Code Versions (Ornstein, Gray, Dobrushin, Kieffer)
33.6. Further Topics
List of Figures


1. Introduction

These are notes from a graduate course on symbolic dynamics given at the University of North Carolina, Chapel Hill, in the spring semester of 1998. This course followed one from the previous year which considered sources of symbolic dynamics, especially the construction of Markov partitions for certain smooth systems. The topics included Sturmian and substitution systems, shifts of finite type, codings between systems, sofic systems, some information theory, and connections with topological dynamics and ergodic theory. The author thanks all the students who took notes, wrote them up, and typed them; Kennan Shelton for managing the entire project; and Sarah Bailey Frick for help with corrections.

2. January 13 (Notes by KJ)

2.1. Background and Resources. There will be several books on reserve in the Brauer Library which will give necessary background and more details. They are:

• D. Lind and B. Marcus, An Introduction to Symbolic Dynamics
• K. Petersen, Ergodic Theory
• P. Walters, An Introduction to Ergodic Theory
• B. Kitchens, Symbolic Dynamics

We also have two handouts: the Spring 1997 Math 261 notes, Attractors and Attracting Measures, and K. Petersen, Lectures on Ergodic Theory.

The background needed for this course is general topology, analysis, and measure theory (especially if we do information theory). Our range of background is large, from first-year students to nth-year, from people with no exposure to dynamics to over-exposure to dynamics. It is not necessary to have had a course in ergodic theory, but books and notes will be available for whomever needs them. Occasionally we will have to use a definition or concept from ergodic theory, but there is not time to go into the background and lots of examples, so you may have to do a little reading on the side.

In the Spring 97 version of Math 261 our purpose was to produce symbolic dynamics, to give one place it came from. We set up the geometric machinery of Markov partitions, and this gave us symbolic dynamics. Now we do symbolic dynamics in its own right, knowing what it is based on.

2.2. General Remarks. What is symbolic dynamics, and what are we trying to do in this course? We are not trying to cover all of symbolic dynamics—most of what we cover is in the last chapter of Lind and Marcus.

Symbolic dynamics can be described as the study of codings into or between arrays of symbols. We use the term codings to mean three things: the process of quantization of data, the systems that result from quantization, and mappings between systems.

2.2.1. Quantization. Quantization is the process of taking some possibly continuous object and transforming it into something discrete. Here are several examples:

• Measurement takes something continuous (length) and records it to some degree of predetermined accuracy—say 4 significant digits.
• We can take an image and quantize it to an array of pixels with some gray-scale or finite color palette. Digital television is based on this idea.


Figure 1. An image quantized into pixels

• An Axiom A dynamical system leads to a subshift of finite type when trajectories are coded according to visits to cells of a Markov partition.

Hadamard and Morse used this method of discretization when they studied geodesics, and it is also useful for studying complex dynamics.

Note that there is an interesting interaction between the continuous and the discrete: Occasionally we take a discrete system, such as population dynamics or a fluid flow, and model it using continuous dynamics, such as PDEs. Then sometimes we solve these by using discrete approximations.

2.2.2. Systems. The second sense of the word 'coding' is a system that is the result of quantization. These are shift dynamical systems, often called subshifts. An example of this would be an SFT (subshift of finite type).

We let X = the space of all images under a coding, and let σ = the shift, which may be higher-dimensional. In the examples above, a finite decimal measurement is the image of the coding of length. We could also have X be the space of all picture arrays.

The dynamical aspect of the system is that the shift lets us move around. Applying the shift transformation amounts to redirecting our attention to different parts of a recorded string of symbols. Applying horizontal and vertical shifts to an array of pixels lets us move around an image.

The shift can also be time. If we have a string of integers, we can move our field of view around, but we can also consider the string arriving at our computer one unit at a time. Moving one bit to the right corresponds to what arrives at our computer one second later.

The system itself is the result of a coding in the first sense of the word. We study the dynamic properties of the system, in the hopes that they will tell us the properties of what we started with.

2.2.3. Mappings between systems. We might call this kind of coding 're-coding.' This is what we do with image processing, for example. If we are given an image in color, we can recode it to be gray-scale.

We can ask how to find these recodings, and what systems allow it. We can also ask what remains after recoding and what information is lost. In information theory, for example, signals are recoded to be preferable in some way. For example, you could have a signal with lots of repeated symbols and irrelevant information. But after compression to get rid of the irrelevancy, you have a very unstable system. So you add in some redundancy in a carefully controlled way to get some error correction, to stabilize your information.


2.3. Plan of the course. The first sense of coding is what we studied last spring. That aspect is usually left to the scientists. We will mainly study systems. Here is a proposed plan of the course. It can be altered depending on what people want to see more of (or less of).

I. Basic Properties of Subshifts: e.g., ergodicity, mixing, entropy, invariant measures, ...

II. Examples: e.g., SFT, sofic systems, adics (Toeplitz), substitution systems (Prouhet-Thue-Morse, Chacon), Sturmian, coded systems, and some countable-state systems such as continued fractions (Gauss map) and beta-shifts.

III. Coding (mappings between systems): between SFT's (the shift-equivalence problem), and automorphisms of SFT's (cellular automata). This section will draw heavily from Lind and Marcus, and Kitchens.

IV. Information Theory: Shannon theorems, connections with Ornstein theory of Bernoulli shifts, complexity.

Please give feedback on what you want to see.

2.4. Basic Properties of subshifts. Let A be a finite or countable set. We call A the alphabet. We give A the discrete topology and Σ(A) = ∏_{−∞}^{∞} A the product topology. Thus Σ(A) is defined as

(1) {x = (x_i)_{−∞}^{∞} : x_i ∈ A for each i}.

The one-sided shift space is given by

(2) Σ(A)^+ = {x = (x_i)_{0}^{∞} : x_i ∈ A for each i}.

The shift transformation σ : Σ(A) → Σ(A) is defined by

(3) (σx)_i = x_{i+1}

for −∞ < i < ∞. If A has n elements (n = 2, 3, 4, ...) then we denote Σ(A) by Σn and call it the (full) n-shift. In this case, the topology on Σn is compatible with the metric d(x, y) = 1/(j + 1) where j = inf{|k| : x_k ≠ y_k}. Thus two elements of Σn are close if and only if they agree on a long central block.
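As a quick illustration (a sketch in Python, not part of the original notes), one can represent a point of Σ2 as a function from Z to {0, 1} and compute this metric up to a finite horizon:

    def d(x, y, horizon=100):
        """Metric on the full shift: 1/(j+1), where j is the least |k| with x_k != y_k."""
        for j in range(horizon + 1):
            if x(j) != y(j) or x(-j) != y(-j):
                return 1.0 / (j + 1)
        return 0.0  # indistinguishable within the horizon

    def shift(x):
        """The shift transformation: (sigma x)_i = x_{i+1}."""
        return lambda i: x(i + 1)

    x = lambda i: 0                    # the fixed point ...000.000...
    y = lambda i: 1 if i == 5 else 0   # differs from x only in place 5
    print(d(x, y))                     # 1/6: first difference at |k| = 5
    print(d(shift(x), shift(y)))       # 1/5: after shifting, they differ at place 4

The horizon is an artifact of the finite computation; mathematically the infimum is over all of Z.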

If A is countable, there are many inequivalent metrics compatible with the discrete topology on A. This is quite different from the finite case. For example, consider the metric on 3 points where the distance between the first and second, and the distance between the second and third, is one, and the distance between the first and third is two. This gives a metric equivalent to the one where the distance between the second and third points is changed to 1/2. The metrics are equivalent, the topology for each is the discrete topology, and they give the same product topological space (Figure 2).

On the other hand, say A = N = {1, 2, 3, ...}. Then there are at least two natural ways to arrange the natural numbers, one with a limit point and one without, and these give rise to fundamentally different metrics on the product space Σ(A) (see Figure 3).

Countable alphabets appear more and more, for example in complex dynamics and in non-uniformly hyperbolic dynamical systems, and we need to consider the proper representation. However, for the near future, A will be a finite alphabet.


Say |A| = n, so Σ(A) = Σn. Then Σn is compact and σ : Σn → Σn is a homeomorphism. These facts are not too hard to see: the first follows by Tychonoff's Theorem and the second follows from observing that σ is a one-to-one onto continuous map. Thus we call the compact metric space with homeomorphism (Σn, σ) the n-shift dynamical system.

A subshift is a pair (X, σ) where X ⊂ Σn (for some n) is a nonempty closed, shift-invariant (σX = X) set. Coding in the second sense given above is basically the study of subshifts.

A block or word is an element of A^r for some r = 0, 1, 2, ..., i.e., a finite string on the alphabet A. We denote the empty block by ε. For example, if A = {0, 1} then B = 011100 is a block. We write l(B) for the length of a block B. Note that already we have opportunities to be ridiculously precise: see Gottschalk and Hedlund for more precision.

The cylinder set determined by a block B of length r at position j ∈ Z is

(4) [B]_j = {x ∈ X : x_j x_{j+1} x_{j+2} ... x_{j+r−1} = B}.

So the cylinder set is the set of all points in the space X which agree with B beginning in the jth place. Some important cylinder sets are those which begin with the 0th (central) place: when we write [B] we mean [B]_0, which begins in the 0th place. For example, [0] is the set of sequences with 0 in the 0th place, [01] is the set of sequences with 0 in the 0th place and 1 in the first place, and σ[01] is the set of sequences with 0 in the (−1)st place and 1 in the 0th place. Note (in the picture) that when we shift the sequence to the left, that has the effect of shifting our attention to the right. This is the same phenomenon that makes taking a step forward in a room equivalent to the room moving backward.

The cylinder sets are open and closed and form a base for the topology of X. Thus we have that Σn is compact, totally disconnected (the only connected sets are single points), and perfect (there are no isolated points). Hence it is homeomorphic to the Cantor middle-thirds set in [0, 1] (as are most of the subshifts we will study). All the base sets we will be working with are the same (up to homeomorphisms). But the mappings on the Cantor sets will be different. If we tried to write out the mappings we are using on the Cantor set on the interval, the definitions would be terrible. But when we change spaces to other subshifts (which are relatively simple) and keep the map the same (the shift) the definitions are simple to work with.

In addition to blocks we have rays, which are semi-infinite blocks (x_i)_{i=m}^{∞} or (x_i)_{i=−∞}^{m}, right-infinite or left-infinite sequences. We say a block B appears in a block C if we can find blocks D and E (possibly empty) such that C = DBE. We can also make many natural statements about concatenation (as in the last example) which we will assume.

Next time: More basic properties.

Figure 2. Two equivalent metrics for the finite alphabet shift


Figure 3. Two non-equivalent definitions of metrics for the countable alphabet shift

Figure 4. Example of the cylinder set [B]_j


3. January 15 (Notes by KJ)

3.1. Basic Properties of Subshifts. A recap of terminology introduced last time about subshifts. We start with a finite alphabet A = {0, 1, ..., a − 1} whose cardinality a is (usually) finite and at least 2. Then we give Σa = A^Z the product topology, which makes it a compact metric space. We define σ : Σa → Σa by (σx)_i = x_{i+1}, that is, the ith coordinate of the shift of x is the (i + 1)st coordinate of the original sequence. We call (Σa, σ) the full (two-sided) a-shift.

A subshift (X, σ), where X ⊂ Σa for some a ≥ 2, is a nonempty closed σ-invariant set (σX = X); X has the subspace topology.

More on the topology (in a full shift): Two sequences x, y are close if they agree on a long central block. If the sequences agree on an even longer block far away from the 0th place, that doesn't say how close they are, but how close some shifts of the two sequences are.

Figure 5. Two points are close if they agree on a long central block

We call [B]_m = {x ∈ Σa : x_m x_{m+1} ... x_{m+l(B)−1} = B} (where B is a finite word and l(B) is the length of B) a cylinder set. One motivation for the name cylinder set may be seen in R^3: if you restrict the x and y coordinates but leave the z coordinate free, you get a cylinder-like object.

Figure 6. Motivation for the term "cylinder set"


Cylinder sets are open and closed, and they form a base for the topology. To see that they are open, let x ∈ [B]_m, so that B appears in x starting at the mth place. We want to show that x is in the interior of [B]_m. To do this we show that points close enough to x are in the set [B]_m also. Let y be a point close enough to x so that y agrees with x on a long central block including the places beginning with m where B appears. Then y also has B appearing at those places and so must be in [B]_m.

To see that every cylinder set is closed, let x_n ∈ [B]_m be a sequence converging to x. We need to show that x ∈ [B]_m. Notice that after a while, the x_n agree with x on a long central block. If they agree on a long enough block, they agree from m to m + l(B) − 1. But since the x_n agree with B there, so does x, and so x ∈ [B]_m.

Notice that since the base for the topology consists of open and closed sets, we have a difficult time finding connected sets. Thus the only connected sets are singletons, and so subshifts are totally disconnected (zero-dimensional).

3.1.1. Languages. Define the following:
A^0 = {ε} = the empty word;
A^1 = A; A^2 = AA = all words of length 2 (we write AA to mean concatenation); etc.
A^* = ∪_{n≥0} A^n = the set of all finite words on the letters in A.
A language on the alphabet A is any subset L ⊂ A^*. This includes all formal languages, programming languages, etc. L doesn't have to be finite.
Define the language of a subshift (X, σ) ⊂ (Σa, σ) to be L(X, σ) = the collection of all finite blocks found in sequences x ∈ X. This is the set {B ∈ A^* : there is x ∈ X such that B appears in x}. Recall that B appears in x means that x = ...B.... Note that L(Σa, σ) = A^*.

Related to this definition is the following theorem:

Theorem 3.1. If (X, σ) is a subshift, then L(X, σ) is

(i) factorial: that is, if B ∈ L(X, σ) and C is a subblock of B (i.e., B = pCs for some possibly empty blocks p, s ∈ A^*), then C ∈ L(X, σ);

(ii) extendable: that is, if B ∈ L(X, σ) then there are nonempty blocks p, s ∈ A^* such that pBs ∈ L(X, σ).

Moreover, properties (i) and (ii) characterize the languages of subshifts: given any L ⊂ A^* which is nonempty, factorial, and extendable, there is a unique subshift X such that L = L(X, σ).

Proof. If B is in the language of a subshift, that means B appears in some sequence x, and so do all its subwords; further, we can extend B to both sides. (Notice that L(X, σ) is never finite, because it contains words of all lengths.)

For the second part of the theorem, given a nonempty, factorial, extendable language L, we define X to be the set of all sequences which do not contain any words that are not in L, and show that it is a subshift. Given an arbitrary language, there might not be anything in X. Since L is not empty, we can take any word in L and extend it to the right and left in accordance with item (ii). Thus X is not empty. X is σ-invariant since no one said where the origin was as we extended a word in L.

To show that X is closed, we show that its complement is open. Let x be in the complement. This means that it has some bad word in it (one which is not in L). Points which are close to x (they agree with x on a long central block) will also have this bad word in them, so the complement of X is open in the metric space topology. □
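To make the factorial and extendable properties concrete, here is a small Python sketch (not from the notes; it uses the golden mean shift, the standard subshift of Σ2 forbidding the word 11) that enumerates L(X) ∩ A^n and spot-checks both properties:

    from itertools import product

    def language(n, forbidden=("11",)):
        """All n-blocks over {0,1} containing no forbidden word."""
        return [w for w in ("".join(t) for t in product("01", repeat=n))
                if not any(f in w for f in forbidden)]

    words4 = language(4)
    print(len(words4), words4[:4])   # 8 allowed 4-blocks (Fibonacci growth)
    # factorial: every subblock of an allowed word is allowed
    assert all(w[i:j] in language(j - i) for w in words4
               for i in range(4) for j in range(i + 1, 5))
    # extendable: every allowed word extends on both sides to an allowed word
    assert all(any(p + w + s in language(6) for p in "01" for s in "01")
               for w in words4)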


Figure 7. A sliding block map

Using this theorem we have the progression

X → L(X) → X_{L(X)} = X,

where X_L is the unique subshift given by the extendable factorial language L. Thus there are interesting connections between symbolic dynamics and the study of formal languages, the Chomsky hierarchy, automata theory, and so on.

There was some discussion about whether the languages consisting of the empty word (ε) and the empty language also gave subshifts. The language consisting only of ε is not extendable, so the theorem does not apply. Note that ε is in every factorial language. Lind and Marcus do not restrict the subshift resulting from the language to being nonempty. Unless we use only nonempty languages, it makes sense for us to include the empty subshift also.

3.1.2. Finite Codes. Finite codes are also known as block codes and sliding block codes. They are codes between subshifts. We will describe one of these codes, a map F from Σa to Σb for some a and b. We will say that y = F(x).

Let w ≥ 0 be the size of a 'window' or 'range'. We want to figure out what symbol to put in the ith place of F(x). To do this, we look at a window of symbols in x and use that to decide. We slide the window back and forth to figure out other symbols. This method goes on in the real world in data encoding and receiving.

Formally, let f : A^w → B = {0, 1, ..., b − 1} for some b ≥ 1, where w is the size of the window above. This is a block map. Then we define F : Σa → Σb by

(5) (Fx)_i = f(x_{i−m} x_{i−m+1} ... x_{i+n}) (where w = m + n + 1)

for all i ∈ Z. Then F : Σa → Σb, or its restriction to any subshift of Σa, is called a sliding block code with window (or range) w, memory m, and anticipation n.

The shift map or a power of the shift is an example of a map which only depends on anticipation, so we may want to have the concept of negative memory. In the shift map, (σx)_i = f(x_{i+1}), where f is the identity map on the alphabet. Thus, i − m = i + n = i + 1, so w = 1, m = −1, n = 1. We could also define the shift map by (σx)_i = g(x_i x_{i+1}) where g(x_0 x_1) = x_1. In this case, w = 2, m = 0, n = 1. For examples, see Figure 8.

One important thing to note is that when we slide the window over, we slide it over only one place at a time, not the width of the window.

We have the following theorem:

Theorem 3.2 (Curtis, Hedlund, Lyndon). If (X, σ) is a subshift of (Σa, σ) and F : (X, σ) → (Σb, σ) is a sliding block code determined by a block map f : A^w → B, then F is a continuous shift-commuting map (a factor map).


Figure 8. Sliding block map with memory m and anticipation n; sliding block map with negative memory

Conversely, if (X, σ) ⊂ (Σa, σ) and (Y, σ) ⊂ (Σb, σ) are subshifts and φ : (X, σ) → (Y, σ) is continuous and commutes with σ, then φ is given by a sliding block code.

This theorem says that sliding block codes give all continuous shift-commuting maps between subshifts. This is particularly interesting from the viewpoint of category theory, which says we should want to study all maps among things in a category. In shift spaces, we want them to be shift-commuting and continuous (since Σa is a topological space), so this theorem says we need not look any farther than sliding block codes.

To help see what this means, we give an example of a simple sliding block code.

Example 3.1. Let A = {0, 1}. We define a map f : A^3 → B = {α, β} pointwise:

000 → β

001 → β

010 → α

011 → β

100 → α

101 → α

110 → α

111 → β

If we let F be the induced sliding block code with memory 0, and x = ...1100001010111..., then y = F(x) = ...ααβββααααββ....
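As a check, here is a minimal Python sketch (not part of the original notes) of a sliding block code with memory m and anticipation n = w − m − 1, applied to the block map of Example 3.1 on the visible central block of x (with a and b standing in for α and β):

    def sliding_block_code(f, w, m, x):
        """Apply the block map f: A^w -> B with memory m to a finite list of
        symbols; the output loses m symbols on the left and n = w - m - 1 on
        the right, where the full window is unavailable."""
        n = w - m - 1
        return [f(tuple(x[i - m : i + n + 1])) for i in range(m, len(x) - n)]

    f = {(0,0,0): "b", (0,0,1): "b", (0,1,0): "a", (0,1,1): "b",
         (1,0,0): "a", (1,0,1): "a", (1,1,0): "a", (1,1,1): "b"}.get
    x = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1]
    print("".join(sliding_block_code(f, 3, 0, x)))  # aabbbaaaabb

The output agrees with y on the places where the whole window lies inside the displayed block of x.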

Proof (of CHL). The first part comes from two observations. The map F is shift-commuting because clearly as you shift the center of the image, you just shift the center of the window along.

The map F is continuous because of the finite range. To see this, we want to show that two points can be as close as we want them to be in the range, provided the points they came from in the domain are close enough. So, decide how close the points in the range should be—say you want them to agree on a central block of length 2n + 1. Then if you choose x, x′ ∈ X such that they agree on a block of length 2(n + w) + 1, their images must agree on the smaller block: if we have x′_j = x_j for |j| ≤ n + w then F(x)_j = F(x′)_j for |j| ≤ n, as in Figure 9.

Figure 9. The sliding block map is continuous

For the converse we use another picture. Let X ⊂ Σa, Y ⊂ Σb be subshifts, φ : X → Y be continuous and such that φσ = σφ (here σ is being used both as the shift on X and on Y). The map is continuous and hence is uniformly continuous, since X is compact. (At this point the proof for countable-state alphabets goes to blazes.) Uniform continuity implies that there is m such that if x_j = x′_j for |j| ≤ m, then (φx)_0 = (φx′)_0. To see this, remember that closeness is determined by agreeing on a central place. How close they have to be in the domain is determined by this m from uniform continuity (see Figure 10).

Figure 10. Uniform continuity gives us equivalence classes of (2m + 1)-blocks

The (2m + 1)-blocks fall into equivalence classes according to which symbol they determine in the image—this gives a block map and hence a sliding block code.

Since φ is a shift-commuting map, we have the same story in the jth place as we do in the 0th place. The jth place in x, x′ is the 0th place in σ^j x, σ^j x′. □

This theorem is important and useful because in theory now we know what all the factor maps or homomorphisms between subshifts are. We give some examples of different types of sliding block codes.


Example 3.2 (1-block code). Let A = {0, 1, 2} and B = {α, β}. Then define the block map that sends even numbers to α and odd numbers to β:

0 → α

1 → β

2 → α

In this example, if x = ...10112001102... we have F(x) = ...βαββαααββαα.... This example collapsed two letters, so information was lost. We could have simply renamed the letters in a one-to-one way and lost no information.

Theorem 3.3. If a = |A| is prime, then for each w ≥ 1, any map f : A^w → A is given by a polynomial in w variables over GF(a) (the Galois field with a elements).

For example, if a = 2, A = {0, 1}, and p(x_0, x_1, x_2) = x_1 + x_0 x_2, then we have the block code

000 → 0 + 0 · 0 = 0

101 → 0 + 1 · 1 = 1

110 → 1 + 1 · 0 = 1
...

Exercise 1. Find the polynomial for Example 3.1 and prove it can always be done. What happens if a is not prime?
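A brute-force approach to the first part of the exercise (a Python sketch, not from the notes; it assumes the arbitrary encoding α = 1, β = 0) computes the multilinear polynomial over GF(2) by the Möbius transform over subsets of the variables:

    from itertools import product

    f = {(0,0,0): 0, (0,0,1): 0, (0,1,0): 1, (0,1,1): 0,
         (1,0,0): 1, (1,0,1): 1, (1,1,0): 1, (1,1,1): 0}  # alpha = 1, beta = 0

    coeffs = {}
    for s in product((0, 1), repeat=3):   # s marks which variables a monomial uses
        # the coefficient of prod_{s_i = 1} x_i is the GF(2) sum of f over inputs <= s
        coeffs[s] = sum(f[t] for t in product((0, 1), repeat=3)
                        if all(t[i] <= s[i] for i in range(3))) % 2

    monomial = lambda s: "*".join(f"x{i}" for i in range(3) if s[i]) or "1"
    print(" + ".join(monomial(s) for s, c in coeffs.items() if c))

The printed sum of monomials is the polynomial realizing f under this encoding; choosing β = 1, α = 0 instead would give a different (complementary) polynomial.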

Example 3.3 (Higher block code). Let (X, σ) ⊂ (Σa, σ) be a subshift, and fix r ≥ 2. Take a new alphabet B = A^r = the r-blocks on A. Define F : (X, σ) → (Σb, σ) (b = a^r) as generated by the block map

(6) f(x_1 ... x_r) = x_1 ... x_r ∈ B for x_1, ..., x_r ∈ A.

Note that this map is into (Σb, σ), but not onto. For example, we can do the higher block map with r = 2 from Σ2 into Σ4:

00 → α

01 → β

10 → γ

11 → δ

Then given a sequence x = ...110100010..., we get F(x) = ...δγβγααβγ.... When you are taking the image of x, don't shift by two to get the next symbol in F(x). Only shift by one.

Using an r-block code F as above, the image F(X) ⊂ Σb is a subshift that is topologically conjugate to (X, σ) (it's an isomorphic, i.e., one-to-one onto shift-commuting image); this image is called the r-block representation of (X, σ).

Note that the r-block representation of (X, σ) is different from (X, σ^r). It is true that (Σ2, σ^2) ≅ (Σ4, σ) by mapping each 2-block (they don't overlap) to a separate symbol. However, (Σ2, σ) ≇ (Σ4, σ) via the higher block map. For example, in the image of the higher block map given above, α could never be followed by γ, while in the full 4-shift they could follow each other, as seen in Figure 11.

...1 1 0 1 0 0 1 1 0 0 0 1 1 0...
2-block representation (into Σ4): ...d c b c a b d c a a b d c...
non-overlapping 2-blocks (map onto Σ4): ...d b a d a b c...

Figure 11. The 2-block representation of Σ2, along with a map from (Σ2, σ^2) to (Σ4, σ).
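The two codings in Figure 11 are easy to mimic in code; here is a small Python sketch (not from the notes), using the figure's letters a, b, c, d for the 2-blocks 00, 01, 10, 11:

    code = {(0, 0): "a", (0, 1): "b", (1, 0): "c", (1, 1): "d"}

    def two_block(x):
        """The 2-block representation: overlapping 2-blocks, sliding by one."""
        return "".join(code[(x[i], x[i + 1])] for i in range(len(x) - 1))

    def pair_off(x):
        """Non-overlapping 2-blocks: the conjugacy (Sigma_2, sigma^2) with (Sigma_4, sigma)."""
        return "".join(code[(x[i], x[i + 1])] for i in range(0, len(x) - 1, 2))

    x = [1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0]
    print(two_block(x))  # dcbcabdcaabdc
    print(pair_off(x))   # dbadabc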

Next time: Dynamic properties such as ergodicity and mixing. Look at the topological dynamics section of Petersen, or page 16 or so in the old 261 notes.


4. January 20 (Notes by PS)

4.1. Topological Dynamical Systems and Maps Between Them.

Definition 4.1. A topological dynamical system is a pair (X, T), where X is a compact Hausdorff space (usually metric) and T : X → X is a continuous mapping (usually a homeomorphism).

Definition 4.2. A homomorphism or factor mapping between topological dynamical systems (X, T) and (Y, S) is a continuous onto map φ : X → Y such that φT = Sφ. We say Y is a factor of X, and X is an extension of Y.

Definition 4.3. A topological conjugacy is a one-to-one and onto factor map.

For example, the factor mappings between subshifts are all given by finite-window sliding block codes.

One major problem is to classify subshifts, or even subshifts of finite type, up to topological conjugacy. We need invariants to do so, preferably complete invariants. If this question were solved, the next step would be to construct codings between them:

φ : (Σ, σ)→ (Σ′, σ)

There are various engineering difficulties motivating such recodings. For example, if your original system allowed arbitrarily long sequences of zeros and ones, then slight errors might arise if your hardware had difficulty distinguishing between 6,000,000 zeros and 6,000,001 zeros. It would be better to recode the system so that there was some upper bound on the length of sequences of zeros.

Similar classification problems exist for other classes of topological dynamical systems, and for other kinds of maps with weaker or stronger conditions (for example, measurability).

Definition 4.4. Fix (X,T ) = (Σa, σ), a full shift. The factor maps

φ : (Σa, σ)→ (Σa, σ)

are endomorphisms of (Σa, σ) or cellular automata. One-to-one endomorphisms are automorphisms.

Since φσ = σφ, we have an action of Z × Z on (Σa, σ), namely

(m, n)x = σ^m φ^n x

for m, n ∈ Z. If φ is not invertible, then we have an action of Z × Z^+ on Σa. For example, suppose φ is given by the block map

(φx)_i = x_i + x_{i+1} mod 2

for a = 2, A = {0, 1}. Consider

x = ...101110100010110...

φx = ...11001110011101...

φ^2 x = ...0101001010011...


or

x = ...000001000...

φx = ...000011000...

φ^2 x = ...000101000...

φ^3 x = ...001111000...

φ^4 x = ...010001000...

φ^5 x = ...110011000...

In the latter case, we have Pascal's triangle mod 2. In general, 1's that appear within x will try to produce such triangles in the images. Nearby 1's will interfere with each other, creating complex patterns. If there are infinitely many 1's appearing in the sequence x, the pattern becomes nearly impossible to predict. Thus, even seemingly simple maps φ can have complex developments.
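A few lines of Python (a sketch, not from the notes) reproduce the computation above; the sequence is taken to be 0 outside a finite window, and each step applies the block map (φx)_i = x_i + x_{i+1} mod 2:

    x = [0, 0, 0, 0, 0, 1, 0, 0, 0]
    for _ in range(6):
        print("".join(map(str, x)))
        # (phi x)_i = x_i + x_{i+1} (mod 2), with x_{i+1} = 0 past the right edge
        x = [(x[i] + (x[i + 1] if i + 1 < len(x) else 0)) % 2
             for i in range(len(x))]

The printed rows are exactly the six sequences displayed above: Pascal's triangle mod 2 growing leftward from the single 1.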

The system (Σa, φ) has been studied as a topological dynamical system in its own right, by such individuals as R. Gilman, F. Blanchard, and A. Maass.

Similar questions have been asked about topological dynamical systems other than Σa, for example subshifts of finite type. Finding all endomorphisms of such systems and studying the properties of those endomorphisms leads to some interesting open problems. For a more in-depth study of endomorphisms of subshifts of finite type, see the paper by Boyle, Lind and Rudolph.

4.2. Dynamical Properties of Topological Dynamical Systems. Consider a compact metric space X and a homeomorphism T : X → X (often T continuous is sufficient).

Definition 4.5. For x ∈ X, the orbit of x is O(x) = {T^n x : n ∈ Z} (alternately, if T is not invertible, n ∈ Z^+). The orbit closure of x is the closure O̅(x).

Definition 4.6. A set A ⊂ X is invariant if T (A) ⊂ A.

Definition 4.7. We say (X,T ) is topologically ergodic if it satisfies one of the following equivalentproperties:

(1) Topological transitivity: There is a point x ∈ X with a dense orbit, that is O(x) = X.(2) The set of points with dense orbit is residual. Recall that a residual set is a set that

contains the intersection of countably many dense open sets. Equivalently, a residual setis the complement of a first category set, that is, a set that is contained in the union ofcountably many nowhere dense sets.

(3) Regional transitivity: Given non-empty sets U, V ⊂ X, there is some n ∈ Z such thatTnU ∩ V 6= ∅. See Figure 12.

(4) Every closed invariant set is nowhere dense. The idea here is a sort of “topological irredu-cibility”, in that the only closed T -invariant subsets of X must be nowhere dense.

For a detailed proof of the equivalence of these properties, see the Spring 1997 Math 261 notes, p. 27. These notes also include an additional equivalent property, Baire ergodic.

Note that the subshift (O̅(x), σ) is necessarily topologically ergodic. This leads to the question of how we can determine which subshifts are ergodic. Consider closed X ⊂ Σa with σX ⊂ X, so that (X, σ) is a subshift. Then (X, σ) is topologically ergodic if and only if it has a dense orbit, which is true if and only if there is an x ∈ X which contains all the words in L(X).


Figure 12. Regional Transitivity

To see why this is true, consider y ∈ X, y = ...[B]..., where B ∈ L(X) is a central block of y. If the word B also appears somewhere in x, then for some n, σ^n x will have B as its central block, and therefore will be near y. In this way we can reduce the dynamical question of the ergodicity of (X, σ) to a combinatorial question about x and L(X).

4.3. Minimality.

Definition 4.8. A topological dynamical system is called minimal if there is no nonempty proper closed invariant set, or, equivalently, if for all x ∈ X the orbit O(x) is dense.

Theorem 4.1. If (X, T) is any (compact) topological dynamical system, then there are nonempty closed invariant sets A ⊂ X such that (A, T) is minimal.

Proof. Order the nonempty closed invariant subsets of X by reverse inclusion and use Zorn's Lemma; by compactness, a decreasing chain of nonempty closed invariant sets has nonempty closed invariant intersection. □

Let x ∈ Σa. When is (O̅(x), σ) minimal? The following property is what we need.

Definition 4.9. A point x is almost periodic or syndetically recurrent if for every neighborhood U of x, the set of return times

R(U) = {r ∈ Z : T^r x ∈ U}

has bounded gaps, in that there is some K such that for all n ∈ Z,

(n − K, n + K) ∩ R(U) ≠ ∅.

Figure 13. Return time r ∈ R(U)

The term almost periodic has many different meanings in dynamics, so we prefer the term syndetically recurrent instead.


5. January 22 (Notes by PS)

Theorem 5.1. Let (X, T) be a compact topological dynamical system and x ∈ X. Then (O̅(x), T) is minimal if and only if x is syndetically recurrent.

Proof. Suppose x is syndetically recurrent. Let y ∈ O̅(x), and let U be a compact neighborhood of x. Recall that every compact metric space is locally compact, that is, it has a neighborhood base consisting of compact neighborhoods. Thus, U can be chosen arbitrarily small.

Since R(U) has bounded gaps, there is a K such that

O(x) = ⋃_{j=−K}^{K} T^j ⋃_{r∈R(U)} T^r x,

but

⋃_{j=−K}^{K} T^j ⋃_{r∈R(U)} T^r x ⊂ ⋃_{j=−K}^{K} T^j(U).

Since U is compact and closed, so are T^j(U) and ⋃_{j=−K}^{K} T^j(U). Thus

O̅(x) ⊂ ⋃_{j=−K}^{K} T^j(U)

as well. Since y ∈ O̅(x), there is some j ∈ [−K, K] such that y ∈ T^j U and T^{−j} y ∈ U. It follows that O(y) intersects the neighborhood U of x. Since the neighborhood U can be chosen arbitrarily small, we see that x ∈ O̅(y). Thus O̅(x) ⊂ O̅(y), so that O̅(x) = O̅(y), and the orbit of y is dense in O̅(x), making O̅(x) minimal.

Conversely, suppose O̅(x) is minimal. Let U be a neighborhood of x and R(U) = {r ∈ Z : T^r x ∈ U}. We want to show that R(U) has bounded gaps. For any y ∈ O̅(x), O̅(x) = O̅(y) because of the minimality of O̅(x), so there is a j such that T^j y ∈ U. Therefore,

⋃_{j=−∞}^{∞} T^j(U) ⊃ O̅(x).

This is an open covering of O̅(x), and by the compactness of X, there is a finite subcover. That is, there is a K such that

⋃_{j=−K}^{K} T^j(U) ⊃ O̅(x).

Clearly this implies that R(U) has bounded gaps. □

This theorem is due to G. D. Birkhoff (Bull. Soc. Math. France, 1912). With appropriate slight modification, it holds for non-invertible T as well.


5.1. Minimality in (Σa, σ). In a shift space (Σa, σ), the orbit closure of a point x ∈ Σa will be minimal if and only if every block that appears in x appears with bounded gap. That is, if B is any word in x, then B appears and reappears infinitely many times, with the time between each repeat bounded by some k.

x = ...[B]...[B]...[B]...[B]...

Note that k will depend on the length of B, with larger k for longer words B.

The full shift itself is not minimal, since it has lots of proper closed σ-invariant sets, for example the set consisting of the fixed point

0 = ...00.00....

Here, the decimal point marks the 0th place in the sequence 0. Another example is the cycle of points

x1 = ...0010.01001...

x2 = ...0100.10010...

x3 = ...1001.00100...

Figure 14. The cycle x1 to x3

Both of these examples are also examples of points with minimal orbit closure O̅(x). All finite cycles are trivially minimal.

5.2. Ergodicity of (Σa, σ).

Theorem 5.2. The space (Σa, σ) is topologically ergodic.

Proof. We can demonstrate the ergodicity of Σa in two ways. First, we show that it has regional transitivity. Let U and V be open sets in (Σa, σ). Then both U and V must contain some cylinder sets [B]_m ⊂ U and [C]_{m+r} ⊂ V. We will construct an x ∈ U so that T^n x ∈ V, and therefore T^n(U) ∩ V ≠ ∅.

Simply choose n large enough so that the word C in the (m + r + n)th place and the word B in the mth place do not overlap. Fill in the rest of x with zeros, so that

x = ...000[...B...]00...00[...C...]000...

Note that with some obvious modification, this proof also demonstrates strong mixing for (Σa, σ), in that there is some N such that for all |n| ≥ N, T^n(U) ∩ V ≠ ∅.

We can also show that (Σa, σ) is topologically transitive, in that it has a dense orbit. To do so, we construct an x ∈ Σa which contains all words of L(Σa). This is like constructing a "Champernowne number". For example, in base 10, a Champernowne number is simply one whose digits consist of all possible integers in order, that is

x = .1 2 3 4 5 6 7 8 9 10 11 12 13 14 ...

Similarly, for Σ2, we make a sequence consisting of all possible words base 2, that is

x = .0 1 00 01 10 11 000 001 010 011 100 ...

If we place a string of zeros before the decimal place, such an x will have a dense orbit in Σa. □
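Here is a tiny Python sketch (not from the notes) of the base-2 construction: concatenating all binary words in order of length produces the central block of a point whose orbit is dense in Σ2:

    from itertools import product

    def champernowne2(max_len):
        """Concatenate all binary words of lengths 1..max_len in order."""
        return "".join("".join(w) for n in range(1, max_len + 1)
                       for w in product("01", repeat=n))

    print(champernowne2(3))  # 0100011011000001010011100101110111

Every word of length at most 3 appears in this finite string, and letting max_len grow gives a sequence containing every finite word.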

An interesting question related to the topological transitivity of Σa is whether or not the decimal expansion of a given irrational number will have a dense orbit in Σ10. Consider π,

x = ...0000.314159...

It is conceivable that after some point, no 6's appear in this sequence, so that it would not have a dense orbit. For examples of applications of questions like this (involving uniform distribution) to things like random number generators, see the book by Niederreiter.

5.3. Examples of Non-Cyclic Minimal Systems. Let X be the unit circle in the complex plane, X = S^1 = {z ∈ C : |z| = 1}, and α ∈ (0, 1) be an irrational number. Let T be an irrational rotation of X, with T(z) = e^{2πiα}z. Then every orbit in (X, T) is dense and X is minimal. This is a theorem by Kronecker (and not hard to prove).

Figure 15. Rotation by α

For a minimal subshift of Σa, we use an example discovered and rediscovered by Prouhet, Thue, Morse, and many others. We construct a sequence ω inductively by using a substitution map ζ, with ζ(0) = 01 and ζ(1) = 10. Alternatively, at each stage of the construction, we append the dual of the previous step, where the dual of 0 is 1 and the dual of 1 is 0. Thus

ζ(0) = 01

ζ^2(0) = 01 10

ζ^3(0) = 0110 1001

ζ^4(0) = 01101001 10010110


To construct ω, we place the resulting one-sided sequence to the right of the decimal point and its reversal to the left, so that

ω = ...01101001 1001 01 10.01 10 1001 10010110...

Note first of all that ω contains no 000 or 111 blocks, since ω consists of strings of 01's and 10's. Thus both 0 and 1 appear with a bounded gap of 2. Also note that by recoding ω, changing 01 to a and 10 to b, we simply get ω again on the symbols a and b. In fact, let a_r = ζ^r(0) and b_r = ζ^r(1). By recoding with these words as symbols, we again get ω.

Now, let B be a word in ω. Suppose B appears entirely to the right of the decimal point, in the initial 2^r-block of ω. Thus B appears in a_r. But by substituting,

ω = .a_r b_r b_r a_r ...

It follows that a_r, and thus B, appears with bounded gap. The case is similar if B lies entirely to the left of the decimal point. If, on the other hand, the word B overlaps the decimal point, observe that

ω = ...a_r b_r b_r a_r . a_r b_r b_r a_r ...

or

ω = ...b_r a_r a_r b_r . a_r b_r b_r a_r ...

depending on r. Either way, B is contained in either a_r.a_r or b_r.a_r, both of which appear in a_{r+2}, hence with bounded gap.

To complete the proof, all that remains is to show that ω is not periodic.


6. January 27 (Notes by SB)

Claim: The Prouhet-Thue-Morse (PTM) sequence

ω = ... .01101001 ...

is NOT periodic.

Proof. Suppose the PTM sequence is periodic. Then

ω = ...BBBB...

Assume, without loss of generality, that l(B) is ODD. (Grouping ω into 2^r-blocks a_r and b_r, where l(B)/2^r is an odd integer, again produces a PTM sequence on the symbols a_r and b_r.) Then there exists r > 0 such that 2^r ≡ 1 (mod l(B)): since 2, 2^2, 2^3, ... (mod l(B)) is an infinite list in the finite set {0, 1, ..., l(B) − 1}, we have 2^s ≡ 2^{s+r} (mod l(B)) for some s, r > 0; but 2^s is relatively prime to l(B), hence it has a multiplicative inverse (mod l(B)), and this implies 2^r ≡ 1 (mod l(B)). Now, looking at the PTM sequence, the one-sided sequences starting at the 1st and the 2^r-th places should be the same, since ω = ...BBB... and 2^r ≡ 1 (mod l(B)) imply σω = σ^{2^r}ω:

ω = . a_r b_r b_r a_r ... = . 01101001... 10010110... ,

where a_r occupies places 0 through 2^r − 1 and b_r begins at place 2^r. But starting at place 1 we read 11..., while starting at place 2^r we read 10...; however, we can see that this is not the case (11 ≠ 10). □

6.1. Applications of the PTM Sequence:

(1) Morse used this sequence to construct recurrent but nonperiodic geodesics on surfaces, using Hadamard's idea of coding geodesics by means of a partition on the surface. Special note: Hadamard published this result in 1898, making this year the 100th anniversary of his accomplishment.

(2) Axel Thue used this sequence in his work on questions involving logic and group theory.

(3) The PTM sequence makes it possible to find sequences without too many repetitions, an issue of importance in computer science. The PTM sequence is cube-free, i.e., it contains no BBB for any block B, and it can be used to make a square-free sequence (i.e., no BB) on three symbols.

(4) The Burnside Problem: decide whether the group G on r generators with relations g^n = identity for all g ∈ G can be infinite.

(5) K. Mahler's work on other problems in number theory.

(6) Prouhet's work, published in 1851.

6.2. Some Other Ways to Generate the PTM Sequence:

(1) For n ≥ 0, ω(n) = sum (mod 2) of the binary digits of n: if n = a_0 + a_1 · 2 + a_2 · 2^2 + ... + a_r · 2^r, then ω(n) ≡ a_0 + a_1 + ... + a_r (mod 2).

(2) Keane's block multiplication: Let 0′ = 1, 1′ = 0. If B = b_1 ... b_r is a block, put B × 0 = B and B × 1 = B′ = b′_1 ... b′_r. If C = c_1 ... c_n is another block, put B × C = (B × c_1)(B × c_2) ... (B × c_n).

Example 6.1. (1101)× (101) = 001011010010

Example 6.2. ω_0 ω_1 ω_2 ... = 0 × (01) × (01) × (01) × ... = 01101001 ...
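Both recipes, and the block multiplication, are easy to check in a few lines of Python (a sketch, not part of the notes):

    def zeta(w):
        """The substitution 0 -> 01, 1 -> 10."""
        return "".join("01" if c == "0" else "10" for c in w)

    w = "0"
    for _ in range(4):
        w = zeta(w)
    print(w)  # 0110100110010110, the first 16 symbols of the PTM sequence

    # method (1): parity of the sum of the binary digits of n
    print("".join(str(bin(n).count("1") % 2) for n in range(16)))  # same 16 symbols

    def times(B, C):
        """Keane's block multiplication B x C."""
        dual = "".join("1" if b == "0" else "0" for b in B)
        return "".join(B if c == "0" else dual for c in C)

    print(times("1101", "101"))  # 001011010010, as in Example 6.1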


Figure 16. After some time, the set U under the action of T will stay in contact with every sampling set V

Block multiplication leads to "generalized Morse sequences" (Keane '69) which are defined by other infinite block products like:

0× (001)× (001)× (001)× . . . = 001001110001001110110110001 . . .

Here the code can be interpreted as “starting with 001, take what is written, write it again,then write down its dual (e.g., 001→ 110).”

6.3. Generalizing Properties of the PTM Sequence. The PTM sequence is the starting point for a number of properties that can be generalized and then applied to a number of different settings.

6.3.1. Substitutions. For example, let τ(0) = 011 and let τ(1) = 00. Then

0 = 0

τ(0) = 011

τ^2(0) = 0110000

τ^3(0) = 0110000011011011011

...

This substitution gives, in the limit, a one-sided sequence. Complete it to the left with all 0's. The set of forward limit points under the shift is a closed invariant set called a substitution dynamical system. Among the first to study these were W. Gottschalk, J. Martin, and P. Michel. The lecture notes by M. Queffelec summarize much of what is known about these systems.
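The same few lines as for ζ iterate any substitution; a Python sketch (not from the notes):

    subst = {"0": "011", "1": "00"}   # the substitution tau above

    def apply_subst(w):
        return "".join(subst[c] for c in w)

    w = "0"
    for _ in range(3):
        w = apply_subst(w)
        print(w)  # 011, 0110000, 0110000011011011011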

6.3.2. Topological Strong Mixing. Although we have already discussed this property, we have not yet given a formal definition, so we give one now.

Definition 6.1. Let X be a compact metric space and T : X → X a homeomorphism. We say that (X, T) is topologically strongly mixing (t.s.m.) if given non-empty open sets U, V ⊂ X, there exists n such that if |k| ≥ n, then T^k U ∩ V ≠ ∅.

This property is illustrated by Figure 16. Topological strong mixing for subshifts is easily characterized.

Proposition 6.1. A subshift (X, σ) of some (Σa, σ) is topologically strongly mixing if and only if given any blocks B, C ∈ L(X), there exists n such that if |k| ≥ n, then there exists x ∈ X of the form x = ...B...C..., where the gap between B and C has length k. In other words, given enough time, we can get from B to C within the system X.

Note: The PTM sequence is NOT topologically strongly mixing.

Definition 6.2. We say that (X, T) is topologically weakly mixing (t.w.m.) in case the Cartesian square (X × X, T × T) is topologically ergodic. (Recall that (T × T)(x_1, x_2) = (Tx_1, Tx_2).)

Notice that topological strong mixing implies topological weak mixing. For given non-empty open A, B, U, V, we want to find n with (T × T)^n(A × B) ∩ (U × V) ≠ ∅. This is easily accomplished: find N_1 and N_2 such that T^n A ∩ U ≠ ∅ for all n ≥ N_1 and T^n B ∩ V ≠ ∅ for all n ≥ N_2; then any n ≥ max(N_1, N_2) works. This actually shows that (X, T) being t.s.m. implies (X × X, T × T) is t.s.m., which in turn implies that (X × X, T × T) is ergodic.

There were questions as to why one should consider ergodicity of the Cartesian square. An alternative characterization of t.w.m., for minimal systems, is that there are no nonconstant eigenfunctions: if f is continuous and f ∘ T = λf for some constant λ, then f is constant. The definition of measure-theoretic weak mixing in ergodic theory is that

(1/n) Σ_{k=0}^{n−1} |µ(T^k A ∩ B) − µ(A)µ(B)| → 0 as n → ∞

for all measurable sets A, B. See p. 10 of Petersen's Lectures on Ergodic Theory for this and other equivalent characterizations of measure-theoretic weak mixing.

Measure-theoretic strong mixing is defined by

µ(T^{−n}A ∩ B) → µ(A)µ(B)

for all measurable sets A, B. Thus, mixing properties concern asymptotic independence. Thinking in terms of joinings, we study how or whether the joinings ν_n (measures on X × X) defined by

ν_n(A × B) = µ(T^{−n}A ∩ B)

approach the independent joining

ν(A × B) = µ(A)µ(B).

See p. 4 of the Lectures for a discussion of joinings. Anyway, thinking along these lines makes one want to study properties of T × T on X × X, such as ergodicity.

6.3.3. Topological Entropy, h_top. For the general definition of topological entropy, see the reference works. For subshifts, the definitions are simpler.

Definition 6.3. Let (X, σ) be a subshift of some (Σa, σ). For each n = 1, 2, ..., let N_n(X) = card(L(X) ∩ A^n), i.e., the number of n-blocks in L(X). Then we define the topological entropy of X to be

h_top(X, σ) = lim_{n→∞} (1/n) log N_n(X).

Note that this limit exists because N_{n+m} ≤ N_n N_m, so that log N_n is subadditive and therefore lim (1/n) log N_n = inf{(1/n) log N_n} exists.


The question arises: how many allowable words of length n + m can we make by concatenating allowable words of length n with allowable words of length m? Here we can think of h_top(X, σ) intuitively as a measure of any of the following equivalent concepts: the exponential growth rate of the number of words in the language, the "concatenability index" of L(X), the "freedom of speech" permitted in the system, or the possibility of "saying something new." We have card(L(X) ∩ A^n) ∼ exp(n · h_top(X, σ)). For example, for the full shift, h_top(Σ2) = lim (1/n) log(2^n) = log 2. On the other hand, h_top(orbit closure of the PTM sequence) = 0. Therefore, there is no "freedom of speech" permitted in this system; in other words, after a while the possibility of seeing something new in the sequence is next to nothing. This makes sense for the PTM sequence, since once we've seen one of the basic blocks of length 2^r, we are locked into one of two possibilities for the next string of 2^r steps.
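As a numerical illustration (a Python sketch, not from the notes), one can count n-blocks and watch (1/n) log N_n settle down; alongside the full 2-shift, we use the golden mean shift (the standard SFT forbidding the word 11):

    from itertools import product
    from math import log

    def N(n, forbidden=()):
        """Number of allowed n-blocks over {0,1}."""
        return sum(1 for t in product("01", repeat=n)
                   if not any(f in "".join(t) for f in forbidden))

    for n in (4, 8, 12):
        print(n, round(log(N(n)) / n, 4), round(log(N(n, ("11",))) / n, 4))
    # full shift: exactly log 2 = 0.6931 for every n;
    # golden mean shift: decreases toward log((1 + 5 ** 0.5) / 2) = 0.4812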


7. January 29 (Notes by SB)

The PTM sequence is syndetically recurrent, but not periodic. In fact, Thue proved that it is cube-free, and Hedlund-Morse proved the even stronger non-periodicity condition that the PTM sequence does not contain BBb for any block B = b.... The fact that the PTM sequence exists and has this lack of periodicity has a useful application to avoiding infinite looping in iterative procedures. For example, certain chess rules consider a game a draw once a sequence of moves is repeated twice and begun for a third time. However, the cube-free property of the PTM sequence demonstrates that it is possible to have an infinite game, i.e., one which never ends, not even by a declared draw.

7.1. Invariant Measures.

Definition 7.1. A measure is a countably-additive, nonnegative (or sometimes signed) set function.

Definition 7.2. Let X be a compact metric space, T : X → X a homeomorphism. An invariant measure for (X, T) is a probability measure (i.e., µ(X) = 1) defined on the Borel sets B(X) of X (i.e., the smallest σ-algebra containing the open sets) such that

µ(T^{−1}A) = µ(A) for all A ∈ B(X).

Proposition 7.1. There is always at least one invariant measure on (X,T ).

Before we begin the proof of this proposition, we establish a few facts and definitions.

Definition 7.3. Cesàro operators: A_n = (1/n) Σ_{k=0}^{n−1} T^k.

Given a continuous map T : X → X, we have a map T : C(X) → C(X) such that (Tf)(x) = f(Tx), and its adjoint map T : C(X)^* → C(X)^*. Note that each of these maps is called T even though each acts on a different space. Here C(X)^* refers to the dual vector space (i.e., the vector space of all continuous linear maps C(X) → R) of the vector space C(X). By the Riesz Representation Theorem, C(X)^* = M(X), the space of signed Borel measures on X. If µ ∈ C(X)^* = M(X), then Tµ is defined by Tµ(f) = µ(Tf) = ∫ f ∘ T dµ for all f ∈ C(X). Note that M(X) is a complete metric space with the weak* topology, defined by µ_n → µ if and only if µ_n(f) → µ(f) for all f ∈ C(X).

Proof. We want to make invariant measures using the Cesàro operators. The set of probability measures is a compact set in M(X). Take any probability measure µ_0 on X, say µ_0 = δ_x for some x ∈ X: for any continuous function f, ∫ f dµ_0 = f(x) for all f ∈ C(X), or

µ_0(A) = 0 if x ∉ A, and 1 if x ∈ A.

Note that each A_n µ_0 is also a probability measure and that

A_n µ_0(f) = (1/n) Σ_{k=0}^{n−1} ∫ f ∘ T^k dµ_0 = ∫ (1/n) Σ_{k=0}^{n−1} f ∘ T^k dµ_0.

Let µ be a weak* limit point of {A_n µ_0}, say A_{n_j} µ_0 → µ. Then we can show that µ is invariant. Since we can approximate characteristic functions by continuous functions, to show that µ(A) = µ(T^{−1}A) for all A ∈ B(X) it suffices to show that ∫ f dµ = ∫ f ∘ T dµ for all f ∈ C(X). But

∫ f ∘ T dµ = µ(Tf) = lim_j (1/n_j) Σ_{k=0}^{n_j−1} µ_0(f ∘ T^{k+1}) = lim_j (1/n_j) Σ_{k=1}^{n_j} µ_0(f ∘ T^k), while

∫ f dµ = µ(f) = lim_j (1/n_j) Σ_{k=0}^{n_j−1} µ_0(f ∘ T^k),

so that |µ(Tf) − µ(f)| = lim_j (1/n_j) |µ_0(f ∘ T^{n_j}) − µ_0(f)| = 0, and hence µ(Tf) = µ(f). □

Remarks 7.1.

(1) It is the nearly abelian property of the acting group that gives invariant measures in this way. More specifically, intervals [0, n − 1] in N form a Følner sequence: a slight translation of one overlaps it heavily. See Figure 17.

Figure 17. Shifting intervals [0, n − 1] by small amounts causes heavy overlap.

(2) If µ is an invariant measure for (X, T), then (X, B(X), µ, T) is a measure-theoretic dynamical system, one of the fundamental objects of study in ergodic theory.

7.1.1. Some easy examples of invariant measures.

Example 7.1. Bernoulli measures on full shifts (Σa, σ), a ≥ 2.
Put any probability measure on the alphabet A = {0, 1, ..., a − 1}, i.e., choose weights {p_0, p_1, ..., p_{a−1}} for the symbols in A such that Σ p_j = 1. Let µ be the corresponding product measure on Σa. Then the measure of a cylinder set [B]_m = {x ∈ Σa : x_m x_{m+1} ··· x_{m+l(B)−1} = B} is µ[B]_m = p_{b_1} p_{b_2} ··· p_{b_r}, where B = b_1 b_2 ... b_r. On Σ2 look at B(1/3, 2/3), i.e., the measure with weights 1/3 for 0 and 2/3 for 1. Then µ[001] = (1/3)(1/3)(2/3). So these measures correspond to finite-state, independent, identically-distributed (stationary) stochastic processes (IID). By stationary, we mean that the measure is shift-invariant, i.e., that the probabilistic laws don't change with time. Independence, for example, in a coin-flipping experiment, means that one outcome does not affect any other one. This is not a property that holds for all types of repeated experiments.
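For instance, a few lines of Python (a sketch, not from the notes) for the B(1/3, 2/3) computation:

    p = {0: 1/3, 1: 2/3}   # Bernoulli weights B(1/3, 2/3) on Sigma_2

    def mu(B):
        """Measure of the cylinder set [B]_m (independent of m, by shift-invariance)."""
        out = 1.0
        for b in B:
            out *= p[b]
        return out

    print(mu([0, 0, 1]))  # (1/3)(1/3)(2/3) = 2/27, about 0.0741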

Figure 18. P(A ∩ B) = P(A)P(B), i.e., the probability that the cylinder sets appear in the places shown is equal to the product of the probabilities of the cylinder sets appearing on their own.

Figure 19. Here 0 = α, 1 = β, 2 = γ.

Figure 20. Here 0 = α, 1 = β, 2 = γ.

Example 7.2. Rotation on a compact abelian group.
Let G = {α, β, γ} = Z (mod 3) = {0, 1, 2}, and let Tg = g + 1 (mod 3). See Figure 19. We can think of this example in the Σ2 setting using a periodic sequence having period 3:

α = ...001001001...

β = ...01001001...

γ = ...1001001...;

then σα = β, σβ = γ, σγ = α.

Here there is just one invariant probability measure, the one that puts µ(α) = µ(β) = µ(γ) = 1/3. Define µ on Σ2 by µ = (1/3)δ_α + (1/3)δ_β + (1/3)δ_γ, so that for a subset A ⊂ X, µ(A) = (1/3) · (number of the points α, β, γ in A). See Figure 20.

Then µ extends to a σ-invariant measure on Σ2 by defining µ(Σ2 \ {α, β, γ}) = 0.


8. February 3 (Notes by SS)

8.1. More Examples of Invariant Measures.

Example 8.1. Let G be a compact group. Then there is a unique Borel probability measure µ on G which is invariant under translation by elements of the group:

µ(gA) = µ(A) = µ(Ag) for any Borel A ⊂ G and for any g ∈ G.

This measure is called (normalized) Haar measure. Note that if G is a topological group, then (g, h) ↦ gh^{−1} : G × G → G is continuous. If g is any fixed element of G and we define a group rotation T_g : G → G by T_g h = gh for any h ∈ G, then Haar measure µ is an invariant measure for T_g.

For example, if G = S^1 = the unit circle in C with multiplication, so that G ≅ [0, 1) with addition mod 1, then each map T_α : [0, 1) → [0, 1) given by T_α x = x + α (or T_α : S^1 → S^1 given by T_α z = e^{2πiα}z) preserves Haar measure = Lebesgue measure. If α is irrational, then ([0, 1), T_α) is minimal, i.e., every orbit is dense. Once one orbit is dense (i.e., topological ergodicity), then every orbit is dense, because in this case orbits are just translates of one another: {x + nα : n ∈ Z} = {y + nα : n ∈ Z} + (x − y). If α ∈ Q, then ([0, 1), T_α) is not minimal.

More generally, suppose G is a compact group for which there exists g ∈ G with {g^n : n ∈ Z} dense (the orbit of the identity is dense under T_g). Such a G is called monothetic and is necessarily abelian. For example, [0, 1) with addition mod 1 is monothetic, since {nα : n ∈ Z} is dense in [0, 1) for any α ∉ Q. If G is monothetic and g is a generator, then (G, T_g) is minimal and (normalized) Haar measure is the only T_g-invariant (Borel probability) measure on G. For, if µ is T_g-invariant, then it is T_{g^n}-invariant for any n, so it is T_h-invariant for any h ∈ G. Hence it is Haar measure. In Example 7.2, µ on G is the unique T-invariant measure; see Figure 19. Also, the two-torus T² ≅ [0, 1) × [0, 1) with coordinatewise addition mod 1 is monothetic, and so is T × ··· × T with coordinatewise addition mod 1.

Example 8.2. Let G be a compact group, µ Haar measure, and T : G → G a (continuous)endomorphism or automorphism. Then T preserves µ. For, if µ is translation-invariant, then µT−1

is also translation invariant and so must be Haar measure. Hence µT−1 = µ.For example, let T : [0, 1) → [0, 1) be Tx = 2x mod 1. Then T is not a translation, but T

preserves Lebesgue measure. Also

(S1, z 7→ z2, ν) ∼= ([0, 1), T,m)φ← (Σ+

2 , σ,B(12 ,

12)),

where m is Lebesgue measure on [0, 1) and ν is Lebesgue measure on S1. The map φ is one-to-one except on a countable set, where it is two-to-one. For, if x ∈ [0, 1), then x = .x1x2x3 · · · =∑∞

k=1 xk/2k, each xk = 0 or 1, and x corresponds to a point (x1x2x3 · · · ) in Σ+

2 . Then Tx = 2xmod 1 = .x2x3x4 · · · corresponds to a point σ(x1x2x3 · · · ) in Σ+

2 , and so T corresponds to the shiftσ on Σ+

2 . The expansion .x1x2x3 · · · of x ∈ [0, 1) is obtained by following the orbit x, 2x, 4x, · · ·(mod 1) of x and writing down 0 when the point is in [0, 1

2) and 1 when the point is in [12 , 1).

Similarly, for endomorphisms or automorphisms of a torus, e.g., T(xy

)=(

2 11 1

)(xy

)on [0, 1) × [0, 1)

(mod 1) preserves Haar measure = Lebesgue measure on T2.

Page 32: KARL PETERSEN

28 Karl Petersen

On (Σ2, σ), there are many σ-invariant measures; for example, each periodic orbit supports one.In fact, each subshift (X,σ) supports at least one, and there are lots of Bernoulli and Markovmeasures and so on.

8.2. Unique Ergodicity. Let (X,T ) be a compact topological dynamical system and I(X,T ) theset of T -invariant (Borel probability) measures on X. Then I(X,T ) is a compact, convex subsetof M(X) = C(X)∗ = vector space of all (signed) Borel measures on X with the weak∗ topology.For convexity, we can easily see that if µ, ν ∈ I(X,T ), 0 ≤ t ≤ 1, then tµ+ (1− t)ν ∈ I(X,T ).

The extreme points of I(X,T ) (i.e., the ones which cannot be written as tµ + (1 − t)ν forsome µ, ν ∈ I(X,T ), µ 6= ν and 0 < t < 1) are the ergodic measures for (X,T ) (i.e., the onesfor which (X,B(X), T, µ) is ergodic (measure-theoretically), in that every invariant set A (i.e.,µ(A 4 T−1A) = 0) has measure 0 or 1). The theorem of Krein-Milman says that there alwaysexists an extreme point. So, the set of ergodic measures is not empty. If (X,T ) has only oneinvariant (Borel probability) measure, then by the theorem of Krein-Milman, it is an extremepoint, hence ergodic.

Proposition 8.1. If (X,T ) has only one invariant (Borel probability) measure, then that measureis ergodic.

Definition 8.1. If (X,T ) has only one invariant, hence ergodic, measure, then we say that (X,T )is uniquely ergodic.

For example, [0, 1) with translation mod 1 by an irrational is uniquely ergodic. In Example 6.2,(G,T ) is uniquely ergodic. However, if G = {a, b, c} ∼= Z3 and T is defined by T (a) = a, T (b) = c,and T (c) = b, then T is not a group rotation, and it is not uniquely ergodic, since there are twoergodic measures. For, if µ is defined by µ({a}) = 1, µ({b}) = µ({c}) = 0, and ν is defined byν({a}) = 0, ν({b}) = ν({c}) = 1

2 , then they are ergodic, and every other invariant measure is aconvex combination tµ+(1−t)ν, 0 < t < 1, i.e., µ and ν are extreme points. Thus, I(X,T ) ∼= [0, 1].

Remark 8.1. (Choquet Simplex Theory) In a certain sense, every point of a compact convex set(in the right sort of space) is a “convex combination” (some kind of integral) of extreme points.

To find out which subshifts are uniquely ergodic, we need the following theorem.

Theorem 8.2. (Ergodic Theorem) Let (X,B, µ) be a probability space and T : X → X a measure-preserving transformation (i.e., T is 1-1 and onto up to sets of measure 0, TB = T−1B = Band µT−1 = µT = µ) and f an integrable function on X (i.e., f : X → R is measurable and∫X |f |dµ <∞). Then Anf(x) = 1

n

∑n−1k=0 f(T kx) converges as n→∞ for a.e. x ∈ X.

Page 33: KARL PETERSEN

Symbolic Dynamics 29

9. February 5 (Notes by SS)

9.1. Unique Ergodicity.

Remark 9.1. The description of the invariant and ergodic measures for a topological dynamicalsystem (X,T ), uniquely ergodic etc., based on the Ergodic Theorem and Riesz RepresentationTheorem (C(X)∗ = signed Borel measures on X) is due to Krylov and Bogolioubov. Also seeNemytskii and Stepanov, “Qualitative Theory of Differential Equations” (1960), or J. Oxtoby,“Ergodic sets” (1952), Bull. AMS.

Theorem 9.1. For a compact topological dynamical system (X,T ), the following three conditionsare equivalent.

(1) (X,T ) is uniquely ergodic.

(2) For all f ∈ C(X), Anf(x) = 1n

∑n−1k=0 f(T kx) converges as n → ∞ uniformly on X to a

constant (the integral of f with respect to the unique invariant measure).(3) For all f ∈ C(X), some subsequence of {Anf(x)} converges pointwise on X to a constant.

Proof. Recall that according to the Ergodic Theorem, if µ is an invariant measure on (X,T ) andf ∈ C(X) ⊂ L1(X,B(X), µ), then Anf(x) converges to some limit function f(x) a.e. dµ. If µ isergodic, then f(x) is the constant

∫X fdµ a.e..

(1) ⇒ (2) : Assume (2) does not hold. There is at least one invariant measure for (X,T ), call itµ0. Since (2) doesn’t hold, there exists f ∈ C(X) such that Anf(x) does not converge uniformly to∫X fdµ0. So, there exist δ > 0, xk ∈ X,nk ∈ N such that

|Ankf(xk)−

∫X fdµ0| ≥ δ for all k.

Then limk→∞Ankh(xk) exists for all h ∈ C(X). For, take a countable dense set {g1, g2, · · · } in

C(X). By passing to a subsequence of {nk}, we may assume that Ankgj(xk) → λj ∈ R as k → ∞

for all j. Then Ankh(xk) = Ank

gj(xk)+Ank(h−gj)(xk), and Ank

gj(xk)→ λj as k →∞; and also if‖h− gj‖∞ < ε, then |Ank

(h− gj)(xk)| < ε. Thus, if gjs → h in C(X), then Ankh(xk)→ lims→∞ λjs

as k →∞.Then, defining λ(h) = limk→∞Ank

h(xk) for h ∈ C(X) defines a positive normalized continuouslinear functional on C(X), i.e., h ≥ 0 implies λ(h) ≥ 0, λ(1) = 1, λ(ch) = cλ(h), and λ(h1 + h2) =λ(h1) + λ(h2). For continuity, note that if ‖h1 − h2‖∞ is small, then |λ(h1)− λ(h2)| is small.

Then by the Riesz Representation Theorem, there exists a unique Borel probability measure µon X such that λ(h) =

∫X hdµ for all h ∈ C(X). The measure µ is T -invariant, since as before,

λ(h ◦ T ) = λ(h) for all h ∈ C(X). But µ 6= µ0, since∫X fdµ = λ(f) = limk→∞Ank

f(xk)

where nk is now a subsequence of the original one, and so∣∣∫X fdµ−

∫X fdµ0

∣∣ ≥ δ.Hence (X,T ) is not uniquely ergodic.

(2) ⇒ (3) is clear.

Page 34: KARL PETERSEN

30 Karl Petersen

(3)⇒ (1) : Suppose µ, ν are two different ergodic invariant measures for (X,T ). Choose f ∈ C(X)with

∫X fdµ 6=

∫X fdν. By the Ergodic Theorem, Anf(x)→

∫X fdµ for µ-a.e. x ∈ X, say x ∈ Xµ,

and Anf(x) →∫X fdν for ν-a.e. x ∈ X, say x ∈ Xν . So, Xµ ∩Xν = ∅, and µ(Xµ) = 1 = µ(X),

and ν(Xν) = 1 = ν(X). Then it is clearly impossible that any subsequence of {Anf(x)} convergespointwise on X to a constant (since Xµ 6= ∅ and Xν 6= ∅). �

Remark 9.2. It is enough to check condition (2) for f in a countable dense set in C(X).

9.1.1. Interpretation of the unique ergodicity criterion in (2) for subshifts.

Theorem 9.2. A topologically transitive subshift (O(x), σ) is uniquely ergodic if and only if everyblock that appears in x appears with a uniform limiting frequency: i.e., given any block B thatappears in x, there is λ(B) such that given ε > 0, there is L > 0 such that if C is any block thatappears in x with l(C) ≥ L, then ∣∣ν(B,C)

l(C) − λ(B)∣∣ < ε,

where ν(B,C) = the number of times that B appears in C, i.e., card{j : 1 ≤ j ≤ l(C) − l(B),cjcj+1 · · · cj+l(B)−1 = B}.

Proof. Condition (2) of the previous theorem for the characteristic function χ[B]m of a basic cylinder

set says that Anχ[B]m converges to a constant λ(B) uniformly on O(x). Note that

Anχ[B]0(x) = 1n

∑n−1k=0 χ[B]0(σkx) = 1

n × (the number of B’s seen in x0 · · ·xn+l(B)−1).

So, to say Anχ[B]0 converges to λ(B) uniformly on O(x) says that if C is any long enough block in x,

then∣∣ν(B,C)/l(C)−λ(B)

∣∣ is small. Now, linear combinations of such characteristic functions withrational coefficients are dense in C(X), so if (2) holds for the χ[B]m , it holds for all f ∈ C(X). �

9.1.2. Connection between minimality and unique ergodicity.

Example 9.1. A non-minimal uniquely ergodic orbit closure in (Σ2, σ)

Let x = · · · 000.100 · · · and so O(x) = O(x) ∪ {· · · 000 · · · }. Then the point z = · · · 000 · · ·fails to have a dense orbit. So (O(x), σ) is not minimal. But δz is the unique invariant measure;

δz([B]m) = 1 if B = 0l(B), and 0 if B = 0 · · · 010 · · · 0. Thus O(x) is uniquely ergodic. Also, it istopologically transitive.

For a uniquely ergodic system that is not topologically transitive, let y = · · · 000.1100 · · · andX = O(y) ∪ O(x) (with x = · · · 000.100 · · · as before). Then (X,σ) is uniquely ergodic, but nottopologically transitive.

Examples of minimal systems that are not uniquely ergodic were given first by Markov, then byOxtoby.

Page 35: KARL PETERSEN

Symbolic Dynamics 31

10. February 10 (Notes by KN)

In the following sections we will mention several more properties of topological dynamical systemsand consider for which subshifts these properties hold.

10.1. Expansive Systems.

Definition 10.1. A topological dynamical system is called expansive if there is β > 0 such thatwhenever x, y ∈ X with x 6= y then there is n ∈ Z such that d(Tnx, Tny) ≥ β. We call β theexpansive constant.

Notice that every subshift is expansive. This is because if x 6= y, then there is some first placewhere they differ, say xn 6= yn, and then d(σnx, σny) = 1.

Theorem 10.1. (Reddy) If (X,T ) is an expansive topological dynamical system, then (X,T ) is afactor of some subshift in a space of sequences on a finite alphabet. In other words, there exists asubshift on a finite alphabet, (Σ, σ) ⊂ (Σa, σ) and a factor map φ : Σ → X so that the followingdiagram commutes:

Σσ−−−−→ Σ

φ

y yφX

T−−−−→ XMoreover, if (X,T ) is expansive and 0-dimensional (totally disconnected), then (X,T ) is topologi-cally conjugate to a subshift on a finite alphabet.

Theorem 10.1 implies that any compact expansive topological dynamical system has finite en-tropy.

10.2. Equicontinuous Systems.

Definition 10.2. A topological dynamical system (X,T ) is equicontinuous if {Tn : n ∈ Z} isequicontinuous, i.e., given ε > 0 there is δ > 0 such that if x, y ∈ X and d(x, y) < δ, thend(Tnx, Tny) < ε for all n ∈ Z.

10.2.1. Some examples of equicontinuous systems.

(1) If T : X → X is an isometry (i.e. preserves distance), then (X,T ) is equicontinuous.(2) If (X,T ) is a rotation on a compact (abelian) group, then it is equicontinuous because

such a group always has an equivalent invariant metric: d(h1, h2) = d(gh1, gh2) for allg, h1, h2 ∈ G. (See Kelley’s General Topology book for more).

(3) x→ x+ α (mod 1) on [0, 1).

10.2.2. Equicontinuous subshifts. Which subshifts are equicontinuous? As we just saw, subshiftsare expansive and it is not easy to be both expansive and equicontinuous. We need to find a δso that whenever d(x, y) < δ, then d(Tnx, Tny) < ε for all n. Suppose we are given ε = β, theexpansive constant. Then we need to be sure there are no points x 6= y with d(x, y) < δ. Thiscondition requires a finite set of points. An example of an expansive equicontinuous subshift is(O(x), σ), where x = . . . 101010 . . .. Here, β = 1. Given any ε, we may take δ = 1/2. Thenwhenever d(x, y) < δ = 1/2, we have x = y, so equicontinuity is clear. Equicontinuity also followsbecause this system is a rotation on a compact group.

Page 36: KARL PETERSEN

32 Karl Petersen

10.3. Distal Systems.

Definition 10.3. (X,T ) is distal if there are no proximal pairs, i.e. pairs of points x 6= y, for whichthere exists nk →∞ or (−∞) so that d(Tnkx, Tnky)→ 0.

Again, we find that the only distal subshifts are the finite subshifts. Gottschalk and Hedlundactually show that if (X,T ) is compact expansive and X is dense in itself (X has no limit points),then there exists an asymptotic pair {x,y}, that is x 6= y with d(Tnx, Tny)→ 0.

10.4. The structure of the Morse System. Recall the PTM sequence: Let ω = . . . .01101001 . . .and M = O(ω). Then (M,σ) is the Morse minimal set.

Theorem 10.2. (M,σ) is (measure-theoretically) a skew product over a group rotation. More

precisely, there are a compact abelian group G, an element θ ∈ G with {nθ : n ∈ Z}, a uniquelyergodic subshift (X,σ) ⊂ (Σ2, σ) which is isomorphic to (G,Rθ) (with Haar measure), and a conti-nuous function f : X → {−1, 1} such that (M,σ) is topologically conjugate to the dynamical system

(X × {−1, 1}, T ), with T defined by T (x, ξ) = (σx, f(x)ξ).

Theorem 10.2 implies that the Morse minimal set is nearly a group rotation: (M,σ) is a 2-1 exten-sion of a group rotation. (S. Kakutani, 5th Berkeley Symposium, 1969; W.A. Veech, TAMS,1969).From this structure follow easily properties of the Morse minimal set such as unique ergodicity,entropy 0, the spectrum, etc. A similar analysis is possible for more generalized Morse sequences(Goodson, Lemancyk, etc.).

10.4.1. The group rotation. The underlying group rotation of (M,σ), denoted by (G,Rgo), is calledthe odometer or von Neumann-Kakutani adding machine.

The odometer is defined on Σ+2 = {0, 1}N. Let T (x1x2x3 . . . ) = (x1x2x3 . . . ) + (1000 . . . ) with

carry to to the right. For example,

T (001100 . . . ) = (101100 . . . )

T 2(001100 . . . ) = (011100 . . . )

T 3(001100 . . . ) = (111100 . . . )

T 4(001100 . . . ) = (000010 . . . )

An alternate way to state the rule:

if x = x1x2x3 . . . and n = the first k such that xk = 0, then Tx = 0 . . . 0︸ ︷︷ ︸n−1

1︸︷︷︸nth

xn+1xn+2 . . .

Notice that the orbit of go = .1000 . . . is dense in Σ2+:

Tgo = 010000 . . .

T 2go = 110000 . . .

T 3go = 001000 . . .

T 4go = 101000 . . .

T 5go = 011000 . . .

T 6go = 111000 . . .

T 7go = 000100 . . .

Page 37: KARL PETERSEN

Symbolic Dynamics 33

In general,

Tngo = α0α1 . . . αr where

n = α0 + α12 + . . .+ αr2r. The initial block of Tngo is the binary representation of n+ 1. Notice

that T is an isometry, i.e. d(Tx, Ty) = d(x, y). Hence, (Σ+2 , T ) is minimal, since topologically

ergodic isometries are minimal. To see this, take go = .1000 . . . and g, h ∈ Σ+2 . We want some n so

that Tng is arbitrarily close to h, say within some ε. We can get this because T rgo ≈ h for some rand T sgo ≈ g for some s. Thus, n = r − s gives Tng = T rT−sg ≈ h.

This leads us to a result in the more general case.

Theorem 10.3. Every topologically ergodic isometry is (topologically conjugate to) a rotation bya generator on a compact monothetic group.

The odometer may also be represented as a subshift (isomorphic up to countably many points).First we form a Toeplitz sequence, τ , as follows: First fill τ in with 0’s followed by blanks. Thenfill in the remaining blanks with 1’s followed by blanks. Repeat this alternating process until allblanks are filled in:

τ = . . . . . . 0 0 0 0 0 0 0 0

. . . . . . 1 1 1 1

. . . . . . 0 0 0

Let L be the language consisting of all blocks in the 1-sided sequence, τ , and let X be the 2-sidedand X+ be the 1-sided subshift determined by L. The space (X,σ) is the Toeplitz subshift. (K.Jacobs & M. Keane, ZW, 1969). In fact, there exists a whole class of these sequences. For examplewe could start with any periodic sequence on 0 follow it by another one on 1 , etc. Thesequences may vary each time. (J. Neveu, ZW, 1969; E. Eberlein, Diplomarbeit, 1970; S. Williams,Thesis, 1981 and ZW, 1984; Downarowicz; Lemanczyk, Iwanik, . . .).

Page 38: KARL PETERSEN

34 Karl Petersen

11. February 12 (Notes by KN)

We want to discuss the relationship between the odometer and the Toeplitz subshift, and thenuse these systems to describe the structure of the Morse subshift.

11.1. The Odometer. Recall the odometer or von Neumann-Kakutani adding machine is a rota-tion on a compact monothetic group, T : Σ+

2 → Σ+2 , defined as follows:

T (x1x2x3 . . . ) = (x1x2x3 . . . ) + (1000 . . . )

with carry to to the right. For example,

T (001100 . . . ) = (101100 . . . )

T 2(001100 . . . ) = (011100 . . . )

T 3(001100 . . . ) = (111100 . . . )

T 4(001100 . . . ) = (000010 . . . )

There are several equivalent descriptions of the odometer:

(1) The 2-adic integers are sequences y = .y1y2y3 . . . with yn ∈ {0, 1, . . . , 2n−1} and yn+1 ≡ yn(mod 2n). Define coordinatewise addition (mod 2n in the nth coordinate) and give it theproduct topology.

(2) We can think of the odometer in terms of formal series:

y =∞∑j=0

xj2j , where xj = 0 or 1 and yn =

n−1∑j=0

xj2j .

In our analysis of the odometer we will use the definition, that is the description in termsof sequences, x = x0x1x2 . . . in Σ+

2 . Notice that coordinatewise addition (mod 2n) of they’s corresponds to addition (mod 2) of the x’s with carry to the right.G = Σ+

2 with this group operation is a compact, abelian, and monothetic group. T :

G→ G is given by Tx = x+ θ, and O(θ) is dense. Thus, (G,T ) is a minimal and uniquelyergodic topological dynamical system.

(3) Adic realization of the odometer (Vershik).We regard a point in Σ+

2 as an infinite path in the graph in Figure 21(a). For example,the sequences x = 11101 . . . and y = 00011 . . . are shown in Figure 21(b). We say that twopoints are comparable if they eventually coincide. That is, x and y are comparable if thereexists a (smallest) k such that xn = yn for all n > k. Then we define a partial ordering <by x < y if xk < yk. In Figure 21(b), for example, x < y.T : Σ+

2 → Σ+2 is defined by Tx = the smallest y > x, if there is one, and T (111 . . .) =

000 . . .. We may generalize this adic representation of the odometer in many ways. Forexample, we can consider odometers with different size wheels. Then the space is H =∏∞n=0{0, 1, . . . , qn − 1}, where qn ≥ 1, instead of just Σ+

2 =∏∞n=0{0, 1}. as in the Figure

22(a). Another variation would be to restrict the paths on the graph, as in Figure 22(b).The advantage of the adic representation is the concrete combinatorial representation of thesystem, which sometimes provides better and perhaps easier ways of analyzing propertiesof the system. As we may see later, the system represented in Figure 22(b) is isomorphicto the Fibonacci substitution system, where the substitution ζ is defined by ζ(1) = 0 andζ(0) = 01.

Page 39: KARL PETERSEN

Symbolic Dynamics 35

0

0

0

0

0

0

1

1

1

1

1

1 1

0

0

1

0

10

0

0

1

1

1

(a) (b)

xy

Figure 21. (a) The adic graph, (b) The paths x = 11101 . . . and y = 00011 . . ..

1 2 3

2

0 1

0 1

10

0

0

0

0

0

0

0

1

1

1

1

1

1

(b)(a)

Figure 22. (a) Adic graph for H, (b) Adic graph with restricted paths.

11.1.1. The spectrum of the odometer in Σ+2 . The ergodic properties of Σ+

2 are fairly well under-stood since it is a minimal uniquely ergodic group rotation. It is known to have a purely discretespectrum, that is, its eigenfunctions span L2. In fact, its eigenfunctions are the continuous groupcharacters (a character is a continuous homomorphism, φ : G→ {z ∈ C : |z| = 1}). The dual groupor character group for Σ+

2 is the group of dyadic rationals in [0, 1) ≈ S1, with the following duality:

(x, x) = e(2πi)x∑∞

j=0 εj2j , where x =p

2N.

Page 40: KARL PETERSEN

36 Karl Petersen

Thus if G = Σ2+ and m is the Haar measure, then the functions, f ∈ L2(G,m) and (λ ∈ C),

such that f(Tx) = λf(x) m-a.e. are just the functions f = x and λ = λx = e2πix for some x:

f(Tx) = x(Tx) = x(x+ θ)

= x(x)x(θ)

= x(θ)︸︷︷︸λ

x(x)︸︷︷︸f(x)

,

and x(θ) = (θ, x) = e2πix.

Example 11.1. Let G = S1 and Tz = e2πiαz. Then G = Z with the duality (n, z) = zn. On [0, 1)this is just Tx = x+ α mod 1 with (n, x) = e2πinx. So the eigenfunctions are φn(x) = e2πinx witheigenvalues λn = e2πinα.

11.2. Relationship between the Toeplitz subshift and orbit closure of the odometer.The following theorem states the relationship between the odometer and the Toeplitz sequence.The connection is a near isomorphism between a minimal subshift and a compact group rotation,which allows us to ask and answer many more questions about the structure and properties of eachsystem.

Theorem 11.1. (Neveu, ZW 1969; E. Eberlein, Thesis) There is a continuous onto map fromthe (1-sided) Toeplitz subshift, (X+, σ) ⊂ (Σ+

2 , σ) to the odometer (G,T ) = (Σ+2 , T ), which is 1-1

except on a countable set of points on which it is 2-1. Consequently, the Toeplitz subshift is minimaland uniquely ergodic and has purely discrete spectrum.

Proof. Define “not quite a mapping” ψ : G → Σ+2 as follows: Let x = x1x2x3 . . . ∈ G = Σ+

2 . Wewill use this as a program to tell us how to get a Toeplitz-type sequence. Put

ψx =

{0a00a10a2 . . . if x1 = 0,

a00a10a20 . . . if x1 = 1,

where

a0a1a2 . . . =

{1b01b11b2 . . . if x2 = 0 ,

b01b11b21 . . . if x2 = 1,

and continue in this manner to define ψx.This does not always give us a point, because it may happen that just one blank does not get

filled in. However, if x has infinitely many 0’s in it, then ψ is defined, since each 0 tells you to fillin the first blank from the left.

Let Go = {x : there is K such that xk = 1 for all k ≥ K}. If x ∈ Go, then just one place in ψxis left undetermined. Fill this one place with both a 0 and 1, so that ψx = {u, u′} consists of twopoints

We will continue the proof next time.�

Page 41: KARL PETERSEN

Symbolic Dynamics 37

12. February 17 (Notes by RP)

12.1. Toeplitz System. Recall the (one-sided) Toeplitz sequence ω. Let X+ ⊂ Σ+2 denote the

orbit closure of ω. Let G be Σ+2 with group operation addition mod 2 with carry to the right. Define

T : G → G by Tx = x + θ, where θ = 1000 . . . . We defined “not quite a mapping” ψ : G → X+

last time. If x /∈ G0 = {y ∈ Σ+2 : yk = 0 for only finitely many k}, then ψx ∈ Σ+

2 is defined. Ifx ∈ G0, exactly one entry stays unfilled in ψx; fill it with both 0 and 1, so ψx = {u, u′} ⊂ Σ+

2 (twopoints). Notice that ψ(000 . . . ) = ω.ψ is “almost 1:1”: Suppose ψx and ψy intersect; we claim that then x1 = y1. If x1 6= y1, then

ψx = 0a00a10 . . .

ψy = a′00a′10a′2 . . . .

Since some ai or a′i is 1, ψx 6= ψy. Now induct, starting with a = a0a1 . . . to get x2 = y2, etc.This allows us to define φ = ψ−1 : ψG → G. It also shows that this φ is continuous: if ψx

and ψy agree on a long initial block, then so do x and y. Writing ψG = ψG0 ∪ ψ(G \ G0) andG = G0 ∪ (G \G0), we have that φ : ψG0 → G0 is 2:1 and φ : ψ(G \G0)→ G \G0 is 1:1.

We claim that Tφ = φσ. It suffices to show that ψT = σψ. For example, take

x = 111000 . . . .

Then

Tx = 000100 . . . ,

and

ψTx = 010001000101 . . . .

On the other hand,

ψx = 1010001000101 . . . ,

and

σψx = 010001000101 · · · = ψTx.

Note also that equality holds when x = 111 . . . , the sequence of all ones. In that case, we haveTx = 000 . . . (all zeros), and ψTx = ω. On the other hand, ψx = zω, where z ∈ {0, 1}. Soσψx = ω as well:

(0 or 1)|ω σ−−−−→ ωxψ xψx = 111 . . .

T−−−−→ Tx = 000 . . .

Note that φψ = idG and ψφ =

{id on ψ(G \G0),

two points on ψG0.

If u = ψx ∈ ψG, then φσu = Tx. This action-commuting property is clarified further in 12.1,below.

Now we show that ψG is closed. Let u ∈ ψG ⊂ Σ+2 . Say vn ∈ ψxn, vn → u ∈ Σ+

2 . Bycompactness, we may assume xn → x. We claim that u ∈ ψx (or = ψx). If xn and x agree ona long initial block, then so do ψxn and ψx, except possibly for a single blank, which must occurat the same (say j’th) place. If x /∈ G0, there is no such blank. If x ∈ G0, take (ψx)j = uj , since(vn)j = uj .

Page 42: KARL PETERSEN

38 Karl Petersen

Now σψG = ψTG = ψG, so ψG is a subshift. The point ω is in X+ = ψG, so X+ ⊃ O+(ω). We

claim that O+(ω) is dense in ψG, so X+ = ψG = O+(ω). Let u ∈ ψG, say u ∈ ψx. Take nk withTnkθ = nkθ → x in G (odometer (G,T ) is minimal). Note that ψTnkθ = σnkψθ = σnkω. Hence bycontinuity of ψ, nkθ → x implies that ψσnkω → {u, ?}:

σnkω −−−−→ {u, ?}xψ xψnkθ −−−−→ x

It is possible to choose nk so that any residual blank in u is filled in correctly–see 12.1, below.Consequently, (X+, σ) is minimal and uniquely ergodic. m = Haar measure on G is the uniqueinvariant measure for (G,T ). Define µ on X by µ(A) = m(φA).

ψG0 ∪ ψ(G \G0) µ, ν

measure 0 ↓ l≈ lG0 ∪ (G \G0) m

If ν on X+ were also σ-invariant, then φν = λ defined on G by λ(B) = ν(φ−1B) is T -invariant,hence = m (because of unique ergodicity of (G,T )). In particular, ν(ψG0) = 0. So µ = ν.

Remark 12.1. It’s not hard to see directly, using our criteria for subshifts, that (O+(ω), σ) isminimal and uniquely ergodic.

Every block B that appears in ω appears not only with bounded gap, but along (at least) anarithmetic sequence.

ω = 01 0001︸︷︷︸B

0 01 0001︸︷︷︸B

. . .

Since ω is the limit of periodic sequences on {0, 1, }, ω is regularly almost periodic in this sense.

Page 43: KARL PETERSEN

Symbolic Dynamics 39

0 1

00 11

0 1

1001

Figure 23. Cutting and Stacking the Unit Interval

0 10

1

1/2

1/2

Figure 24. Graph of the Cutting and Stacking Function

12.2. The Odometer as a Map on the Interval–Cutting and Stacking. Let x ∈ [0, 1) withdyadic expansion x1x2 . . . . Then

Tx = 000 . . . 01xk+1xk+2 . . . ,

where k = min{n : xn = 1}, i.e. k is the first place a 1 appears.Figure 24 shows the graph of the cutting and stacking function.Figure 25 illustrates the first three stages of cutting and stacking.Lebesgue measure is preserved. Lebesgue measure on [0, 1) corresponds to Bernoulli measure

B(12 ,

12) on Σ+

2 , the set of dyadic expansions.

Page 44: KARL PETERSEN

40 Karl Petersen

1

1/21/4

defined

1/2 1

Stage 1.

Stage 2.

Stage 3.

0 1

undefined

3/4

undefined

defined

defined

1/2 3/4

0 1/4

undefined

defined

0 1/2

Figure 25. Stages of Cutting and Stacking

Page 45: KARL PETERSEN

Symbolic Dynamics 41

13. February 19 (Notes by RP)

Recall the odometer (G,T ) and the construction and properties (2:1, 1:1, continuity, etc.) of ψand φ. From now on, denote the Toeplitz sequence by s. Then φ(000 . . . ) = s.

13.1. Action-Commuting Property. We show that if u ∈ ψx, then σu ∈ ψTx, where x ∈ G.(If x = 111 . . . , then ψx = zs, where z ∈ {0, 1}, so σu = s = ψ(00 . . . ) = ψ(Tx).) Say

x = 11 . . . 1︸ ︷︷ ︸k−1

0xk+1xk+2 . . . ,

so thatTx = 00 . . . 0︸ ︷︷ ︸

k−1

1xk+1xk+2 . . . .

Now x prescribes that the first place in ψx is a blank for the first k − 1 iterations and then getsfilled in with 0 or 1, depending on the parity of k. The next blank place in ψx remains blank at thisstage. Now consider ψTx. If after the first k steps we insert either a 0 or 1 in front of the sequence(whichever we used in step k), it would be as if for the first k − 1 steps we skipped it and startedwith it on step k, i.e., as if we had taken ψx. Hence at this stage σ−1ψTx = ψx, i.e., ψTx = σψx.Then afterward Tx and x coincide, hence ψTx = σψx, and therefore also φσu = Tx = Tφu (since

φψ =id and φu = x). We have shown that φσ = Tφ .

We claim that ψG = O+(s). s ∈ ψG, so from above σks ∈ ψG for all k ≥ 0, i.e. ψG ⊃ O+(s).

Since ψG is closed, ψG ⊃ O+(s). We want to show that O+(s) is dense in ψG. Let u ∈ ψG,u = u1u2 . . . . Find n with ψ(nθ) agreeing with u on an arbitrarily long initial block. Recall thatθ = 100 · · · ∈ G is a generator, and

nθ = ε0ε1ε2 . . . εr00 . . . ,

wheren = ε0 + ε1 · 2 + ε2 · 22 + · · ·+ εr · 2r.

Thus every block is an initial block of nθ for some n. Say in building up u as ψ(some x), we haveu = u1 . . . . . . um, a block on {0, 1, }. We can get nθ to agree with a long initial block of x. Thenψ(nθ) = u1 . . . . . . um (the same block on {0, 1, }). Now u = u1 . . . λ . . . um . . . , where λ ∈ {0, 1}.Say nθ = ε0ε1ε2 . . . εr00 . . . (= x1 . . . xr+1 . . . ). Append either 0 or 10 to get λ filled in properly.This completes the proof of the following theorem.

Theorem 13.1. φ : (O+(s), σ) = (X+, σ) → (G,T ) is 1:1 except on a countable set, where it is2:1. It is continuous and satisfies φσ = σT .

13.2. Odometer as Cutting and Stacking. Recall the cutting and stacking approach discussedlast time. For x ∈ G = Σ+

2 , put ρ(x) = min{n : xn = 0} + 1 mod 2, i.e. ρ(x) = 1−(parity of thefirst place in x where you see a zero), and ρ(111 . . . ) = 1. Then (ρ(Tnx)) = ψ(x) for x /∈ G0. Soas we follow the orbit of 000 . . . under T and write down the values of ρ, we build the Toeplitzsequence s. We are coding a transformation by a partition.

Definition 13.1 (The two-sided Toeplitz system). In the preceding construction, we could just aswell have defined ψ : G→ Σ2 instead of just ψ : G→ Σ+

2 :

ψx =

{. . . 0a−20a−10a00a1 . . . if x1 = 0

. . . a−20a−10a00a10 . . . if x1 = 1,

Page 46: KARL PETERSEN

42 Karl Petersen

etc. (Any “unfilled blank” is to the right (left) of the central position if x has finitely many 0’s(1’s).)

The properties established above still hold, including density of the orbit of each of the twopoints s1 and s2 in ψ(000 . . . ). In fact, already σ : X+ → X+ is essentially invertible: one cancheck that σ−1{u} is a single point except when u = s, in which case σ−1{s} = {0s, 1s} (because

T−1(000 . . . ) = −θ = 1111 . . . ). Then X = ψG = O(s1) = O(s2) is again a closed subshift whichis minimal and uniquely ergodic.

Theorem 13.2 (replaces Theorem 10.2). Let (M,σ) be the two-sided Morse minimal system ⊂{−1, 1}Z, and (X,σ) the two-sided (minimal, uniquely ergodic) Toeplitz system. Define f : X →{−1, 1} by f(u) = (−1)u0 for all u ∈ X. Define T : X × {−1, 1} → X × {−1, 1} by T (u, ξ) =

(σu, f(u)ξ), a skew product. Then (M,σ) and (X × {−1, 1}, T ) are topologically conjugate.

Remark 13.1. Note that if τ : G→ N is defined by

τ(x1x2 . . . ) = min{n : xn = 0},i.e.

τ(x1x2 . . . ) = (first index k where xk = 0),

andτ(111 . . . ) = 1,

then (−1)τ(x)+1 = f(ψx) for x /∈ G0.

Remark 13.2. Note also that, if we define the cocycle f(u, n) by

f(u, n) =

1 for n = 0

f(u)f(σu) · · · f(σn−1u) for n > 0

f(σ−1u) · · · f(σnu) for n < 0

,

then Tn(u, ξ) = (σnu, f(u, n)ξ).

Remark 13.3. Consider the Morse sequence

ω = . . . 01101001100101101001011001101001 . . . .

Look at the cellular automaton η(x)n = xn + xn+1 + 1. Then

η(ω) = . . . 010001010100010 0 · · · = s,

the Toeplitz sequence. This is easily seen by grouping ω first into a sequence on the 2-blocks 01and 10, each of which is sent to 0 by η, then into the 4-blocks 0110 and 1001, etc.

Proof of Theorem 13.2. Let X = X × {−1, 1}. Define π : X → {−1, 1}Z by π(u, ξ)n = f(u, n)ξ for

all n ∈ Z. (We code the orbit of a point in X by the sequence of ±1’s in its second coordinate.) Itwill turn out that π(s1, 1)n = (−1)ωn for n ≥ 0 (and of course similarly for s2–recall that both s1

and s2 have s as their right half). �

Page 47: KARL PETERSEN

Symbolic Dynamics 43

14. February 24 (Notes by JF)

We recall that the Morse minimal set M is defined by M = O(ω) = {σnω : n ∈ Z}, and wecontinue to let X be the orbit closure of the two-sided Toeplitz sequence. Then we have thefollowing theorem, which was stated last time.

Theorem 14.1. (M,σ) is topologically conjugate to (X, T ), where X = X × {1,−1}, T (u, ξ) =(σu, f(u)ξ), and f(u) = (−1)u0.

Before proving the theorem, we need to make a few comments and introduce additional notation.For x ∈ G = Σ2

+ with x 6= 111 . . ., we define τ(x) to be the first positive n such that xn = 0, wherex = x1x2x3 . . .. When x = 111 . . ., we set τ(x) = 1. We claim that τ defined in this way satisfiesthe equation

(−1)τ(x)+1 = f(ψ(x)) if x 6= 111 . . . .

In fact, as long as there are 1’s in x, the right half of ψ(x) is

0 0 0 0 0 0 . . . (first step)

010 010 010 . . . (second step)

0100010 010 . . . (third step)

...

and the initial blank will be filled in with 1 + n (mod 2), where n is the smallest natural numbersuch that xn = 0. Hence the claim follows.

We recall also that if n has a diadic expansion given by

n = ε0 + ε121 + ε222 + . . . ,

then nθ = ε0ε1ε2 . . .. We can now proceed with the proof of the theorem.

Proof. Define a map π : X → {−1, 1}Z by π(u, ξ)n = f(u, n)ξ, where f(u, 0) = 1 and f(u, n) isdefined for n 6= 0 by

f(u, n) =

{f(u)f(σu)f(σ2u) . . . f(σn−1u) for n ≥ 1

f(σ−1u)f(σ−2u) . . . f(σnu) for n ≤ −1

This f(u, n) is called a multiplicative cocycle. Then

π(u, ξ) = . . . f(σ−1u)ξ, ξ︸︷︷︸0’th place

, f(u)f(σu)ξ, . . .

[There was a question about why the term “cocycle” is used here. By way of explanation, considerthe following situation: Let T : X → X and f : X → C. Then write

Snf(x) =

∑n−1

k=0 f(T kx) if n ≥ 1

0 if n = 0∑−1k=n f(T kx) if n ≤ −1

Page 48: KARL PETERSEN

44 Karl Petersen

It is then easy to see that Sn+mf(x) = Snf(x) + Smf(Tnx), and if we let F (x, n) = Snf(x), wehave that F (x, n+m) = F (x, n)+F (Tnx,m), an additive cocycle equation. The terminology comesfrom the cohomology theory of groups, where one finds analogous equations. A skew product suchas T will in general have a cocycle such as f(u, n) appearing in the second coordinate when it israised to higher powers.] We can now see that the entries in π(u, ξ) are just the second coordinates

of the expression for Tn(u, ξ).

We now show that π is continuous. If (u, ξ) and (v, ζ) are close in X, then ξ = ζ and u and vare close in the two-sided shift. Hence u and v agree on a long central block. Specifically, thereexists J ∈ N such that (−1)uj = (−1)vj for |j| ≤ J . But then π(u, ξ)n = π(v, ζ)n for |n| ≤ J , whichmeans that π(u, ξ) and π(v, ζ) are close. This shows that π is continuous.

Notice also that π is one-to-one. If we know π(u, ξ), then we know ξ, since π(u, ξ)0 = ξ. Butthen once we know ξ, we can determine the value of f(u) and hence of u0. Continuing in thismanner we can determine un for all n.

We now make the claim that if s is the one-sided Toeplitz sequence and n ≥ 0, then π(. . . .s, 1)n =(−1)ωn , where ωn is the n’th entry in the Morse sequence. Recall that ωn =

∑∞i=0 εi(n) (mod 2),

where n has a diadic expansion given by n =∑r

i=0 εi(n)2i and each εi(n) is either 0 or 1. Supposethat n has a diadic expansion such that εi(n) = 1 for all i such that |i| < p and εp(n) = 0. Thennθ = 111 . . . 1 0︸︷︷︸

p’th place

. . . and τ(nθ) = p. Hence f(σn(. . . s)) = (−1)p+1.

Note now that if n has the diadic expansion we described earlier, then

n+ 1 = 0 + 0 · 2 + 0 · 22 + · · ·+ 0 · 2p−1 + 1 · 2p . . . ,where the tail of the expansion is unchanged. Therefore

1 +∞∑i=0

εi(n)−∞∑i=0

εi(n+ 1)︸ ︷︷ ︸both finite sums

= 1 + [p+ εp+1 + . . .]− [1 + εp+1 + . . .] = p = τ(nθ).

Hence[n−1∑k=0

τ(kθ)

]+ n =

n−1∑k=0

[1 +

∞∑i=0

εi(k)−∞∑i=0

εi(k + 1)

]+ n = 2n+

∞∑i=0

[n−1∑k=0

εi(k)−n−1∑k=0

εi(k + 1)

]

= 2n+∞∑i=0

(εi(0)︸︷︷︸0

−εi(n)) = 2n−∞∑i=0

εi(n) ≡ 0 +∞∑i=0

εi(n) (mod 2) = ωn.

We have shown that

f(. . . s, n) = f(s)f(σs) · · · f(σn−1s) = (−1)s0(−1)s1 · · · (−1)sn−1 = (−1)τ(0θ)+1(−1)τ(1θ)+1 · · · (−1)τ((n−1)θ)+1.

But that demonstrates our claim, for we have shown that π(. . . s) = . . . ω+, where ω+ is the righthalf of the Morse sequence on {−1, 1}.

Further, it is clear that we have the following relationship:

(u, ξ)T−−−−→ (σu, f(u)ξ)

π

y yπ(. . . , ξ, f(u)ξ, . . .)

σ−−−−→ (. . . , f(u)ξ, f(σu)f(u)ξ . . .),

Page 49: KARL PETERSEN

Symbolic Dynamics 45

i.e. σπ = πT .The image π(X) ⊂ {−1, 1}Z is closed and σ-invariant. Since it contains all shifts of a sequence

which agrees with λ for n ≥ 0, where λn = (−1)ωn , π(X) contains the orbit closure of λ.Let N be the orbit closure of λ. Then (N, σ) is topologically conjugate to (M,σ). To determine

whether π(X) is contained in N , we need to figure out which words can appear in a sequence in

π(X). We claim that the only such words are those which appear in the right half of λ.

If (u, ξ) ∈ X, then the image under π of (u, ξ) is just (. . . , ξ, f(u)ξ, . . . , f(u, n)ξ, . . .). Consider theword of length r beginning in the m’th position, namely (f(u,m)ξ, f(u,m+1)ξ, . . . , f(u,m+r−1)ξ).Since the Morse system is self-dual, we can assume that ξ = 1. Then we can find M ≥ 0 such thatσM (. . . s) ≈ u. Take M so large that f(σj+M (. . . s)) = f(σj(u)) for all j = 0, . . . , (m+ r− 1). Thiscompletes the proof of the theorem. �

We will use the topological conjugacy we have demonstrated to derive information about thestructure of the Morse minimal set and, more importantly, the variants of it.

Page 50: KARL PETERSEN

46 Karl Petersen

15. February 26 (Notes by JF)

The Morse system (M,σ) is topologically conjugate to (X, T ), where X = X×{−1, 1}, X is the

orbit closure of the two-sided Toeplitz sequence, and T (u, ξ) = (σu, f(u)ξ), where f(u) = (−1)u0 .The isomorphism is given by π, where π(u, ξ)n = f(u, n)ξ. Hence π(u, ξ) is the sequence of secondcoordinate values in the orbit of (u, ξ).

15.1. History:

(1) Otto Toeplitz in 1928 constructed the Toeplitz sequence as an example of an almost-periodicfunction on the integers.

(2) Kakutani (1967) discussed the structure and spectrum of the Morse and related sequences.His work was the first place where the skew product came into the discussion.

(3) Veech (1969) dealt with even more generalizations of the structure.(4) Jacobs and Keane (1969) discussed other Toeplitz systems, and Keane (1969) dealt with

generalized Morse systems.(5) Susan Williams in her 1981 thesis showed how to get non-uniquely ergodic Toeplitz systems,

generalizing an example of Oxtoby.

15.2. Another Theorem. We deal in this section with a theorem of Furstenberg (1961) whichwas used by Veech (1969). The theorem, which we will state and prove, relates to the ergodicity ofskew product transformations, a topic which was dealt with even earlier by Anzai.

Theorem 15.1. Let (X,T ) be a minimal topological dynamical system. Let f : X → {−1, 1} be

a continuous function, and define T on X = X × {−1, 1} by T (x, ξ) = (Tx, f(x)ξ). Then (X, T )is not minimal if and only if there is a nontrivial continuous solution g of the cocycle-coboundaryequation g(Tx) = f(x)g(x). Suppose moreover that (X,T ) is minimal and uniquely ergodic. Then

(X, T ) is not uniquely ergodic if and only if there is a nontrivial measurable solution g to the abovecocycle-coboundary equation.

Proof. Suppose that g is continuous on X and satisfies g(Tx) = f(x)g(x) for all x ∈ X. Then |g|is continuous and T -invariant, hence constant. (To see that |g| is constant, look at {x : |g(x)| ≤ α}for α ∈ R. This set is closed and T -invariant for every α, hence either empty or X for all α.Alternatively, fix x and note that |g| is constant on O(x), a dense set.) Then

f(x) =g(Tx)

g(x)

g(T 2x)

g(Tx)· · · g(Tnx)

g(Tn−1x)=g(Tnx)

g(x).

Hence Tn(x, ξ) = (Tnx, f(x, n)ξ) = (Tnx, g(Tnx)

g(x) ξ). If {Tnkx} converges to some y, then Tnk(x, ξ) −→(y, g(y)

g(x)ξ). Thus we cannot have Tnk(x, 1) −→ (x,−1), since Tnk(x, 1) −→ (x, 1). Hence O+(x, 1)

is not dense in X for any x. The result follows similarly for negative powers.Conversely, suppose that (X, T ) is not minimal. Take (x0, ξ0) such that O(x0, ξ0) is strictly

contained in X.

Lemma. For every x, there exists a unique g(x) ∈ {−1, 1} such that (x, g(x)) ∈ O(x0, ξ0) = X0.

Proof. Suppose that for some x both (x, 1) and (x,−1) are contained in X0. Given any (y, ξ) ∈ X,we can choose a sequence {nk} such that Tnkx −→ y since (X,T ) is minimal. By taking a

subsequence, we may assume that Tnk(x, 1) −→ (y, ζ), where ζ ∈ {−1, 1}. But then Tnk(x,−1) −→

Page 51: KARL PETERSEN

Symbolic Dynamics 47

(y,−ζ). Hence {(y, 1), (y,−1)} ⊂ O(x, 1) ∪ O(x,−1) ⊂ O(x0, ξ0), where here we use the fact that

if in any system we have u ∈ O(v), then O(u) ⊂ O(v). It then follows since (y, ζ) was arbitrary

that O(x0, ξ0) = X, a contradiction. This proves the lemma. �

We now demonstrate that the function g given by the lemma is continuous. Suppose that asequence of points {xn} converges to x. Take a subsequence so that (xn, g(xn))︸ ︷︷ ︸

∈O(x0,ξ0)

−→ (x, ξ). Then

(x, ξ) ∈ O(x0, ξ0), which means that ξ = g(x). Thus xn −→ x implies that g(xn) −→ g(x). Finally,

T (x, g(x)) = (Tx, f(x)g(x)) implies that g(Tx) = f(x)g(x). This completes the proof of the portionof the theorem which deals with minimality.

Now suppose that µX is the unique T -invariant Borel probability measure on X. Define µ on Xby µ = µX × counting measure

2 . That is, each of 1 and −1 has measure 12 in the set {−1, 1}. For a set

A ⊂ X, µ(A× {1}) = µ(A× {−1}) = 12µX(A).

Lemma. The system (X, T ) is uniquely ergodic if and only if (X, T , µ) is ergodic.

We recall that the system (X, T , µ) is ergodic if every T -invariant measurable function is constantalmost everywhere with respect to µ.

Proof. Suppose µ is not ergodic for (X, T ). Then (X, T ) is not uniquely ergodic, since the existenceof a nonergodic measure implies the existence of at least two ergodic measures.

Conversely, suppose that µ is ergodic, and let ν be a T -invariant measure on X. Then look atthe flip transformation F : X → X defined by F (x, ξ) = (x,−ξ). It is clear that FT = TF , and the

fact that F and T commute allows us to show that Fν is also a T -invariant measure. Let A ⊂ X.Then

Fν(T−1A) = ν(F−1T−1A) = ν(T−1F−1A) = T ν(F−1A) = ν(F−1A) = Fν(A).

Having shown that Fν is T -invariant, we now claim that µ = 12(ν +Fν). Both µ and 1

2(ν +Fν)

project to µX in the first coordinate. In fact, if π is the projection of X onto X, the fact that ν isT -invariant implies that πν is T -invariant. Let A ⊂ X. Then

1

2(ν + Fν)(A× {ξ}) =

1

2[ν(A× {ξ}) + ν(A× {−ξ})] =

1

2[ν(A× {−ξ, ξ})]

=1

2(πν)(A) =

1

2µX(A) = µ(A× {ξ}).

Hence µ = 12(ν +Fν), which contradicts the ergodicity of µ unless ν = Fν. But then µ = ν, which

proves the lemma. �

Suppose that g is a nontrivial measurable solution to g(Tx) = f(x)g(x). Our method of proof

here is to show that if such a g exists, then µ = µX × counting measure2 is not ergodic on X. Note that

h(x, ξ) = g(x)ξ is not constant almost everywhere with respect to µ, and

h(T (x, ξ)) = h(Tx, f(x)ξ) = g(Tx) f(x)︸︷︷︸±1-valued

ξ = g(x)ξ = h(x, ξ).

Hence h is nonconstant, measurable, and T -invariant, which means that µ is not ergodic. Thus thesystem (X, T ) is not uniquely ergodic.

Page 52: KARL PETERSEN

48 Karl Petersen

We will give the remainder of the proof next time. �

Page 53: KARL PETERSEN

Symbolic Dynamics 49

16. March 3 (Notes by XM)

To finish the proof of the previous theorem it remains to show that if the skew product is notuniquely ergodic, then the cocycle-coboundary equation has a measurable solution.

Suppose that (X, T ) is not uniquely ergodic; then, by the lemma, µ is not ergodic. Hence there

exists a T -invariant function h : X → C which is not constant a.e. with respect to µ.Let’s define

g(x) =h(x, 1)− h(x,−1)

2.

It is clearly measurable with respect to µX , and we will show it satisfies our equation.Since h is T -invariant, we have the following :

h(x, ξ) =h(x, 1) + h(x,−1)

2︸ ︷︷ ︸:=v(x)

+ ξh(x, 1)− h(x,−1)

2︸ ︷︷ ︸g(x)

(simply note that ξ ∈ {−1, 1})

= h ◦ T (x, ξ)

= h(Tx, f(x)ξ)

=h(Tx, 1) + h(Tx,−1)

2︸ ︷︷ ︸v(Tx)

+f(x)ξh(Tx, 1)− h(Tx,−1)

2︸ ︷︷ ︸g(Tx)

µ-a.e.,

so that

v(x) + ξg(x) = v(Tx) + f(x)ξg(Tx).

For ξ = 1 we have

v(x) + g(x) = v(Tx) + f(x)g(Tx),

and for ξ = −1 we have

v(x)− g(x) = v(Tx)− f(x)g(Tx).

The difference of the two equations gives us g(x) = f(x)g(Tx), and multiplying byf(x) on bothsides we see g is a solution of our equation, i.e., g ◦ T = fg.

It remains to show that g is not identically zero. Suppose it was; then h(x, 1) = h(x,−1) = h0(x)

a.e., and since h is T -invariant, h0 is T -invariant. By ergodicity of µX , h0 must be constant a.e.,which in turns implies h must be constant, so we have a contradiction. �

16.1. Application to the Morse System. When the minimal dynamical system (X,σ) is theToeplitz system (and f(u) = (−1)u0 , for u ∈ X), the skew-product defined in the previous theoremis topologically conjugate to the Morse system (cf. Theorem 10.2). In order to prove that the Morsesystem is minimal and uniquely ergodic we need to show that the cocycle-coboundary equation hasno nontrivial measurable solution. The next result gives us a necessary and sufficient condition forthis to happen.

First let us recall that we have a continuous map ψ, which is a.e. a homeomorphism, fromthe odometer (G,Rθ) into the 2-sided Toeplitz system (X,σ), where G = Σ+

2 , θ = (111 . . .) andRθ(x) = x+ θ.

Page 54: KARL PETERSEN

50 Karl Petersen

Theorem 16.1. Let (X,σ) be the 2-sided Toeplitz system and µX the unique ergodic measure onX. If there exists a non trivial g : X → C satisfying g(σx) = f(x)g(x) a.e. with respect to µX ,then

(7) limk→∞

∫Xf(u, nk)dµX(u) = 1

for every sequence {nk} of positive integers such that nkθ → 0. (Recall that

f(u, n) = f(u)f(σu)f(σ2u) . . . f(σn−1u)

for n > 0.) The converse also holds.

Proof. Let g be a nontrivial solution of g(σx) = f(x)g(x). <(g) and =(g) are also solutions andone of them at least must be nontrivial; therefore we can assume that g is real-valued. Note that|g ◦ σ| = |f | |g| = |g|, so |g| is σ-invariant, and hence must be constant almost everywhere withrespect to µX . Also |g| 6= 0 since g is not identically zero. We may then assume g takes values in{−1, 1} since it is the case for g/ |g|.

Now for any u ∈ X and n > 0 we have

f(u, n) = f(u)f(σu)f(σ2u) . . . f(σnu) =g(σu)

g(u)

g(σ2u)

g(σu). . .

g(σnu)

g(σn−1u)=g(σnu)

g(u).

Or, if we look at it on the odometer side,

f(ψt, n) =g(σnψt)

g(ψt).

Recall that σψ = ψRθ, so that σnψ = ψRnθ . Then

f(ψt, n) =g(ψRnθ t)

g(ψt)=g(ψ(t+ nθ))

g(ψt).

Now we calculate that∣∣∣∣∫Xf(u, nk)dµX(u)− 1

∣∣∣∣ =

∣∣∣∣∫Gf(ψt, nk)dm(t)− 1

∣∣∣∣ =

∣∣∣∣∫G{f(ψt, nk)− 1} dm(t)

∣∣∣∣=

∣∣∣∣∫G

[gψRnk

θ (t)

g(ψt)− 1

]dm(t)

∣∣∣∣≤∫G

∣∣∣∣gψRnkθ (t)

g(ψt)− 1

∣∣∣∣ dm(t) =

∫G

∣∣∣∣gψRnkθ (t)− g(ψt)

g(ψt)

∣∣∣∣ dm(t)

≤ ‖g ◦ ψ ◦Rnkθ − g ◦ ψ‖L1(G,m) (since |g| = 1),

and this is small for large k because translation is continuous in L1(G,m). [Given ε > 0 thereexists h continuous over G such that ‖g ◦ ψ − h‖L1 < ε/3 (because continuous functions are densein L1). Then

‖g ◦ ψ ◦Rnkθ − g ◦ ψ‖L1 ≤ ‖g ◦ ψ ◦Rnk

θ − h ◦Rnkθ ‖L1 + ‖h ◦Rnk

θ − h‖L1 + ‖h− g ◦ ψ‖L1

≤ 2‖g ◦ ψ − h‖L1 + ‖h ◦Rnkθ − h‖L1 .

Page 55: KARL PETERSEN

Symbolic Dynamics 51

Since h is uniformly continuous on G we can choose k large enough so that,

supt∈G|h(t+ nkθ)− h(t)| < ε/3,

and thus ‖h ◦Rnkθ −h‖L1 < ε/3. Finally, ‖g ◦ψ ◦Rnk

θ − gψ‖L1 < ε for k large enough, which proves7.]

We don’t actually need the converse of this statement so we won’t prove it but leave it as anexercise. �

Page 56: KARL PETERSEN

52 Karl Petersen

17. March 5 (Notes by XM)

Theorem 17.1. Condition (7) of Theorem 16.1 is not satisfied, hence the Morse system is minimaland uniquely ergodic.

Remark 17.1. When a system is both minimal and uniquely ergodic we sometimes say it is strictlyergodic.

Recall some notation :

τ(x) =

{inf{n : xn = 0} if x 6= 11 . . . 1 . . .

1 if x = 11 . . . 1 . . .

Define γ(x) = f(ψx) = (−1)τ(x)+1 and

γ(x, n) =

γ(x)γ(Rθx) . . . γ(Rn−1

θ x) if n > 0

1 if n = 0

γ(R−1θ x)γ(R−2

θ x) . . . γ(Rnθx) if n < 0.

We will show that Condition (7) is not true, i.e. that limr→∞∫G γ(x, nr)dm(x) 6= 1 for a certain

sequence nr (γ is just the version of f from the odometer side). In order to do that we need astronger version of the pointwise ergodic theorem, namely :

Theorem 17.2. Let (X,T ) be a uniquely ergodic topological dynamical system and µ a non-atomicT -invariant measure on X. If h : X → C is a bounded measurable function with at most finitelymany discontinuities, then

(8) Anh(x) =1

n

n−1∑k=0

h ◦ T kx −→∫Xhdµ

for all x ∈ X. (What’s stronger in this new statement is that the convergence occurs not just almosteverywhere).

Proof. If h is continuous we already know (cf. Theorem 9.1) that Anh converges uniformly on X to∫X hdµ. If we suppose that h is discontinuous only at a finite number of points, say x1, x2, . . . , xm,

the idea is to show we can find continuous functions uε and vε such that uε ≤ h ≤ vε and whichhave integrals differing by at most ε. Using standard arguments of measure theory, we may assumeh is positive. (To reduce to the real case, write h = <h+ i=h; to reduce to the positive case, writeh = h+ − h−, where h+ = max{h, 0} and h− = max{−h, 0}).

Let ε > 0 be arbitrary. µ is non-atomic, therefore µ(xi) = 0 for all i’s, and since it is alsoouter regular (being a Borel measure), we can find disjoint open neighborhoods Vi of the xi eachof measure less than ε/m, so that the total measure of their union is less than ε. Now let Ui besmaller neighborhoods of the xi with Ui ⊂ Vi. Using Urysohn’s Lemma (for example), we can builda continuous map χε from X onto [0, 1] such that χε = 1 on Ui and χε = 0 outside Vi, for all i.

Let’s define

uε = (1− χε)hand

vε = (1− χε)h+ ‖h‖∞χε.

Page 57: KARL PETERSEN

Symbolic Dynamics 53

The function uε is continuous by construction, and vε is just the sum of uε and of another continuousfunction, so it is also continuous. Moreover, uε ≤ h and vε = h + (‖h‖∞ − h).χε ≥ h, so thatuε ≤ h ≤ vε. Both uε and vε coincide with h on

⋂mi=1(X \ Vi), and we can easily verify that∫

X(vε − uε) ≤ ε‖h‖∞.The end of the proof is now straightforward: Anuε ≤ Anh ≤ Anvε for all n, and uε and vε are

continuous, so we have the following:∫Xh− ε‖h‖∞ ≤

∫Xuε = limAnuε ≤ limAnh ≤ limAnh ≤ limAnvε =

∫Xvε ≤

∫Xh+ ε‖h‖∞.

Hence Anh tends to∫X hdµ, since ε was arbitrary. �

We now turn to the proof of Theorem 17.1.

Proof (Veech). Take nr = 2r with r even. The function γ defined previously is continuous exceptat −θ = 11 . . . 1 . . ., therefore x 7→ γ(x, 2r) has at most finitely many discontinuities. By Theorem

17.2 this brings us to the study of limn→∞1n

∑n−1k=0 γ(x+kθ, 2r) for some x that we find convenient.

In fact we will estimate the last expression for x = 0, and all the difficulty remains in determiningwhat values of τ we see in γ(kθ, 2r) = γ(kθ)γ((k+1)θ) . . . γ((k+2r−1)θ). For this we must examinea fairly arbitrary string of 2r consecutive multiples of θ in G, i.e. kθ, (k + 1)θ, . . . , (k + 2r − 1)θ.Consider the dyadic expansions that appear in these multiples, k, k + 1, . . . , k + 2r − 1. One halfof them, that is 2r−1 of the total, have τ = 1, so that γ = 1; one half of the remaining (those thatstart with a 1), that is 2r−2 of the total, have τ = 2, so that γ = −1; etc. ; 2 of them have τ = r−1,so that γ = 1. A picture will help.

Page 58: KARL PETERSEN

54 Karl Petersen

Here’s a string of 2r multiples of θ in the case where r = 5 and k = 4.

τ γ

4θ = 0010000 . . . . . . . . . 1 1

5θ = 1010000 . . . . . . . . . 2 −1

6θ = 0110000 . . . . . . . . . 1 1

7θ = 1110000 . . . . . . . . . 4 −1

8θ = 0001000 . . . . . . . . . 1 1

9θ = 1001000 . . . . . . . . . 2 −1

10θ = 0101000 . . . . . . . . . 1 1

11θ = 1101000 . . . . . . . . . 3 1

12θ = 0011000 . . . . . . . . . 1 1

13θ = 1011000 . . . . . . . . . 2 −1

14θ = 0111000 . . . . . . . . . 1 1

15θ = 1111000 . . . . . . . . . 5 1

16θ = 0000100 . . . . . . . . . 1 1

17θ = 1000100 . . . . . . . . . 2 −1

18θ = 0100100 . . . . . . . . . 1 1

19θ = 1100100 . . . . . . . . . 3 1

20θ = 0010100 . . . . . . . . . 1 1

21θ = 1010100 . . . . . . . . . 2 −1

22θ = 0110100 . . . . . . . . . 1 1

23θ = 1110100 . . . . . . . . . 4 −1

24θ = 0001100 . . . . . . . . . 1 1

25θ = 1001100 . . . . . . . . . 2 −1

26θ = 0101100 . . . . . . . . . 1 1

27θ = 1101100 . . . . . . . . . 3 1

28θ = 0011100 . . . . . . . . . 1 1

29θ = 1011100 . . . . . . . . . 2 −1

30θ = 0111100 . . . . . . . . . 1 1

l(k)θ =31θ = 1111100 . . . . . . . . . 6 −1

32θ = 0000010 . . . . . . . . . 1 1

33θ = 1000010 . . . . . . . . . 2 −1

34θ = 0100010 . . . . . . . . . 1 1

35θ = 1100010 . . . . . . . . . 3 1

Page 59: KARL PETERSEN

Symbolic Dynamics 55

To summarize: 2r−1 + 2r−3 + · · · + 1 have γ = 1, so their product is 1. 2r−2 + 2r−4 + · · · + 2have γ = −1, so their product is 1. All together that makes 2r−1 + 2r−2 + . . .+ 1 = 2r − 1 terms,so that the sign of our expression γ(kθ, 2r) = γ(kθ)γ((k+ 1)θ) . . . γ((k+ 2r − 1)θ) is determined bythe remaining one.

Let l(k) be the integer such that γ(l(k)θ) is the remaining factor whose sign we have to analyze,

and let β(k) = τ(l(k)θ) + 1 (so that γ(l(k)θ) = (−1)β(k)). When k = 0, 1, . . . , τ(l(k)θ) takes thevalues r, r + 1, r + 2, . . . with respective densities 1/2, 1/4, 1/8, . . . (see figure). In other words,

card{k : 0 ≤ k ≤ n− 1, τ(l(k)θ) = r + i}n

→ 1

2i+1as n→∞, for i = 0, 1, 2, . . ., or

card{k : 0 ≤ k ≤ n− 1, β(k) = β}n

→ 1

2β−ras n→∞, for β ≥ r + 1.

l(k) θ = 111...............1

0

1

1

0

0 0

10

1

0

1

0

1

0

1

0

1

0

1

0

1

0

0

1

0

1

0

1

1

r

r+1

r+2r+3 r+4

Figure 26. Densities of τ(l(k)θ).

Page 60: KARL PETERSEN

56 Karl Petersen

Hence

1

n

n−1∑k=0

γ(kθ, 2r) =1

n

n−1∑k=0

γ(l(k)θ) =1

n

∞∑β=r+1

(−1)β card{k : 0 ≤ k ≤ n− 1, β(k) = β}

. Put fn,β = (−1)β card{k : 1 ≤ k ≤ n − 1, β(k) = β}/n; then fn,β tends to (−1)β/2β−r and

|fn,β| ≤ 1/2β−r. Hence, as an application of the Lebesgue Dominated Convergence Theorem (for

example), we have that∑β≥r+1

fn,β tends to∑β≥r+1

(−1)β

2β−ras n→∞.

Conclusion: ∫Gγ(x, 2r)dm(x) = lim

n→∞

1

n

n−1∑k=0

γ(kθ, 2r) =∞∑

β=r+1

(−1)β

2β−r

=2r∞∑

β=r+1

(−1

2

)β= 2r

(−1

2

)r+1 1

1 + 12

=− 1

36= 1.

This proves our theorem. �

Page 61: KARL PETERSEN

Symbolic Dynamics 57

18. March 17 (Notes by GB)

18.1. The Spectrum of the Morse System. For reference see S. Kakutani (1966), Fifth BerkeleySymposium in Math Stat., vol II, p. 405.Let (M,σ) ⊂ {−1, 1}Z be the Morse system with unique invariant measure µM , and (X,σ) the

Toeplitz system. Recall that X = X × {−1, 1} and T : X → X is defined by T (u, ξ) = (σu, f(u)ξ)

and f(u) = (−1)u0 . Also, L2(M,µM ) is isomorphic to L2(X, µ) and is a separable Hilbert space.To any measurable transformation T : X → X on a measure space

(X,B, µ

)is associated a

linear operator UT on L2(X,B, µ

)which is unitary if T is invertible (cf. e.g. Walters, p. 25) in the

following way:

UTh(x) = h(T (x)) for all h ∈ L2(X,B, µ

).

Then

< UnT f, g > =

∫Xf ◦ Tngdµ for all f, g ∈ L2

(X,B, µ

), and n ∈ Z.

For the Morse system, σ : M →M corresponds to T : X → X. The unitary operatorUT : L2(X, µ)→ L2(X, µ) takes the form

UTh(x, ξ) = h(T (x, ξ)) = h(Tx, f(x)ξ)

or more briefly

UTh = hT .

The spectrum of the system (X, T ) is defined as the spectrum of the operator UT on L2(X, µ).Recall briefly the definitions of various kinds of spectral measures, spectral types: see Petersen’s

Lectures p. 10 or book, p. 19 for a quick review. The basic idea is that we can understand theoperator by decomposing it as an integral: UT =

∫ π−π e

iθdE(θ), where E is a projection-valued

Borel measure on [−π, π):

UkT

=

∫ π

−πeikθdE(θ), k ∈ Z

< UkTu, u > =

∫X

(T ku)udµ = ρ(k) = ρu(k).

One can can check that ρ(k) is positive-definite on Z. Hence by the Bochner-Herglotz Theorem,ρu is the Fourier transform of a measure νu: ρu(k) =

∫ π−π e

−ikθdνu(θ).

The function of k < UkTu, v > is not positive-definite if u 6= v. But then < Uk

Tu, v > are the

Fourier coefficients of a complex measure λu,v that is not directly given by Bochner-Herglotz butby polarization: replace first u by u+ v and then by u− v in the preceding construction.

The maximal spectral type of the system (or of UT ) is the minimal (up to absolute continuity)type that dominates all these measures. The discrete spectrum of UT is just the set of eigenvaluesof UT (they lie on the unit circle since it’s a unitary operator). These are point masses (or atoms)of E. E{λ} is the projection onto the eigenspace associated with λ, that is, E{λ} is the projection

on {h ∈ L2 : T h = λh}.

Page 62: KARL PETERSEN

58 Karl Petersen

Let K ⊂ L2 be the closed linear span of the eigenfunctions (note there is a countable number byseparability). This is the Kronecker subspace of L2.If(X,B, µ

)is a measure space, T : X → X a measure-preserving transformation, then K the closed

linear span of the eigenfunctions, corresponds to a factor: There are a measure-preserving system(Y, C, ν, S

)and a factor map π :

(X,B, µ, T

)→(Y, C, ν, S

)such that:

K = {h ◦ π : h ∈ L2(Y, C, ν

)}.

This(Y, C, ν, S

)is called the Kronecker factor of

(X,B, µ, T

). It is the largest factor with purely

discrete spectrum. In L2(Y, C, ν

)(and in L2

(X,B, µ,

)) the eigenspaces are all one-dimensional and

pairwise orthogonal. Actually, then(Y, C, ν, S

)is isomorphic to an ergodic group rotation with

Haar measure, a system like the odometer or an irrational rotation on the circle. If a system doesnot have purely discrete spectrum, the analysis of its structure is more difficult.

For the Morse system (M,σ) there is a dualizing (or mirroring) map δ : M →M given by

δx = x = . . . , x−1, x0, x1, where 0 = 1, 1 = 0.

δ maps the Morse system = (O(. . . 01101001 . . . ), σ) into itself.

Theorem 18.1 (Kakutani). For the Morse system, represented as (X, T , µ) as before,

L2(X, µ) = V0 ⊕ V1, where

V0 = {h ∈ L2(X, µ) : h(x, ξ) = h(x,−ξ)}

V1 = {h ∈ L2(X, µ) : h(x, ξ) = −h(x,−ξ)}.Moreover, UT has discrete spectrum on V0 with eigenvalues at each diadic rational. On V1, UThas continuous spectrum. In fact there are no eigenfunctions outside V0, so V0 = K = Kroneckersubspace of L2(X, T , µ).

Remark 18.1. It can be shown that the maximal spectral type of U_T̃|V₁ is singular with respect to Lebesgue measure and can be given fairly explicitly as a Riesz product and as a g-measure. Kakutani further showed that by varying the construction of X̃ slightly, one can obtain many examples with pairwise singular maximal spectral types for these U_T̃|V₁.

There is much subsequent work on spectral types of Morse-like systems. These are steps in the direction of "Banach's problem": find a system with simple (in the sense of multiplicity) Lebesgue spectrum, i.e. a system (X, B, µ, T) such that in L²(X, B, µ) there is a function φ for which 1, φ, φT, φT², . . . are pairwise orthogonal and span L²(X, B, µ).

Proof. Recall that there is the factor mapping π₁,

π₁ : X̃ = X × {−1, 1} → (X, σ) = Toeplitz system ≈ odometer = (G, T),

and that (G, T) is a compact abelian group rotation and so has discrete spectrum.

Let Ĝ be the character group of G, i.e. the set of all continuous γ : G → {z ∈ C : |z| = 1} such that γ(g₁g₂) = γ(g₁)γ(g₂) for all g₁, g₂ ∈ G. Then each γ ∈ Ĝ is an eigenfunction of T, and Ĝ spans L²(G, m). V₀ does not depend on the second coordinate, so

V₀ = {h ∘ π₁ : h ∈ L²(X, µ_X)} ≈ L²(X, µ_X).


We know that the Toeplitz system is isomorphic to (G, T), so U_T̃|V₀ has purely discrete spectrum, with eigenvalues at the dyadic rationals. So V₀ is contained in the Kronecker subspace of L²(X̃, µ̃).

To prove L²(X̃, µ̃) = V₀ ⊕ V₁, take u₀ ∈ V₀ and u₁ ∈ V₁; then

∫_X̃ u₀u₁ dµ̃ = ∫_{X×{1}} u₀u₁ dµ̃ + ∫_{X×{−1}} u₀u₁ dµ̃ = ∫_{X×{1}} u₀u₁ dµ̃ + ∫_{X×{1}} u₀(−u₁) dµ̃ = 0,

so V₀ ⊥ V₁. Now we check that V₀ and V₁ span L². Take f ∈ L² and decompose it as

f(x, ξ) = [f(x, ξ) + f(x, −ξ)]/2 + [f(x, ξ) − f(x, −ξ)]/2 = P_{V₀}f + P_{V₁}f,

the sum of an element of V₀ and an element of V₁. Since any f ∈ L² has this decomposition, we have L² = V₀ ⊕ V₁.

Now let's see that there are no eigenfunctions except in V₀. Suppose h ∈ L²(X̃, µ̃), λ ∈ C, and hT̃ = λh. Then

(P_{V₀}h)T̃(x, ξ) = [hT̃(x, ξ) + hT̃(x, −ξ)]/2 = λ[h(x, ξ) + h(x, −ξ)]/2 = λP_{V₀}h(x, ξ).

Suppose there exists v ∈ V₁, v ≠ 0, with vT̃ = λv. Then v² ∈ V₀ and v²T̃ = λ²v². So λ² is a dyadic rational eigenvalue by the result for V₀, and hence so is λ. Now v is an eigenfunction whose eigenvalue is a dyadic rational; each eigenspace is one-dimensional by ergodicity, and V₀ already contains an eigenfunction for this eigenvalue, so v ∈ V₀, and therefore v = 0 (since v is also in V₁). This shows that U_T̃ has continuous spectrum on V₁. Finally, let h ∈ L² be an eigenfunction with eigenvalue λ. Then h = P_{V₀}h + P_{V₁}h; from the above, P_{V₀}h has eigenvalue λ, hence so does P_{V₁}h, but then P_{V₁}h must vanish, so h ∈ V₀. □


19. March 19 (Notes by GB)

19.1. Sturmian Systems. This is a construction parallel to the Toeplitz system. The name comes from the fact that this system is related to the zeros of solutions of the classical Sturm–Liouville differential equation y′′ + f(x)y = 0 with f periodic in x. Some references:

• Morse and Hedlund (1938, 1940), Symbolic Dynamics I and II, Amer. J. Math. 60 and 62
• Hedlund (1944), Sturmian minimal sets, Amer. J. Math. 66
• Coven and Hedlund (1973), Sequences with minimal block growth, Math. Syst. Theory 7
• Ferenczi (1996), Ergodic Theory Dynam. Systems 16
• Petersen and Shapiro (1973), Induced flows, Trans. Amer. Math. Soc. 177

Let G = [0, 1) with addition mod 1, α ∉ Q, and T : G → G the rotation defined by Tx = x + α (mod 1). Consider the following coding.

Let 0 < β < 1, let P be the partition of [0, 1) into two intervals, P = {[0, β), [β, 1)}, and let P′ be the partition without the endpoints, P′ = {(0, β), (β, 1)}. We define the Sturmian system S(α, β) to be the closure in Σ₂ of the set of all sequences (x_n), where x_n = χ_{[0,β)}(g + nα) for all n ∈ Z and some g ∈ G. That is, we code the T-orbits of points in G by their itineraries, assigning 1 when they visit the first interval, 0 when they visit the second.

[Figure: the interval [0, 1) split at β, showing the points x, Tx, T²x and the start of the coding ψx = .110 . . .]

Figure 27. Defining the Sturmian system

Sturmian systems provide examples of {0, 1} sequences with minimal block growth: for x ∈ Σ₂, let N_n(x) be the number of different n-blocks that appear in x. If x is a periodic sequence, then N_n(x) is bounded. Even more (exercise): if there is an n such that N_n(x) ≤ n, then x is periodic. Some Sturmian systems have N_n(x) = n + 1 for all n ≥ 1. So in a sense Sturmian sequences have minimal complexity among nonperiodic ones (see Coven and Hedlund, Ferenczi).
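(Added numerical check, with the illustrative parameter choice α = (√5 − 1)/2 and β = α, one of the cases with N_n = n + 1: generate a stretch of the coding x_n = χ_{[0,β)}(nα mod 1) and count its n-blocks.)

    # Sketch (added): block complexity of a rotation coding. The parameters are
    # illustrative: alpha = (sqrt(5)-1)/2 and beta = alpha give a Sturmian case.
    import math

    alpha = (math.sqrt(5) - 1) / 2
    beta = alpha
    x = [1 if (n * alpha) % 1.0 < beta else 0 for n in range(20000)]

    def N(n, seq):
        # number of distinct n-blocks occurring in seq
        return len({tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)})

    for n in range(1, 9):
        print(n, N(n, x))   # prints n+1: minimal complexity among nonperiodic sequences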

We now give the construction of the Sturmian system more precisely. As in the Toeplitz situation, define ψ, "not quite a map," ψ : G → Σ₂, with ψ(x) = 1 or 2 points for each x ∈ G. This is because we want ψG ⊂ Σ₂ to be closed, shift-invariant, minimal, and uniquely ergodic. There is again a bad set where it is harder to define the map, G₀ = O(0) ∪ O(β). On G₀ we will put both 0 and 1 in certain entries, as in the Toeplitz case. Let us first examine the case x ∈ G \ G₀. There, ψ(x) = the unique sequence which is the unambiguous coding of the orbit of x:

ψ(x)_n = χ_{[0,β)}(x + nα).


Note that in this case it is the same to consider the partition P or P′. Let us define ψ(x) more carefully in case x is in the bad set G₀,

G₀ = O(0) ∪ O(β) = (Zα mod 1) ∪ (β + Zα mod 1) (a disjoint union if β ∉ Zα).

If x ∈ G₀, define ψ(x) to be 2 sequences in Σ₂. There are two cases.

(i) The case β ∉ Zα = O(0). From the decomposition of G₀ as a disjoint union, either x ∈ O(0) or x ∈ O(β), but not both. There exists a unique n such that T^n(x) = 0 (or β, in the other case), and the orbit never hits either of these two points again (since β ∉ Zα). Define

(ψ(x))_m = χ_{[0,β)}(T^m(x)) for m ≠ n, and both 0 and 1 for m = n.

Therefore we have defined ψ in such a way that

ψ(x) = {u, v}, with u_m = v_m for m ≠ n and u_n = 0, v_n = 1.

(ii) The case β ∈ Zα, say β = jα = T^j 0. Suppose T^n x = 0, so T^{n+j} x = β; T^m x ≠ 0, β unless m = n or n + j. In this case define ψ(x) = {u, v}, with

u_m = v_m = χ_{[0,β)}(T^m(x)) for m ≠ n, n + j,
u_n = 0, v_n = 1,
u_{n+j} = 1, v_{n+j} = 0.

This amounts to splitting each of 0 and β into a "left half" and a "right half," and similarly for each point in their orbits.

Now define φ : ψG → G to be φ = ψ⁻¹ (an actual map). Note that clearly shifting the sequence corresponds to translating the point:

σψx = ψTx for all x ∈ G;

therefore

φσu = Tφu for u ∈ ψG

(since φu = x implies u ∈ ψx, so σu ∈ ψTx, and hence φσu = Tx = Tφu). We have just shown that φσ = Tφ.

We claim that ψG is closed. Suppose u_k ∈ ψx_k ∈ ψG and u_k → u ∈ Σ₂. Since G is compact, there is a subsequence u_{k_i} such that x_{k_i} → y ∈ G. We want to show that u is in ψy. This will show that ψG is closed, and also that φ is continuous, since then any convergent subsequence of φu_k converges to φu.

Suppose y ∉ G₀. Pick a large k such that u_k and u agree on a large central block [−J, J]. If x is in a sufficiently small neighborhood N of y, then T^j x and T^j y are either both in (0, β) or both not in (0, β), for all j ∈ [−J, J]. We are just making sure that x and y are in the same cell of the partition ⋁_{j=−J}^{J} T^{−j}P′. We are taking N = ⋂_{j=−J}^{J} T^{−j}I_j, with I_j equal to either (0, β) or (β, 1), depending on which one T^j y is in.


If i is large enough that k_i ≥ k and x_{k_i} ∈ N, then the itinerary of x_{k_i} from time −J to J is the same as that of y. The central (2J + 1)-block of u_{k_i} is the same as the central (2J + 1)-block of u. Since J is arbitrary, ψy = {u}.

What if y ∈ G₀? There are at most two times, n and n + j, when T^m y = 0 or β, and these require special attention. At these times, for large enough k, all the x_k have to be on the side of 0 or β determined by the corresponding coordinate of u. Suppose for example that y = β and u₀ = 0. Then for large i, x_{k_i} is near y and to its right. Suppose β ∉ Zα, so the orbit of y hits β at time 0 only. Then ψy is two points, one of which is u.

If β ∈ Zα and again, for example, y = β and u₀ = 0, then again for large i, x_{k_i} is to the right of β; but also T^{−j}x_{k_i} is to the right of 0, so u_{−j} = 1. Again u ∈ ψy.

From the above, ψG is shift-invariant, so (ψG, σ) is a closed subshift of Σ₂.

Claim: every orbit in ψG is dense. Let u ∈ ψx for some x ∈ G, and take v ∈ ψy for some y ∈ G. We want to find n such that σ^n u agrees with v on a central (2J + 1)-block. Take a small neighborhood of y such that every point in that neighborhood has the same P coding for the time interval [−J, J]. The orbit of y hits {0, β} at most twice; looking at v tells us whether to use (y − δ, y) or (y, y + δ). Choose n such that T^n x is in the selected interval. Then σ^n u and v will agree on their central (2J + 1)-blocks.

Corollary 19.1. The Sturmian systems (S(α, β), σ) are uniquely ergodic.

Proof. As for the Toeplitz systems, the factor map φ,

φ : (S(α, β), σ) → (G, T),

is one-to-one except on the countable set H₀ = ψ(Zα ∪ (β + Zα)); that is, φH₀ = G₀. (G, T) is uniquely ergodic with invariant measure m = Lebesgue measure, and G₀ is countable and so has Lebesgue measure 0. Then any σ-invariant measure on (S(α, β), σ) assigns mass 0 to H₀ and is uniquely determined on S(α, β) \ H₀ by the isomorphism

ψ : G \ G₀ → S(α, β) \ H₀. □


20. March 24 (Notes by DJS)

20.1. Subshifts of Finite Type. (For reference see the books of Lind and Marcus and of Kitchens.) The definition of a subshift of finite type begins with a finite collection of forbidden (finite-length) words on a given alphabet, say D = {0, 1, . . . , d − 1}. Denote this collection by F, and note that F ⊂ D* = {0, 1, . . . , d − 1}*. Then define X_F ⊂ D^Z (or X_F⁺ ⊂ D^N) to be the set of all sequences none of whose sub-blocks are in F. Since X_F is closed and σ-invariant, (X_F, σ) is a subshift, and this defines a subshift of finite type (SFT). (To see that X_F is closed, consider a sequence x in its complement. Since x contains some forbidden word, any sequence sufficiently close to it also contains that word. So there is a neighborhood of x which does not intersect X_F, showing that X_F is closed. The σ-invariance is immediate.)

There are many reasons for studying these subshifts, some of which are listed below:

(1) It was shown in the Spring '97 Math 261 notes that SFTs model attractors of Axiom A diffeomorphisms (based on codings by Markov partitions); see Theorem 30.1 on page 63 of those notes.
(2) They are the natural domain for Markov measures.
(3) They model inputs to certain information transmission and storage devices; e.g. some devices may not be able to handle quickly alternating blocks of zeros and ones.
(4) They model changing situations in which the choice of the future state depends only on the current state.

20.2. Graph Representations of Subshifts of Finite Type. We may assume that all of the words in F have the same length m + 1. Then we say that (X_F, σ) is m-step, or has memory m. For example, if D = {0, 1} and F = {11, 101}, then change F to {110, 011, 111, 101} by extending the shorter word in all possible ways. Here (X_F, σ) has memory 2.

We represent (X_F, σ) by two kinds of graph shifts, namely edge shifts and vertex shifts. If (X_F, σ) has memory m, construct a directed graph G_F with vertices V(G_F) and edges E(G_F) as follows:

• The vertices are (and are labeled by) the allowed m-blocks in X_F.
• Put an edge from B₂ to B₁ if B₂ = aB and B₁ = Bz for some a, z ∈ D and B ∈ D^{m−1}, with aBz an allowed (m + 1)-block. Label this edge aBz.

This defines higher block codings of X_F to X_F^[m+1] ⊂ E(G_F)^Z and X_F^[m] ⊂ V(G_F)^Z, and the resulting subshifts, (X_F^[m+1], σ) and (X_F^[m], σ), are isomorphic to an edge shift and a vertex shift, respectively, which are defined in the next section. In each of these two ways, by tracking edges or vertices, a bi-infinite walk on the graph is a representation of a sequence in X_F. Theorem 2.3.2 of Lind and Marcus states that any m-step subshift of finite type can be represented by doubly-infinite walks on a graph.

Example 20.1. For F = {110, 011, 111, 101}, we have V(G_F) = {00, 01, 10} and E(G_F) = {000, 001, 010, 100}. (X_F, σ) may be represented by the graph of Figure 28.

A walk along edges 100 → 000 → 001 visits vertices 10 → 00 → 00 → 01 in that order. The represented block is 10001. We can see the edge-shift representation by examining each 3-block of the sequence:


    1 0 0 . .
    . 0 0 0 .
    . . 0 0 1
    ---------
    1 0 0 0 1

The vertex-shift representation is seen in the 2-blocks of the sequence:

    1 0 . . .
    . 0 0 . .
    . . 0 0 .
    . . . 0 1
    ---------
    1 0 0 0 1

Figure 28. Graph representation of (X_F, σ) for F = {110, 011, 111, 101}: vertices 00, 01, 10; edges 000, 001, 010, 100.

Figure 29. Graph representation of (X_F, σ) for F = {11}: vertices 0, 1; edges 00, 01, 10.

Example 20.2 (Golden Mean SFT). For F = {11}, we have V(G_F) = {0, 1} and E(G_F) = {00, 01, 10}. (X_F, σ) may be represented by the graph of Figure 29.

20.3. General Edge and Vertex Shifts. Let G be a directed graph with edge set E(G) and vertex set V(G). The edge shift (X_{E(G)}, σ) determined by G is the subshift of E(G)^Z consisting of all sequences of edges along doubly-infinite walks on G. The vertex shift (X_{V(G)}, σ) determined by G is the subshift of V(G)^Z consisting of all sequences of vertices for such walks on G.

A general vertex shift (on d vertices) is described by a d × d matrix with entries in {0, 1}, where a 1 in the (i, j)th entry indicates an edge from vertex i to vertex j. A general edge shift is described by a d × d (nonnegative) integer matrix where the (i, j)th entry gives the number of edges from vertex i to vertex j. In either case, the matrix is called an incidence or adjacency matrix.


Figure 30. Graph of A² for the Golden Mean SFT: vertices 0, 1; its edges are the length-2 paths 000, 001, 010, 100, 101.

Conversely, any nonnegative integral d × d matrix A corresponds to some graph G_A (which may have multiple edges between the same pair of vertices) and an edge shift X_{E(G_A)} = X_A. It is interesting and useful to notice that for a transition matrix A, the (i, j)th entry of A^n represents the number of paths of length n from vertex i to vertex j.

Example 20.3. The edge shift corresponding to the Golden Mean SFT of Example 20.2 has transition matrix (rows and columns indexed by the vertices 0, 1)

    A = ( 1 1 )
        ( 1 0 ).

Then

    A² = ( 2 1 )
         ( 1 1 ),

so there are two ways to go from 0 to 0 in two steps. The graph of the matrix A², representing paths of length 2 between vertices, is given in Figure 30.

A nonnegative integral d × d matrix A is called irreducible, and the corresponding graph (strongly) connected, if for every i, j ∈ V(G_A) there is an n ∈ N so that (A^n)_{ij} > 0 (i.e. there is a path in G_A of length n that starts at i and ends at j). A is called primitive or aperiodic if there is an n ∈ N so that (A^n)_{ij} > 0 for every i, j ∈ V(G_A). Notice that aperiodicity implies irreducibility, but the converse does not hold. These terms are also applied to nonintegral matrices.

The properties of irreducibility and aperiodicity of A relate to topological properties of the edge shift of G_A. We have that A is irreducible if and only if (X_A, σ) is topologically ergodic; also, A is aperiodic if and only if (X_A, σ) is topologically (strongly) mixing. What's more, the topological entropy of the edge shift is determined by A: set N_n(A) to be the number of n-blocks in the language of the edge shift (X_A, σ), so that

N_n(A) = Σ_{i,j} (A^n)_{ij}.

Then the topological entropy of (X_A, σ) is

h_top(X_A, σ) = lim_{n→∞} (1/n) log N_n(A).

It follows from the Perron-Frobenius Theorem that htop(XA, σ) = log λA, where λA is the largestpositive eigenvalue of A, called the Perron-Frobenius eigenvalue.
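(A small numerical sketch, added here: for the golden mean matrix, the block-count estimates (1/n) log Σ_{i,j}(A^n)_{ij} converge to log λ_A.)

    # Sketch (added): h_top(X_A, sigma) = lim (1/n) log sum_{i,j} (A^n)_{ij}
    # = log(lambda_A), checked numerically for the golden mean edge shift.
    import numpy as np

    A = np.array([[1, 1], [1, 0]])
    lam = max(np.linalg.eigvals(A).real)     # Perron eigenvalue, (1+sqrt(5))/2
    print("log lambda_A =", np.log(lam))
    An = np.eye(2, dtype=np.int64)
    for n in range(1, 31):
        An = An @ A                          # An = A^n
        print(n, np.log(An.sum()) / n)       # tends to log lambda_A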


To see that irreducibility implies topological ergodicity, consider blocks B₁, B₂ ∈ L(X_{E(G)}). Then for some n there is a path of length n starting at the end of B₁ and ending at the start of B₂. Associated with this path is an x ∈ X_{E(G)} which contains both B₁ and B₂, separated by n shifts:

x = . . . B₁ . . . B₂ . . . (with a gap of n symbols).

Then σ^n[B₁] ∩ [B₂] ≠ ∅, and (X_{E(G)}, σ) is topologically ergodic.


21. March 26 (Notes by DJS)

From last class we had a theorem relating a transition matrix to the topological entropy of the associated subshift of finite type.

Theorem 21.1. Let (X_A, σ) be a SFT given by an edge shift with associated (nonnegative) integral irreducible matrix A. Then the topological entropy of the system is

h_top(X_A, σ) = lim_{n→∞} (1/n) log(# of n-blocks in X_A) = lim_{n→∞} (1/n) log Σ_{i,j} (A^n)_{ij} = log λ_A,

where λ_A is the positive eigenvalue of A of maximum modulus among all eigenvalues (the Perron eigenvalue).

(Since n/(n + 1) → 1, it doesn't matter whether we count blocks on vertices or blocks on edges.) Theorem 21.1 follows directly from the Perron–Frobenius Theorem.

Theorem 21.2 (Perron–Frobenius Theorem). Let A be a nonnegative irreducible (and not necessarily integral) d × d matrix. Then A has a strictly positive right (column) eigenvector r (r_i > 0 for every i = 1, . . . , d). The corresponding eigenvalue λ_A is positive, and it is the unique eigenvalue with these properties. Other properties of λ_A are: if ζ is any other eigenvalue of A, then λ_A ≥ |ζ|; λ_A is a simple root of the characteristic polynomial of A; its eigenspace is one-dimensional; and there is a c > 0 such that

(1/c) λ_A^n ≤ Σ_{i,j} (A^n)_{ij} ≤ c λ_A^n

for all n = 1, 2, . . . (so λ_A is unique with this property, and further λ_{A^tr} = λ_A). So A also has a positive left (row) eigenvector l, unique up to a constant multiple. (Normalize so that lr = 1.)

If A is aperiodic (primitive), then λ_A > |ζ| for every other eigenvalue ζ of A. Finally, in the aperiodic case

(A^n)_{ij} = (r_i l_j + ε_{ij}(n)) λ_A^n,

where each ε_{ij}(n) → 0 as n → ∞.

The proof will be given for the aperiodic case, following the one in Seneta’s book.

Proof. Suppose A is aperiodic. Let P = {(column vectors) x ∈ R^d : x_i ≥ 0 for all i = 1, . . . , d}. Then AP ⊂ P, and we want to find a fixed direction.

For x ∈ P with x ≠ 0 (i.e. some x_i > 0) define

S(x) = min over i = 1, . . . , d of { (Ax)_i / x_i if x_i ≠ 0; ∞ if x_i = 0 }.

Then 0 ≤ S(x) < ∞ for every x ∈ P \ {0}, and for every i, x_i S(x) ≤ (Ax)_i. So

Σ_i x_i S(x) ≤ Σ_i (Ax)_i;

that is,

S(x)·1x ≤ 1Ax = (1A)x,

where 1 is the row vector of ones. But 1A is the row vector whose j'th entry is Σ_i A_{ij}, so, setting M = max_j Σ_i A_{ij}, we have

S(x)(1x) ≤ M(1x).


Therefore S(x) is uniformly bounded by M. Now let

λ_A = sup_{x ∈ P\{0}} S(x).

By irreducibility, λ_A ≥ S(1^tr) > 0, so 0 < λ_A ≤ M. Also, S(x/‖x‖) = S(x), so

λ_A = sup_{x ∈ P, ‖x‖=1} S(x).

Now S(x) is upper semicontinuous on {x ∈ P : ‖x‖ = 1} (if x^(k) → x then lim sup S(x^(k)) ≤ S(x)), and since every upper semicontinuous function on a compact set achieves its absolute maximum, there is an x ∈ {x ∈ P : ‖x‖ = 1} with S(x) = λ_A. We have that (Ax)_i ≥ λ_A x_i for every i, with equality for some i.

Now let

u = Ax − λ_A x ≥ 0,

and suppose that u ≠ 0. Aperiodicity implies that we can choose n such that A^n > 0. Then A^n u > 0, i.e.

A^n(Ax) − λ_A A^n x > 0.

But then

A(A^n x) − λ_A (A^n x) > 0,

which is impossible, since for any v ∈ P \ {0} (including v = A^n x), (Av)_i / v_i ≤ λ_A for some i. Therefore u = 0, which shows that λ_A is an eigenvalue of A, with Ax = λ_A x. Furthermore, A^n x > 0, so A^n x = λ_A^n x implies that x > 0.

Now, for any i and n, λ_A^n x_i = (A^n x)_i = Σ_j (A^n)_{ij} x_j, so

λ_A^n = (1/x_i) Σ_j (A^n)_{ij} x_j.

Thus, for c = d (max x_i)/(min x_i) we have that

(1/c) λ_A^n ≤ Σ_{i,j} (A^n)_{ij} ≤ c λ_A^n

holds for any n = 1, 2, . . . □
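(The proof suggests an algorithm: iterate A on a positive vector and renormalize; for primitive A the direction converges to the Perron eigenvector and the growth ratio to λ_A. The following power-iteration sketch is an addition to the notes.)

    # Sketch (added): power iteration for the Perron eigenvalue and eigenvector
    # of a primitive nonnegative matrix A.
    import numpy as np

    def perron(A, iters=200):
        x = np.ones(A.shape[0])
        lam = 1.0
        for _ in range(iters):
            y = A @ x
            lam = y.sum() / x.sum()   # estimate of lambda_A
            x = y / y.sum()           # renormalize
        return lam, x

    lam, r = perron(np.array([[1.0, 1.0], [1.0, 0.0]]))
    print(lam, r)   # lam ~ 1.618..., r proportional to (1/gamma, 1/gamma^2)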


22. March 31 (Notes by DD)

Last week we used the Perron–Frobenius Theorem to see (among other things) that the topological entropy of a subshift of finite type (X_A, σ) (an edge shift given by a square nonnegative integer matrix A) is log λ_A, the logarithm of the maximum positive (Perron) eigenvalue of the transition matrix. Thus if D = {0, . . . , d − 1} is the alphabet and we let

N_n(X_A) = card(L(X_A) ∩ D^n) for n = 1, 2, . . .

denote the number of n-blocks seen in sequences in X_A, then

h_top(X_A, σ) = lim_{n→∞} (1/n) log N_n(X_A) = log λ_A.

Topological entropy is a measure of the size or richness of a language, or of how much freedom to concatenate words it allows. Claude Shannon, the founder of information theory, also considered a subshift of finite type to be a type of communication system or signal generator, and then its topological entropy is a measure of the number of different messages it can produce in a given time; this could be called the capacity of the device. (See the Appendix of the Parry–Tuncel book.) If (X_A, σ) is an edge shift, we consider the vertices of the underlying graph G to be states of the device and the edges leaving each state (thought of as being labeled by elements of the alphabet D) to represent the symbols that the device can print or transmit if it is in that state.

Figure 31. The Shannon Machine G (an edge e from state v to state w)

If there is an edge e from state v to state w, then when the device is in state v it can emit symbol e and end up in state w. This is supposed to model some of the practical restrictions that actually occur in communication systems: for example, in Morse code (not the same Morse as before), successive spaces are not allowed (all spaces of any length are considered equivalent), and in ordinary typing we might demand that each period be followed by a space, then a capital letter. We could also have a cost, length, or transmission time associated with each symbol, but here we will keep each symbol having length 1.

It is also of interest to take into consideration the statistics of the messages being sent. An actual source (say English text, or its Morse encoding) has its symbols appear with certain frequencies, as do its 2-blocks, 3-blocks, etc. These statistics might be badly matched to the properties of the signal generator. For example, an infrequent letter, like q, might appear as the label of a very popular edge in G, one which forms a sort of hub, in that many edges lead to and from it, and so it appears often in many words of L(X_A). So it seems worthwhile to look for some intrinsic statistics for the signal generator, a sort of guide on how to operate the device, that is to say a measure on (X_A, σ) that is somehow optimal; then we could try to recode sources to match these optimal statistics as nearly as possible before trying to make the signal generator produce them. An optimal measure µ on (X_A, σ) is one that has maximal entropy:

hµ(XA, σ) = htop(XA, σ) = log λA.

If the signal generator produces messages whose statistics are described by such a measure µ, then the information content per symbol transmitted (or per unit time) is as high as it could possibly be, namely the "capacity" of the device.

To help make sense of this, we recall the definition of measure-theoretic entropy very briefly; for a more thorough account, see Petersen's book. If α denotes the time-0 partition of X_A according to the central symbol, then

α_0^{n−1} = α ∨ σ^{−1}α ∨ · · · ∨ σ^{−n+1}α

is the partition of X_A according to initial n-blocks. Note that this corresponds to the time-0 partition, according to 1-blocks, of the higher block representation (X_A^[n], σ). Our (expected) uncertainty about which n-block is to be transmitted is then

H_µ(α_0^{n−1}) = −Σ_B µ[B]_0 log µ[B]_0,

the sum being taken over all n-blocks B, i.e. over all cells of the partition α_0^{n−1}; this is the same as the amount of information that is conveyed on average when the initial n-block is received. The uncertainty being removed (the same as the information being conveyed) per symbol transmitted (or per unit time) is

h_µ(X_A, σ) = lim_{n→∞} (1/n) H_µ(α_0^{n−1}).

Note: The same partition α works for all measures µ because the time-0 partition of a shift space is a universal generator.

Now our uncertainty about which n-block is to be transmitted cannot be more than if µ were to assign equal probabilities (1/N_n(X_A)) to all of the possible n-blocks (this is not only clear intuitively but can be checked by Lagrange multipliers); thus

H_µ(α_0^{n−1}) ≤ −Σ_{n-blocks B} (1/N_n(X_A)) log(1/N_n(X_A)) = log N_n(X_A),

and a µ that achieved equality would have entropy equal to h_top(X_A, σ). Of course it's only in the case of full shifts that we can hope to find measures that actually distribute mass equally among all allowable n-blocks. But it was proved first by Shannon, in a restricted form, and then by Parry in general that on every irreducible subshift of finite type there is such an optimal measure, that it is Markov, and that it is unique. We shift ground slightly to vertex subshifts so as to make it easier to describe the Markov measure.


Theorem 22.1. Let (X_A, σ) be an irreducible vertex subshift of finite type determined by a d × d 0,1 matrix A. Then there is a unique shift-invariant Borel probability measure µ_SP, called the Shannon–Parry measure, on X_A which has maximum entropy:

h_µSP(X_A, σ) = h_top(X_A, σ) = log λ_A.

It is a 1-step Markov measure with fixed probability vector (initial distribution) p given by

p_i = l_i r_i, i = 0, . . . , d − 1,

where l and r are the left and right positive eigenvectors of A corresponding to the eigenvalue λ_A, normalized so that Σ l_i r_i = 1, and stochastic transition matrix P given by

P_{ij} = A_{ij} r_j / (λ_A r_i), for i, j = 0, 1, . . . , d − 1.

Proof. We have seen above that for any shift-invariant (Borel probability) measure µ on X_A,

h_µ(X_A, σ) ≤ h_top(X_A, σ) = log λ_A.

So let us proceed to check that the 1-step Markov measure defined in the statement of the Theorem is shift-invariant and achieves this maximum possible entropy. For shift-invariance, we need that pP = p. But

(pP)_j = Σ_i p_i P_{ij} = Σ_i l_i r_i · A_{ij} r_j / (λ_A r_i) = (r_j/λ_A) Σ_i l_i A_{ij} = (r_j/λ_A)(lA)_j = (r_j/λ_A)(λ_A l)_j = r_j l_j = p_j.

Recall (see Petersen's book, around 5.2.12) that for any shift-invariant measure µ,

h_µ(α, σ) = H_µ(α | σ^{−1}α ∨ σ^{−2}α ∨ . . .) = H_µ(α | α_1^∞),

and for a 1-step Markov µ with fixed vector p and transition matrix P this equals

H_µ(α | σ^{−1}α) = −Σ_{i,j} p_i P_{ij} log P_{ij}.

Rather than just computing this out, let's see why it is that the Shannon–Parry measure µ_SP gives fairly equal measures (one of a choice of finitely many constant multiples of λ_A^{−n}) to all the allowable n-blocks: for an allowable cylinder i_0 i_1 . . . i_n in X_A,

µ_SP[i_0 i_1 . . . i_n]_0 = p_{i_0} P_{i_0 i_1} P_{i_1 i_2} · · · P_{i_{n−1} i_n}
= (l_{i_0} r_{i_0}) · (A_{i_0 i_1} r_{i_1})/(λ_A r_{i_0}) · (A_{i_1 i_2} r_{i_2})/(λ_A r_{i_1}) · · · (A_{i_{n−1} i_n} r_{i_n})/(λ_A r_{i_{n−1}})
= l_{i_0} r_{i_n} λ_A^{−n}.


From this it follows immediately that

h_µSP(X_A, σ) = lim_{n→∞} (1/(n+1)) H_µSP(α_0^n)
= −lim_{n→∞} (1/(n+1)) Σ_{i_0,...,i_n} l_{i_0} r_{i_n} λ_A^{−n} log(l_{i_0} r_{i_n} λ_A^{−n})
= −lim_{n→∞} [ (1/(n+1)) Σ_{i_0,...,i_n} l_{i_0} r_{i_n} λ_A^{−n} log(l_{i_0} r_{i_n}) − (n/(n+1)) Σ_{i_0,...,i_n} l_{i_0} r_{i_n} λ_A^{−n} log λ_A ]
= log λ_A,

since

Σ_{i_0,...,i_n} l_{i_0} r_{i_n} λ_A^{−n} log(l_{i_0} r_{i_n})

is bounded and

Σ_{i_0,...,i_n} l_{i_0} r_{i_n} λ_A^{−n} = 1.


23. April 2 (Notes by DD)

Continuing the proof of Theorem 22.1, it remains to show that the Shannon–Parry measure, defined last time as a 1-step Markov measure on the (topologically transitive) vertex subshift of finite type (X_A, σ), is the only (shift-invariant Borel probability) measure on (X_A, σ) with entropy log λ_A.

Suppose that µ is an invariant measure on (X_A, σ) with entropy log λ_A. We show first that µ must be a 1-step Markov measure. This is accomplished by considering the 1-step Markovization µ_1 of µ, which is defined to be the (unique) 1-step Markov measure on (X_A, σ) which agrees with µ on all cylinder sets defined by blocks of length 2. Thus µ_1 has fixed probability vector

q = (µ([0]_0), µ([1]_0), . . . , µ([d−1]_0))

and stochastic matrix of transition probabilities Q defined by

Q_{ij} = µ(j|i) = µ([ij]_0)/µ([i]_0) = µ(A_i ∩ σ^{−1}A_j)/µ(A_i),

where A_i = {x ∈ X_A : x_0 = i} is the i'th cell of the time-0 partition α, for i, j = 0, 1, . . . , d − 1. The important point is that forming the Markovization can only make the entropy go up: by 5.2.5 (2) in Petersen's book,

log λ_A = h_µ(X_A, σ) = H_µ(α | σ^{−1}α ∨ σ^{−2}α ∨ . . .) = H_µ(α | α_1^∞)
≤ H_µ(α | σ^{−1}α) = H_µ1(α | σ^{−1}α) = h_µ1(X_A, σ) ≤ log λ_A,

since µ and µ_1 agree on 2-blocks, µ_1 is 1-step Markov, and we know that no measure can have entropy larger than log λ_A.

It follows that all expressions in the above chain are equal; in particular

H_µ(α | α_1^∞) = H_µ(α | σ^{−1}α).

This implies that µ is 1-step Markov. For equality can hold here if and only if, given σ^{−1}α, α is independent of σ^{−2}α ∨ . . . , which says exactly that µ is 1-step Markov: given the present (here time −1), the future (time 0) is independent of the past (times −2, −3, . . .).

(This conditional independence is a consequence of the strict convexity of the function f(t) = −t log t and Jensen's Inequality; see Petersen's book, 5.2.9, and Smorodinsky's lecture notes, 4.22. Recall that two partitions ξ and η are independent given a partition α if on each cell C of α the restricted partitions ξ|C and η|C are independent: for each N ∈ η and Z ∈ ξ, µ_C(N ∩ Z) = µ_C(N)µ_C(Z), i.e. µ(N ∩ Z ∩ C)/µ(C) = [µ(N ∩ C)/µ(C)][µ(Z ∩ C)/µ(C)]. One could now calculate in R^d to verify that µ = µ_SP, but Parry's argument using Markovization, entropy, and ergodicity is actually easier as well as more instructive.)

Let µ be any maximal measure for (X_A, σ), i.e. h(µ) = log λ_A. We know now that µ is 1-step Markov. We also know that the Shannon–Parry measure µ_SP is 1-step Markov and has entropy log λ_A. Further, µ_SP is ergodic, since its fixed vector p is positive and its matrix P of transition probabilities has positive entries wherever A does, and A is irreducible, hence so is P. Form the measure

ν = (1/2)(µ + µ_SP).


Because entropy is an affine function of the measure, we have that also

h(ν) = log λ_A.

By the preceding paragraph, this implies that ν too is 1-step Markov. Further, ν is ergodic: its matrix of transition probabilities is also irreducible, since ν assigns positive measure to every 2-block to which µ_SP does. Since ergodic measures are extreme points of the set of invariant probability measures, we must have ν = µ = µ_SP. □

Remark 23.1 (Terminology). A topological dynamical system (X, T) which has a unique measure of maximal entropy is called intrinsically ergodic. Thus we have proved that irreducible SFT's are intrinsically ergodic.

Example 23.1. The golden-mean SFT is the vertex shift determined by the matrix

    A = ( 1 1 )
        ( 1 0 ).

Every p ∈ (0, 1) determines a 1-step Markov measure with matrix of transition probabilities

    ( p 1−p )
    ( 1  0  );

but there is only one p that gives the measure of maximum entropy. Putting det(A − λI) = 0, we find the Perron eigenvalue λ_A = the golden mean = γ = (1 + √5)/2. The left and right eigenvectors, fixed vector, and matrix of transition probabilities are

(9)  l = (1/γ, 1/γ²), r = (1/γ, 1/γ²)^tr,

(10) p = (1/γ², 1/γ⁴) / (1/γ² + 1/γ⁴) = ( γ²/(1 + γ²), 1/(1 + γ²) ),

(11) P = ( 1/γ 1/γ² )
         (  1    0  ).
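(The numbers in (9)–(11) can be verified numerically; the following sketch, an addition to the notes, builds (p, P) from any irreducible 0,1 matrix and checks that the entropy −Σ p_i P_{ij} log P_{ij} equals log λ_A.)

    # Sketch (added): Shannon-Parry measure of a vertex SFT, golden mean example.
    import numpy as np

    def shannon_parry(A):
        w, V = np.linalg.eig(A)
        k = np.argmax(w.real)
        lam = w.real[k]
        r = np.abs(V[:, k].real)                   # right Perron eigenvector
        w2, U = np.linalg.eig(A.T)
        l = np.abs(U[:, np.argmax(w2.real)].real)  # left Perron eigenvector
        l = l / (l @ r)                            # normalize so that l.r = 1
        p = l * r                                  # fixed probability vector
        P = A * r[None, :] / (lam * r[:, None])    # P_ij = A_ij r_j / (lam r_i)
        return lam, p, P

    lam, p, P = shannon_parry(np.array([[1.0, 1.0], [1.0, 0.0]]))
    h = -sum(p[i] * P[i, j] * np.log(P[i, j])
             for i in range(2) for j in range(2) if P[i, j] > 0)
    print(p, P)
    print(h, np.log(lam))   # equal: the measure achieves maximal entropy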


24. April 7 (Notes by LK)

Last week we showed that an irreducible shift of finite type (X_A, σ) has a unique measure µ_SP of maximal entropy, i.e.

h(µ_SP) = h_top(X_A, σ) = log λ_A.

We discussed the example of the golden mean subshift of finite type.

24.1. Generalization to Equilibrium States. Let (X, T) be a topological dynamical system and φ : X → R a continuous function (a "potential function"). An equilibrium state for φ is any T-invariant Borel probability measure on X which achieves the supremum, over all T-invariant Borel probability measures µ, of

h(µ) + ∫_X φ dµ.

Thus a measure of maximal entropy is an equilibrium state for the function φ = 0. We are interested in determining when equilibrium states exist and, if so, when they will be unique. There are various theorems guaranteeing the existence of an equilibrium state when φ is "very continuous" in some sense, e.g. Hölder continuous or of summable variation. If more than one equilibrium state exists for a function φ, then we say that there is a phase transition. An example of a physical model with a phase transition would be a state where ice and cold water exist simultaneously.

We have already shown that an irreducible subshift of finite type has a unique equilibrium state for the function φ = 0. More generally, if (X_A, σ) is an irreducible subshift of finite type and φ : X → R depends on finitely many coordinates (without loss of generality on two coordinates, since we can use a higher block representation), then there is a unique equilibrium state for φ. This measure is one-step Markov and can be found in the same way as the Shannon–Parry measure, except that instead of working with the matrix A we work with A(φ), where

(A(φ))_{ij} = A_{ij} e^{φ(ij)}.

(Recall that the rows and columns of the d × d matrix A are indexed by the symbols in the alphabet.) From this we obtain the left and right eigenvectors l and r,

l A(φ) = λ_{A(φ)} l,   A(φ) r = λ_{A(φ)} r,   l · r = 1,

and use them to form the probability vector p,

p_i = l_i r_i,

and the matrix of transition probabilities

P(φ)_{ij} = (A(φ))_{ij} r_j / (λ_{A(φ)} r_i).

See page 22 in Parry and Tuncel for the details of this argument, or Theorems 39.4 (page 80), 39.3 (page 79), and 39.1 (page 78) in the Spring 1997 261 notes for a more general discussion.
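(A sketch of this recipe, added here, for a 2-block potential φ on the golden mean shift; the numerical values of φ below are made up purely for illustration.)

    # Sketch (added): equilibrium-state Markov measure from A(phi), with a
    # hypothetical potential phi on the allowed 2-blocks of the golden mean SFT.
    import numpy as np

    A = np.array([[1.0, 1.0], [1.0, 0.0]])
    phi = {(0, 0): 0.5, (0, 1): -0.2, (1, 0): 0.0}   # illustrative phi(ij)
    Aphi = np.array([[A[i, j] * np.exp(phi.get((i, j), 0.0)) for j in range(2)]
                     for i in range(2)])

    w, V = np.linalg.eig(Aphi)
    k = np.argmax(w.real)
    lam = w.real[k]
    r = np.abs(V[:, k].real)
    w2, U = np.linalg.eig(Aphi.T)
    l = np.abs(U[:, np.argmax(w2.real)].real)
    l = l / (l @ r)
    p = l * r                                   # initial distribution
    P = Aphi * r[None, :] / (lam * r[:, None])  # transition probabilities
    print(p, P, P.sum(axis=1))                  # each row of P sums to 1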


24.2. Coding between Subshifts. Let (X, σ) ⊂ D^Z be any subshift, where D = {0, 1, . . . , d−1}, and let a ∈ N. Recall the higher block (a-block) representation of (X, σ). We take a new alphabet D^a of strings of length a. Define a map

c_a : X → (D^a)^Z

by

(c_a x)_i = x_i x_{i+1} . . . x_{i+a−1}.

This map is continuous and shift-commuting and gives a topological conjugacy

(X, σ) ≈ (c_aX, σ).

We note that Lind and Marcus use the notation (X^[a], σ) for (c_aX, σ). The map c_a is a sliding block code with memory zero and anticipation a − 1. The map c_a^{−1} is a one-block code with memory zero and anticipation zero: if y_0 = b_1 b_2 . . . b_a, then

c_a^{−1}(y)_0 = b_1.

The inverse map c_a^{−1} compresses information by taking the first symbol and forgetting the rest.
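(An added sketch of c_a and its one-block inverse on finite windows:)

    # Sketch (added): the a-block code c_a on a finite stretch of a sequence,
    # and its inverse, which keeps only the first symbol of each a-block.
    def c(a, xs):
        # overlapping a-blocks of xs
        return [tuple(xs[i:i + a]) for i in range(len(xs) - a + 1)]

    def c_inv(ys):
        # one-block inverse: first symbol of each block
        return [y[0] for y in ys]

    x = [0, 1, 1, 0, 1, 0, 0, 1]
    y = c(3, x)
    print(y)
    print(c_inv(y) == x[:len(y)])   # True: recovers x (up to the window edge)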

Recall that if (X_F, σ) is an a-step subshift of finite type, then (X^[a], σ) is a one-step subshift of finite type, called the vertex shift of (X_F, σ), and (X^[a+1], σ) is the edge shift. If α is the time-0 partition of X, then α ∨ σ^{−1}α ∨ · · · ∨ σ^{−n+1}α corresponds to the time-0 partition of X^[n]. Thus higher block codes let us concentrate on time-0 partitions rather than on successive refinements.

24.3. State Splitting and Amalgamation. Here we begin to follow Lind and Marcus closely. Let G be an irreducible directed graph, perhaps with multiple edges. Then G has an associated edge shift

(X_G, σ) ⊂ E(G)^Z,

where the symbols are the edges of G and X_G consists of all strings determined by doubly-infinite walks on the graph G. Let A(G) be the nonnegative integer matrix which is indexed by V(G), the vertices of the graph, and for which A(G)_{ij} is the number of edges from i to j.

Next we describe how to take an elementary out-splitting of one vertex v ∈ V(G), which has no loops, into two new vertices. First we need to partition the outgoing edges from v. Let E_v = {edges starting at v}. We will illustrate splitting the vertex v on the following graph G.

[Figure: a graph G with vertices w₁, w₂, v, w₃, w₄; v has incoming edges and six outgoing edges.]


Suppose we have a partition E_v = E_v^1 ∪ E_v^2, where E_v^1 contains the top three edges in the graph of G and E_v^2 contains the bottom three edges. We split the state v into two new vertices v₁ and v₂, where v₁ gets all the outgoing edges from v that are in E_v^1 and v₂ gets all the outgoing edges in E_v^2. Thus the graph of G with v split into v₁ and v₂ can be illustrated as follows:

[Figure: the graph H, with v split into v₁ and v₂ and v's outgoing edges distributed accordingly.]

The edges that are coming into v get cloned. That is, if an edge exists from w to v, we create a new edge so that we have an edge from w to v₁ and an edge from w to v₂, as follows:

[Figure: an edge w → v in G becomes the two edges w → v₁ and w → v₂ in H.]

Thus in performing the out-splitting of a vertex we partition the outgoing edges and clone theincoming edges to obtain a new graph H. We also allow H to be replaced by any graph isomorphicto it.

Theorem 24.1. The edge shifts XG and XH are topologically conjugate.

Proof. We select the particular graph in the isomorphism class of H that is constructed as above and define π : X_H → X_G by identifying edges that are clones of each other. That is, if the edges of G are denoted by {e, f, . . .} and the edge e is cloned into e₁, e₂ in H, the edge f into f₁, f₂, etc., then

π(e₁) = π(e₂) = e, π(f₁) = π(f₂) = f, etc.

The map π is one-to-one because we can recover the subscript of an edge (π deletes subscripts) by looking at the next edge and seeing what cell of the partition it lies in.

The map π^{−1} : X_G → X_H is a two-block code with memory zero and anticipation one. For example, if f ∈ E_v^k and

x = . . . ef . . . ∈ X_G,

then π^{−1}x = . . . e_k . . . . □


Next we discuss state splitting in general, where we out-split any number of vertices in a graph which may have loops. As before, our procedure will be to partition outgoing edges and clone incoming edges.

Let G be a graph and (X_G, σ) its edge shift. For each vertex v ∈ V(G) we partition E_v = {edges starting at v} into

E_v = E_v^1 ∪ E_v^2 ∪ · · · ∪ E_v^{n(v)}.

We denote this partition by P = {E_v^j : 1 ≤ j ≤ n(v)}. The elements of P are the vertices of the new graph. That is, H = H(G, P) has vertices which are the cells of the partition, denoted E_v^j or v_j for 1 ≤ j ≤ n(v). We define the edges of H as follows: if an edge e connects vertex v to w in G, then find the i such that e ∈ E_v^i and put an edge from v_i to each w_j for 1 ≤ j ≤ n(w). Thus each edge is cloned, as before, according to the number of fragments its terminal vertex is split into.

We illustrate this procedure on the following graph G, using the complete partition of E(G) into singletons.

[Figure: a graph G on vertices v and w, each of out-degree two.]

We obtain the following graph H. We describe what happens to the edge labeled with a star (*): that edge can be thought of as the part of v that leaves v. We'll assume that it lies in the partition element E_v^1, so in the graph H it will come from the vertex v₁. We must clone it so that it connects the vertex v₁ to the vertices w₁ and w₂, which are created in the graph of H since w has two outgoing edges, each belonging to its own atom in the partition of the edges of G.

[Figure: the out-splitting H, on vertices v₁, v₂, w₁, w₂.]

We can perform an in-splitting of a graph as well. This procedure is similar to out-splitting except that we think of the partition P of E(G) as partitioning the edges coming into each vertex v. We then clone edges according to which pieces of each (new) vertex they start at.

For example, if we use the golden mean shift with the complete partition,

[Figure: the golden mean graph G, with a loop at v and edges v → w and w → v,]

we obtain the in-splitting H = H(G,P)


[Figure: the in-splitting H, on vertices v₁, v₂ and w.]

If H is an in- or out-splitting of G, then we say that G is an (in- or out-) amalgamation of H. Splittings and amalgamations of graphs give rise to topological conjugacies of their edge shifts; these are called splitting codes and amalgamation codes.


25. April 9 (Notes by LK)

Recall that we perform an out-splitting by using a partition of the edge set E(G). Each vertex v ∈ V(G) splits according to the partition of the edges leaving v. Each edge into w ∈ V(G) is cloned according to the fragments of w. The out-splitting code is a two-block map with memory zero and anticipation one, and the amalgamation code is a one-block map with memory and anticipation zero.

We want to determine how the matrices for G and H are related. Recall that A_G is the d × d matrix where

(A_G)_{vv′} = the number of edges in G from v ∈ V(G) to v′ ∈ V(G).

Suppose we have an out-splitting from G to H. We define two useful matrices. The first is the division matrix D, a 0,1 matrix determined by a partition P of E(G). This is a |V(G)| × |V(H)| matrix, where the entry D_{vu} is 1 if u is a vertex in H that is created by splitting the vertex v in G, and 0 otherwise. Thus each row of D has at least one 1, and each column of D has exactly one 1.

The second matrix of interest is the nonnegative integer |V(H)| × |V(G)| edge matrix E. We let

E_{uv} = the number of edges in G that end at v and are in the partition element u of P.

The edge and division matrices will give us a good sense of how the matrices A_G and A_H change under splitting or amalgamation.

Theorem 25.1. If H is an out-splitting of G, and D and E are the corresponding division and edge matrices, then DE = A_G and ED = A_H.

We note that the converse holds as well.

Proof. Let v, v′ ∈ V(G). Then

(DE)_{vv′} = Σ_{u ∈ V(H)} D_{vu} E_{uv′}.

However, D_{vu} = 1 if and only if u is a fragment of v in H. Therefore,

(DE)_{vv′} = Σ_{u a fragment of v} E_{uv′},

where E_{uv′} is the number of edges in G to v′ which are in the partition element u (a fragment of v). Thus this sum is equal to the number of edges from v to v′ in G, which by definition is (A_G)_{vv′}.

If u, u′ ∈ V(H), then

(ED)_{uu′} = Σ_{v ∈ V(G)} E_{uv} D_{vu′}.

Recall that D_{vu′} is 1 if and only if u′ is a fragment of v. Therefore this sum has only one nonzero term, E_{uv}, where u′ is a fragment of v. However, E_{uv} is the number of edges in G that are in cell u and end at v. Each such edge is cloned according to the fragments of v; that is, if there are n edges in the partition element u ending at v, then we put n edges from u into each fragment u′ of v. Therefore E_{uv} is the number of edges in H from u to u′, which by definition is (A_H)_{uu′}. □
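(For a concrete check, added here: take G = the golden mean graph, with a loop at v, an edge v → w, and an edge w → v, and the complete out-splitting in which v splits into v₁ = {loop} and v₂ = {edge to w} while w is unsplit. Reading D and E off the definitions above, one can verify the theorem numerically.)

    # Sketch (added): verify DE = A_G and ED = A_H for the complete
    # out-splitting of the golden mean graph described above.
    import numpy as np

    A_G = np.array([[1, 1], [1, 0]])     # vertices v, w
    D = np.array([[1, 1, 0],             # v splits into v1, v2
                  [0, 0, 1]])            # w stays w
    E = np.array([[1, 0],                # cell {loop at v}: one edge ending at v
                  [0, 1],                # cell {v -> w}: one edge ending at w
                  [1, 0]])               # cell {w -> v}: one edge ending at v
    print((D @ E == A_G).all())          # True
    print(E @ D)                         # A_H, on the vertices v1, v2, w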


25.1. Matrices for In-splittings. Suppose we want to perform an in-splitting of the graph G. Define the transpose G^tr of G by reversing all of the edges of G. Then the adjacency matrix of G^tr is the transpose of the adjacency matrix of G:

A_{G^tr} = (A_G)^tr.

If we perform an out-splitting of G^tr, we obtain a new graph, denoted H^tr, and anything isomorphic to H is an in-splitting of G. We can obtain the division and edge matrices D and E for G^tr: DE = (A_G)^tr and ED = (A_H)^tr. Then we have E^tr D^tr = A_G and D^tr E^tr = A_H. Thus we have a theorem similar to Theorem 25.1 for in-splittings.

25.2. Topological Conjugacy of SFT's and Strong Shift Equivalence. Suppose we have a finite sequence of splittings and amalgamations taking a graph G and producing an end result (isomorphic to) H. Then:

(1) The edge subshifts of finite type (X_G, σ) and (X_H, σ) are topologically conjugate.
(2) There exists a sequence of pairs of rectangular nonnegative integer matrices (D₀, E₀), (D₁, E₁), . . . , (Dₙ, Eₙ) such that

A_G = A₀ = D₀E₀, A₁ = E₀D₀,
A₁ = D₁E₁, A₂ = E₁D₁,
. . .
Aₙ = DₙEₙ, Aₙ₊₁ = EₙDₙ = A_H.

In each case, D is either an edge or a division matrix, depending on whether we are taking an in- or out-splitting or amalgamation. We note that any one of the above rows is called an elementary equivalence (e.g. A₁ ∼ A₂). Elementary equivalence is not transitive; A_G and A_H are related by its transitive closure, and we say that the matrices A_G and A_H are strong shift equivalent.

Theorem 25.2 (R. Williams). Every topological conjugacy between two edge shifts X_G and X_H is a composition of finitely many (in- or out-) splittings and (in- or out-) amalgamations. X_G and X_H are topologically conjugate if and only if the matrices A_G and A_H are strong shift equivalent.

We sketch the proof; the details can be found in Lind and Marcus. We recall that the above equivalence relation allows graph isomorphisms.

Proof. Suppose φ : X_G → X_H is a topological conjugacy. For x ∈ X_G, we know by the Curtis–Hedlund–Lyndon Theorem (Theorem 3.2) that (φx)₀ depends on a finite window x₋w . . . x_w. We can thus take a higher block representation (X_G^[2w+1], σ) to replace φ by a one-block map. Recall that we have a topological conjugacy between (X_G^[2w+1], σ) and (X_G, σ). Therefore, we assume that φ is a one-block map. In using this representation, though, we don't see any splittings and amalgamations.

However, we can use the following fact: the two-block representation (X_G^[2], σ) is topologically conjugate to the out-splitting determined by the complete partition of E(G) into singletons. We can use this fact to make a finite sequence of complete out-splittings which is topologically conjugate to (X_G^[2w+1], σ).

Instead of proving this fact, we illustrate it on the golden mean example.


[Figure: the golden mean graph G, with a loop e at v, an edge f from v to w, and an edge g from w to v.]

The edge shift is all possible sequences of e, f, and g read along walks on the graph G. The complete out-splitting H is shown next.

[Figure: the complete out-splitting H, on vertices v₁, v₂, w, with edges e₁, e₂, f₁, g₁, g₂.]

We obtain the two-block representation of G as follows:

[Figure: the two-block graph of G, on the five vertices ee, ef, fg, gf, ge.]

We note that this two-block representation is topologically conjugate to the edge shift of the graph H. □


26. April 14 (Notes by CN)

During the previous lecture, we sketched a proof of the theorem of Williams on the topological conjugacy of SFTs and strong shift equivalence. Today we tell how to complete the proof.

Theorem 26.1 (R. Williams). Let φ : X_A → X_B be a topological conjugacy between edge shifts of irreducible nonnegative integer matrices A and B. Then φ is a composition of finitely many splitting and amalgamation codes, and the matrices A and B are strong shift equivalent.

Before proving the theorem, recall that two nonnegative integral matrices are strong shift equivalent if there exists a sequence of pairs of nonnegative rectangular integer matrices (D₀, E₀), . . . , (Dₙ, Eₙ) such that

(12) A = D₀E₀, E₀D₀ = A₁,
     A₁ = D₁E₁, E₁D₁ = A₂,
     . . .
     Aₙ = DₙEₙ, EₙDₙ = B.

Ideas of Proof. The complete edge-splitting of a graph corresponds to taking the 2-block code on the edge shift. Repeating this process takes us to a higher block representation of X_A, which corresponds to a sequence of out-splitting codes, so we may assume that φ is a 1-block code. A 1-block code just relabels the edges.

That φ is a 1-block code alone is not sufficient to establish the first result; we must also establish that φ⁻¹ is a 1-block code. In that case, both φ and φ⁻¹ would be compositions of (in- and out-) splitting and (in- and out-) amalgamation codes (which include graph isomorphisms, which simply relabel edges). Let φ⁻¹ have memory m and anticipation a:

(15) (φ⁻¹y)₀ = φ⁻¹(y₋ₘ . . . y₋₁y₀y₁ . . . y_a).

We will form out-splittings H̄ and Ḡ of the graphs H and G (corresponding to the edge matrices B and A). Since the out-splittings (let's call them ψ_{GḠ} and ψ_{HH̄}) are also topological conjugacies, they give rise to a new conjugacy φ̄ : X_Ḡ → X_H̄. We want to arrange it so that the memory remains m and the anticipation is reduced to a − 1. Then repeat this process until a = 0. Similarly, in-splittings on G and H will give rise to conjugacies with unchanged anticipation and reduced memory; hence this process can be used to reduce the memory to m = 0. Finally (after a number of steps depending on a and m) we arrive at a conjugacy ψ which is a 1-block code and whose inverse is also a 1-block code.

Now let H̄ = the complete out-splitting of H (the edge set E(H) is partitioned into singletons).


Recall: if the graph H contains

· · · v →e w · · · ,

then find the i such that e ∈ E_v^i, the i'th partition element of the set of edges starting at v (here E_v^i = {e}). We then have in H̄

· · · v_i → w_j · · ·

for each fragment w_j (edge leaving w) of w. There is varied notation for these new edges: call the edge e_j, for v_i →^{e_j} w_j; e_j = (the clone of e into fragment j) = e^{w_j} = e^{E_w^j} = e^f, where f is an edge leaving w. (Remember, e is cloned once for each edge leaving its destination w.) Now φ : X_G → X_H is a 1-block map, given by a labeling of the edges, so we can assume φ comes from a mapping that sends edges of G to edges of H, Φ : E(G) → E(H) (not necessarily one-to-one).

Now, to form the out-splitting Ḡ, partition E(G) according to its elements' images under Φ. (In other words, two edges are in the same partition cell if their images are the same, and we write E_G^h = {g ∈ E(G) : Φ(g) = h}.)

[Figure: in G, an edge e from v to w (with Φe = h) and edges g₁, g₂, g₃ leaving w; in Ḡ, the corresponding cloned edge e^h runs from a fragment of v to a fragment w_j of w.]

Define φ̄ on the edges of Ḡ by φ̄(e^h) = (Φe)^h. This defines φ̄ : E(Ḡ) → E(H̄) (and hence a 1-block code X_Ḡ → X_H̄) so that the following diagram commutes:

    X_A = X_G  --ψ_{GḠ}-->  X_Ḡ      (with amalgamation code α_{ḠG} going back)
        |φ                    |φ̄
        v                     v
       X_H   --ψ_{HH̄}-->  X_H̄      (with amalgamation code α_{H̄H} going back)


The actions of these codes work as follows:

    X_G:  . . . g₋₁.g₀g₁g₂ . . .   --ψ_{GḠ}-->   X_Ḡ:  . . . g₋₁^{h₀}.g₀^{h₁}g₁^{h₂}g₂^{h₃} . . .
          |φ                                            |φ̄
          v                                             v
    X_H:  . . . h₋₁.h₀h₁h₂ . . .   --ψ_{HH̄}-->   X_H̄:  . . . h₋₁^{h₀}.h₀^{h₁}h₁^{h₂}h₂^{h₃} . . .

We claim that the memory hasn't increased, but the anticipation has decreased: m(φ̄⁻¹) = m(φ⁻¹) and a(φ̄⁻¹) = a(φ⁻¹) − 1. For the symbols of elements of X_H̄ are of the form h₀^{h₁}, and thus the i'th position already possesses foreknowledge of the (i + 1)'st position, one step into the future. For example, if a = 1, then to determine g₀ via φ⁻¹ one must know the coordinates from −m to 1, that is . . . h₋₁.h₀h₁; but to determine g₀^{h₁} via φ̄⁻¹ one need only know the coordinates from −m to 0, that is . . . h₋₁^{h₀}.h₀^{h₁}. So if the anticipation a(φ⁻¹) = 1, then a(φ̄⁻¹) = 0.

26.1. Shift Equivalence. We have established strong shift equivalence (SSE) as a necessary and sufficient condition for topological conjugacy between edge shifts. Unfortunately, strong shift equivalence is difficult to check. There is no known algorithm for determining strong shift equivalence, and the problem may, in fact, be logically undecidable. It is presently unknown whether it is decidable even for 2 × 2 matrices. So, mindful of the difficulty of determining strong shift equivalence, R. Williams defined shift equivalence (SE), which is decidable and (in theory) can be checked algebraically:

Definition 26.1. Two square nonnegative integral matrices A, B are said to be shift equivalent if there exist rectangular nonnegative integral matrices R, S and an integer l ≥ 1 such that

(16) AR = RB, SA = BS,
(17) A^l = RS, B^l = SR.

This relation is transitive, and hence is an equivalence relation.

Exercise 2. Prove that the relation is transitive, and, hence, an equivalence relation (Symmetryand reflexivity are clear).
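(An added sketch of the finitely many matrix identities to be checked; the witness matrices below are the trivial ones R = A, S = I, l = 1, used just to exercise the definition.)

    # Sketch (added): check the shift equivalence equations
    # AR = RB, SA = BS, A^l = RS, B^l = SR for given integer matrices.
    import numpy as np

    def is_shift_equivalence(A, B, R, S, l):
        return ((A @ R == R @ B).all() and (S @ A == B @ S).all()
                and (np.linalg.matrix_power(A, l) == R @ S).all()
                and (np.linalg.matrix_power(B, l) == S @ R).all())

    A = np.array([[1, 1], [1, 0]])
    I = np.eye(2, dtype=int)
    print(is_shift_equivalence(A, A, R=A, S=I, l=1))   # True (trivial witness)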


Proposition 26.2. If A and B are strong shift equivalent (SSE), then they are shift equivalent (SE).

Proof. Recall the equations (12). Each row of the equation array, with A_i = D_iE_i and E_iD_i = A_{i+1}, represents a shift equivalence, because if R = D_i, S = E_i, and l = 1, then

A_iR = A_iD_i = D_iE_iD_i = D_iA_{i+1} = RA_{i+1},
SA_i = E_iA_i = E_iD_iE_i = A_{i+1}E_i = A_{i+1}S,
A_i^1 = A_i = D_iE_i = RS,
A_{i+1}^1 = A_{i+1} = E_iD_i = SR.

Since shift equivalence is transitive, this implies that A ∼_SE B. □

26.2. Williams Conjecture. Shift equivalence was put forth by Williams as a simpler relation which could be checked. It was his hope and belief that this relation was a sufficient condition for conjugacy, and he set out to prove it.

R. Williams' Conjecture, a.k.a. the Shift Equivalence Problem: if A ∼_SE B, is A ∼_SSE B?

26.2.1. Some Positive Evidence: Many people tried for a long time without success to find counterexamples. Shift equivalence (SE) also preserves all known (up to 1997) topological conjugacy invariants, including (in particular) zeta functions, and hence topological entropy, the number of periodic points of each period, etc.

If (X, T) is a topological dynamical system, for each n = 1, 2, . . . let p_n = card{x ∈ X : T^n x = x} and q_n = the number of points with least period n, and set

(18) ζ(t) = exp( Σ_{n=1}^∞ (p_n/n) t^n ).

The function ζ(t) is an invariant of topological conjugacy as {pn} is.

Exercise 3. For an edge shift (X_A, σ), show that

(19) ζ_A(t) = 1 / det(I − tA),

where the denominator is the characteristic polynomial of A in reciprocal form (det(I − tA) = t^d χ_A(1/t), up to sign).
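(An added numerical check of (19) for the golden mean matrix, using p_n = tr(A^n):)

    # Sketch (added): compare exp(sum_n p_n t^n / n), with p_n = trace(A^n),
    # against 1/det(I - tA) as power series, for the golden mean matrix.
    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[1, 1], [1, 0]])
    N = 8
    lhs = sp.exp(sum((A**n).trace() * t**n / n for n in range(1, N)))
    rhs = 1 / (sp.eye(2) - t * A).det()
    print(sp.series(lhs, t, 0, N))
    print(sp.series(rhs, t, 0, N))   # the two series agree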

In short, it is hard to find a way to tell shift equivalent edge shifts apart! In 1992, however, K. H. Kim and F. W. Roush found a counterexample for the case of reducible matrices. And in 1997, Kim and Roush were able to produce an irreducible counterexample. The 7 × 7 counterexample matrices for A, B, S, and R were found using a computer algebra system and are included in Kim and Roush's paper, The Williams Conjecture is false for irreducible subshifts, which can be found on the internet in Electronic Research Announcements. Their achievements were based largely on work by J. Wagoner, M. Boyle, W. Krieger, U. Fiebig, and others.


26.3. Invariants of Shift Equivalence. There is a complete invariant for shift equivalence, namely the dimension triple (∆_A, ∆_A⁺, δ_A), developed by Krieger, Williams, Kim and Roush, Boyle and Handelman, and Boyle, Marcus, and Trow, based on ideas from C*-algebras. The elements of the dimension triple are as follows:

A = a d × d nonnegative integer matrix,
∆_A = the dimension group,
∆_A⁺ = the dimension semigroup,
δ_A : ∆_A → ∆_A the dimension group automorphism.

These will be defined during the next lecture.


27. April 16 (Notes by CN)

Recall that in the previous lecture we defined strong shift equivalence (SSE) and shift equivalence (SE) and proved that strong shift equivalence implies shift equivalence (Proposition 26.2). R. Williams conjectured that the converse was true, but Kim and Roush uncovered a counterexample in their 1997 paper. We neglected to mention, however, that their counterexample matrices, which include negative entries in some places, are acceptable only because of the following:

Proposition 27.1. If A and B are primitive (aperiodic) matrices, then they are shift equivalent over Z⁺ if and only if they are shift equivalent over Z.

The Kim-Roush example does involve primitive matrices A and B.

27.1. Invariants of Shift Equivalence. In the previous lecture, we introduced the dimension triple, a complete invariant for shift equivalence. Now we can provide definitions.

Definition 27.1. Let A be a d × d nonnegative integral matrix. The eventual range of A, R(A), is given by

(20) R(A) = ⋂_{k=1}^∞ Q^d A^k,

an A-invariant subspace of Q^d, where Q^d is the set of d-dimensional rationals, i.e. row vectors with rational entries. (Note that here A acts on the right.) The dimension group, ∆_A, is given by

(21) ∆_A = {v ∈ R(A) : there exists k ≥ 0 such that vA^k ∈ Z^d},

an additive subgroup of R(A). The dimension group automorphism, δ_A : ∆_A → ∆_A, is defined by δ_A = A | ∆_A, the restriction of A to ∆_A. The dimension semigroup, ∆_A⁺, is given by

∆_A⁺ = {v ∈ R(A) : there exists k ≥ 0 such that vA^k ∈ (Z⁺)^d}.

Remarks 27.1. The dimension group automorphism δ_A preserves the dimension semigroup ∆_A⁺.

The complete invariant of shift equivalence over Z⁺ is the dimension triple (∆_A, ∆_A⁺, δ_A).

The complete invariant of shift equivalence over Z is the dimension pair (∆_A, δ_A).

Theorem 27.2. For nonnegative integral matrices A and B, the following are equivalent:

(1) A and B are shift equivalent (over Z⁺).
(2) There is a group isomorphism ∆_A → ∆_B which maps ∆_A⁺ onto ∆_B⁺ and commutes with the actions of the two automorphisms δ_A and δ_B.
(3) The corresponding edge shifts (X_A, σ) and (X_B, σ) are eventually conjugate (though not necessarily conjugate).

This theorem, developed by Krieger, R. Williams, Kim and Roush, Boyle and Handelman, and Boyle, Marcus, and Trow, requires the following:

Definition 27.2. Two edge shifts, (X_A, σ) and (X_B, σ), are eventually conjugate if, for all large enough n, (X_A^n, σ^n) and (X_B^n, σ^n) are topologically conjugate, where (X_A^n, σ^n) has alphabet consisting of the n-blocks in X_A and the shift is by n places each time. (The map X_A → X_A^n is a block map that comes from grouping into n-blocks, not a sliding block code.)


Example 27.1. Consider the full shift (Σ₂, σ). It has 2-block representation (Σ₂^[2], σ) over the alphabet 00 = a, 01 = b, 10 = c, 11 = d. Sequences are recoded as follows:

    . . . .011010001 . . .   --σ-->   . . . .11010001 . . .
    . . . .bdcbc . . .        -->     . . . .dcbc . . .

The map 00 → a, etc., is a regular sliding block code. The image is a vertex SFT:

[Figure: a graph on the vertices a, b, c, d; this is not (Σ₄, σ), since the shift does not have all sequences on these four symbols.]

Hence (Σ₂, σ) is topologically conjugate to Σ_A ⊂ {a, b, c, d}^Z, a proper subshift of the 4-shift: (Σ₂^[2], σ) ≅ (Σ_A, σ). On the other hand, (Σ_A², σ²) ≅ (Σ₄, σ). For, consider (Σ₂², σ²), with alphabet 00 = u, 01 = v, 10 = w, 11 = x. Now the recoding by grouping works as below:

    . . . .011010001 . . .   --σ²-->   . . . .1010001 . . .
    . . . .vwwu . . .         -->      . . . .wwu . . .

Thus (Σ₂², σ²) ≅ (Σ₄, σ).

Continuing in the subject of codings between SFTs, we now consider...

27.2. Embeddings and Factors. Given two subshifts of finite type, when can you get an embedding? If there is a one-to-one continuous shift-commuting map φ : X_A → X_B between irreducible SFTs (X_A, σ) and (X_B, σ), and φ(X_A) is a proper subset of X_B, then:

(1) h_top(X_A, σ) < h_top(X_B, σ).
(2) If q_n(X_A) = the number of points in X_A with least period n, we must have q_n(X_A) ≤ q_n(X_B) for every n. (Or, equivalently, there is a shift-commuting injection ℘(X_A, σ) ↪ ℘(X_B, σ), where ℘(X_A, σ) is the set of periodic points of (X_A, σ).)

Statement (1) holds because φ(X_A) is a closed σ-invariant proper subset of X_B and (φ(X_A), σ) ⊂ (X_B, σ); this implies that its complement is nonempty and open. (φ(X_A), σ) is also an SFT, hence it has a unique Shannon–Parry measure µ, whose entropy is h(µ) = log λ_A < log λ_B. (The Shannon–Parry measure on X_B has full support, so it is not equal to µ.)

Remark 27.1. Any subshift conjugate to an SFT is an SFT.


Theorem 27.3 (Krieger, 1982). There exists a proper embedding φ : (X_A, σ) → (X_B, σ) if and only if

(1) h_top(X_A, σ) < h_top(X_B, σ), and
(2) for all n, q_n(X_A) ≤ q_n(X_B).
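The two conditions can be checked numerically; here is a hedged sketch (not from the notes), computing h_top from the Perron eigenvalue and q_n from the traces tr(A^d) by Möbius inversion (since tr(A^n) counts all points of period n).

```python
import numpy as np
from math import log

# Sketch of Krieger's two conditions: h_top(X_A) = log(lambda_A), and
# q_n = sum over d | n of mobius(n/d) * tr(A^d).

def mobius(n: int) -> int:
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor: mu = 0
            result = -result
        p += 1
    return -result if n > 1 else result

def entropy(A) -> float:
    return log(max(abs(np.linalg.eigvals(np.array(A, dtype=float)))))

def q_n(A, n: int) -> int:
    M = np.array(A, dtype=np.int64)
    return sum(mobius(n // d) * np.trace(np.linalg.matrix_power(M, d))
               for d in range(1, n + 1) if n % d == 0)

A = [[1, 1], [1, 0]]   # golden mean shift
B = [[2]]              # full 2-shift
print(entropy(A) < entropy(B))                            # True
print(all(q_n(A, n) <= q_n(B, n) for n in range(1, 9)))   # True
```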

27.2.1. What about factors?

Remarks 27.2. If there exists a shift-commuting continuous onto map (i.e., a factor map) π : (X_A, σ) → (X_B, σ), then

(1) h_top(X_A, σ) ≥ h_top(X_B, σ) (taking a factor can only cause entropy to decrease), and
(2) if x ∈ X_A has least period p, then πx has to have period that divides p, since periodic points must be mapped to periodic points, but the actual period may change. If for all x ∈ X_A with least period p there exists y ∈ X_B with least period that divides p, write ℘(X_A) ↘ ℘(X_B). This is equivalent to saying that there exists a shift-commuting map ℘(X_A, σ) → ℘(X_B, σ) (not necessarily onto). (Nonperiodic points may be mapped to periodic points.)
(3) When factoring is possible in the case h_top(X_A, σ) = h_top(X_B, σ) remains an open question.

Theorem 27.4 (Boyle, 1983). If (X_A, σ) and (X_B, σ) are topologically transitive SFTs with h_top(X_A, σ) > h_top(X_B, σ), then there exists a factor map (X_A, σ) → (X_B, σ) if and only if ℘(X_A, σ) ↘ ℘(X_B, σ).

28. Sofic Systems

Definition 28.1. A sofic system is a subshift that is a factor of a subshift of finite type.

Equivalently, a sofic system is a subshift consisting of all infinite walks on the edges of a graph whose edges have been labeled (though maybe not in a one-to-one manner). Sofic systems include all SFTs, since every SFT is conjugate to an edge shift (⊂ (E(G)^Z, σ)).

Consider the following labeled graph and its relabeling:

(X_G, σ) ⊂ ({e_1, e_2, f_1, f_2, f_3, g_1, g_2}^Z, σ)   →(π)   π(X_G, σ) ⊂ ({e, f, g}^Z, σ)

[Figure: a graph whose edges carry the labels e_1, e_2, f_1, f_2, f_3, g_1, g_2, and the same graph with subscripts erased, so that the labels are e, f, g.]

The map π just erases subscripts. The image system may or may not be conjugate to the original system, and it may or may not be an SFT. Image systems all come from relabeling edges in a non-unique way.


Example 28.1. The Golden Mean SFT's edge shift, under a non-one-to-one labeling of its edges, produces the even system.

[Figure: the golden mean graph (Γ, σ), with a loop e at the first vertex and a 2-cycle f_1, f_2 between the two vertices; π relabels e → 0 and f_1, f_2 → 1, producing (S, σ).]

(S, σ) is among the simplest sofic systems: S = all sequences in {0, 1}^Z such that between any two 0's there is an even number of 1's. Consequently, (S, σ) is not an SFT (it is not determined by ruling out any finite list of blocks), and hence, by Remark 27.1, it is not even conjugate to one. To see this more precisely, suppose the system were m-step. Then whenever B is a block in L(S, σ) with l(B) ≥ m and aB ∈ L(S, σ) and Bz ∈ L(S, σ), we would have aBz ∈ L(S, σ), since a word fails to be in L(S, σ) if and only if it contains a bad word of length ≤ m + 1. In (S, σ), however, for arbitrarily large n, 01^{2n+1} ∈ L(S, σ) and 1^{2n+1}0 ∈ L(S, σ), but 01^{2n+1}0 ∉ L(S, σ).
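A quick sketch (not from the notes) of the membership test implicit here: a finite word lies in L(S, σ) exactly when it contains no block 01^{2k+1}0, which a regular-expression search detects directly.

```python
import re

# Sketch (assumed oracle, not from the notes): w is in L(S, sigma) iff
# it contains no odd run of 1's strictly between two 0's, i.e. no match
# of the pattern 0(11)*10.

def in_even_language(w: str) -> bool:
    return re.search(r'0(11)*10', w) is None

for n in (1, 5, 50):
    odd_run = '1' * (2 * n + 1)
    assert in_even_language('0' + odd_run)            # 0 1^{2n+1} is legal
    assert in_even_language(odd_run + '0')            # 1^{2n+1} 0 is legal
    assert not in_even_language('0' + odd_run + '0')  # 0 1^{2n+1} 0 is not
print("no fixed memory m can account for all of these")
```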

Remarks 28.1. (1) (S, σ) is strictly sofic, i.e., it is not an SFT.
(2) SFTs are also called topological Markov chains.
(3) Factors of Markov measures are called sofic measures. Furstenberg called them "submarkov processes." B. Weiss coined the term "sofic," from a Hebrew word conveying finiteness.
(4) Most systems are not sofic. The Morse and Toeplitz systems, among many others, stand as examples of systems we have studied that are not sofic; they fail to be sofic because, for example, they lack periodic points.

28.1. Shannon's Message Generators. A Shannon message generator is a directed graph in which the vertices are the 'states' and the edges are labeled (maybe not uniquely) with symbols.

[Figure: a state v with two edges labeled e and f to a state w, and an edge from w back to v.]

If the machine is in state v, it can emit any symbol written on an edge leaving v (either symbol e or f in the picture) and move to the terminal vertex of that edge (w in the picture). The set of all possible messages is a sofic system.
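A small simulation sketch of such a generator (the graph below is a hypothetical example, not the one in the picture): walk the labeled graph at random and emit the labels along the way.

```python
import random

# Sketch of a Shannon message generator: a labeled directed graph given
# as {state: [(label, next_state), ...]}. The machine repeatedly picks
# an outgoing edge, emits its label, and moves to the terminal vertex.

GRAPH = {'v': [('e', 'w'), ('f', 'w')], 'w': [('g', 'v')]}

def emit(start: str, n: int) -> str:
    state, out = start, []
    for _ in range(n):
        label, state = random.choice(GRAPH[state])
        out.append(label)
    return ''.join(out)

print(emit('v', 10))   # e.g. 'egfgegegfg': a block of the sofic system
```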


29. April 21 (Notes by KN)

29.1. Sofic Systems.

Definition 29.1. A sofic system is a subshift which is a factor of a subshift of finite type. A sofic system can also be thought of as a labeled edge shift of a finite directed graph (with possibly repeated labels).

The word sofic comes from a Hebrew word which means finite. As we read a sequence from a sofic system, at each state there are only a finite number of possible futures. This is a generalization of the situation in a 1-step subshift of finite type.

Example 29.1. The Even Sofic Shift. Recall that this shift requires an even number of 1's between any pair of 0's. When you are generating sequences, there are two states, determined by the parity of the number of 1's seen since the last 0.

[Figure 32: The even sofic shift. States odd and even, with edges labeled 1 in each direction between them and a loop labeled 0 at even.]

29.1.1. Characterizations of Sofic Systems. Consider the following eight different characterizations of a sofic system, (S, σ), which we will prove to be equivalent.

(1) Follower Sets. Let (X, σ) ⊂ (D^Z, σ) be any subshift. Let w ∈ L(X, σ) be any block that appears in some x ∈ X. Let D* = ⋃_{i=0}^∞ D^i, the set of all finite words on D. Define the follower set of w to be

F(w) = {words s ∈ D* : ws ∈ L(X, σ)}.

Then (S, σ) is sofic if and only if card{F(w) : w ∈ L(S, σ)} < ∞; that is, (S, σ) is sofic if and only if the number of follower sets is finite.

Example 29.2. In an SFT or vertex shift, F(w_1 . . . w_n) is completely determined by w_n.

(2) Predecessor Sets, P(w). The predecessor sets are defined in a way analogous to the follower sets.

(3) (S, σ) is a factor of an SFT: the image of a relabeling of the edges of a vertex shift (a 1-block map), or of a 1-block map on an edge shift.

(4) L(S, σ) is a regular language.

Definition. A regular language consists of all words recognized by a (deterministic) finiteautomaton (DFA).


[Figure 33: A DFA which recognizes the golden mean SFT. Good states q_- and q and a bad state b; from q_-, reading 0 stays at q_- and reading 1 moves to q; from q, reading 0 returns to q_- and reading 1 goes to b; b absorbs both symbols.]

Definition. Let
D = the alphabet (for the words),
Q = the finite set of states or vertices,
q_- ∈ Q = the initial state,
Q_+ ⊂ Q = the set of final (good) states,
δ : Q × D → Q, δ(q, a) = q · a = the state entered by reading symbol a from state q.

We require δ(q, ε) = q, where ε is the empty word. For words w ∈ D*, w = w_1 . . . w_n, define δ(q, w) = q · w = ((q · w_1) · w_2 · . . .) · w_n. We say a word w is accepted by the DFA if q_- · w ∈ Q_+.

A nondeterministic finite automaton (NDFA) has δ : Q × D → 2^Q. That is, an NDFA could have many edges leaving state q with the same label a ∈ D. It can also have ε-moves, where δ(q_1, ε) = q_2 but q_1 ≠ q_2.

A language L is accepted by a DFA if and only if it is a language accepted by an NDFA with ε-moves. Note that there exist regular languages that do not come from subshifts; for example, consider the language consisting of just one symbol. (A small code sketch of a DFA appears after this list of characterizations.)

Example 29.3. A DFA that recognizes the golden mean SFT: see Figure 33

(5) L(S, σ) is denoted by a regular expression.

Definition. Let D be a finite alphabet. A regular expression is a finite string on D̄ = D ∪ {+, ·, *, ∅, ε, (, )}. The set of finite regular expressions, R(D), is the smallest family of finite strings on D̄ with the following two properties: (i) R(D) ⊃ D ∪ {∅, ε}, and (ii) R(D) is closed under +, ·, and * (r, s ∈ R(D) implies r + s, r · s, and r* ∈ R(D)).

For each r ∈ R(D) we define a language L(r) ⊂ D* as follows:
L(∅) = ∅ (the empty language is accepted by an automaton which has no final states),
L(ε) = {ε},
L(a) = {a} for a ∈ D,
L(r + s) = L(r) ∪ L(s),
L(r · s) = L(r) · L(s) = {uv : u ∈ L(r), v ∈ L(s)},
L(r*) = L(r)* = ⋃_{i=0}^∞ L(r)^i.

Example 29.4. (a) Let r = (0 + 1)*(00)(0 + 1)*. Then L(r) = the set of all words on D = {0, 1} which contain 00.


(b) Let r = (1 + 0 + ε)(0 + 11)*(1 + ε). Then L(r) = all blocks on {0, 1} with no 01^{2n+1}0. So L(r) = L(S, σ), where (S, σ) is the even sofic subshift. Note that the even subshift allows words starting or ending with an odd number of 1's.

(6) Semigroup Realization. There is an injection a → s_a of D, the alphabet of (S, σ), into a finite multiplicative semigroup Σ with an absorbing element 0 ∈ Σ such that 0s = s0 = 0 for all s ∈ Σ. For any w = w_1 . . . w_n ∈ D*, we say w ∈ L(S, σ) if and only if s_{w_1} s_{w_2} . . . s_{w_n} ≠ 0.

(7) Matrix Semigroup Realization. The semigroup in (6) can be realized by a semigroup of d × d matrices over {0, 1} with "reduced" matrix multiplication, where any nonzero entry of a product is changed to a 1.

(8) L(S, σ) is generated by a linear phrase structure grammar. Let D = terminals and V = variables be two finite alphabets. Let S be a start symbol. A production is a pair of words (α, ω) on V ∪ D, that is, α, ω ∈ (V ∪ D)*. (We also write the production as α → ω.) We assume we have a finite set of such productions. Let L = all finite words on D which can be made by starting with S and applying a finite sequence of productions. The grammar is linear if all productions are of the form A → Bw or A → w for some A, B ∈ V, w ∈ D*.
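Referring back to characterization (4), here is a minimal DFA sketch (assumed code, not from the notes) for the golden mean SFT of Figure 33: a word over {0, 1} is accepted exactly when it contains no block 11.

```python
# Minimal DFA sketch for the golden mean SFT: states 'q-' and 'q' are
# good, 'b' is the absorbing bad state reached on seeing the block 11.

DELTA = {
    ('q-', '0'): 'q-', ('q-', '1'): 'q',
    ('q',  '0'): 'q-', ('q',  '1'): 'b',
    ('b',  '0'): 'b',  ('b',  '1'): 'b',
}
GOOD = {'q-', 'q'}

def accepts(w: str) -> bool:
    state = 'q-'
    for a in w:
        state = DELTA[(state, a)]
    return state in GOOD

print(accepts('0100101'))   # True: no 11 occurs
print(accepts('0110'))      # False: 11 occurs
```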


30. April 23 (Notes by PS)

Theorem 30.1. The above eight characterizations of sofic subshifts are all equivalent.

Proof. We give an outline only. We may need to assume some of the subshifts in question are topologically transitive.

30.1. 3 ⇒ 1. Assume (S, σ) is a relabeling of the edges of a vertex shift (X_A, σ), which defines a factor map

π : (X_A, σ) → (S, σ).

For any word w ∈ L(S, σ), there is some word w′ ∈ L(X_A, σ), not necessarily unique, with π(w′) = w. Consider all possible w′, and all possible terminal vertices w′_n of w′. The follower set F(w) is completely determined by this subset of terminal vertices in X_A. Since there are only finitely many such subsets, there are only finitely many follower sets for (S, σ).

30.2. 1 ⇒ 3. Suppose (S, σ) is a subshift whose language L(S, σ) is a language on the alphabet D with only finitely many follower sets. We need to construct a vertex SFT whose edge labeling factors onto (S, σ). Make a graph G whose vertices are the follower sets F_1, . . . , F_n. Put an edge with label a ∈ D from F_i = F(w) to F_j = F(wa) if and only if wa ∈ L(S, σ):

F_i = F(w) →(a) F(wa) = F_j.

Note that this labeling does not depend on the choice of w: if w′ is another word with F(w′) = F(w), then wa ∈ L(S, σ) if and only if w′a ∈ L(S, σ), and F(w′a) = F(wa).

It is clear that all words read off edge paths in G are in L(S, σ): given a path

F(w_1) →(a_1) F(w_1 a_1) →(a_2) . . . →(a_{n−1}) F(w_1 a_1 . . . a_{n−1}),

then a_1 . . . a_{n−1} ∈ L(S, σ).

Conversely, given a_1 . . . a_{n−1} ∈ L(S, σ), by extendability there is an a_0 such that a_0 a_1 . . . a_{n−1} ∈ L(S, σ), so that

F(a_0) →(a_1) F(a_0 a_1) →(a_2) . . . →(a_{n−1}) F(a_0 a_1 . . . a_{n−1})

is a path in G.

This gives a 1-block factor map on the edge shift (X_G, σ), which leads to our desired sofic shift.

30.3. 2 ⇔ 3. The case of finitely many predecessor sets can be handled just as with finitely many follower sets.

30.4. 1 ⇒ 6. Suppose L(S, σ) has finitely many follower sets. We wish to show that membership in L(S, σ) is determined by a finite semigroup. Define an equivalence relation on L(S, σ) by w ∼ w′ if and only if F(w) = F(w′) and P(w) = P(w′). Define a multiplication of equivalence classes [u] and [v] for u, v ∈ L(S, σ) by

[u][v] = [uv] if uv ∈ L(S, σ), and [u][v] = 0 otherwise.

Furthermore, let [u]0 = 0[u] = 0 for all u ∈ L(S, σ). Then

Σ = (L(S, σ)/∼) ∪ {0}

is the desired semigroup with absorbing element 0. Clearly

w_1 w_2 . . . w_n ∈ L(S, σ) if and only if [w_1][w_2] . . . [w_n] ≠ 0.

Remark: (L(S, σ)/∼) is sometimes called the "syntactic monoid".

30.5. 6 ⇒ 1. Suppose membership in L(S, σ) is determined by a finite semigroup Σ. Note that F(w_1 w_2 . . . w_n) depends only on the semigroup element s = s_{w_1} . . . s_{w_n}, because

u_1 . . . u_m ∈ F(w_1 . . . w_n) if and only if (s_{w_1} . . . s_{w_n})(s_{u_1} . . . s_{u_m}) ≠ 0.

Since Σ is a finite semigroup, it follows that there are only finitely many follower sets.

30.6. 3 ⇔ 7. We wish to show that (S, σ) is a factor of a vertex subshift of finite type by a 2-block map (a relabeling of pairs of vertices, i.e., of edges) if and only if membership in L(S, σ) is determined by a finite semigroup of d × d 0,1-matrices.

Suppose we have a 2-block map ϕ(ij) for i, j ∈ D, generating a factor map

ϕ : (X_A, σ) → (S, σ).

Take the adjacency matrix A and decompose it according to the edge labeling ϕ:

A = A_1 + A_2 + . . . + A_k,

where 1, 2, . . . , k are the labels on the edges of the graph of X_A, as follows: put

(A_m)_{ij} = 1 if ϕ(ij) = m, and 0 otherwise.

Let Σ be the semigroup generated by A_1, . . . , A_k with reduced matrix multiplication, that is, where nonzero entries are replaced by 1 after multiplication. There are only finitely many such matrices, namely

card(Σ) ≤ 2^{d^2}.

Note that a word w = w_1 w_2 . . . w_n on the alphabet D of (S, σ) occurs along an edge path if and only if A_{w_1} A_{w_2} . . . A_{w_n} ≠ 0. For if we have a path

x_1 →(w_1) x_2 →(w_2) . . . →(w_n) x_{n+1},

then

(A_{w_1} A_{w_2} . . . A_{w_n})_{x_1 x_{n+1}} ≠ 0,

and conversely, since a nonzero entry implies the existence of a path.

For example, consider the even shift.

[Figure 34: The even shift: vertices 1 and 2, with a loop at vertex 1 labeled 0 and the two edges between 1 and 2 labeled 1.]

With respect to the 2-block map ϕ(12) = ϕ(21) = 1 and ϕ(11) = 0, the matrix A decomposes as

A = [1 1; 1 0] → [0 1; 1 0] + [1 0; 0 0].
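A small sketch (not from the notes) carrying out this reduced matrix multiplication for the even shift: products of the label matrices, reduced to 0,1-matrices at each step, decide membership in L(S, σ).

```python
import numpy as np

# Sketch of characterization (7) for the even shift: A_1 and A_0 are the
# two summands of the adjacency matrix above; a word w is in the language
# iff the reduced product A_{w_1} ... A_{w_n} is nonzero.

A0 = np.array([[1, 0], [0, 0]])   # the edge labeled 0 (loop at vertex 1)
A1 = np.array([[0, 1], [1, 0]])   # the two edges labeled 1
M = {'0': A0, '1': A1}

def in_language(w: str) -> bool:
    P = np.eye(2, dtype=int)
    for a in w:
        P = (P @ M[a] != 0).astype(int)   # reduced multiplication
    return bool(P.any())

print(in_language('0110'))   # True
print(in_language('010'))    # False: odd run of 1's between two 0's
```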

30.7. 3 ⇒ 4. Suppose we have an edge labeling of a subshift of finite type. We wish to show that the resulting language is recognizable by a DFA, or equivalently an NDFA with ε-moves allowed. Let

Q = the set of states of the automaton = the vertices of the graph of (X_A, σ),
Q_+ = the set of good final states = Q,
q_- = a starting state with an ε-move to any state in Q,
b = a bad final state.

Define the moves appropriately. For example, for the even shift:

[Figure 35: The even shift as an NDFA with ε-moves: the two-vertex even-shift graph, with a start state q_- having ε-moves to both vertices, and a bad state b receiving all illegal moves.]

30.8. 4 ⇒ 3. Given a DFA, look at all doubly-infinite admissible sequences on the graph of the automaton. This is an edge labeling of a vertex shift and thus gives a sofic shift.


31. April 28 (Notes by SB)

Proof (continued) of the equivalence of the eight different characterizations of a sofic system, (S, σ):

4 ⇔ 8. Recall:
4. L(S, σ) is a regular language (recognized by a DFA).
8. L(S, σ) is generated by a linear phrase structure grammar.

We establish an equivalence between the two systems according to the following table:

DFA                          | phrase structure grammar
edge labels                  | terminals, D
vertices                     | variables
initial state q_-            | start symbol, S
each edge (to a good vertex) | a production

Example 31.1. For the golden mean SFT, we have the DFA given by Figure 36.

[Figure 36: A DFA which recognizes the golden mean SFT (the automaton of Figure 33).]

Using the equivalence established in the table above, the terminals for the phrase structure grammar of the golden mean SFT are D = {0, 1}, and the variables are V = {q_-, q, b}. To figure out the set of productions we need to decide what q_- and q can go to (since these are the good vertices). We have the following:

q_- → 0
q_- → 1
q_- → 0q_-
q_- → 1q
q → 0
q → 0q_-

This gives a right-linear phrase structure grammar, since all of the productions are of the form variable → terminal or variable → terminal·variable. To form a word in the language, we start with S = q_- and apply a finite sequence of productions in such a way as to end up with a string of terminals. All possible words in the language are found as the leaves of the derivation tree: list all possible productions from the start symbol S and then all possible places where each variable could go. For example, suppose w = 1000101001. Then part of the derivation leading to w would be:

S → 1q → 10q_- → 100q_- → 1000q_- → 10001q → 100010q_- → . . . .


Note: A language L is generated by a left-linear grammar if and only if it is also generated by a right-linear grammar.

To establish the equivalence of (4) and (8) in the other direction, we need to describe how to make a DFA that recognizes the language generated by the phrase structure grammar. Simply make a graph using the variables as vertices; that is, given A → aB, put an edge A →(a) B.

5 ⇔ 4. Recall:
4. L(S, σ) is a regular language, recognized by a DFA or an NDFA with ε-moves.
5. L(S, σ) is denoted by a regular expression.

A standard tool for proofs in language theory is to induct on the length of the regular expression r. This is the technique we use to prove 5 ⇒ 4. Suppose r has length 1. Then r = ε or ∅ or a ∈ D, and correspondingly L(r) = {ε} or ∅ or {a}.

We want NDFA's with ε-moves that will recognize these languages. The following one suffices for the case r = ε:

b ←(any letter) q_- →(ε) q.

We are allowed to go via ε to the good state q; from q_-, on any other letter that might come in, we go to the bad state b. For the case r = ∅ we have the following NDFA:

q_- →(ε) b ∉ Q_+,   with q_1 ∈ Q_+.

Here everything is sent straight to a bad state b. A good state q_1 can exist in the NDFA, but no edges go into it. The NDFA below recognizes the language denoted by r = a:

q_- →(a) q ∈ Q_+.

So we have found NDFA's for all cases arising when r has length 1. The rest follows by induction, so long as we can describe how the NDFA's can be combined compatibly with the +, ·, and * operations for building up regular expressions. We achieve this in the following way:

For r_1 + r_2 we have L(r_1 + r_2) = L(r_1) ∪ L(r_2). So if A_1 is the NDFA for r_1 and A_2 is the NDFA corresponding to r_2, wire these machines in parallel to obtain the NDFA for r_1 + r_2; see Figure 37. For A_1, let q_1 be the initial state and f_1 a final good state such that every final good state is connected to f_1 by an ε-move; similarly for A_2. Then let q_- be the initial state and f the final good state for A_1 and A_2 wired in parallel. Anything in L(r_1 + r_2) will be accepted by this machine by going along the correct path; conversely, if we can find any path that works (i.e., any path that is accepted by the machine), then the word is in L(r_1 + r_2).

For concatenation, r_1 · r_2, we have L(r_1 · r_2) = L(r_1)L(r_2), so put the machines A_1 and A_2 in series, connecting them via ε-moves as in Figure 38. There was a question about what would happen if there were a long word for which some first part of it is accepted by machine A_1, so that the word gets pushed into machine A_2 too early, causing the word to end up in a bad state. The point is, however, that there is some path in machine A_1 that will accept the entire part of the word that lies in L(r_1).

For r*, L(r*) = ⋃_{i=0}^∞ [L(r)]^i, so let A be the NDFA for r and design the NDFA as in Figure 39. The ε-move from the final good state f′ of A to the initial state q′ of A allows for iterations. The ε-move from the initial state q_- to the final good state f of the whole machine allows the ε word to be in the language. So by induction, as we build up the regular expression, we use our soldering tool to put smaller machines together, building more and more complicated machines until we achieve an NDFA that recognizes the regular expression.

[Figure 37: To create the NDFA for the sum, wire the two individual machines in parallel, with ε-moves from q_- into each machine and from each machine's good final state to f.]

[Figure 38: The NDFA for r_1 · r_2: the two machines wired in series via an ε-move. Again, q_- is the initial state and f the final good state for the whole machine. A word is accepted by the compound machine if and only if the first machine accepts the part of the word in L(r_1) and the second machine accepts the part of the word in L(r_2).]

[Figure 39: The NDFA for r*: ε-moves from f′ back to q′ allow iteration, and an ε-move from q_- directly to f admits the empty word.]

[Figure 40: The DFA for R^0_ij in the case of the empty word: a single state q_i with an ε-move to itself. There are no other ε-moves allowed in this DFA.]

4 ⇒ 5. We may assume that we have a DFA A with no ε-moves; this narrows our hypothesis, making the argument easier. So suppose the set of vertices of the DFA is Q = {q_1, . . . , q_n}. We use a variation of Warshall's Algorithm (an algorithm used to construct, fairly efficiently, the transitive closure of a relation).

For k = 0, 1, . . . , n and i, j = 1, . . . , n, put

R^k_ij = all strings accepted by paths on the DFA from the vertex q_i to the vertex q_j that do not hit any q_m with m > k in between, except maybe at the ends.

This amounts to restricting to a subgraph or subautomaton, so that in between the endpoints we are only allowed to move within this subgraph. Notice that

L(A) = ⋃_{q_j ∈ Q_+} R^n_{1j}

if q_- = q_1. Also R^k_ij ⊂ R^{k+1}_ij. We induct to show that each R^k_ij is denoted by a regular expression.

For the case k = 0, R^0_ij = all strings accepted by paths from q_i to q_j that hit NO element of Q in between, except maybe at the ends. The only possibilities are ∅, ε, and a ∈ D. For ∅ we get the empty graph, which has length 0; so r = ∅. For ε we have δ(q_i, ε) = q_i, which corresponds to the one-vertex graph in Figure 40; so we take r = ε. For R^0_ij = {a}, a ∈ D, we get the following graph of length 2 (here q_j could equal q_i):

q_i →(a) q_j,

and we take r = a. So we have proved the base case for the induction.

For k ≥ 1 we have

R^k_ij = R^{k−1}_ik (R^{k−1}_kk)* R^{k−1}_kj ∪ R^{k−1}_ij.

To see this, consider a path (not a string) q_i . . . q_j in which there is nothing between q_i and q_j with index greater than k. There could, however, be q_k's between q_i and q_j. If there are no q_k's between q_i and q_j, then the string is in R^{k−1}_ij. If there are q_k's between q_i and q_j, look at the places where the path hits q_k:

q_i . . . q_k . . . q_k . . . q_k . . . q_k . . . q_j.

The first segment q_i . . . q_k contains no q_k's on the inside, so this part of the path is in R^{k−1}_ik. Then we hit a bunch of paths q_k . . . q_k which have no q_k's on the inside; these all belong to R^{k−1}_kk, so the string of them belongs to (R^{k−1}_kk)*. Finally, at the last q_k we have a path q_k . . . q_j which has no q_k's inside, so this path is in R^{k−1}_kj. The equivalence follows by induction.
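A compact sketch (not from the notes; the notation is ad hoc, with 'e' standing for the empty word ε and None for the empty language) of this recursion as code, producing a regular expression from a DFA given by a list of labeled edges:

```python
from functools import reduce

# Kleene/Warshall-style sketch of the recursion above:
#   R^k_ij = R^{k-1}_ik (R^{k-1}_kk)* R^{k-1}_kj + R^{k-1}_ij.
# States are 0..n-1; edges is a list of (i, label, j).

def union(r, s):
    return s if r is None else r if s is None else f'({r}+{s})'

def concat(r, s):
    if r is None or s is None:
        return None
    return s if r == 'e' else r if s == 'e' else r + s

def star(r):
    return 'e' if r in (None, 'e') else f'({r})*'

def kleene(n, edges, start, finals):
    R = [[('e' if i == j else None) for j in range(n)] for i in range(n)]
    for i, a, j in edges:
        R[i][j] = union(R[i][j], a)
    for k in range(n):
        R = [[union(R[i][j], concat(concat(R[i][k], star(R[k][k])), R[k][j]))
              for j in range(n)] for i in range(n)]
    return reduce(union, (R[start][f] for f in finals))

# Golden mean DFA on good states 0 (= q-) and 1 (= q):
print(kleene(2, [(0, '0', 0), (0, '1', 1), (1, '0', 0)], 0, [0, 1]))
```

The output is an unsimplified but correct regular expression for the words containing no block 11.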


This concludes the proof that all eight characterizations of a sofic system are equivalent.

Theorem 1 (R. Fischer, 1975; also W. Krieger et al.). Let (S, σ) be a topologically transitive ("irreducible") sofic system. Then there exist an irreducible vertex SFT (X_A, σ) and a factor map π : (X_A, σ) → (S, σ) given by an edge labeling such that:

(1) The edge labeling of the graph G of (X_A, σ) is right resolving, i.e., the edges leaving any vertex have distinct labels. [Definition: If (X, σ) and (Y, σ) are subshifts and π : X → Y is a 1-block map, then π is called right resolving if whenever ab, ac ∈ L(X, σ) and π(b) = π(c), then b = c.]

(2) Hence π : (X_A, σ) → (S, σ) is boundedly finite-to-one (i.e., card π^{−1}(y) ≤ m < ∞ for all y ∈ S) and one-to-one on doubly transitive points (i.e., points x such that the closures of O^+(x) and O^−(x) both equal X, where O^+(x) = {σ^n x : n ≥ 0}). Then π is one-to-one almost everywhere with respect to any ergodic invariant measure with full support, and on a residual set.

(3) (X_A, σ) is the smallest right-resolving extension in that it has the fewest vertices. Moreover, any two such extensions (that is, extensions that are right resolving and smallest) have isomorphic labeled graphs. (X_A, σ) is called the right (or future) Fischer cover of (S, σ).

Corollary 31.1. Every topologically transitive sofic system has a unique measure of maximal entropy (it is intrinsically ergodic), the entropy of which is the logarithm of a Perron number (i.e., a positive algebraic integer that dominates all its conjugates).

Proof. (Sketch)

(1) Make a lift as in the implication (1) ⇒ (3) in the above proof of the equivalent characterizations of a sofic system, to obtain an SFT that factors onto the sofic system (S, σ).

(2) This lift might be "too big", that is, some of its vertices might be redundant, so merge the vertices that have the same follower sets (of labeled edges) to get a "tighter" factor map of an SFT onto (S, σ).

(3) In irreducible sofic shifts there are synchronizing words or "Markov magic words," that is, words τ ∈ L(S, σ) such that whenever you see τ along the edges of G, the right-resolving graph above, the terminal vertex is always the same. Equivalently, if wτ, τv ∈ L(S, σ), then wτv ∈ L(S, σ). The idea of τ always leading to the same terminal vertex can be described using the analogy of a "road map to Cleveland." The map is a word on a fixed set of symbols, say {a, b, c, d}, and the roads leaving each junction have been labeled with these four symbols. At each junction the driver chooses which road to follow according to the next letter of the word. Regardless of where the driver starts, the "word road map" takes him or her to Cleveland.

Exercise 4. In an irreducible sofic system, such a word always exists.

(4) Every y ∈ S that sees such a word τ infinitely many times to the left has a singleton preimage.


32. April 30 (Notes by SS)

32.1. Shannon Theory. For reference, see C. Shannon, A. Khinchin, R. Potschke–F. Sobik, R. Gallager, T. M. Cover, J. Singh, J. Pierce (popular), Martin–England, or I. Csiszar's article.

Definition 32.1. A source [A_0, µ_0] is a finite-state stationary (ergodic) stochastic process, where A_0 is a finite alphabet and µ_0 is a shift-invariant ergodic measure on A_0^Z. An example might be a Shannon machine with a probability measure determined by transition probabilities on its edges.

Definition 32.2. A channel [A, {ν_x}, B] consists of a finite input alphabet A, a finite output alphabet B, and a family of measures {ν_x : x ∈ X} defined as follows:

If X = A^Z = all potential input messages and Y = B^Z = all potential output messages, then for each x ∈ X, ν_x is a (Borel probability) measure on Y such that

(1) ν_σx = σν_x = ν_x ◦ σ^{−1} (so the channel is stationary), and
(2) for all measurable F ⊂ Y, the map x ↦ ν_x(F) is measurable on X.

Now, given a measure µ on X (the input measure), the input-output measure λ on X × Y is defined by

λ(E × F) = ∫_E ν_x(F) dµ(x)

for measurable E ⊂ X, F ⊂ Y. This represents the probability that the input signal is in E and the output signal is in F (ν_x(F) itself gives the conditional probability that the output is in F given the input x).

Example 32.1. Let A = B = {0, 1}, and let β_0 = (1 − ε)δ_0 + εδ_1 and β_1 = εδ_0 + (1 − ε)δ_1 be two measures on B. Define, for x = (x_k) ∈ X = {0, 1}^Z,

ν_x = ∏_{j=−∞}^∞ β_{x_j} on Y = {0, 1}^Z.

For example, if E = {x : x_0 = 0} ⊂ X and F = {y : y_0 = 0} ⊂ Y, then

λ(E × F) = λ({(x, y) : x_0 = 0, y_0 = 0}) = ∫_{{x : x_0 = 0}} ν_x({y : y_0 = 0}) dµ(x) = (1 − ε)µ({x : x_0 = 0}),

since ν_x({y : y_0 = 0}) = ε if x_0 = 1, and 1 − ε if x_0 = 0.

Similarly,

λ({(x, y) : x_0 = 1, y_0 = 1}) = (1 − ε)µ({x : x_0 = 1}),

and so

λ({(x, y) : x_0 = y_0}) = 1 − ε.

This channel represents a situation in which 0 changes to 1, and 1 changes to 0, with probability ε, independently in each coordinate. It is called a DMC, a discrete memoryless channel.
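A Monte Carlo sketch (not from the notes) of this DMC: flip each input symbol independently with probability ε and estimate λ{(x, y) : x_0 = y_0}, which should come out near 1 − ε.

```python
import random

# Sketch of the binary symmetric channel: each bit is transmitted
# correctly with probability 1 - eps and flipped with probability eps,
# independently; we estimate the probability that x_0 = y_0.

random.seed(0)
eps, trials = 0.1, 100_000
agree = 0
for _ in range(trials):
    x0 = random.getrandbits(1)
    y0 = x0 ^ (random.random() < eps)
    agree += (x0 == y0)
print(agree / trials)   # approximately 1 - eps = 0.9
```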

32.1.1. Source coding. Suppose we are given a source [A_0, µ_0] and a channel [A, {ν_x}, B]. We may wish either to recode the source to compress information before connecting to the channel (Shannon-McMillan-Breiman Theorem) or to connect to the channel by means of an encoder, a map e : A_0^Z → A^Z (not necessarily one-to-one, continuous, or shift-commuting).

• Three popular types of encoder:

(1) Block code: e is determined by a fixed map from A_0^k to A^n; i.e., for s ∈ A_0^Z, the consecutive non-overlapping k-blocks s_0 s_1 . . . s_{k−1}, s_k s_{k+1} . . . s_{2k−1}, . . . of s are replaced by the n-blocks a_0 a_1 . . . a_{n−1}, a_n a_{n+1} . . . a_{2n−1}, . . . of e(s). Here n/k = the rate of the encoder (e is σ^k, σ^n-commuting).

(2) Sliding block code: for s ∈ A_0^Z, e is defined by (es)_0 = e(s_{−m} · · · s_{−1} s_0 s_1 · · · s_a), a fixed function of a window with memory m and anticipation a.

(3) An arbitrary measurable map φ : A_0^Z → A^Z.
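A tiny sketch of a block code at rate n/k = 3/2 (the table below is a hypothetical example, not from the notes): a fixed map from 2-blocks to 3-blocks applied to consecutive non-overlapping blocks.

```python
# Hypothetical rate-3/2 block encoder: a fixed table from A_0^2 to A^3
# applied to consecutive, non-overlapping 2-blocks of the input.

TABLE = {'00': '000', '01': '011', '10': '101', '11': '110'}

def block_encode(s: str) -> str:
    assert len(s) % 2 == 0
    return ''.join(TABLE[s[i:i + 2]] for i in range(0, len(s), 2))

print(block_encode('011011'))   # '011101110': 2-blocks 01, 10, 11
```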

32.1.2. Shannon-McMillan-Breiman Theorem. The possibilities for efficient recoding (or encoding) of a source are given by the Shannon-McMillan-Breiman Theorem:

Theorem 32.1 (Shannon-McMillan-Breiman). If µ_0 is an ergodic shift-invariant measure on X = A_0^Z, then

f_n(x) := −(1/n) log µ_0[x_0, . . . , x_{n−1}] → h(µ_0) a.e.

and in L^1, i.e., ∫_X |f_n(x) − h(µ_0)| dµ_0(x) → 0.
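A quick numerical sketch (not from the notes) for the simplest case, an i.i.d. Bernoulli(p) source, where µ_0[x_0, . . . , x_{n−1}] = p^{#1's}(1 − p)^{#0's} and h(µ_0) = −p log p − (1 − p) log(1 − p):

```python
import random
from math import log

# SMB sketch for a Bernoulli(p) source: the normalized -log of the
# cylinder measure of a long typical block should be close to h(mu_0).

random.seed(0)
p, n = 0.3, 100_000
ones = sum(random.random() < p for _ in range(n))
f_n = -(ones * log(p) + (n - ones) * log(1 - p)) / n
h = -(p * log(p) + (1 - p) * log(1 - p))
print(f_n, h)   # the two numbers should nearly agree
```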

• Two reinterpretations

(1) The asymptotic entropy equipartition property: Since convergence in L^1 implies convergence in measure, given ε > 0, there exists N > 0 such that if n ≥ N, then

µ_0({x : |−(1/n) log µ_0[x_0, . . . , x_{n−1}] − h(µ_0)| < ε}) > 1 − ε.

Then we get the asymptotic entropy equipartition property: for n ≥ N, there is a list of "good" n-blocks on A_0 such that

(i) the union of the corresponding cylinder sets has measure > 1 − ε,
(ii) each corresponding cylinder set has µ_0-measure between e^{−n[h(µ_0)+ε]} and e^{−n[h(µ_0)−ε]},
(iii) there are at least e^{n[h(µ_0)−ε]} of them.

(2) Source Coding Theorem: Let us switch the base of logarithms to 2 so that we can talk about bits. Then the minimum mean number of bits per symbol required to encode an ergodic source is the entropy of the source.

Theorem 32.2. Let [A_0, µ_0] be an ergodic stationary source and A = {0, 1}. Then, given ε > 0, if k is large enough and n/k > h(µ_0) + ε, then there is a block code e : A_0^k → A^n (so rate = n/k) which is one-to-one on a set of k-blocks on A_0 whose associated cylinder sets form a set of input sequences of µ_0-measure > 1 − ε.

Proof. We have K ≤ 2^{k[h(µ_0)+ε]} "good" k-blocks on A_0 (whose cylinders cover > 1 − ε of X). We want to assign to each "good" k-block a different n-block on {0, 1}. There are 2^n n-blocks on {0, 1}, so the assignment can be made as soon as 2^n ≥ K, i.e., n ≥ log K; and since log K ≤ k[h(µ_0) + ε], it suffices that n/k ≥ h(µ_0) + ε. �


Conversely, any code from A_0^* to A^* which is one-to-one on each A_0^k must have expected rate ≥ h(µ_0). We state this more precisely as follows:

If e : A_0^* → A^* is a map which is one-to-one on each A_0^k, then

∫ (l(e[x_0 · · · x_{k−1}])/k) dµ_0(x) ≥ H(α_0^{k−1})/k ≥ h(µ_0).

(Here α is the time-0 partition of A_0^Z and l(B) denotes the length of a block B.)

H. White showed that in fact

liminf_{k→∞} l(e[x_0 · · · x_{k−1}])/k ≥ h(µ_0) a.e. dµ_0.


33. May 5 (Notes by KJ and RP)

The Shannon-McMillan-Breiman Theorem of the last section gives a coding-theoretic understanding of the entropy of the source. There might be some part of the source which is irrelevant or infrequent, so you don't need to reserve all the extra blocks to code efficiently.

Remark 33.1. This sort of source coding becomes more effective in rate distortion theory. This theory begins with a measure of cost or distortion associated with each pair

(22) (u, e(u)), where u ∈ A_0^* and e(u) ∈ A^*.

Whereas in the applications of the Shannon-McMillan-Breiman Theorem we either code a block or we don't, in the Source Coding Rate Distortion Theorem we minimize total distortion.

33.1. Connecting the Source to the Channel. Recall that we have a source [A_0, µ_0] and a channel [A, {ν_x}, B], where A and B are the input and output alphabets, respectively, and ν_x is a measure on Y = B^Z for each x ∈ X = A^Z. We also have an input-output measure λ defined by

(23) λ(E × F) = ∫_E ν_x(F) dµ(x),

where E ⊂ X = A^Z, F ⊂ Y = B^Z, and µ is the input measure on X. As discussed before, the channel can be seen as a wire which transmits the message, or maybe something else, depending on chance.

We also have an encoder e : A_0^Z → A^Z, a measurable function which prepares the message to be sent across the channel. For example, e could be a block or a sliding block code, the cases we will focus on.

The encoder determines an input measure µ = eµ_0 on A^Z, i.e., µ(E) = µ_0(e^{−1}E). We also get an output measure ν on Y = B^Z defined by νF = λ(π_Y^{−1}F), where π_Y : X × Y → Y is the projection. The measure ν gives the statistics of messages coming out of the channel if they come in with statistics given by the input measure.

In addition to the encoder, we may also have a decoder d, a measurable mapping from Y to A_0^Z. After the process of encoding, transmission across the channel, and decoding, we hope that the messages which come out give an idea of the messages that went in.

In some of what follows we simplify the situation, assuming (for example) that the encoding map is the identity, so that we can concentrate on the channel.

33.2. Mutual Information and Capacity. Capacity is a measure of how much information one can hope to push across the channel, analogous to how much water per unit time can flow through a pipe.

[Figure 41: Encoding, transmission, and decoding of a message: the source [A_0, µ_0] feeds the encoder e, then the channel [A, {ν_x}, B], then the decoder d.]

Let (X, B, µ) be a probability space and T : X → X a measure-preserving transformation. In our case, X is a subshift and T is the shift. If α and β are measurable partitions of X, then the mutual information of α and β is

(24) H(α; β) = H(α) − H(α|β)
(25)          = H(β) − H(β|α)
(26)          = H(α) + H(β) − H(α ∨ β).

(The last equation comes from page 21 of the ergodic theory notes, Proposition 6.3, which gives H(α ∨ β) = H(α) + H(β|α).)

We can interpret H(α|β) as the amount of extra information you get from knowing what cell of α you are in, given that you know what cell of β you are in.

From the definitions of entropy,

(27) H(α; β) = −∑_{A∈α} λ(A) log λ(A) + ∑_{A∈α, B∈β} λ(A ∩ B) log λ(A|B)

(where the log in question may be base 2). When we are talking about a channel, we use this definition with

(28) α = π_X^{−1}(time-0 partition of X),
(29) β = π_Y^{−1}(time-0 partition of Y).

This gives the mutual information of one symbol going in and another coming out. If we wish, we can use a higher block representation and get mutual information based on initial n-blocks: instead of α and β above, use

(30) α_0^{n−1} = π_X^{−1}(time-0 partition of X into n-blocks) and
(31) β_0^{n−1} = π_Y^{−1}(time-0 partition of Y into n-blocks).

If µ is an input measure on X = A^Z for a channel [A, {ν_x}, B], define its transmission rate to be

(32) R(µ) = lim_{n→∞} (1/n) H_λ(α_0^{n−1}; β_0^{n−1})
(33)      = lim_{n→∞} (1/n) [H_λ(α_0^{n−1}) − H_λ(α_0^{n−1}|β_0^{n−1})]
(34)      = lim_{n→∞} (1/n) [H_λ(β_0^{n−1}) − H_λ(β_0^{n−1}|α_0^{n−1})]
(35)      = lim_{n→∞} (1/n) [H_λ(α_0^{n−1}) + H_λ(β_0^{n−1}) − H_λ((α ∨ β)_0^{n−1})]
(36)      = h_µ(σ_X) + h_ν(σ_Y) − h_λ(σ_{X×Y}).

(Recall that λ projects to µ on the first coordinate, and α depends only on the first coordinate, and similarly for ν and β.)

To interpret these equations, remember that H represents the average information per symbol, and that gaining information is like losing uncertainty. So in equation (33) we see that the amount of information coming across per unit time is the information put in minus the uncertainty remaining about what was put in (α_0^{n−1}) given what we received (β_0^{n−1}). Also (see (34)), we can interpret it as the amount of information coming out minus the extra uncertainty due to noise, i.e., the uncertainty about what would come out (β_0^{n−1}) even if we knew what had been put in (α_0^{n−1}). Finally (in (35)), the entropy of the input process plus the entropy of the output process minus the entropy of the joint process reflects the information resulting from the connection between the two processes.


Given an input measure µ, R(µ) tells how much useful information is coming out of the other end of the channel per unit time.

The capacity of the channel is

(37) C = sup_µ R(µ),

the supremum being taken over all stationary ergodic measures µ.

In talking about capacity, engineers like to consider "operational definitions", discussing what you can actually do, as opposed to this more theoretical treatment. For other variations, we can restrict or open up the types of input statistics allowed (for example, consider non-ergodic or even non-stationary input measures). In principle, these definitions of capacity might be essentially different.
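A worked sketch (a standard textbook computation, not from the notes) for the DMC of Example 32.1: with a Bernoulli(p) input, R(µ) = H(β) − H(β|α) = H_2(q) − H_2(ε) bits per symbol, where q is the probability of an output 1, and the supremum over p gives C = 1 − H_2(ε).

```python
from math import log2

# Transmission rate of the binary symmetric channel with crossover
# probability eps under a Bernoulli(p_in) input; H(beta|alpha) = H2(eps)
# regardless of the input, so R is maximized at p_in = 1/2.

def H2(p: float) -> float:
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_rate(p_in: float, eps: float) -> float:
    q = p_in * (1 - eps) + (1 - p_in) * eps   # P(output symbol = 1)
    return H2(q) - H2(eps)

print(bsc_rate(0.5, 0.1))                                  # ~0.531 = 1 - H2(0.1)
print(max(bsc_rate(p / 100, 0.1) for p in range(1, 100)))  # sup over inputs
```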

33.3. Shannon's Channel Coding Theorem. We will consider a "good" channel (to be defined later) with capacity C.

Lemma 33.1 (Feinstein's Lemma). Given ε > 0, for large enough n there are at least N ≥ 2^{n(C−ε)} ε-distinguishable code words u_i in A^n: that is, there exist disjoint sets V_1, V_2, . . . , V_N of n-blocks in B^n such that for every i,

(38–40) ν_{u_i}(V_i) = λ(u_i as input | output was in V_i) = λ{(x, y) : x_0 . . . x_{n−1} = u_i | y_0 . . . y_{n−1} ∈ V_i} ≥ 1 − ε.

Remark 33.2. So with probability 1 − ε we can decode the message: if a block in V_i is received, then with probability 1 − ε we can determine the block that it came from.

The proof involves a jazzed-up Shannon-McMillan-Breiman Theorem, again a random-coding, i.e., block-counting, argument. How you code depends on the channel statistics: we add check bits to protect against noise in the channel. There is a give and take between recoding the source for efficiency (noiseless coding, compression) and adding bits back to make sure the message gets across. Feinstein's Lemma says that it is theoretically possible to do this.

Finally, we have

Theorem 33.2 (Channel Coding Theorem). Consider a "good" channel [A, {ν_x}, B] with capacity C. Let [A_0, µ_0] be a stationary ergodic source with h(µ_0) < C. Then given ε > 0, for large enough n there exist a block code e : A_0^n → A^n and a decoder d : B^n → A_0^n such that when messages from the source are encoded, sent across the channel, and decoded, the probability of error is < ε, i.e.,

(41) λ{(x, y) : e^{−1}(x_0 . . . x_{n−1}) ≠ d(y_0 . . . y_{n−1})} < ε,

and also R(eµ_0) > h(µ_0) − ε.

Conversely, if h(µ_0) > C, this is not possible for every ε > 0.

The idea is that, for example, when h(µ_0) < 1 and A = {0, 1}, there are about 2^{nh(µ_0)} good n-blocks in the source. For large n, we find N ≥ 2^{n(C−ε)} ε-distinguishable n-blocks in A^n, and assign to each good n-block an ε-distinguishable block.

If C − ε > h(µ_0), then this can be done, using Feinstein's Lemma and the Shannon-McMillan-Breiman Theorem. We conclude that for all but ε of the source involved, blocks are distinguished with probability > 1 − ε.

When this can be done for every ε > 0, the source is called block transmissible.


33.4. Good Channels. What is a "good" channel (i.e., good enough that the conclusion of the preceding Channel Coding Theorem should hold)? It should produce an ergodic output process (Y, ν) for each ergodic input (X, µ). What if it is nonanticipating and with finite memory, that is, ν_x{y_0 = b} depends only on (x_{−m} . . . x_{−1} x_0)? This is not enough.

One way we can get a good channel is to impose an additional condition called Nakamura ergodicity: for all cylinder sets U, V ⊂ X and W, Z ⊂ Y,

(1/n) ∑_{k=0}^{n−1} ∫_{σ^k U ∩ V} |ν_x(σ^k W ∩ Z) − ν_x(σ^k W)ν_x(Z)| dµ(x) → 0.

Alternatively, we can replace all three conditions with R. L. Adler's output weak mixing (do not assume finite memory or nonanticipating): for all cylinder sets F, F′ ⊂ Y and for all x ∈ X,

(1/n) ∑_{k=0}^{n−1} |ν_x(σ^k F ∩ F′) − ν_x(σ^k F)ν_x(F′)| → 0.

33.5. Sliding Block Code Versions (Ornstein, Gray, Dobrushin, Kieffer). Basic idea: approximate a block code φ : A^n → B^n by a factor map ψ : X → Y (where X = (A^Z, µ) and Y = (B^Z, ν)) by using Rokhlin towers: given ε > 0 and n ∈ N, we can find F ⊂ X with F, σF, σ^2F, . . . , σ^{n−1}F disjoint and their union having measure > 1 − ε.

[Figure: a Rokhlin tower: levels F, σF, σ^2F, . . . , σ^{n−1}F, each carried to the next by σ, together with a residual set of measure < ε.]

Let x ∈ X,

x = . . . x_r x_{r+1} . . . x_{r+n−1} . . . x_s x_{s+1} . . . x_{s+n−1} . . . ,

where σ^r x, σ^s x ∈ F. Then define

ψx = . . . 00 . . . 0 {φ(x_r . . . x_{r+n−1})} 00 . . . 0 {φ(x_s . . . x_{s+n−1})} . . . .

ψ is shift-commuting, measurable, and approximates φ. The 00 . . . 0 spacers have frequency < ε. Then approximate ψ by a sliding block code by approximating F by cylinder sets. This construction leads to the following ergodic-theoretic channel coding theorems, which hold for different kinds of "good" channels.

Definition 33.1. A channel [A, {ν_x}, B] is

(1) weakly continuous if µ_n → µ weakly on A^Z (i.e., weak*: µ_n(E) → µ(E) for all cylinder sets E) implies that the corresponding input-output measures λ_n → λ (defined by µ_n and µ) weakly on X × Y = A^Z × B^Z;

(2) d̄-continuous if

sup over cylinder sets E of length n in X of sup_{x, x′ ∈ E} d̄_n(ν_x^{(n)}, ν_{x′}^{(n)}) → 0 as n → ∞,

where

d̄_n(ν_1^{(n)}, ν_2^{(n)}) = inf over joinings m of ν_1 and ν_2 of (1/n) ∑_{k=0}^{n−1} m{(u, v) : u_k ≠ v_k}.

Recall that a joining (or coupling) of two spaces, systems, or processes is a system that factors onto both of them: a measure m on Y_1 × Y_2 whose coordinate projections π_{Y_1} and π_{Y_2} carry (Y_1 × Y_2, m) onto (Y_1, ν_1) and (Y_2, ν_2), respectively. (See Petersen's notes on ergodic theory, p. 4.)

Proposition 33.3. Finite memory and nonanticipating ⇒ d̄-continuous ⇒ weakly continuous.

Theorem 33.4. Suppose [A, {ν_x}, B] is a weakly continuous (stationary) channel.

(1) Then an ergodic source [A_0, µ_0] is block transmissible (in the above sense) over the channel if h(µ_0) < C, and not block transmissible if h(µ_0) > C (the case h(µ_0) = C is not settled).
(2) It is sliding-block transmissible if and only if h(µ_0) ≤ C.
(3) If h(µ_0) < C, then the source is 0-error transmissible across the channel: there exist measurable e : A_0^Z → A^Z and d : B^Z → A_0^Z such that λ{(x, y) : e^{−1}x ≠ dy} = 0.

33.6. Further Topics.

(1) "ergodic decompositions" of channels
(2) different definitions of capacity
(3) different kinds of channels (parallel, feedback, etc.)
(4) rate distortion theory (see McEliece)
(5) construction of codes


List of Figures

1 An image quantized into pixels
2 Two equivalent metrics for the finite alphabet shift
3 Two non-equivalent definitions of metrics for the countable alphabet shift
4 Example of the cylinder set [B]_j
5 Two points are close if they agree on a long central block
6 Motivation for the term "cylinder set"
7 A sliding block map
8 Sliding block map with memory m and anticipation n; sliding block map with negative memory
9 The sliding block map is continuous
10 Uniform continuity gives us equivalence classes of 2m + 1 blocks
11 The 2-higher block representation of Σ_2, along with a map from (Σ_2, σ^2) to (Σ_4, σ)
12 Regional transitivity
13 Return time r ∈ R(U)
14 The cycle x_1 to x_3
15 Rotation by α
16 After some time, the set U under the action of T will stay in contact with every sampling set V
17 Shifting intervals [0, n − 1] by small amounts causes heavy overlap
18 P(A ∩ B) = P(A)P(B), i.e., the probability that the cylinder sets appear in the places shown is equal to the product of the probabilities of the cylinder sets appearing on their own
19 Here 0 = α, 1 = β, 2 = γ
20 Here 0 = α, 1 = β, 2 = γ
21 (a) The adic graph; (b) the paths x = 11101 . . . and y = 00011 . . .
22 (a) Adic graph for H; (b) adic graph with restricted paths
23 Cutting and stacking the unit interval
24 Graph of the cutting and stacking function
25 Stages of cutting and stacking
26 Densities of τ(l(k)θ)
27 Defining the Sturmian system
28 Graph representation of (X_F, σ) for F = {110, 011, 111, 101}
29 Graph representation of (X_F, σ) for F = {11}
30 Graph of A^2 for the golden mean SFT
31 The Shannon machine G
32 The even sofic shift
33 A DFA which recognizes the golden mean SFT
34 The even shift
35 The even shift as an NDFA with ε-moves
36 A DFA which recognizes the golden mean SFT
37 To create the NDFA for the sum, wire the two individual machines in parallel
38 The NDFA for r_1 · r_2; q_- is the initial state and f the final good state for the whole machine; a word is accepted by the compound machine if and only if the first machine accepts the part of the word in L(r_1) and the second machine accepts the part of the word in L(r_2)
39 The NDFA for r*
40 The DFA for R^0_ij; there are no other ε-moves allowed in this DFA
41 Encoding, transmission and decoding of a message


Top Related