Lecture 30: Iterated function systems (IFS) and the construction of fractal sets (links.uwaterloo.ca/amath343docs/week11.pdf)


Lecture 30

Iterated function systems (IFS) and the construction of fractal sets

(cont’d)

We continue with our discussion from the previous lecture. In that lecture, we considered the parallel

action of the two maps

f1(x) = (1/3)x , f2(x) = (1/3)x + 2/3 . (1)

(Actually, we looked at the set-valued maps associated with these maps, but we’ll skip that detail for

now.) Repeated iteration of these maps produced the ternary Cantor set on [0, 1].

The above system of two maps acting in parallel is a particular example of an iterated function

system (IFS). For the moment, we shall provide a “working definition” of an IFS in Rn. But we must

first provide a couple of other useful definitions.

Distance function/metric in Rn: Let x, y ∈ Rn. We shall let d(x, y) denote the (Euclidean) distance between x and y. In the special case n = 1,

d(x, y) = |x− y| , x, y ∈ R . (2)

For n ≥ 1, where x = (x1, x2, · · · , xn) and y = (y1, y2, · · · , yn),

d(x, y) = ‖x − y‖ = [ ∑_{k=1}^{n} (xk − yk)^2 ]^{1/2} , x, y ∈ Rn . (3)

(Note: Other distance functions/metrics can be used but we’ll use the Euclidean distance for simplicity.)

Definition (Contraction Mapping): Let D ⊆ Rn and f : D → D. We say that f is a contraction

mapping on D if there exists a constant 0 ≤ C < 1 such that

d(f(x), f(y)) ≤ Cd(x, y) for all x, y ∈ D . (4)

The smallest C for which the above inequality holds is called the contraction factor of f .

In other words, a contraction mapping f maps any two distinct points x and y to a pair of points that are closer together.

Examples: Most of our discussion of IFS will be limited to R and R2, so the following examples

should be sufficient.


1. The maps f1 and f2 examined earlier are contraction mappings on R. Consider f1(x) = (1/3)x: For any x, y ∈ R,

d(f1(x), f1(y)) = |f1(x) − f1(y)| = |(1/3)x − (1/3)y| = (1/3)|x − y| = (1/3) d(x, y) . (5)

The distance between f1(x) and f1(y) is always one-third the distance between x and y. The contraction factor for f1(x) = (1/3)x is C = 1/3.

Now consider f2(x) = (1/3)x + 2/3: For any x, y ∈ R,

d(f2(x), f2(y)) = |f2(x) − f2(y)| = |((1/3)x + 2/3) − ((1/3)y + 2/3)| = (1/3)|x − y| = (1/3) d(x, y) . (6)

The contraction factor for f2(x) = (1/3)x + 2/3 is also C = 1/3.

It will be useful to work with smaller subsets D ⊂ R over which these maps are contractions. With reference to the Cantor set example studied earlier, we can establish that each of the fi maps the interval [0, 1] to itself:

(a) For x ∈ [0, 1], f1(x) = (1/3)x ∈ [0, 1]. Therefore, f1 maps [0,1] to itself.

(b) For x ∈ [0, 1], f2(x) = (1/3)x + 2/3 ∈ [0, 1]. Therefore, f2 maps [0,1] to itself.

We could also use the set-valued versions of these maps to arrive at these results:

(a) f̂1 : [0, 1] → [0, 1/3] ⊂ [0, 1]. Therefore f1 maps [0,1] to itself.

(b) f̂2 : [0, 1] → [2/3, 1] ⊂ [0, 1]. Therefore f2 maps [0,1] to itself.

Therefore f1 and f2 are contraction maps on the set D = [0, 1].

In general, the following affine map on R,

f(x) = ax + b , (7)


is a contraction mapping on R if |a| < 1, in which case its contraction factor is C = |a|. For all x, y ∈ R,

|f(x) − f(y)| = |a| |x − y| , (8)

i.e., the distance between f(x) and f(y) is exactly |a| times the distance between x and y. For nonlinear maps, such an equality will not exist, and the best we can do is to find an inequality between the distances. More on this later.

That being said, in most of the examples and applications examined for the remainder of this section, affine maps will be employed.
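The equality in Eq. (8) is easy to check numerically. A minimal Python sketch, with a = 1/3 and b = 2/3 as in the map f2 above (the sample points are an illustrative choice):

```python
import random

def f(x, a=1/3, b=2/3):
    # Affine map f(x) = a*x + b; contractive on R when |a| < 1.
    return a * x + b

# The ratio d(f(x), f(y)) / d(x, y) should equal |a| for every x != y.
random.seed(0)
ratios = []
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    if x != y:
        ratios.append(abs(f(x) - f(y)) / abs(x - y))

print(max(ratios))  # close to 1/3, the contraction factor C
```

Every ratio equals 1/3 up to floating-point rounding, in agreement with C = |a|.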

2. Consider the following class of affine transformations in the plane R2,

f(x) = Ax + b , (9)

or, in coordinate form (matrices written row by row, with rows separated by semicolons),

f(x1, x2) = [ a11  a12 ; a21  a22 ] [ x1 ; x2 ] + [ b1 ; b2 ] . (10)

Note that for x, y ∈ R2,

f(x) − f(y) = A(x − y) . (11)

This implies that

d(f(x), f(y)) = ‖A(x − y)‖2 ≤ ‖A‖2 d(x, y) , (12)

where ‖A‖2 denotes the (Euclidean) matrix norm of A, which you may have encountered in a course on linear algebra. It suffices to state here that a sufficient condition for the affine mapping f to be contractive is that |λ1| < 1 and |λ2| < 1, where the λi are the eigenvalues of A.

Definition (Iterated function system): Let f = {f1, f2, · · · , fN} denote a set of N contraction mappings on a closed and bounded subset D ⊂ Rn, i.e., for each k ∈ {1, 2, · · · , N}, fk : D → D and there exists a constant 0 ≤ Ck < 1 such that

d(fk(x), fk(y)) ≤ Ck d(x, y) for all x, y ∈ D . (13)

Associated with this set of contraction mappings is the “parallel set-valued mapping” f̂, defined as follows: For any subset S ⊂ D,

f̂(S) = ⋃_{k=1}^{N} f̂k(S) , (14)

where the f̂k denote the set-valued mappings associated with the mappings fk. The set of maps f with parallel operator f̂ defines an N-map iterated function system on the set D ⊂ Rn.


We now state the main result regarding N-map iterated function systems as defined above.

Theorem: There exists a unique set A ⊂ D which is the “fixed point” of the parallel IFS operator f̂, i.e.,

A = f̂(A) = ⋃_{k=1}^{N} f̂k(A) . (15)

Moreover, if you start with any set S0 ⊂ D (even a single point x0 ∈ D) and form the iteration sequence,

Sn+1 = f̂(Sn) , (16)

then the sequence of sets {Sn} converges to the fixed-point set A ⊂ D. For this reason, A is known as the attractor of the IFS.

From Eq. (15), we see that the set A is self-similar, i.e., A is the union of N geometrically contracted copies of itself. We shall examine this property in a number of examples below.

There are actually two practical consequences of the above result, depending upon one’s perspective:

1. If you have a set of contraction maps f = {f1, f2, · · · , fN}, you can quickly construct “pictures” of the attractor set A. We’ll discuss this in more detail a little later.

2. Given a self-similar, possibly fractal, set S, one may be able to determine the maps {fk} comprising the IFS f for which S is the attractor. This will then allow us to construct the “pictures” of S mentioned in 1. above without having to use “generators.”
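The fixed-point iteration of Eq. (16) can be sketched directly for the two-map Cantor IFS of Eq. (1), representing each Sn as a list of closed intervals (a minimal Python sketch; the interval representation is an illustrative choice, valid because both maps are increasing and affine):

```python
def apply_ifs(intervals, maps):
    # Parallel set-valued action: each map sends [a, b] to [f(a), f(b)]
    # (valid here because the maps are increasing affine maps).
    return [(f(a), f(b)) for f in maps for (a, b) in intervals]

f1 = lambda x: x / 3
f2 = lambda x: x / 3 + 2 / 3

S = [(0.0, 1.0)]        # S0 = [0, 1]
for _ in range(3):      # compute S1, S2, S3
    S = apply_ifs(S, [f1, f2])

print(len(S))           # 2**3 = 8 subintervals at stage 3
print(S[0], S[-1])      # leftmost is [0, 1/27], rightmost is [26/27, 1]
```

Each application doubles the number of intervals and shrinks each by 1/3, exactly the ternary Cantor set construction.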

Examples of IFS and their (fractal) attractors

Cantor-like sets on [0,1]

At this point, it is instructive to look at some examples. We have already shown how the Cantor set

C can be viewed as the attractor of a two-map IFS on [0,1], namely, the two maps,

f1(x) = (1/3)x , f2(x) = (1/3)x + 2/3 . (17)

Recall that both of these maps are contraction maps on [0, 1] with contraction factors equal to 1/3. This is due to the factor 1/3 which multiplies x in each map. What would happen if we changed this factor? For example, consider the two maps,

f1(x) = (1/4)x , f2(x) = (1/4)x + 3/4 . (18)


Once again, we have f1(0) = 0 and f2(1) = 1. The action of the IFS parallel map f̂ composed of these two maps on the interval I0 = [0, 1] is as follows:

f̂([0, 1]) = f̂1([0, 1]) ⋃ f̂2([0, 1]) = [0, 1/4] ⋃ [3/4, 1] = I1 . (19)

The action of f̂ on an interval is to remove the middle one-half of the interval:

[Diagram: I0 = [0, 1]; f̂1(I0) = [0, 1/4] and f̂2(I0) = [3/4, 1] together form I1.]

An application of the IFS parallel map on I1 yields the following result,

f̂(I1) = f̂1(I1) ⋃ f̂2(I1) = [0, 1/16] ⋃ [3/16, 1/4] ⋃ [3/4, 13/16] ⋃ [15/16, 1] = I2 . (20)

Repeated action of this IFS produces a nested set of sets In which, in the limit n → ∞, converges to a Cantor-like set – we’ll call it C1/4 – that lies in [0, 1]. This limiting set is clearly not the ternary Cantor set – which would be called C1/3 to be consistent – but it is a Cantor-like set. In fact, using the methods from the previous section on fractal dimension, it should not be too difficult to see that the fractal dimension of this set is

D = log (no. of copies) / log (1/scaling factor) = log 2 / log 4 = 1/2 . (21)

We may easily generalize this dissection procedure by considering the following two maps on [0, 1],

f1(x) = rx , f2(x) = r(x − 1) + 1 = rx + (1 − r) , (22)

where

0 < r < 1/2 . (23)


Note that

f1(0) = 0 , f2(1) = 1 . (24)

The action of the associated IFS parallel map f̂ on I0 = [0, 1] is as follows,

f̂([0, 1]) = f̂1([0, 1]) ⋃ f̂2([0, 1]) = [0, r] ⋃ [1 − r, 1] = I1 . (25)

This IFS produces a dissection of [0,1] composed of two intervals of length r, as shown below.

[Diagram: I0 = [0, 1]; f̂1(I0) = [0, r] and f̂2(I0) = [1 − r, 1] together form I1.]

We saw this iteration procedure earlier in the course in connection with shifted Tent Maps on [0,1]. It produces a set of nested sets In which, in the limit n → ∞, converges to a Cantor-like set Cr in [0,1]. Once again using the methods of the previous section on fractal dimension, the dimension of this Cantor set is found to be

D = log (no. of copies) / log (1/r) = log 2 / log (1/r) . (26)

Special cases: Note that if we allow the parameter r to be 1/2, then no dissection is performed, i.e.,

f̂([0, 1]) = f̂1([0, 1]) ⋃ f̂2([0, 1]) = [0, 1/2] ⋃ [1/2, 1] = [0, 1] . (27)

In other words, we have simply regenerated the interval [0, 1]. As such, I0 is a fixed point of the IFS.

From Eq. (26), the (fractal) dimension of this set is

D = log 2 / log (1/(1/2)) = log 2 / log 2 = 1 , (28)

which is consistent with the fact that the attractor is the interval [0, 1].


Let us now examine the case 1/2 < r < 1. Here, the two sets

f̂1([0, 1]) = [0, r] , f̂2([0, 1]) = [1 − r, 1] , (29)

intersect. There is no dissection of the interval [0, 1]: the union of these sets produces [0, 1]. Unfortunately, the formula for the fractal dimension is not applicable here since the two sets overlap. The dimension of the attractor set [0, 1] is always 1.

The dissection procedure discussed above can be further generalized by allowing the contraction factors of the two maps to be independent, i.e.,

f1(x) = rx , f2(x) = s(x − 1) + 1 = sx + (1 − s) , (30)

where

0 < r < 1 , 0 < s < 1 and r + s < 1 . (31)

Note that

f1(0) = 0 , f2(1) = 1 . (32)

The action of the associated IFS parallel map f̂ on I0 = [0, 1] is as follows,

f̂([0, 1]) = f̂1([0, 1]) ⋃ f̂2([0, 1]) = [0, r] ⋃ [1 − s, 1] = I1 . (33)

This IFS produces a dissection of [0,1] composed of two intervals, one of length r and the other of

length s as sketched below.

[Diagram: I0 = [0, 1]; f̂1(I0) = [0, r] and f̂2(I0) = [1 − s, 1] together form I1.]

Repeated application of this IFS map will produce a nested set of intervals In which, in the limit n → ∞, converges to a Cantor-like set Crs ⊂ [0, 1]. From our previous discussion of the fractal dimension, in particular, the generalized von Koch curves, the fractal dimension Drs of the Cantor-like set Crs will be the unique solution of the equation,

r^D + s^D = 1 . (34)


Unfortunately, no closed-form expression for this solution exists, except in the special case 0 < r = s < 1/2, in which case the above equation becomes

2 r^D = 1 =⇒ D = log 2 / log (1/r) , (35)

which is the result obtained earlier.
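Although Eq. (34) has no closed form in general, it is easy to solve numerically. A minimal Python sketch using bisection (the function g(D) = r^D + s^D is strictly decreasing in D when 0 < r, s < 1, with g(0) = 2, so the root is unique and easily bracketed):

```python
import math

def moran_dimension(r, s, tol=1e-12):
    # Solve r**D + s**D = 1 for D by bisection: g(D) = r**D + s**D
    # is strictly decreasing in D (since 0 < r, s < 1) and g(0) = 2,
    # so the root is unique.
    lo, hi = 0.0, 1.0
    while r**hi + s**hi > 1.0:   # expand the bracket if needed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if r**mid + s**mid > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Equal factors recover Eq. (35): r = s = 1/4 gives D = log 2 / log 4 = 1/2.
print(moran_dimension(0.25, 0.25))
```

For the IFS of Eq. (30) with r + s < 1, the solution always lies in (0, 1), consistent with a Cantor-like attractor.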

The von Koch curve

In a previous lecture, we showed that the von Koch curve, shown below,

[Figure: the von Koch curve]

may be produced by the repeated action of the following generator G,

[Diagram: the generator G replaces a line segment of length l by four segments, each of length l/3,]

starting with the set I0 = [0, 1]. We now show how this fractal curve can be generated by an IFS. We

assume, once again, that the leftmost point of the curve is situated at (0,0) and rightmost point at (1,0).

Recall once again that the von Koch curve C is self-similar, i.e., it may be expressed as a union of four contracted copies of itself. Each contracted copy is one-third the size of C. This self-similarity is built into the set by the generator G that was used to construct it.

The first copy, which starts at the point (0,0), is obtained by shrinking the von Koch curve by a factor of 1/3 toward (0,0) using the following affine map in R2:

f1(x, y) = [ 1/3  0 ; 0  1/3 ] [ x ; y ] . (36)

The second copy, which starts at the point (1/3, 0) and ends at (1/2, √3/6), may be obtained by first shrinking the von Koch curve toward (0,0) with contraction factor 1/3, then rotating it by an angle π/3 and finally translating it in the x-direction by 1/3.

Note: At this point, it might be helpful to recall the form of a rotation matrix in R2. The matrix R which rotates all points in the plane by an angle θ (counterclockwise for θ > 0) about the center point (0, 0) is as follows,

R = [ cos θ  −sin θ ; sin θ  cos θ ] . (37)

As a check, when θ = 0, R = I, the identity matrix.

Returning to our main discussion, the following affine map will produce the second copy of the von Koch curve,

f2(x, y) = (1/3) [ 1/2  −√3/2 ; √3/2  1/2 ] [ x ; y ] + [ 1/3 ; 0 ] . (38)

The third copy of the von Koch curve, which starts at the point (1/2, √3/6), may be obtained by once again shrinking the von Koch curve toward (0,0) with contraction factor 1/3, then rotating it by angle −π/3 and finally translating it to (1/2, √3/6). All of these actions are accomplished with the following affine map,

f3(x, y) = (1/3) [ 1/2  √3/2 ; −√3/2  1/2 ] [ x ; y ] + [ 1/2 ; √3/6 ] . (39)

The fourth copy, which ends at (1,0), may be obtained by once again shrinking the von Koch curve toward (0,0) with contraction factor 1/3 and then simply translating it to (2/3, 0). This is accomplished with the following affine map,

f4(x, y) = [ 1/3  0 ; 0  1/3 ] [ x ; y ] + [ 2/3 ; 0 ] . (40)

We have achieved our goal: The four contraction maps fi, 1 ≤ i ≤ 4, produce four copies f̂i(C) of the von Koch curve C such that the union of these copies reproduces C, i.e.,

C = ⋃_{i=1}^{4} f̂i(C) . (41)

The results are summarized below.


[Figure: the von Koch curve]

f1(x, y) = [ 1/3  0 ; 0  1/3 ] [ x ; y ] + [ 0 ; 0 ]
Contraction factor r = 1/3, no rotation, no translation.

f2(x, y) = [ 1/6  −√3/6 ; √3/6  1/6 ] [ x ; y ] + [ 1/3 ; 0 ]
Contraction factor r = 1/3, rotation π/3, translation.

f3(x, y) = [ 1/6  √3/6 ; −√3/6  1/6 ] [ x ; y ] + [ 1/2 ; √3/6 ]
Contraction factor r = 1/3, rotation −π/3, translation.

f4(x, y) = [ 1/3  0 ; 0  1/3 ] [ x ; y ] + [ 2/3 ; 0 ]
Contraction factor r = 1/3, translation.
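As a quick sanity check of these four maps (a minimal Python sketch; the endpoint computation follows the discussion above), each fi should carry the curve’s endpoints (0,0) and (1,0) to the endpoints of the i-th copy:

```python
import math

s3 = math.sqrt(3.0)

def apply_map(A, b, p):
    # Affine map p -> A p + b, with A given row by row.
    (a11, a12), (a21, a22) = A
    x, y = p
    return (a11 * x + a12 * y + b[0], a21 * x + a22 * y + b[1])

# The four von Koch IFS maps f1, ..., f4 as listed above: (A, b) pairs.
maps = [
    (((1/3, 0.0), (0.0, 1/3)), (0.0, 0.0)),
    (((1/6, -s3/6), (s3/6, 1/6)), (1/3, 0.0)),
    (((1/6, s3/6), (-s3/6, 1/6)), (0.5, s3/6)),
    (((1/3, 0.0), (0.0, 1/3)), (2/3, 0.0)),
]

# Endpoints of the i-th contracted copy: f_i(0, 0) and f_i(1, 0).
for A, b in maps:
    print(apply_map(A, b, (0.0, 0.0)), apply_map(A, b, (1.0, 0.0)))
```

The printed endpoints chain together as (0,0)–(1/3,0)–(1/2,√3/6)–(2/3,0)–(1,0), so consecutive copies join end to end as required.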


Lecture 31

Iterated function systems (IFS) and the construction of fractal sets

(cont’d)

Sierpinski gasket

The Sierpinski gasket, seen before and shown below, is a union of three copies of itself, so we shall

need three maps in the IFS. Each IFS map fi will be a 1

2contraction with no rotation. The affine

maps are quite easy to determine and are listed below.

[Figure: the Sierpinski gasket]

f1(x, y) = [ 1/2  0 ; 0  1/2 ] [ x ; y ] + [ 0 ; 0 ]
Contraction factor r = 1/2, rotation 0.

f2(x, y) = [ 1/2  0 ; 0  1/2 ] [ x ; y ] + [ 1/4 ; √3/4 ]
Contraction factor r = 1/2, rotation 0, translation.

f3(x, y) = [ 1/2  0 ; 0  1/2 ] [ x ; y ] + [ 1/2 ; 0 ]
Contraction factor r = 1/2, rotation 0, translation.


Modified Sierpinski gasket - “Cantor tree”

If we alter the contraction factors of the three affine maps so that the first and third copies touch

the second copy, but do not touch each other, we produce the following fractal set. A horizontal line

intersecting the set will generally intersect it over a Cantor-like set.

[Figure: modified Sierpinski gasket – “Cantor tree”]

f1(x, y) = [ 2/5  0 ; 0  2/5 ] [ x ; y ] + [ 0 ; 0 ]

f2(x, y) = [ 3/5  0 ; 0  3/5 ] [ x ; y ] + [ 1/5 ; √3/5 ]

f3(x, y) = [ 2/5  0 ; 0  2/5 ] [ x ; y ] + [ 3/5 ; 0 ]


Another modified Sierpinski gasket - “Cantor dust”

If the contraction factors of the three affine maps are adjusted so that there is no intersection of copies,

then the result is a “3D Cantor dust”.

[Figure: modified Sierpinski gasket – “Cantor dust”]

f1(x, y) = [ 2/5  0 ; 0  2/5 ] [ x ; y ] + [ 0 ; 0 ]

f2(x, y) = [ 2/5  0 ; 0  2/5 ] [ x ; y ] + [ 3/10 ; √3/10 ]

f3(x, y) = [ 2/5  0 ; 0  2/5 ] [ x ; y ] + [ 3/5 ; 0 ]


And yet another modified Sierpinski gasket

The contraction factors of the three maps have now been increased beyond 1/2 so that there is overlapping between the copies f̂i(S). This overlapping will occur in a self-similar manner throughout the set.

[Figure: modified Sierpinski gasket with overlap]

f1(x, y) = [ 3/5  0 ; 0  3/5 ] [ x ; y ] + [ 0 ; 0 ]

f2(x, y) = [ 3/5  0 ; 0  3/5 ] [ x ; y ] + [ 1/5 ; √3/5 ]

f3(x, y) = [ 3/5  0 ; 0  3/5 ] [ x ; y ] + [ 2/5 ; 0 ]


Modified Sierpinski gasket with rotations

In the following examples, the maps have the general form,

fi(x, y) = r [ cos θ  −sin θ ; sin θ  cos θ ] [ x ; y ] + [ b1 ; b2 ]

[Figure: modified Sierpinski gasket with one rotation map]

f1(x, y) : r = 0.5 , θ = π/20 .
f2(x, y) : r = 0.5 , θ = 0 .
f3(x, y) : r = 0.5 , θ = 0 .


Modified Sierpinski gasket with rotations

fi(x, y) = r [ cos θ  −sin θ ; sin θ  cos θ ] [ x ; y ] + [ b1 ; b2 ]

[Figure: modified Sierpinski gasket with two rotation maps]

f1(x, y) : r = 0.5 , θ = π/20 .
f2(x, y) : r = 0.5 , θ = 0 .
f3(x, y) : r = 0.5 , θ = π/20 .


Modified Sierpinski gasket with rotations

fi(x, y) = r [ cos θ  −sin θ ; sin θ  cos θ ] [ x ; y ] + [ b1 ; b2 ]

[Figure: modified Sierpinski gasket with two rotation maps]

f1(x, y) : r = 0.5 , θ = π/20 .
f2(x, y) : r = 0.5 , θ = 0 .
f3(x, y) : r = 0.5 , θ = −π/20 .


Sierpinski gasket with an additional map

We now add a fourth map to the Sierpinski gasket IFS. This fourth map is a contraction map with fixed point at the center of the gasket. The contraction factor is 1/5 – small enough to map the entire triangle into the formerly empty space. Note that these copies are propagated to all “formerly empty” spots.

[Figure: Sierpinski gasket with extra map in middle]

f1(x, y) = [ 1/2  0 ; 0  1/2 ] [ x ; y ] + [ 0 ; 0 ]

f2(x, y) = [ 1/2  0 ; 0  1/2 ] [ x ; y ] + [ 1/4 ; √3/4 ]

f3(x, y) = [ 1/2  0 ; 0  1/2 ] [ x ; y ] + [ 1/2 ; 0 ]

f4(x, y) = [ 1/5  0 ; 0  1/5 ] [ x ; y ] + [ 2/5 ; √3/4 − 1/5 ]


“Twin dragon” attractor in R2

This is a well-known set that is the attractor of a two-map IFS in the plane. The fixed points of the

two maps lie on the real line.

f1(x, y) = (1/√2) [ 1/√2  −1/√2 ; 1/√2  1/√2 ] [ x ; y ] + [ 1 ; 0 ]

f2(x, y) = (1/√2) [ 1/√2  −1/√2 ; 1/√2  1/√2 ] [ x ; y ] − [ 1 ; 0 ]

Both affine maps involve (i) a contraction by 1/√2 and (ii) a rotation by π/4, followed by a translation. The twin-dragon attractor tiles the plane R2. There are no holes in this set. The two copies f1(A) and f2(A) have been shaded differently so that they can be viewed more easily as contracted (by 1/√2) and rotated (by π/4) copies of A.


Tree-like sets

The following 3-map IFS shows how tree-like objects can be generated. The map f3 generates the

main stem. The other two maps take the stem, translate it upwards and then rotate it in opposite

directions. Of course, these maps operate not only on the stem but on the entire set. For this reason,

the self-similar tree-like attractor set is produced.

[Figure: simple tree]

f1(x, y) = [ 0.353  −0.353 ; 0.353  0.353 ] [ x ; y ] + [ 0.0 ; 0.5 ]
Contraction factor r = 0.5, rotation π/4.

f2(x, y) = [ 0.353  0.353 ; −0.353  0.353 ] [ x ; y ] + [ 0.0 ; 0.5 ]
Contraction factor r = 0.5, rotation −π/4.

f3(x, y) = [ 0.0  0.0 ; 0.0  0.55 ] [ x ; y ] + [ 0.0 ; 0.5 ]
“Terminator”: Squash onto the y-axis.


Tree-like sets (cont’d)

A slight modification of the previous IFS – the maps f1 and f2 no longer have pure rotations, but

shear transformations.

[Figure: less simple tree]

f1(x, y) = [ 0.4  −0.433 ; 0.433  0.4 ] [ x ; y ] + [ 0.0 ; 0.5 ]
Shear transformation.

f2(x, y) = [ 0.4  0.433 ; −0.433  0.4 ] [ x ; y ] + [ 0.0 ; 0.5 ]
Shear transformation.

f3(x, y) = [ 0.0  0.0 ; 0.0  0.55 ] [ x ; y ] + [ 0.0 ; 0.47 ]
“Terminator”: Squash onto the y-axis.


The epitome of fractal attractors – at least in 1986

This is Prof. Michael Barnsley’s celebrated “spleenwort fern,” the attractor of a four-map IFS.

[Figure: Barnsley’s spleenwort fern]

f1(x, y) = [ 0.0  0.0 ; 0.0  0.16 ] [ x ; y ] + [ 0.50 ; 0.0 ]
“Terminator”: Squash onto the y-axis to make the stem.

f2(x, y) = [ 0.20  −0.26 ; 0.23  0.22 ] [ x ; y ] + [ 0.40 ; 0.05 ]

f3(x, y) = [ −0.15  0.28 ; 0.26  0.24 ] [ x ; y ] + [ 0.57 ; −0.12 ]

f4(x, y) = [ 0.85  0.04 ; −0.04  0.85 ] [ x ; y ] + [ 0.08 ; 0.18 ]


In the figure below, the action of the IFS maps, denoted as wi, is shown, in order to give the reader an idea of how smaller and smaller copies are produced in a geometric cascade as we move upwards toward the peak of the fern. This cascade is produced by map w4. Map w1, the “terminator”, produces the stem. Map w2 produces the lower right copy of the fern and map w3 flips the fern and produces the lower left copy. Some playing around with the map parameters is necessary in order to ensure that the bottoms of the stems of the two copies produced by w2 and w3 touch the main stem – otherwise, there would be gaps that propagate throughout the fern. For the same reason, it is important that the copy produced by the upward translation of w4 also touch the main stem.


Using IFS to model the real world?

Professor Barnsley’s beautiful discovery gave rise to the question – and the tremendous amount of

research it inspired – of how well IFS could be used to generate sets that approximated real-world

objects. This eventually led to the idea of fractal image coding – a way of representing photos

and videos as attractors of special types of IFS. Early in this research (mid 1980’s and early 1990’s),

fractal image coding was shown to be an effective method of image compression, i.e., reducing the

amount of computer memory required to store a representation of an image, from which it could be

regenerated. (JPEG compression has been, and continues to be, a standard method of image compression, although much more powerful methods of compression exist today. JPEG compression, by the way, is based on the method of Fourier series representation of functions. It is actually a version of the discrete cosine transform for discrete – in this case, digital – data sets.)


Lecture 32

Iterated function systems (IFS) and the construction of fractal sets

(cont’d)

Methods to generate fractal attractor sets of IFS

We have not yet addressed one important matter regarding IFS and their attractor sets: Once we have a set of N contraction maps f = {f1, f2, · · · , fN}, how do we produce a picture of the attractor A of the parallel IFS operator f̂ which they define, i.e., the set which possesses the following self-similarity property,

A = ⋃_{i=1}^{N} f̂i(A) ? (42)

In what follows, we briefly describe two principal methods that can be used to construct the fractal attractor set A of an N-map IFS in Rn.

Random iteration algorithm: This method, the main ideas of which were described in a question

in Problem Set No. 5, was used to generate the pictures of IFS attractors presented earlier. Start

with a seed point x0 ∈ Rn and perform the following random iteration procedure:

xn+1 = fσn(xn) , n ≥ 0 . (43)

At each step n, the index σn is chosen randomly and independently from the set of indices {1, 2, · · · , N}. If the seed point x0 happens to lie in A, then all future iterates xn will also be in A. However, just to be safe, one should wait for a sufficient number of iterations, say 10 or, better yet, 50, before plotting the points xn. If x0 ∉ A, then, because of the contractivity of the fi maps, future iterates xn will be attracted to A.

Because the maps fi are chosen at random, the iterates xn will be travelling on (or at least near) the attractor A in a random way. It is necessary to plot a sufficiently large number of the xn so that all regions of the attractor A are visited.

Here we mention that all of the IFS attractors presented in these lecture notes, e.g., the von Koch curve and the Sierpinski triangle, were computed using the random iteration algorithm.
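The random iteration procedure of Eq. (43) can be sketched as follows for the three-map Sierpinski gasket IFS listed earlier (a minimal Python sketch; the seed point and the burn-in of 50 iterates follow the suggestion above):

```python
import math
import random

# Translation parts b_i of the three Sierpinski gasket maps;
# each map is (x, y) -> (x/2, y/2) + b_i.
shifts = [(0.0, 0.0), (0.25, math.sqrt(3) / 4), (0.5, 0.0)]

def chaos_game(n_points, seed=0):
    rng = random.Random(seed)
    x, y = 0.3, 0.3                   # arbitrary seed point x0
    points = []
    for n in range(n_points + 50):
        bx, by = rng.choice(shifts)   # random index sigma_n, uniform p_i
        x, y = x / 2 + bx, y / 2 + by
        if n >= 50:                   # discard burn-in iterates
            points.append((x, y))
    return points

pts = chaos_game(10000)
print(len(pts))   # 10000 points on (or very near) the gasket attractor
```

Plotting the returned points (e.g., as a scatter plot) reproduces the gasket pictures shown earlier.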

There still remains the question of what probabilities to employ in the selection of the maps fi. For many of the “standard” IFS attractors, e.g., the von Koch curve and the Sierpinski gasket, good results will be obtained if the maps are chosen with equal probability, i.e., the probability pi of choosing index i ∈ {1, 2, · · · , N} is pi = 1/N, 1 ≤ i ≤ N. For more complicated sets such as the Spleenwort fern, however, it is necessary to employ nonuniform probabilities. To see this, note that the “terminator”


map f1 (or w1) for the Spleenwort fern maps the entire attractor A onto the tiny stem situated at the

base of the fern. On the other hand, map f4 (or w4) is responsible for generating about 90% of the

fern in terms of the geometric cascade. As such, it is important that a much higher probability be

assigned to map w4. A rough rule of thumb is that the probability pi of choosing map fi should be in

some way related to the ratio of the area of the copy fi(A) to the area of A.
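This rule of thumb can be made concrete for affine maps: the area of fi(A) scales like |det Ai|, so one common choice (an assumption here, not prescribed in the notes) is pi ∝ |det Ai|, with a small floor for maps of zero determinant such as the terminator. A minimal Python sketch using the fern coefficients listed earlier; the floor value 0.01 is an illustrative choice:

```python
# 2x2 matrices A_i of the four spleenwort-fern maps listed earlier.
matrices = [
    ((0.0, 0.0), (0.0, 0.16)),      # w1: terminator (stem), det = 0
    ((0.20, -0.26), (0.23, 0.22)),  # w2: lower right copy
    ((-0.15, 0.28), (0.26, 0.24)),  # w3: lower left copy (flip)
    ((0.85, 0.04), (-0.04, 0.85)),  # w4: main geometric cascade
]

def det2(A):
    (a, b), (c, d) = A
    return a * d - b * c

# p_i proportional to |det A_i|, floored at 0.01 so the terminator
# is still chosen occasionally.
weights = [max(abs(det2(A)), 0.01) for A in matrices]
total = sum(weights)
probs = [w / total for w in weights]
print([round(p, 3) for p in probs])
```

With these coefficients, w4 receives roughly three quarters of the probability mass, in line with the observation that it generates most of the fern.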

The idea of probabilities pi being associated with the IFS maps leads to another important concept, that of invariant measures that “live” on their attractors A. Unfortunately, there is no time to discuss this topic in this course.

Deterministic algorithm: Recall how the IFS “parallel operator” f̂ associated with a set of N contraction maps fi acts on sets to produce sets: For a set S ⊂ D (where D ⊂ Rn is our region of interest),

f̂(S) = ⋃_{i=1}^{N} f̂i(S) . (44)

In other words, f̂ produces N contracted copies of S. Also recall that for any S ⊂ D, the iterates of S under the action of f̂ converge to the attractor/fixed point A of the IFS:

lim_{n→∞} f̂^n(S) = A = f̂(A) = ⋃_{i=1}^{N} f̂i(A) . (45)

We can start the deterministic algorithm with a seed point x0 ∈ Rn, which defines our set S0. The

action of the IFS parallel operator on S0 is to produce a set S1 consisting of N (or possibly fewer)

points, i.e.,

S1 = f̂(S0) = f̂(x0) = ⋃_{i=1}^{N} fi(x0) .   (46)

Applying the IFS parallel operator f̂ on S1 will produce a set S2 consisting of N^2 (or possibly fewer)

points, i.e.,

S2 = f̂(S1) = ⋃_{j=1}^{N} fj( ⋃_{i=1}^{N} fi(x0) ) = ⋃_{j=1}^{N} ⋃_{i=1}^{N} (fj ∘ fi)(x0) .   (47)

The reader should see the pattern: n applications of the IFS parallel operator f̂ will produce a set Sn consisting of (at most) N^n points,

Sn = {(fi1 ◦ fi2 ◦ · · · ◦ fin)(x0) , i1, i2, · · · , in ∈ {1, 2, · · · , N} } . (48)

If the point x0 ∈ A, the attractor of the IFS, then all points in S1, S2, etc., will lie in A. So if

you happen to know a point in A, it’s good to use it. But even if x0 is not in A, the contractivity

of the fi will bring the points in Sn closer and closer to A as n increases. What is more important,

however, is to generate a sufficient number of points to ensure that most of the set A is visited by them.

The above algorithm corresponds to taking all possible Nn paths from x0 down an N -tree to the

Nn points which comprise the set Sn. This can be done by means of a recursive calling of a subroutine,

provided that the computer language you are using supports recursion.
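As a sketch (in Python, where the recursive descent is short), here is this tree traversal applied to the two Cantor-set maps of Eq. (1); the function name `S_n` is ours:

```python
def S_n(maps, x0, n):
    """All (at most) N**n points (f_i1 ∘ f_i2 ∘ · · · ∘ f_in)(x0) of Eq. (48),
    generated by recursive descent of the N-tree rooted at the seed x0."""
    if n == 0:
        return {x0}
    # Apply every map to every point one level down the tree
    return {f(x) for f in maps for x in S_n(maps, x0, n - 1)}

# The two Cantor-set maps f1(x) = x/3, f2(x) = x/3 + 2/3 from Eq. (1)
cantor_maps = [lambda x: x / 3, lambda x: x / 3 + 2 / 3]
points = S_n(cantor_maps, 0.0, 8)   # 2**8 points approximating the Cantor set
```

Since x0 = 0 is the fixed point of f1, it lies on the attractor, so every point of S_n lies exactly on the ternary Cantor set.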

Using IFS attractors to approximate sets, including natural objects

We return to the idea of using IFS attractors to approximate sets, in particular, sets that look like

natural objects. As motivation, we revisit Prof. Michael Barnsley’s “spleenwort fern” attractor, shown

in the previous lecture and presented again below.

Spleenwort Fern – the attractor of a four-map IFS in R2.

As mentioned later in that lecture, with the creation of these fern-type attractors in 1984 came the

idea of using IFS to approximate other shapes and figures occurring in nature and, ultimately, images in

general. The IFS was seen to be a possible method of data compression. A high-resolution picture

of a shaded fern normally requires on the order of one megabyte of computer memory for storage.

Current compression methods might be able to cut this number by a factor of ten or so. However,

as an attractor of a four-map IFS with probabilities, this fern may be described completely in terms of

only 28 IFS parameters! This is a staggering amount of data compression. Not only are the storage

requirements reduced but you can also send this small amount of data quickly over communications

lines to others who could then “decompress” it and reconstruct the fern by simply iterating the IFS

“parallel” operator f̂ .

However, not all objects in nature – in fact, very few – exhibit the special self-similarity of the

spleenwort fern. Nevertheless, as a starting point there remains the interesting general problem of determining how well sets and images can be approximated by the attractors of IFS. We

pose the so-called inverse problem for geometric approximation with IFS as follows:

Given a “target” set S, can one find an IFS f = {f1, f2, · · · , fN} whose attractor A

approximates S to some desired degree of accuracy in an appropriate metric “D” which

measures distances between sets?

At first, this appears to be a rather formidable problem. How does one start? By selecting an

initial set of maps {f1, f2, · · · , fN}, iterating the associated parallel operator f̂ to produce its attractor

A and then comparing it to the target set S? And then perhaps altering some or all of the maps in

some ways, looking at the effects of the changes on the resulting attractors, hopefully zeroing in on

some final IFS?

If we step back a little, we can come up with a strategy. In fact, it won’t appear that strange after

we outline it, since you are already accustomed to looking at the self-similarity of IFS attractors, e.g.,

the Sierpinski triangle, in this way. Here is the strategy.

Given a target set S, we are looking for the attractor A of an N -map IFS f which approximates

it well, i.e.,

S ≈ A . (49)

By “≈”, we mean that S and A are “close” – for the moment “visually close” will be sufficient.

Now recall that A is the attractor of the IFS f so that

A = ⋃_{k=1}^{N} fk(A) .   (50)

Substitution into Eq. (49) yields

S ≈ ⋃_{k=1}^{N} fk(A) .   (51)

But we now use Eq. (49) to replace A on the RHS and arrive at the final result,

S ≈ ⋃_{k=1}^{N} fk(S) .   (52)

In other words, in order to find an IFS with attractor A which approximates S, we look for an IFS, i.e., a set of maps f = {f1, f2, · · · , fN}, which, under the parallel action of the IFS operator

f̂, map the target set S as close as possible to itself. In this way, we are expressing the

target set S as closely as possible as a union of contracted copies of itself.

This idea should not seem that strange. After all, if the set S is self-similar, e.g., the attractor of

an IFS, then the approximation in Eq. (52) becomes an equality.
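This near-equality can be checked numerically for the Cantor set of Eq. (1): applying the parallel operator f̂ to a fine finite approximation of S moves it only slightly, so the “collage error” of Eq. (52) is tiny. In the sketch below, a brute-force Hausdorff distance between finite point sets stands in for the set metric D; the helper names are ours.

```python
def f1(x): return x / 3
def f2(x): return x / 3 + 2 / 3

def parallel(points):
    """The parallel operator f^ acting on a finite point set."""
    return {f(x) for f in (f1, f2) for x in points}

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets on the line."""
    one_sided = lambda P, Q: max(min(abs(p - q) for q in Q) for p in P)
    return max(one_sided(A, B), one_sided(B, A))

# Build a level-8 approximation C of the Cantor set (endpoints of the
# level-8 intervals), then measure the collage error d(C, parallel(C)).
# For a self-similar set this error shrinks like 3**(-n) as n grows.
C = {0.0, 1.0}
for _ in range(8):
    C = parallel(C)
err = hausdorff(C, parallel(C))
```

For a target set that is only approximately self-similar, such as the leaf below, the analogous collage error is small but nonzero, and the goal is to choose the maps so as to minimize it.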

The basic idea is illustrated in the figure below. At the left, a leaf – enclosed with a solid curve

– is viewed as an approximate union of four contracted copies of itself. Each smaller copy is ob-

tained by an appropriate contractive IFS map fi. If we restrict ourselves to affine IFS maps in the

plane, i.e. fi(x) = Ax + b, then the coefficients of each matrix A and associated column vector

b – a total of six unknown coefficients – can be obtained from a knowledge of where three points

of the original leaf S are mapped in the contracted copy fi(S). We then expect that the attractor

A of the resulting IFS f̂ lies close to the target leaf S. The attractor A of the IFS is shown on the right.

Left: Approximating a leaf as a “collage”, i.e. a union of contracted copies of itself. Right: The

attractor A of the four-map IFS obtained from the “collage” procedure on the left.
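Finding those six coefficients amounts to solving two 3×3 linear systems from the three point correspondences (x_k, y_k) → (u_k, v_k): one system for (a, b, e) in u = ax + by + e and one for (c, d, f) in v = cx + dy + f. A sketch in Python, via Cramer's rule (the helper names are ours):

```python
def solve_affine(src, dst):
    """Given three source points (x_k, y_k) and their images (u_k, v_k),
    return the coefficients (a, b, c, d, e, f) of the affine map
    u = a x + b y + e,  v = c x + d y + f."""
    (x1, y1), (x2, y2), (x3, y3) = src
    (u1, v1), (u2, v2), (u3, v3) = dst

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Coefficient matrix of the system [x_k  y_k  1] * (p, q, r)^T = rhs_k
    M = [[x1, y1, 1.0], [x2, y2, 1.0], [x3, y3, 1.0]]
    D = det3(M)          # nonzero iff the three points are not collinear

    def cramer(rhs):
        coeffs = []
        for j in range(3):
            Mj = [row[:] for row in M]
            for i in range(3):
                Mj[i][j] = rhs[i]      # replace column j by the RHS
            coeffs.append(det3(Mj) / D)
        return coeffs

    a, b, e = cramer([u1, u2, u3])
    c, d, f = cramer([v1, v2, v3])
    return a, b, c, d, e, f
```

For example, three corners of the leaf and the corresponding corners of one contracted copy determine that copy's map; repeating this for each of the four copies yields the full IFS.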

In general, the determination of optimal IFS maps by looking for approximate geometric self-

similarities in a set is a very difficult problem with no simple solutions, especially if one wishes

to automate the process. Fortunately, we can proceed by another route by realizing that there is

much more to a picture than just geometric shapes. There is also shading. For example, a real fern

has veins which may be darker than the outer extremities of the fronds. Thus it is more natural to

think of a picture as defining a function: At each point or pixel (x, y) in a photograph or a computer

display (represented, for convenience, by the region X = [0, 1]2) there is an associated grey level or

greyscale value u(x, y), which may assume a finite nonnegative value. (In practical applications, i.e.

digitized images, each pixel can assume one of only a finite number of discrete values.) This leads

to the consideration of an IFS-type method which operates on image functions: Given an image

function u(x, y) that we wish to approximate by the attractor of an IFS-type method, find a set of

maps – involving both the spatial coordinates (x, y) as well as the greyscale values z = u(x, y) – which

approximates the function u(x, y) as a union of geometrically-contracted and greyscale-modified copies

of itself. This is the idea of fractal image coding.
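A one-dimensional caricature of such an operator on “image” functions can be sketched as follows. The greyscale parameters αi, βi below are hypothetical, chosen only for illustration; a real fractal coder instead searches for the best greyscale-modified domain block for each range block.

```python
def fractal_transform_1d(u, alpha, beta):
    """One application of a two-map IFS-on-functions operator to a signal u
    sampled at x_k = k/n on [0, 1).  Spatially, w1(x) = x/2 and
    w2(x) = x/2 + 1/2 contract [0, 1) onto its two halves; the greyscale
    values are modified by z -> alpha_i * z + beta_i."""
    n = len(u)
    out = []
    for k in range(n):
        if k < n // 2:                       # image of w1: left half
            out.append(alpha[0] * u[2 * k] + beta[0])
        else:                                # image of w2: right half
            out.append(alpha[1] * u[2 * k - n] + beta[1])
    return out

# Since |alpha_i| < 1 the operator is contractive in the sup norm, so
# iterating from any starting signal converges to a fixed-point "image";
# with these parameters the fixed point is the ramp u(x) = x.
u = [0.0] * 64
for _ in range(60):
    u = fractal_transform_1d(u, alpha=(0.5, 0.5), beta=(0.0, 0.5))
```

The two-dimensional version, with many spatial blocks and greyscale maps fitted to a given image, is the engine of fractal image compression.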

For a “gentle” introduction to fractal image coding, the reader is invited to consult the instructor’s

article, A Hitchhiker’s Guide to “Fractal-Based” Function Approximation and Image Compression, a

slightly expanded version of two articles which appeared in the February and August 1995 issues of

the UW Faculty of Mathematics Alumni newspaper, Math Ties. It may be downloaded from the instructor's webpage.
